Configuring VMware vCloud Director
Configure vCloud for OSM
vCloud Director initial preparation
- In order to get the VIM tenant name (vim-tenant-name) and/or tenant UUID from vCloud Director, execute:
./vmwarecli.py -u admin -p 12345 -c vcloud_host_name -U Administrator -P 123456 -o test list vdc
+--------------------------------------+----------+
| vdc uuid                             | vdc name |
+--------------------------------------+----------+
| 605ad9e8-04c5-402d-a3b7-0b6c1bacda75 | test     |
| a5056f85-418c-4bfd-8041-adb0f48be9d9 | TEF      |
+--------------------------------------+----------+
- In this example, two VDCs (tenants) are available for the organization test.
- Create a default network, either using the vCloud Director web UI or vmwarecli.py:
./vmwarecli.py -u admin -p 123456 -c vcloud_host_name -U Administrator -P 123456 -o test -v TEF create network test
Created new network test and uuid: bac9f9c6-6d1b-4af2-8211-b6258659dfb1
- View the organization/datacenter:
./vmwarecli.py -u admin -p 123456 -c vcloud_host_name -U Administrator -P 123456 -o test view org test
+--------------------------------------+----------+
| vdc uuid                             | vdc name |
+--------------------------------------+----------+
| 605ad9e8-04c5-402d-a3b7-0b6c1bacda75 | test     |
| a5056f85-418c-4bfd-8041-adb0f48be9d9 | TEF      |
+--------------------------------------+----------+
+--------------------------------------+-------------------------------------------+
| network uuid                         | network name                              |
+--------------------------------------+-------------------------------------------+
| f2e8a499-c3c4-411f-9cb5-38c0df7ccf8e | default                                   |
| 0730eb83-bfda-43f9-bcbc-d3650a247015 | test                                      |
+--------------------------------------+-------------------------------------------+
+--------------------------------------+--------------+
| catalog uuid                         | catalog name |
+--------------------------------------+--------------+
| 811d67dd-dd48-4e79-bb90-9ba2199fb340 | cirros       |
| 147492d7-d25b-465c-8eb1-b181779f6f4c | ubuntuserver |
+--------------------------------------+--------------+
Image preparation for VMware
If a user needs to on-board an image that is not in a VMware-compatible disk image format (for example, qcow2), the image must first be converted to an OVF.
- The first step is to convert the qcow2 disk image to VMDK (a sketch of a convert.sh wrapper for this step appears after the directory listing below).
- qemu-img convert -f qcow2 cirros-disk.img -O vmdk cirros-0.3.4-x86_64-disk.vmdk
- Second step:
- Click "New" in VMware Fusion, VMware Workstation or vCenter and create a VM from the VMDK file created in step one.
- Third step:
- Adjust the hardware settings for the VM. For example, if the target VMs should have only one vNIC, delete all vNICs.
- Openmano will set up and attach vNICs based on the VNF descriptor file.
- Make sure the hardware version of the VM is set to 11 or below.
- Export the VM as an OVF and upload the file to Openmano (a command-line alternative using ovftool is sketched after the directory listing below).
- Example of the folder structure inside the VNF directory. Each exported image is placed inside the ovfs directory:
drwxr-xr-x   2 spyroot  staff        68 Oct  4 19:31 cirros
-rw-r--r--   1 spyroot  staff  13287936 May  7  2015 cirros-0.3.4-x86_64-disk.img
-rw-r--r--   1 spyroot  staff  21757952 Oct  4 19:38 cirros-0.3.4-x86_64-disk.vmdk
-rwxr-xr-x   1 spyroot  staff        57 Oct  4 18:58 convert.sh
drwxr-xr-x  10 spyroot  staff       340 Oct  4 07:24 examples
drwxr-xr-x   3 spyroot  staff       102 Oct  4 19:41 ovfs
-rw-r--r--   1 spyroot  staff     11251 Oct  4 07:24 vnf-template-2vm.yaml
-rw-r--r--   1 spyroot  staff      5931 Oct  4 07:24 vnf-template.yaml

bash$ ls -l ovfs/cirros/
total 25360
-rw-r--r--   1 spyroot  staff  12968960 Oct  4 19:41 cirros-disk1.vmdk
-rw-r--r--   1 spyroot  staff       125 Oct  4 19:41 cirros.mf
-rw-r--r--   1 spyroot  staff      5770 Oct  4 19:41 cirros.ovf
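The convert.sh shown in the listing above is just a small helper; a minimal sketch, assuming it simply wraps the qemu-img command from the first step (the argument names are placeholders):

#!/bin/sh
# Minimal qcow2-to-VMDK conversion wrapper (sketch).
# $1 = source qcow2 image, $2 = destination VMDK file.
qemu-img convert -f qcow2 "$1" -O vmdk "$2"

If you prefer a command-line export over the GUI export described above, VMware's ovftool can also export the reference VM to an OVF. A sketch, assuming ovftool is installed and using a hypothetical vCenter inventory path:

ovftool 'vi://administrator@vcenter_host/Datacenter/vm/cirros-ref' ovfs/cirros/cirros.ovf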
Note: You need to create the OVF image only once if all images of the same VNF/OS share the same hardware specs.
The VM image is used as a reference VM in vCloud Director. Each respective VM that Openmano instantiates uses that image as a reference.
- VNF preparation step:
If the image is already uploaded to vCloud, reference it by its image name in the VNFD descriptor.
If not, use the path of an existing image on the host where openmano is running (see the VNFD fragment below).
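For illustration, a VNFD fragment in the pre-SOL006 OSM information model; the identifiers and the image name cirros034 are placeholders, and the exact field names depend on the OSM release:

vnfd:vnfd-catalog:
    vnfd:
    -   id: cirros_vnfd
        name: cirros_vnf
        vdu:
        -   id: cirros_vnfd-VM
            name: cirros_vnfd-VM
            image: cirros034                          # catalog/image name already uploaded to vCloud
            # image: /home/osm/ovfs/cirros/cirros.ovf # alternatively, a path on the openmano host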
Add vCloud using OSM Client
osm vim-create --name vmware-site --user osm --password osm4u --auth_url https://10.10.10.12 --tenant vmware-tenant --account_type vmware --config '{admin_username: user, admin_password: passwd, orgname: organization, nsx_manager: "http://10.10.10.12", nsx_user: user, nsx_password: userpwd, vcenter_port: port, vcenter_user: user, vcenter_password: password, vcenter_ip: 10.10.10.14}'
There is a parameter called --config used to supply additional configuration:
- orgname: (Optional) Organization name the tenant belongs to. It can be ignored if --vim-tenant-name uses the format <orgname: tenant>.
- admin_username: (Mandatory) Admin user
- admin_password: (Mandatory) Admin password
- nsx_manager: (Mandatory) NSX Manager host name
- nsx_user: (Mandatory) NSX user
- nsx_password: (Mandatory) NSX password
- vcenter_port: (Mandatory) vCenter port
- vcenter_user: (Mandatory) vCenter username
- vcenter_password: (Mandatory) vCenter password
- vcenter_ip: (Mandatory) vCenter IP
- management_network_id, management_network_name: VIM management network id/name to use for the management VLD of NS descriptors. By default, the VIM network with the same name as the VLD is used. It can also be set at instantiation time.
The content of --config is YAML-formatted text. The recommendation is to use a comma-separated list between curly brackets {}, enclosed in quotes, e.g.:
--config '{nsx_manager: https://10.10.10.12, nsx_user: user, nsx_password: password}'
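For readability, the same --config content can also be written as block-style YAML: the single quotes make the shell pass the newlines through, and the value is still parsed as YAML. A sketch reusing the placeholder values from the command above; verify that your osm client version accepts a multi-line --config value:

osm vim-create --name vmware-site --user osm --password osm4u \
    --auth_url https://10.10.10.12 --tenant vmware-tenant --account_type vmware \
    --config '
      orgname: organization
      admin_username: user
      admin_password: passwd
      nsx_manager: "http://10.10.10.12"
      nsx_user: user
      nsx_password: userpwd
      vcenter_ip: 10.10.10.14
      vcenter_port: port
      vcenter_user: user
      vcenter_password: password'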