4. Setup of Virtual Infrastructure Managers (VIMs)

4.1. OpenStack

4.1.1. Preparing your OpenStack to be used by OSM

This section enumerates the steps required to make an OpenStack environment usable by OSM.

4.1.1.1. 1. Guarantee that OpenStack API endpoints are reachable from OSM

(particularly, it should be reachable from RO container)

4.1.1.2. 2. Create a management network, with DHCP enabled, reachable from OSM

(particularly, it should be reachable from VCA container)

You need to create a management network, with DHCP enabled, and guarantee that this management network is reachable from OSM. This network is used by the VCA (Juju) to configure the VNFs once they are running. It is recommended to create a provider network, isolated from OpenStack. For instance, to create a provider network using the physical interface em1, VLAN 500 and CIDR 10.208.0.0/24, you should run the following commands:

neutron net-create mgmt --provider:network_type=vlan --provider:physical_network=physnet_em1 --provider:segmentation_id=500 --shared
neutron subnet-create --name subnet-mgmt mgmt 10.208.0.0/24 --allocation-pool start=10.208.0.2,end=10.208.0.254
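
Note that recent OpenStack releases have deprecated the neutron CLI shown above. A rough equivalent with the unified openstack client (a sketch; adapt the physical network label, VLAN and addressing to your environment) would be:

openstack network create --provider-network-type vlan --provider-physical-network physnet_em1 --provider-segment 500 --share mgmt
openstack subnet create --network mgmt --subnet-range 10.208.0.0/24 --allocation-pool start=10.208.0.2,end=10.208.0.254 --dhcp subnet-mgmt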

4.1.1.3. 3. Create a valid tenant/user

You need to create a tenant/user with rights to create/delete flavors. The easiest way is to create a user and assign it the role admin.

Another option is to change the general flavor management policies (usually at config file /etc/nova/policy.json) to allow flavor creation per user.
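For reference, a minimal sketch of the first option using the OpenStack CLI (the project, user and password names below are just illustrative):

openstack project create osm_project
openstack user create --project osm_project --password osm_passwd osm_user
openstack role add --project osm_project --user osm_user admin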

4.1.1.4. 4. Modify default security group or create a new one

By default OpenStack applies the default security group that blocks any incoming traffic to the VM. However, ssh access might be needed by VCA.

Therefore, you will need to modify the default security group to allow TCP port 22, or create a new security group and configure OSM to use it when the VIM target is added (see Adding an OpenStack VIM target to OSM below).
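
As an illustration (a sketch; the security group name is arbitrary), a dedicated security group allowing SSH could be created with the OpenStack CLI as follows, or the equivalent rule could be added to the default group:

openstack security group create osm-allow-ssh
openstack security group rule create --protocol tcp --dst-port 22 --remote-ip 0.0.0.0/0 osm-allow-ssh
# alternatively, open TCP port 22 in the default security group:
openstack security group rule create --protocol tcp --dst-port 22 --remote-ip 0.0.0.0/0 default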

4.1.1.5. 5. Remember to upload any images required (optional)

For the time being, it is required to upload the images of the VNFs to your VIMs, so that they are available before an instantiation.

This can happen anytime, and it is an optional step during the preparation phase, but you should always make sure that all the images that are required for a given NS/NSI are available in the VIM target before instantiation (otherwise, OSM will throw the corresponding error message).

In the case of OpenStack, you would typically use a variant of the following command to upload an image:

openstack image create --file="./cirros-0.3.4-x86_64-disk.img" --container-format=bare --disk-format=qcow2 --public cirros034

4.1.2. Adding an OpenStack VIM target to OSM

Here is an example on how to use the OSM Client to add an OpenStack VIM:

osm vim-create --name openstack-site --user admin --password userpwd --auth_url http://10.10.10.11:5000/v2.0 --tenant admin --account_type openstack --config='{security_groups: default, keypair: mykey}'

As can be seen above, there is a parameter called --config used to supply general configuration options when creating the VIM target with the OSM client.

A number of configuration options are supported:

  • management_network_id, management_network_name: VIM management network id/name to use for the management VLD of NS descriptors. By default, the VIM network with the same name as the VLD is used. It can also be set at instantiation time.

  • security_groups: To be used for the deployment

  • availability_zone: To be used for the deployment. It can be:

    • a single availability zone (all deployments will land in that zone), e.g. availability_zone: controller

    • several availability zones, which enables affinity and anti-affinity deployments, e.g. availability_zone: [zone1, zone2]

  • region_name: The region where the VM must be deployed.

  • insecure: (By default false). When true, it allows authorization over a non-trusted certificate over HTTPS

  • ca_cert: (incompatible with insecure). Root certificate file to use for validating the OpenStack certificate

  • use_existing_flavors: (By default, False). Set to True to use the closest existing flavor with enough resources instead of creating a new flavor with the exact requirements. This option does not work for EPA (cpu pinning, huge pages, …), where RO still tries to create a flavor with the needed extra specs. Use this option when you do not have admin credentials (available from version v2.0.2)

  • vim_type: Set to “VIO” to use VMware Integrated OpenStack as VIM

  • use_internal_endpoint: Set to True to force using internal endpoints

For OpenStack API v3, the following parameters are required (an example command follows the list):

  • project_domain_id, project_domain_name: If not provided, default is used for project_domain_id

  • user_domain_id, user_domain_name: If not provided, default is used for user_domain_id

  • APIversion: Only required if the auth_url does not end with v3. Set it to "v3.3" or "3" to use this OpenStack API version.
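
For instance, a VIM target using Keystone v3 might be added as follows (a sketch; the endpoint, credentials and domain names are illustrative):

osm vim-create --name openstack-site-v3 --user admin --password userpwd \
    --auth_url http://10.10.10.11:5000/v3 --tenant admin --account_type openstack \
    --config '{project_domain_name: Default, user_domain_name: Default}'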

ADVANCED configuration:

  • keypair: Name of an OpenStack keypair to be added in addition to the keypair allocated in the VNF descriptor

  • dataplane_physical_net: The physical network label used in OpenStack both to identify SR-IOV and passthrough interfaces (Nova configuration) and also to specify the VLAN ranges used by SR-IOV interfaces (Neutron configuration). In the case of VMware Integrated OpenStack (VIO), provide the moref ID of the distributed virtual switch.

  • use_floating_ip: (By default false). When set to boolean true, a floating IP is automatically assigned to the management interface of a VNF, if possible. If no floating IP is available, OSM tries to get one, provided there is only one pool. Instead of boolean true, you can use a string pool_id (public network id) to indicate which pool to use in case there are several. Note that the deployment is not aborted if assigning a floating IP fails. The preferred method instead of this option is to use a provider network

  • dataplane_net_vlan_range: In the case of VMware Integrated OpenStack (VIO), provide the VLAN ranges for the SR-IOV (binding direct) networks in the format ['start_ID - end_ID']

  • microversion: This is an OpenStack-only parameter that allows specifying a specific microversion to be used in Nova. When using microversion: 2.32, it enables the use of Virtual Device Role Tagging, which allows identifying each VM interface with a tag (the tag will be the name of the interface in the VNFD) and conveying that information to the VM as metadata. This implementation approach is due to the warning message in https://developer.openstack.org/api-guide/compute/microversions.html, where it is stated that microversion backwards compatibility is not guaranteed and clients should always require a specific microversion. This functionality will not work with OpenStack versions prior to Newton.

  • no_port_security_extension: Use for those OpenStack VIMs that do not have the port_security_extension (the Neutron extension that allows disabling port security). If this option is set, port security will never be disabled, regardless of what the descriptor indicates.

  • disable_network_port_security: Use for those OpenStack VIMs that do not support port security enabled at network level (although port_security_extension is present). With this property, Neutron disables port security by default, at network creation time, for all ports created on those networks.

The content of --config is YAML-formatted text. The recommendation is to use a comma-separated list between curly brackets {} and quotes "".
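
For example, a VIM target combining several of the options above could be created as follows (a sketch; the network, security group and availability zone names are illustrative):

osm vim-create --name openstack-site-2 --user admin --password userpwd \
    --auth_url http://10.10.10.11:5000/v2.0 --tenant admin --account_type openstack \
    --config '{management_network_name: mgmt, security_groups: osm-allow-ssh, availability_zone: [zone1, zone2], use_floating_ip: True}'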

4.1.2.1. Adding a VMware Integrated OpenStack (VIO) as VIM target

Although VIO works as a regular OpenStack in practice, additional options may be needed to add a VIO VIM to OSM, so that OSM has all the credentials that it needs.

For instance, you can use the following command, which includes some extra parameters, to add a VIM target with VIO (e.g. site name: openstack-site-vio4, IP address: 10.10.10.12, VIM tenant: admin, user: admin, password: passwd)

osm vim-create --name VIO --user admin --password passwd --auth_url https://10.10.10.12:5000/v3 --tenant admin \
    --account_type openstack --config '{use_floating_ip: True, insecure: true, vim_type: VIO, APIversion: v3.3,
    dataplane_physical_net: dvs-46, "use_internal_endpoint":true,"dataplane_net_vlan_range":["31-35","37-39"]}'

With respect to a regular OpenStack, these additional configuration parameters are added:

  • vim_type: Set to VIO to use VMware Integrated OpenStack as VIM.

  • use_internal_endpoint: When true it allows use of private API endpoints.

  • dataplane_physical_net: (only when applicable) The configured network_vlan_ranges at Neutron for SR-IOV (binding direct) and passthrough (binding direct-physical) networks, e.g. physnet_sriov in the above configuration. In the case of VMware Integrated OpenStack (VIO), provide the moref ID of the distributed virtual switch, e.g. 'dvs-46' in the above configuration.

  • dataplane_net_vlan_range: In the case of VMware Integrated OpenStack (VIO), provide the VLAN ranges for the SR-IOV (binding direct) networks in the format ['start_ID - end_ID'].

For common options, you may refer to the general OpenStack Setup Guide.

4.2. OpenVIM

4.2.1. Setting up an OpenVIM environment

A full step-by step guide for installing an OpenVIM environment from scratch can be found in a specific chapter.

4.2.2. Adding OpenVIM as VIM target to OSM

To add an OpenVIM account as VIM target, you should execute the following command, using the appropriate parameters (e.g. site name: openvim-site, IP address: 10.10.10.10, VIM tenant: osm):

osm vim-create --name openvim-site --auth_url http://10.10.10.10:9080/openvim --account_type openvim \
   --description "Openvim site" --tenant osm --user dummy --password dummy

4.3. VMware’s vCloud Director

4.3.1. Preparing your vCloud Director to be used by OSM

  • In order to get the VIM tenant name and/or the tenant UUID from vCloud Director, you should execute:

./vmwarecli.py -u admin -p 12345 -c vcloud_host_name -U Administrator -P 123456 -o test list vdc
+--------------------------------------+----------+
|               vdc uuid               | vdc name |
+--------------------------------------+----------+
| 605ad9e8-04c5-402d-a3b7-0b6c1bacda75 |   test   |
| a5056f85-418c-4bfd-8041-adb0f48be9d9 |   TEF    |
+--------------------------------------+----------+
  • In this example, two VDCs (tenants) are available for the organization test

  • Create a default network, either using the Web UI of vCloud Director or vmwarecli.py

./vmwarecli.py -u admin -p 123456 -c vcloud_host_name -U Administrator -P 123456 -o test -v TEF create network test
Crated new network test and uuid: bac9f9c6-6d1b-4af2-8211-b6258659dfb1
  • View organization/datacenter.

./vmwarecli.py -u admin -p 123456 -c vcloud_host_name -U Administrator -P 123456 -o test view org test
+--------------------------------------+----------+
|               vdc uuid               | vdc name |
+--------------------------------------+----------+
| 605ad9e8-04c5-402d-a3b7-0b6c1bacda75 |   test   |
| a5056f85-418c-4bfd-8041-adb0f48be9d9 |   TEF    |
+--------------------------------------+----------+
+--------------------------------------+-------------------------------------------+
|             network uuid             |                network name               |
+--------------------------------------+-------------------------------------------+
| f2e8a499-c3c4-411f-9cb5-38c0df7ccf8e |                  default                  |
| 0730eb83-bfda-43f9-bcbc-d3650a247015 |                    test                   |
+--------------------------------------+-------------------------------------------+
+--------------------------------------+--------------+
|             catalog uuid             | catalog name |
+--------------------------------------+--------------+
| 811d67dd-dd48-4e79-bb90-9ba2199fb340 |    cirros    |
| 147492d7-d25b-465c-8eb1-b181779f6f4c | ubuntuserver |
+--------------------------------------+--------------+

4.3.1.1. Image preparation for VMware

If a user needs to on-board an image that is not in a VMware-compatible disk image format (such as qcow2), the image needs to be converted to an OVF.

  • The first step is to convert the qcow2 disk image to VMDK.

qemu-img convert -f qcow2 cirros-disk.img -O vmdk cirros-0.3.4-x86_64-disk.vmdk
  • Second step.

    • Click "New" in VMware Fusion, VMware Workstation or vCenter and create a VM from the VMDK file created in step one.

  • Third step

    • Adjust the hardware settings for the VM. For example, if the target VM should have only one vNIC, delete all vNICs.

    • OSM will set up and attach vNICs based on the VNF descriptor.

    • Make sure the hardware version for the VM is set to 11 or below.

    • Export the VM as OVF and upload the file to OSM.

      • Example of the folder structure inside the VNF directory. Each exported image is placed inside the ovfs directory.

drwxr-xr-x   2 spyroot  staff        68 Oct  4 19:31 cirros
-rw-r--r--   1 spyroot  staff  13287936 May  7  2015 cirros-0.3.4-x86_64-disk.img
-rw-r--r--   1 spyroot  staff  21757952 Oct  4 19:38 cirros-0.3.4-x86_64-disk.vmdk
-rwxr-xr-x   1 spyroot  staff        57 Oct  4 18:58 convert.sh
drwxr-xr-x  10 spyroot  staff       340 Oct  4 07:24 examples
drwxr-xr-x   3 spyroot  staff       102 Oct  4 19:41 ovfs
-rw-r--r--   1 spyroot  staff     11251 Oct  4 07:24 vnf-template-2vm.yaml
-rw-r--r--   1 spyroot  staff      5931 Oct  4 07:24 vnf-template.yaml

bash$ ls -l ovfs/cirros/
total 25360
-rw-r--r--  1 spyroot  staff  12968960 Oct  4 19:41 cirros-disk1.vmdk
-rw-r--r--  1 spyroot  staff       125 Oct  4 19:41 cirros.mf
-rw-r--r--  1 spyroot  staff      5770 Oct  4 19:41 cirros.ovf

Note: You need to create the OVF image only once if all images of the same VNF/OS share the same hardware specs. The VM image is used as a reference VM in vCloud Director. Each VM that OSM instantiates will use that image as reference.

  • VNF preparation step.

If the image has already been uploaded to vCloud Director, reference it by its image name in the VNF descriptor.

If not, use the path of an existing image on the host where OSM is running, as illustrated below.
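
As an illustration (a sketch following the descriptor format used elsewhere in this guide; the catalog name and path below are hypothetical), the image field of the VDU would look like one of the following:

image: ubuntuserver
# or, if the image has not been uploaded to vCloud Director, a path on the OSM host, e.g.:
image: /home/osm/ovfs/cirros/cirros.ovf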

4.3.2. Adding vCD as VIM target to OSM

osm vim-create --name vmware-site --user osm --password osm4u --auth_url https://10.10.10.12 --tenant vmware-tenant  --account_type vmware          --config '{admin_username: user, admin_password: passwd, orgname: organization, nsx_manager: "http://10.10.10.12", nsx_user: user, nsx_password: userpwd,"vcenter_port": port, "vcenter_user":user, "vcenter_password":password, "vcenter_ip": 10.10.10.14}'

There is a parameter called --config used to supply additional configuration:

  • orgname: (Optional) Organization name the tenant belongs to. Can be omitted if --vim-tenant-name uses the format <orgname: tenant>

  • admin_username: (Mandatory) Admin user

  • admin_password: (Mandatory) Admin password

  • nsx_manager: (Mandatory) NSX manager host name

  • nsx_user: (Mandatory) NSX user

  • nsx_password: (Mandatory) NSX password

  • vcenter_port: (Mandatory) vCenter port

  • vcenter_user: (Mandatory) vCenter username

  • vcenter_password: (Mandatory) vCenter password

  • vcenter_ip: (Mandatory) vCenter IP

  • management_network_id, management_network_name: VIM management network id/name to use for the management VLD of NS descriptors. By default, the VIM network with the same name as the VLD is used. It can also be set at instantiation time.

The content of config is YAML-formatted text. The recommendation is to use a comma-separated list between curly brackets {} and quotes "", e.g.:

--config '{nsx_manager: https://10.10.10.12, nsx_user: user, nsx_password: password}'

4.4. Amazon Web Services (AWS)

4.4.1. Preparation for using AWS in OSM

4.4.1.1. 1. Get AWS_ACCESS_KEY_ID and AWS_SECRET_KEY for your AWS account

Check https://aws.amazon.com/

The AWS User-ID/Secret-key will be required at the time of creation of the datacenter. These credentials do not need to be updated after the datacenter has been created.

4.4.1.2. 2. Create/get key-pairs from your AWS management console

These key-pairs will be required later for deploying a NS.

SSH key-pairs need to be specified in the VNF descriptors. This update can be done via the OSM CLI as well as the OSM UI. SSH key-pairs can only be created using the AWS management console. OSM will be updated with any changes that occur in the AWS console. The OSM user is required to keep a record of these key-pairs for later use.

4.4.1.3. 3. Create a management network using AWS management console

If the user does not specify any default mgmt interface, OSM will create a default network that will be used for managing AWS instances.

Once the NS is deployed, it will require a management interface (subnet) to apply configurations on the instances. The user can manually create this management interface using the AWS console or leave it to the OSM connector. The procedure for creating the interface from AWS is to create a subnet, specifying the appropriate CIDR block; this subnet is required to have DHCP enabled. Since AWS is a public cloud, it is accessible from OSM. The network is used by the VCA for configuring the VNFs once they are running.
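
If you prefer the AWS CLI to the management console, a subnet can be created with a command along these lines (a sketch; the VPC id and CIDR block are illustrative):

aws ec2 create-subnet --vpc-id vpc-0abc1234def567890 --cidr-block 10.0.1.0/24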

4.4.1.4. 4. Create a valid user

The default user in AWS has the rights to perform all operations on AWS instances, subnets, VPCs, key-pairs, etc. In case you want to create a secondary user with a limited set of rights, you can use the AWS management console. NOTE: Each user in AWS has a separate access-key/secret-key which must be kept secure, otherwise new credentials must be generated. The preferred way is to create a user and assign it the role "admin". Another option is to ensure that the user has all the rights required to operate in the AWS environment.

4.4.1.5. 5. Find and select images

AWS has a repository with many images available to be used for instances. In case you need to create a custom image, you can use the AWS console and create your own images. In case you decide to use a pre-built image, you will need to specify the full image path in the VNF descriptor.

4.4.1.6. 6. Security group

AWS provides a default security_group defining a set of rules that allow connection access to the instances that have this security_group. In case you require a new security_group, you can create one defining the conditions that are required by your use case.

The default security_group doesn't allow the user to SSH into the instances. This behavior is not suitable for OSM, as the VCA requires a path to interact with the instances. Hence, it is recommended that you create a new group that contains the rules/conditions required to SSH into the instances deployed by this NS. You can also modify the default security group to allow TCP port 22; however, creating a custom security_group is recommended.

4.4.2. Adding AWS as VIM target to OSM

You will need to specify some options at VIM target creation/association by using the --config parameter. For instance:

osm vim-create --name aws-site --account_type aws \
   --auth_url https://aws.amazon.com \
   --user MyUser --password MyPassword --tenant admin \
   --description "AWS site, with your user" \
   --config '{region_name: eu-central-1, flavor_info: "{t2.nano: {cpus: 1, disk: 100, ram: 512}, t2.micro: {cpus: 1, disk: 100, ram: 1024}, t2.small: {cpus: 1, disk: 100, ram: 2048}}"}'

The following configuration can be added:

  • management_network_id, management_network_name: VIM management network id/name to use for the management VLD of NS descriptors. By default, the VIM network with the same name as the VLD is used. It can also be set at instantiation time.

  • region_name: Region to be used for the deployment

  • vpc_cidr_block: Default CIDR block for VPC

  • security_groups: Default security group for newly created instances

ADVANCED configuration:

  • key_pair: Key_pair specified here will be used default key_pair for newly created instances

  • flavor_info: AWS doesn't provide a mechanism to extract information regarding supported flavors. In order to get flavor information, the user must specify a YAML file path such as "@/usr/data/flavour_info.yaml", or specify a dictionary containing details of the flavors to be used.

To specify flavor information at datacenter creation time, use a --config parameter called flavor_info. The content must be a string: either a file reference, starting with '@', that contains the info in YAML format, or the YAML content directly. An example of such a file is sketched below.
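
For instance, a file referenced as "@/usr/data/flavour_info.yaml" might contain YAML content along these lines (matching the flavors used in the example above; the values are illustrative):

t2.nano:  {cpus: 1, disk: 100, ram: 512}
t2.micro: {cpus: 1, disk: 100, ram: 1024}
t2.small: {cpus: 1, disk: 100, ram: 2048}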

NOTE: Details on AWS flavors/instance types can be found in the Amazon Web Services docs (https://aws.amazon.com/ec2/instance-types/). Flavors/instance types in AWS vary depending on the region of the AWS account. The above-mentioned link provides details on all possible instance types. However, to get details on the instance types available for your region, use your AWS management console.

4.5. Microsoft Azure

4.5.1. Preparation for using Azure in OSM

4.5.1.1. 1. Obtain Azure credentials and tenant from Microsoft Azure

In order to use a VIM target based on Azure, the following information needs to be gathered:

  • Azure subscription Id.

  • Azure application Id, to be used as client Id

  • The authentication key to be used as client secret.

  • The tenant Id, to be created or obtained in the Microsoft portal.

4.5.1.2. 2. Create Microsoft Azure Resource Group

All Azure resources for a VIM target will be created in the same resource group. This resource group should be created before adding the VIM target and will be provided as a configuration parameter. In case it has not been previously created, this resource group will be created implicitly.
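
For instance, assuming the Azure CLI (az) is available, the resource group could be created as follows (the name and region below are illustrative and match the example used later in this section):

az group create --name osmRG --location westeurope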

4.5.1.3. 3. Create Microsoft Azure Virtual Network

The virtual networks created for the Azure VIM will all be created as subnets from a base virtual network. This base virtual network should be created before adding the VIM target and will also be provided as a configuration parameter.

In case it has not been previously created, this base virtual network will be created implicitly.

It is also recommended to create a management network for the VIM network services.
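
For instance, assuming the Azure CLI (az) is available, the base virtual network and a management subnet could be created as follows (names and address ranges are illustrative):

az network vnet create --resource-group osmRG --name osm_vnet --address-prefixes 10.0.0.0/16
az network vnet subnet create --resource-group osmRG --vnet-name osm_vnet --name osm-mgmt --address-prefixes 10.0.0.0/24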

4.5.1.4. 4. Image selection

Azure does not allow the creation of custom images, so you need to make sure that your VNF packages include a reference to an appropriate alternative image in Microsoft Azure’s image repository.

NOTE: In case you are creating a VNF Package from scratch, please note you should use the full Azure image name: publisher:offer:sku:version (e.g. Canonical:UbuntuServer:18.04-LTS:18.04.201809110).

4.5.1.5. 5. Flavor selection and machine tier

Microsoft Azure has a number of pre-created flavors available that cannot be changed. Hence, OSM will determine the flavor to be used based on the VDU requirements in the package, in terms of number of CPUs, RAM and disk.

In the Azure portal there are also different virtual machine tiers available, intended for different purposes: e.g. the cheaper Basic machine series with no guaranteed throughput, or more expensive machines with guaranteed throughput. For that reason, OSM allows specifying such machine tiers in the VIM target definition by using the flavors_pattern parameter. For example, a cheaper Basic tier can be selected when defining the VIM target of a development environment, while a more advanced tier is specified for the VIM target of the production environment.
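
For instance, assuming the Basic tier flavors exposed in your region are named with a Basic_ prefix (e.g. Basic_A1), a development VIM target could restrict flavor selection with a configuration along these lines:

--config "{region_name: westeurope, resource_group: 'osmRG', subscription_id: 'azuresubs', vnet_name: 'osm_vnet', flavors_pattern: '^Basic'}"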

4.5.2. Adding Microsoft Azure as VIM target in OSM

To sum up, in order to define a VIM target with Azure, the following command and options should be used:

osm vim-create --name azure --account_type azure --auth_url http://www.azure.com --tenant "tenantid" \
    --user "XXX" --password "azurepwd" --description "Azure site" \
    --config "{region_name: westeurope, resource_group: 'osmRG', subscription_id: 'azuresubs',
     vnet_name: 'osm_vnet', flavors_pattern: 'flavors_regex'}"

Azure credentials and tenant configuration:

  • user: Azure application Id

  • password: Azure authentication Key

  • subscription_id: Azure subscription Id

  • tenant: Azure tenant Id

Additional required configuration:

  • region_name: Region to be used for the deployment

  • resource_group: Resource group to be used as base for all resources created for this VIM

  • vnet_name: Virtual network name used as a base for this VIM's subnets

Additional optional configuration:

  • flavors_pattern: Regular expression to be used during flavor selection. This allows selecting the desired virtual machine tier.

4.6. Fog05

Eclipse fog05 (can be read as fog-O-five or fog-O-S) is a different kind of VIM, designed to manage a fog/edge environment, thus it is completely distributed (no controller/master node) and pluggable, and available as FOSS from Eclipse: https://github.com/eclipse/fog05

It stores information in a distributed key-value store, which provides location transparency to the user; all state information is stored in it.

4.6.1. Configure fog05 for OSM

In order to enable OSM to contact fog05, you need to configure the REST proxy on one node that has access to the whole fog05 deployment; this node will act as a proxy and thus allow OSM to interact with fog05. This requires python3-flask and the fog05 Python API to be installed on the node.

$ cd fog05/src/utils/python/rest_proxy
$ sudo make install

The installation also installs a systemd service, which you then have to configure by editing the JSON configuration file under /etc/fos/rest/service.json according to your fog05 installation.

{
   "host": "<ip address you node where you want the service to listen>",
   "port": 8080,
   "debug": false,
   "yaks": "<ip address of one of the yaks server in the fog05 system>",
   "sysid": "0",
   "tenantid": "0",
   "image_path": "imgs"
}

Then you can simply start the service using systemd.

$ sudo systemctl start fosrest

4.6.1.1. Upload Images

You can also use the REST proxy as an image service; image upload can be done using the Python REST API. First, generate the descriptor of your image:

{
   "name": "<image name>",
   "uri": "",
   "checksum": "<sha256sum of image file",
   "format": "<image format eg. qcow2, iso, tar.gz>"
}

Then, using the Python API, the image can be uploaded:

>>>api.image.add(img_descriptor, img_file_path)
{'result': '92274e2e-129f-40a3-be7e-a35ea596d439'}
where the value of result is the UUID of the image.

4.6.2. Adding Eclipse fog05 as VIM target of OSM

As with the rest of the VIM types, you should provide the appropriate parameters in --config when creating the VIM target.

osm vim-create --name fos --auth_url <rest proxy ip>:8080 --account_type fos --tenant osm --user dummy --password dummy --config '{hypervisor: LXD}'

The following configuration can be added:

  • arch: CPU architecture used when creating the VDUs for this VIM account, e.g. x86_64, aarch64; the default is x86_64.

  • hypervisor: hypervisor supported by this VIM account; it can be one of LXD, KVM, BARE, DOCKER, XEN. At least one node of the system has to be able to manage the selected hypervisor; the default is LXD.

  • nodes: if you want this VIM account to be able to manage only a subset of the nodes in the system, you can pass a list of node UUIDs; by default it is an empty list, which means all nodes. See the example after this list.
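
For instance, a VIM account for an ARM-based edge segment, restricted to a couple of nodes, might be created as follows (a sketch; the node UUIDs are illustrative):

osm vim-create --name fos-edge --auth_url <rest proxy ip>:8080 --account_type fos --tenant osm --user dummy --password dummy \
    --config '{arch: aarch64, hypervisor: KVM, nodes: [0e4a42be-c8c4-4d91-9e63-ef2f6cfb3e2a, 7d52a2e4-93b9-4cc8-97a9-30908f91f26a]}'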

4.6.2.1. VLAN configuration (optional)

If you want your fog05 installation to be able to use VLANs for virtual networks instead of overlay VxLANs you need to change the configuration on all nodes. You need to update the configuration file /etc/fos/plugins/linuxbridge/linuxbridge_plugin.json.

{
 "name": "linuxbridge",
 "version": 1,
 "uuid": "d42b4163-af35-423a-acb4-a228290cf0be",
 "type": "network",
 "requirements": [
   "jinja2"
 ],
 "description": "linux Bridge network plugin",
 "configuration": {
   "ylocator": "tcp/<your yaks ip>:7887",
   "nodeid": "<your node id>",
   "dataplane_interface": "<interface for overlay networks>",
   "use_vlan": true,
   "vlan_interface": "<interface for vlans>",
   "vlan_range": [
     <start vlan id>,
     <end vlan id>
   ]
 }
}

After that you have to restart the fog05 network and runtime plugins in the nodes.

4.6.2.2. Example NS

Here you can find an example of a network service that can be instantiated on the Eclipse fog05 VIM using OSM. The network service is composed of a single VNF based on an Alpine Linux LXD image.

alpinevnfd.yaml:

vnfd:vnfd-catalog:
   vnfd:
   -   id: alpine_vnfd
       name: alpine_vnf
       short-name: alpine_vnf
       description: Simple VNF example with a Alpine
       vendor: OSM
       version: '1.0'
       logo: alpine.jpg
       connection-point:
           -   name: eth0
               type: VPORT
       vdu:
       -   id: alpine_vnfd-LXD
           name: alpine_vnfd-LXD
           description: alpine_vnfd-LXD
           count: 1
           vm-flavor:
               vcpu-count: 1
               memory-mb: 256
               storage-gb: 1
           image: 2db8b83a-62ea-4543-83c7-1818f403f6f4
           interface:
           -   name: eth0
               type: EXTERNAL
               virtual-interface:
                   type: VIRTIO
                   bandwidth: '0'
                   vpci: 0000:00:0a.0
               external-connection-point-ref: eth0
       mgmt-interface:
           cp: eth0

alpinens.yaml:

nsd:nsd-catalog:
   nsd:
   -   id: alpine_nsd
       name: alpine_ns
       short-name: alpine_ns
       description: Generated by OSM package generator
       vendor: OSM
       version: '1.0'
       constituent-vnfd:
       -   member-vnf-index: 1
           vnfd-id-ref: alpine_vnfd
       vld:
       ### Networks for the VNFs
           -   id: alpine_nsd_vld1
               name: alpine_nsd_vld1
               short-name: alpine_nsd_vld1
               type: ELAN
               mgmt-network: 'true'
               vnfd-connection-point-ref:
               -   member-vnf-index-ref: 1
                   vnfd-id-ref: alpine_vnfd
                   vnfd-connection-point-ref: eth0

4.7. What if I do not have a VIM at hand? Use of sandboxes

Sometimes, casual testers of OSM may not have a fully fledged VIM at hand, either on premises or in a public cloud. In those cases, it is also possible to use a VIM sandbox which, although limited and hence not appropriate for production services, can be a good enough option for beginners.

In the coming sections, a few options are described in detail.

4.7.1. ETSI OSM VNF Onboarding Sandbox for VNF Providers

ETSI OSM can provide its members/participants with a VNF Onboarding Sandbox for VNF Providers, based on its own testing infrastructure.

In practice, it works as an account in a fully fledged VIM, so this alternative has the advantage that it does not present the functional limitations that local sandboxes usually have.

More details about this option are under elaboration and will be shared soon.

4.7.2. VIM Emulator

4.7.2.1. Vim-emu: A NFV multi-PoP emulation platform

This emulation platform was created to support network service developers to locally prototype and test their network services in realistic end-to-end multi-PoP scenarios. It allows the execution of real network functions, packaged as Docker containers, in emulated network topologies running locally on the developer’s machine. The emulation platform also offers OpenStack-like APIs for each emulated PoP so that it can integrate with MANO solutions, like OSM. The core of the emulation platform is based on Containernet.

The emulation platform vim-emu was previously developed as part of the EU H2020 project SONATA and is now developed as part of OSM’s DevOps MDG.

4.7.2.1.1. Cite this work

If you plan to use this emulation platform for academic publications, please cite the following paper:

4.7.2.1.2. Scope

The following figure shows the scope of the emulator solution and its mapping to a simplified ETSI NFV reference architecture in which it replaces the network function virtualisation infrastructure (NFVI) and the virtualised infrastructure manager (VIM). The design of vim-emu is based on a tool called Containernet which extends the well-known Mininet emulation framework and allows us to use standard Docker containers as VNFs within the emulated network. It also allows adding and removing containers from the emulated network at runtime which is not possible in Mininet. This concept allows us to use the emulator like a cloud infrastructure in which we can start and stop compute resources (in the form of Docker containers) at any point in time.

Vim-emu-etsi-mapping.png

4.7.2.1.3. Architecture

The vim-emu system design follows a highly customizable approach that offers plugin interfaces for most of its components, like cloud API endpoints, container resource limitation models, or topology generators.

In contrast to classical Mininet topologies, vim-emu topologies do not describe single network hosts connected to the emulated network. Instead, they define available PoPs which are logical cloud data centers in which compute resources can be started at emulation time. In the most simplified version, the internal network of each PoP is represented by a single SDN switch to which compute resources can be connected. This can be done as the focus is on emulating multi-PoP environments in which a MANO system has full control over the placement of VNFs on different PoPs but limited insights about PoP internals. We extended Mininet’s Python-based topology API with methods to describe and add PoPs. The use of a Python-based API has the benefit that developers can use scripts to define or algorithmically generate topologies.

Besides an API to define emulation topologies, an API to start and stop compute resources within the emulated PoPs is available. Vim-emu uses the concept of flexible cloud API endpoints. A cloud API endpoint is an interface to one or multiple PoPs that provides typical infrastructure-as-a-service (IaaS) semantics to manage compute resources. Such an endpoint can be an OpenStack Nova or Heat-like interface, or a simplified REST interface for the emulator CLI. These endpoints can be easily implemented by writing small, Python-based modules that translate incoming requests (e.g., an OpenStack Nova start compute) to emulator-specific requests (e.g., start Docker container in PoP1).

As illustrated in the following figure, our platform automatically starts OpenStack-like control interfaces for each of the emulated PoPs which allow MANO systems to start, stop and manage VNFs. Specifically, our system provides the core functionalities of OpenStack’s Nova, Heat, Keystone, Glance, and Neutron APIs. Even though not all of these APIs are directly required to manage VNFs, all of them are needed to let the MANO systems believe that each emulated PoP in our platform is a real OpenStack deployment. From the perspective of the MANO systems, this setup looks like a real-world multi-VIM deployment, i.e., the MANO system’s southbound interfaces can connect to the OpenStack-like VIM interfaces of each emulated PoP. A demonstration of this setup was presented at IEEE NetSoft 2017.

Vim-emu-setup.png

4.7.2.1.4. Example: OSM using vim-emu

This section gives an end-to-end usage example that shows how to connect OSM to a vim-emu instance and how to on-board and instantiate an example network service with two VNFs on the emulated infrastructure. All given paths are relative to the vim-emu repository root. The same example is also available for the classic build of OSM: vim-emu classic build walkthrough.

4.7.2.1.4.1. Example service: pingpong

####### Source descriptors

  • Ping VNF (default ubuntu:trusty Docker container): vim-emu/examples/vnfs/ping_vnf/

  • Pong VNF (default ubuntu:trusty Docker container): vim-emu/examples/vnfs/pong_vnf/

  • Network service descriptor (NSD): vim-emu/examples/services/pingpong_ns/

####### Pre-packed VNF and NS packages

  • Ping VNF: vim-emu/examples/vnfs/ping.tar.gz

  • Pong VNF: vim-emu/examples/vnfs/pong.tar.gz

  • NSD: vim-emu/examples/services/pingpong_nsd.tar.gz

4.7.2.1.4.2. Walkthrough

####### Step 1: Install OSM and vim-emu

Install OSM together with the emulator.

$ ./install_osm.sh --vimemu

######## Step 1.1: Start the emulator

Check if the emulator is running:

$ docker ps | grep vim-emu

If not, start it with the following command:

$ docker run --name vim-emu -t -d --rm --privileged --pid='host' --network=netosm -v /var/run/docker.sock:/var/run/docker.sock vim-emu-img python examples/osm_default_daemon_topology_2_pop.py

######## Step 1.2: Configure environment

You need to set the correct environment variables, i.e., you need to get the IP address of the vim-emu container to be able to add it as a VIM to your OSM installation:

$ export VIMEMU_HOSTNAME=$(sudo docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' vim-emu)

####### Step 2: Attach OSM to vim-emu

# connect OSM to emulated VIM
$ osm vim-create --name emu-vim1 --user username --password password --auth_url http://$VIMEMU_HOSTNAME:6001/v2.0 --tenant tenantName --account_type openstack

# list vims
$ osm vim-list
+----------+--------------------------------------+
| vim name | uuid                                 |
+----------+--------------------------------------+
| emu-vim1 | a8175948-efcf-11e7-94ad-00163eba993f |
+----------+--------------------------------------+

####### Step 3: On-board example pingpong service

The example can be found in the vim-emu git repository: https://osm.etsi.org/gitweb/?p=osm/vim-emu.git;a=summary.

# Clone the vim-emu repository containing the pingpong example
$ git clone https://osm.etsi.org/gerrit/osm/vim-emu.git
# VNFs
$ osm vnfd-create vim-emu/examples/vnfs/ping.tar.gz
$ osm vnfd-create vim-emu/examples/vnfs/pong.tar.gz

# NS
$ osm nsd-create vim-emu/examples/services/pingpong_nsd.tar.gz

# You can now check OSM's GUI to see the VNFs and NS in the catalog. Or:
$ osm vnfd-list
+-----------+--------------------------------------+
| vnfd name | id                                   |
+-----------+--------------------------------------+
| ping      | 2c632bc7-15f6-4997-a581-b9032ea4672c |
| pong      | e6fe076d-9d1f-4f05-a641-44b3e09df961 |
+-----------+--------------------------------------+

$ osm nsd-list
+----------+--------------------------------------+
| nsd name | id                                   |
+----------+--------------------------------------+
| pingpong | 776746fe-7c48-4f0c-8509-67da1f8c0678 |
+----------+--------------------------------------+

####### Step 4: Instantiate example pingpong service

$ osm ns-create --nsd_name pingpong --ns_name test --vim_account emu-vim1

####### Step 5: Check service instance

# using OSM client

$ osm ns-list
+------------------+--------------------------------------+--------------------+---------------+-----------------+
| ns instance name | id                                   | operational status | config status | detailed status |
+------------------+--------------------------------------+--------------------+---------------+-----------------+
| test             | 566e6c36-5f42-4f3d-89c7-dadcca01ae0d | running            | configured    | done            |
+------------------+--------------------------------------+--------------------+---------------+-----------------+

####### Step 6: Interact with deployed VNFs

# connect to ping VNF container (in another terminal window):
$ sudo docker exec -it mn.dc1_test-1-ubuntu-1 /bin/bash
# show network config
#root@dc1_test-nsi:/# ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:ac:11:00:03
          inet addr:172.17.0.3  Bcast:0.0.0.0  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:648 (648.0 B)  TX bytes:0 (0.0 B)

ping0-0   Link encap:Ethernet  HWaddr 4a:57:93:a0:d4:9d
          inet addr:192.168.100.3  Bcast:192.168.100.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

# ping the pong VNF over the attached management network
#root@dc1_test-1-ubuntu-1:/# ping 192.168.100.4
PING 192.168.100.4 (192.168.100.4) 56(84) bytes of data.
64 bytes from 192.168.100.4: icmp_seq=1 ttl=64 time=0.596 ms
64 bytes from 192.168.100.4: icmp_seq=2 ttl=64 time=0.070 ms
--- 192.168.100.4 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.048/0.059/0.070/0.011 ms

####### Step 7: Shut down

# delete service instance
$ osm ns-delete test

####### (optional) Step 8: Check vim-emu and its status

# connect to vim-emu Docker container to see its logs ( do in another terminal window)
$ sudo docker logs -f vim-emu

# check if the emulator is running in the container
$ sudo docker exec vim-emu vim-emu datacenter list
+---------+-----------------+----------+----------------+--------------------+
| Label   | Internal Name   | Switch   |   # Containers |   # Metadata Items |
+=========+=================+==========+================+====================+
| dc2     | dc2             | dc2.s1   |              0 |                  0 |
+---------+-----------------+----------+----------------+--------------------+
| dc1     | dc1             | dc1.s1   |              0 |                  0 |
+---------+-----------------+----------+----------------+--------------------+
# check running service
$ sudo docker exec vim-emu vim-emu compute list
+--------------+----------------------------+---------------+------------------+-------------------------+
| Datacenter   | Container                  | Image         | Interface list   | Datacenter interfaces   |
+==============+============================+===============+==================+=========================+
| dc1          | dc1_test.ping.1.ubuntu     | ubuntu:trusty | ping0-0          | dc1.s1-eth2             |
+--------------+----------------------------+---------------+------------------+-------------------------+
| dc1          | dc1_test.pong.2.ubuntu     | ubuntu:trusty | pong0-0          | dc1.s1-eth3             |
+--------------+----------------------------+---------------+------------------+-------------------------+
4.7.2.1.5. Build & Installation

There are multiple ways to install and use the emulation platform. The easiest way is the automated installation using the OSM installer. The bare-metal installation requires a freshly installed Ubuntu 16.04 LTS and is done by an ansible playbook. Another option is to use a nested Docker environment to run the emulator inside a Docker container.

4.7.2.1.5.1. Automated installation (with OSM)

The following command will install OSM as well as the emulator (as a Docker container) on a local machine. It is recommended to use a machine with Ubuntu 16.04.

$ ./install_osm.sh --vimemu
4.7.2.1.5.2. Manual installation (vim-emu only)

####### Option 1: Bare-metal installation

  • Requires: Ubuntu 16.04 LTS

$ sudo apt-get install ansible git aptitude

######## Step 1: Containernet installation

$ cd
$ git clone https://github.com/containernet/containernet.git
$ cd ~/containernet/ansible
$ sudo ansible-playbook -i "localhost," -c local install.yml

######## Step 2: vim-emu installation

$ cd
$ git clone https://osm.etsi.org/gerrit/osm/vim-emu.git
$ cd ~/vim-emu/ansible
$ sudo ansible-playbook -i "localhost," -c local install.yml

####### Option 2: Nested Docker Deployment

This option requires a Docker installation on the host machine on which the emulator should be deployed.

$ git clone https://osm.etsi.org/gerrit/osm/vim-emu.git
$ cd ~/vim-emu
# build the container:
$ docker build -t vim-emu-img .
# run the (interactive) container:
$ docker run --name vim-emu -it --rm --privileged --pid='host' --network=netosm -v /var/run/docker.sock:/var/run/docker.sock vim-emu-img /bin/bash

# alternative: run container with emulator in service mode
$ docker run --name vim-emu -t -d --rm --privileged --pid='host' --network=netosm -v /var/run/docker.sock:/var/run/docker.sock vim-emu-img python examples/osm_default_daemon_topology_2_pop.py
4.7.2.1.7. Contact

If you have questions, please use the OSM TECH mailing list: OSM_TECH@LIST.ETSI.ORG

Direct contact: Manuel Peuster (Paderborn University) manuel@peuster.de

4.7.2.2. Known limitations of VIM Emulator

  • VIM Emulator requires special VM images, suitable for running in a VIM Emulator environment.

  • Day-1 and Day-2 procedures of OSM are a work in progress in VIM Emulator, and hence are not available as of the date of this publication.

4.7.3. DevStack

DevStack is a series of extensible scripts used to quickly bring up a complete OpenStack environment based on the latest versions of everything from git master. It is used interactively as a development environment and as the basis for much of the OpenStack project’s functional testing.

The OpenStack Community provides fairly detailed documentation on DevStack and its different configurations.

Due to its simplicity, a configuration particularly interesting for running a sandbox with limited resources (e.g. in a laptop) is the All-In-One Single Machine installation.

4.7.3.1. Known limitations of DevStack

TODO: Under elaboration.

4.7.4. MicroStack

MicroStack is a single-machine OpenStack cloud sandbox, developed by Canonical, and deployable with a single snap package. Currently provided OpenStack services are: Nova, Keystone, Glance, Horizon, and Neutron, which should suffice for any basic OSM testing.

Detailed documentation is available at https://snapcraft.io/microstack and https://opendev.org/x/microstack.

4.8. Advanced setups for high I/O performance: EPA and SDN Assist

4.8.1. Overview

OSM has supported EPA (Enhanced Platform Awareness) since Release ZERO (May 2016). EPA features, such as the use of hugepages memory, CPU pinning, NUMA pinning, and passthrough and SR-IOV interfaces, have been usable in OSM's VNF descriptors since then.

If your VIM supports EPA, then you don't need to do anything extra to use it from OSM. VIM connectors in OSM take advantage of EPA capabilities if the VIM supports them. All you need to do is build your descriptors and deploy.

However, not all VIMs necessarily support EPA. To overcome this limitation, OSM has added the following two features:

  • Since OSM Release ONE (October 2016), OSM includes OpenVIM as a reference VIM, with full support of EPA. You can follow the instructions in this link to install and use OpenVIM.

  • Since OSM Release TWO (April 2017), OSM includes a new capability in the Resource Orchestrator called SDN Assist. Through this capability, OSM can manage the dataplane underlay connectivity through an external SDN controller. The only requirement for the VIM is that it must be able to use SR-IOV and/or passthrough interfaces, and expose the assigned interfaces so that the RO can use them to create the underlay connectivity. By default, the SDN Assist capability is disabled when a datacenter or VIM is added to OSM, but you can instruct OSM to enable it per VIM target.

4.8.2. SDN Assist

4.8.2.1. Why SDN Assist

SDN Assist works as follows to overcome the limitations of the VIM with respect to the underlay:

  1. OSM deploys the VMs of a NS in the requested VIM target with Passthrough and/or SRIOV interfaces.

  2. Then it retrieves from the VIM the information about the compute node where the VM was deployed and the physical interfaces assigned to the VM (identified by their PCI addresses).

  3. Then, OSM maps those interfaces to the appropriate ports in the switch making use of the mapping that you should have introduced in the system.

  4. Finally OSM creates the dataplane networks by instructing the SDN controller and connecting the appropriate ports to the same network.

The module in charge of this workflow is OSM's RO (Resource Orchestrator), and it works transparently to the user. It uses an internal library to manage the underlay connectivity via SDN. The current library includes plugins for Floodlight, ONOS and OpenDaylight.

4.8.2.2. General requirements

The general requirements are:

  • A dataplane switch (until Release SIX, with Openflow capabilities) that will connect the physical interfaces of the VIM compute nodes.

  • An external SDN controller controlling the previous dataplane switch.

  • The mapping between the switch ports (identified by name) and the compute node interfaces (identified by host-id and PCI address)

  • Some VIMs, such as OpenStack, require admin credentials in order to be able to get the physical placement of the SR-IOV/passthrough VM interfaces

In addition to the general requirements, every VIM will have to be properly configured.

4.8.2.3. VIM configuration for SDN Assist

You need to do extra configuration in your VIM to run VNFs that use SR-IOV or passthrough interfaces.

You can find a thorough configuration guide for OpenStack VIMs with EPA later in this same chapter. For other types of VIMs, this guide can be also taken as model to understand the EPA properties that might be expected.

4.8.3. Using SDN Assist

4.8.3.1. Adding a SDN controller to OSM

This is done through CLI.

Add the SDN controller to OSM, providing its credentials:

osm sdnc-create --name sdn-name --type arista \
--url https://10.95.134.225:443 --user osm --password osm4u \
--config '{mapping_not_needed: True, switch_id: ID}'
# The config section is optional.

# list sdn controllers with:
osm sdnc-list
# get details with:
osm sdnc-show sdn-name
# delete with
osm sdnc-delete sdn-name

Note that at SDN creation, connectivity and credentials are not checked.

Available SDN controller plugins (--type option) are onos_vpls, onosof, floodlightof and dynpac; plugins for odlof, arista and ietfl2vpn are coming.

Depending on the plugin, the SDN controller needs to be fed with the mapping between the VIM compute node interfaces and the switch ports. In case this is not needed, use --config '{mapping_not_needed: True}'.

4.8.3.2. Associate the SDN controller to a VIM

To associate the SDN controller with a concrete VIM, the VIM must be updated. In this step, the compute-node/switch-port mapping is provided:

osm vim-update vim-name --sdn_controller sdn-name  --sdn_port_mapping port-mapping-file.yaml

This is an example of the port-mapping-file.yaml content:

- compute_node: nfv54
  ports:
  - pci: "0000:5d:00.1"
    switch_id: Leaf1
    switch_port: "Ethernet13/1"
  - pci: "0000:5d:0a.0"
    switch_id: Leaf1
    switch_port: "Ethernet13/1"
  - pci: "0000:5d:0a.1"
    switch_id: Leaf1
    switch_port: "Ethernet13/1"
  - pci: "0000:5d:00.0"
    switch_id: Leaf2
    switch_port: "Ethernet13/2"
  - pci: "0000:5d:02.0"
    switch_id: Leaf2
    switch_port: "Ethernet13/2"
  - pci: "0000:5d:02.1"
    switch_id: Leaf2
    switch_port: "Ethernet13/2"
- compute_node: nfv55
  # ...

NOTE: several PCI addresses can be connected to the same switch port. This is because a physical interface has several SR-IOV virtual interfaces, each one with different PCI address.

NOTE: The optional switch_id provided at --config is taken as a default if missing in the port-mapping file. This is useful if there is only one switch.

To overwrite the port mapping the same instruction can be used after modifying the port-mapping file.

You can check the associated SDN controller by:

osm vim-show vim-name
+-----------------+-----------------------------------------------------------------+
| key             | attribute                                                       |
+-----------------+-----------------------------------------------------------------+
| _id             | "bf900941-a6d3-4eba-9017-0cec657b9490"                          |
| name            | "sdn-name"                                                      |
| vim_type        | "openstack"                                                     |
| description     | "some description"                                              |
| vim_url         | "https://192.168.1.1:5000/v3"                                   |
| vim_user        | "osm"                                                           |
| vim_password    | "********"                                                      |
| vim_tenant_name | "osm"                                                           |
| config          | {                                                               |
|                 |   "insecure": true,                                             |
|                 |   "sdn-controller": "82b067c8-cb9a-481d-b311-59f68b75acae",     |
|                 |   "sdn-port-mapping": [                                         |
|                 |     {                                                           |
|                 |       "compute_node": "compute-66e153a8-c45",                   |
|                 |       "ports": [                                                |
|                 |         {                                                       |
|                 |           "pci": "002f-0000:83:00.0-000",                       |
|                 |           "switch_id": "Leaf1",                                 |
|                 |           "switch_port": "Ethernet52/1"                         |

You can disassociate the SDN controller from a VIM by:

osm vim-update vim-name --sdn_controller '' # attach an empty string

Note: detaching a SDN controller from the VIM is mandatory before deleting the SDN controller

4.8.4. Configure Openstack for full EPA support in OSM

Besides the instructions above for any OpenStack, you should do extra configuration to prepare OpenStack for running VNFs which use SR-IOV interfaces.

DO NOT consider this as the final configuration guide for OpenStack with EPA. Please check the OpenStack docs and the OpenStack downstream distros' docs to get a further understanding and the up-to-date required configuration.

Note: The configuration shown below works with OpenStack Newton, and it might not work with later versions.

  • The compute nodes need to have a whitelist for the interfaces with SR-IOV and passthrough enabled, and those interfaces need to be associated with a physical network label, e.g. physnet. This can be done in the file /etc/nova/nova.conf:

pci_passthrough_whitelist=[{"devname": "p3p1", "physical_network": "physnet"}, {"devname": "p3p2", "physical_network": "physnet"}]
  • The neutron controller needs to be updated to add sriovnicswitch to the mechanism_drivers. This can be done in the file /etc/neutron/plugins/ml2/ml2_conf.ini

mechanism_drivers=openvswitch,sriovnicswitch
  • The neutron controller needs to be updated to set the vlans to be used for the defined physical network label. This can be done in the file /etc/neutron/plugins/ml2/ml2_conf.ini. For instance, to set the vlans from 2000 to 3000:

network_vlan_ranges =physnet:2000:3000
  • The neutron controller needs to be updated to allow the supported NIC vendor's product ID. This can be done in the file /etc/neutron/plugins/ml2/ml2_conf.ini:

[ml2_sriov]
supported_pci_vendor_devs = 8086:10ed
  • The nova controller needs to be updated to allow proper scheduling of SR-IOV and Passthrough devices, by adding the PciPassthroughFilter filter to the list of filters. This can be done in the file /etc/nova/nova.conf:

scheduler_available_filters=nova.scheduler.filters.all_filters
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter, PciPassthroughFilter

The previous configuration has taken as a reference the documents in the links below. Please check them in case you need more details: