OSM E2E tests

OSM packages and images for E2E tests

All VNF and NS packages, as well as the VM images required for the tests, can be found here: http://osm-download.etsi.org/ftp/e2e-tests/

Test 1. Sanity check with simple NS

Objectives:

  • Sanity check of correct E2E behaviour.
  • Validate VM image management.
  • Test access to the console from the OSM UI.

Steps:

  • Upload the cirros034.qcow2 image to the VIM, if not already present (this can be verified at the VIM, as sketched below).
  • Onboard the cirros_vnf VNF package and the cirros_2vnf_ns NS package.
  • Instantiate the NS via the UI, specifying the right mgmt network to be used.
  • Check that the UI reports a successful deployment.
  • Access the console of each VM from the OSM UI (user: “cirros”, pwd: “cubswin:)”).
  • Check that the VMs are up and running and connected via the common link.
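
A quick way to perform the upload and check, assuming an OpenStack VIM (an assumption; other VIMs have equivalent tooling):

  # upload the image under the name the descriptors expect
  openstack image create --disk-format qcow2 --container-format bare \
    --file cirros034.qcow2 cirros034
  # confirm the image is registered at the VIM
  openstack image list | grep cirros034
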
Images:

  • cirros034.qcow2

Descriptors:

  • cirros_vnf
  • cirros_2vnf_ns

Test 2a. Failed deployment of scenario when the checksum is invalid

Objective:

  • Testing that a wrong checksum prevents a successful deployment

Steps:

  • Modify the checksum in the VNF descriptor (using the UI VNF catalog), setting a wrong but format-valid checksum (e.g. “aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa”). The same images as in Test 1 are used; the sketch below shows how to obtain the image's real checksum for comparison.
  • Deploy the same NS as in Test 1.
  • Check that the system refuses to deploy the NS due to a checksum error (“VIM Exception vimconnException Image not found at VIM with filter…”)
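
The real checksum of the image can be computed locally with md5sum; anything else (such as the “aaaa…” string above) should make the deployment fail:

  # MD5 of the image file; the valid value is the one used later in Test 2b
  md5sum cirros034.qcow2
  # expected: ee1eca47dc88f4879d8a229cc70a07c6  cirros034.qcow2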

Images:

  • cirros034.qcow2

Descriptors:

  • cirros_vnf
  • cirros_2vnf_ns

Test 2b. Successful deployment of scenario when the descriptor has a valid checksum

Objective:

  • Testing that a valid checksum in the VNF descriptor leads to a successful deployment

Steps:

  • Modify the checksum in the VNF descriptor (using the UI VNF catalog) to add the valid checksum for the image (“ee1eca47dc88f4879d8a229cc70a07c6” for the cirros034 image).
  • Deploy the same NS as in Test 1.
  • Check that the NS is successfully instantiated.
  • Access the console via OSM UI (user: “cirros”, pwd: “cubswin:)”)
  • Check that the VMs are up and running and connected via the common link.

Images:

  • cirros034

Descriptors:

  • cirros_vnf
  • cirros_2vnf_ns

Test 3a. Instantiation time of large NS based on Cirros images

Objectives:

  • Check that instantiation time is bounded to avoid spurious timeouts.
  • Measure delay in the deployment, and evaluate potential issues in the connector.

Steps:

  • Onboard a “large NS” consisting of:
    • 2 types of VNFs based on a Cirros VM. VNF#1 should have 5 interfaces (+management), while VNF#2 requires only 1 interface (+management)
    • Star topology, with 1 VNF in the middle and 5 instances of the other VNF connected to it (+ the corresponding management interfaces)
  • Launch NS instantiation, specifying the right mgmt network to be used
  • Check that the UI reports a successful deployment, and measure the instantiation delay (see the sketch after this list)
  • Connect to each VNF via SSH (user: “cirros”, pwd: “cubswin:)”)
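
One way to measure the instantiation delay, assuming an OpenStack VIM and VM names containing “cirros” (an assumption; adjust the filter), is to poll the VIM until all 6 VMs are ACTIVE:

  START=$(date +%s)
  # start the NS instantiation from the UI, then wait for all 6 VMs to become ACTIVE
  until [ "$(openstack server list --status ACTIVE -f value -c Name | grep -c cirros)" -ge 6 ]; do
    sleep 10
  done
  echo "Instantiation took $(( $(date +%s) - START )) seconds"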

Images:

  • cirros034

Descriptors:

  • cirros_2ifaces_vnf
  • cirros_6ifaces_vnf
  • test3a_ns

Test 3b. Instantiation time of large NS based on Cirros images using IP profiles

Objectives:

  • Check that instantiation time is bounded to avoid spurious timeouts.
  • Measure delay in the deployment, and evaluate potential issues in the connector.
  • Check that IP profiles work properly in a large NS

Steps:

  • Onboard a “large NS” consisting of:
    • 2 types of VNFs based on a Cirros VM. VNF#1 should have 5 interfaces (+management), while VNF#2 requires only 1 interface (+management)
    • Star topology, with 1 VNF in the middle and 5 instances of the other VNF connected to it (+ the corresponding management interfaces)
    • Networks will have an IP profile so that DHCP is enabled
  • Launch NS instantiation, specifying the right mgmt network to be used
  • Check that the UI reports a successful deployment
  • Connect to each VNF via SSH (user: “cirros”, pwd: “cubswin:)”) and configure the data interfaces to use DHCP, e.g. by editing /etc/network/interfaces and running “ifup ethX” (see the sketch after this list)
  • Check that connectivity is appropriate via ping from the different VMs
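
On CirrOS, the DHCP configuration of the previous step can be done as follows (eth1 shown as an example; repeat for each data interface):

  # append a DHCP stanza for the data interface and bring it up
  sudo sh -c 'printf "auto eth1\niface eth1 inet dhcp\n" >> /etc/network/interfaces'
  sudo ifup eth1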

Images:

  • cirros034

Descriptors:

  • cirros_2ifaces_vnf
  • cirros_6ifaces_vnf
  • test3b_ns

Test 3c. Instantiation time of large NS based on Ubuntu images using IP profiles

Objectives:

  • Check that instantiation time is bounded to avoid spurious timeouts, even with large images (Ubuntu vs CirrOS).
  • Measure delay in the deployment, and evaluate potential issues in the connector.
  • Check that IP profiles work properly in a large NS

Steps:

  • Onboard a “large NS” consisting of:
    • 2 types of VNFs based on an Ubuntu VM. VNF#1 should have 5 interfaces (+management), while VNF#2 requires only 1 interface (+management)
    • Star topology, with 1 VNF in the middle and 5 instances of the other VNF connected to it (+ the corresponding management interfaces)
    • Networks will have an IP profile so that DHCP is enabled
  • Launch NS instantiation, specifying the right mgmt network to be used
  • Check that the UI reports a successful deployment
  • Connect to each VNF via SSH (user: “osm”, pwd: “osm4u”).
  • Check that connectivity is appropriate via ping from the different VMs (e.g. with the sweep sketched below)
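
A simple sweep for the ping check, assuming the data subnet is 192.168.1.0/24 with peers at .1 to .6 (hypothetical addresses; adjust to the IP profiles used):

  for i in 1 2 3 4 5 6; do
    ping -c 3 -W 2 192.168.1.$i && echo "192.168.1.$i OK" || echo "192.168.1.$i FAILED"
  done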

Images:

  • US1604

Descriptors:

  • ubuntu_2ifaces_vnf
  • ubuntu_6ifaces_vnf
  • test3c_ns

Test 4a. Day 0 configuration: SSH key injection to the default user

Objective:

  • Testing SSH key injection to the default user

Steps:

  • Onboard a variant of the NS of Test #1, but with Ubuntu VNFs.
  • Add an SSH key through the UI (Launchpad > SSH Keys); the key to be added is the public half of the pair (see the sketch after this list).
  • Instantiate the NS via UI, requesting the injection of a given SSH key for the default user. Specify also the right mgmt network to be used.
  • Check that the UI reports a successful deployment
  • Check that the VMs are accessible via SSH, using the private SSH key, from the management network.
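
A sketch of the key handling, assuming the image's default user is “ubuntu” (an assumption; adjust to the image used):

  # generate a throw-away key pair; paste the content of test4a_key.pub into Launchpad > SSH Keys
  ssh-keygen -t rsa -b 2048 -f test4a_key -N ""
  # after instantiation, log in with the private half (no password prompt expected)
  ssh -i test4a_key ubuntu@<mgmt-ip>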

Images:

  • ubuntu1604

Descriptors:

  • ubuntu_1iface_vnf
  • test4a_ns

Test 4b. Day 0 configuration: user addition

Objective:

  • Testing creation of new user and SSH key injection to that user

Steps:

  • Onboard the variant of the NS of Test #4a (test4b_ns), where the NS includes a user “osm” and an SSH public key to be injected into every VNF.
  • Launch NS instantiation via UI, specifying the right mgmt network to be used.
  • Check that the UI reports a successful deployment.
  • Check that the VMs are accessible from the management network via the new user “osm” using its private SSH key (the private key is stored in the folder "test4b_ns/keys" inside the NS package; see the sketch below).
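
Access with the packaged key might look as follows (the exact key file name is not specified here, so <key-file> is a placeholder):

  chmod 600 test4b_ns/keys/<key-file>   # SSH refuses private keys with open permissions
  ssh -i test4b_ns/keys/<key-file> osm@<mgmt-ip>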

Images:

  • ubuntu1604

Descriptors:

  • ubuntu_1iface_vnf
  • test4b_ns

Test 4c. Day 0 configuration: custom user script with cloud-init

Objective:

  • Testing injection of cloud-init custom user script

Steps:

  • Onboard a variant of the NS of Test #4a (test4c_ns), where the VNF includes a cloud-config custom script that creates a file in the VM and injects an SSH public key to the default user (an illustrative script is sketched after this list).
  • Launch NS instantiation via UI, specifying the right mgmt network to be used
  • Check that the UI reports a successful deployment.
  • Access the VM via SSH and check that the file has been successfully created. The private key is "test4.pem", stored in the folder "ubuntu_1iface_cloudinit_newfile_vnf/keys" inside the VNF package.
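
For illustration, a minimal cloud-config of the kind this VNF bundles could look as follows (file path and content are hypothetical; the actual script ships inside the VNF package):

  #cloud-config
  write_files:
    - path: /home/ubuntu/test4c.txt
      content: |
        file created by cloud-init (Test 4c)
  ssh_authorized_keys:
    - ssh-rsa AAAA... # public half of test4.pem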

Images:

  • ubuntu1604

Descriptors:

  • ubuntu_1iface_cloudinit_newfile_vnf
  • test4c_ns

Test 5. Port security disabled

Objective:

  • Testing the ability to disable port security on demand

Steps:

  • Onboard a variant of the NS of Test #4 (test5_ns), but with a VNF whose single interface has port security disabled.
  • Launch NS instantiation via UI, specifying the right mgmt network to be used
  • Check that the UI reports a successful deployment.
  • Connect to each VNF via SSH (user: “osm”, pwd: “osm4u”).
  • Configure both VNFs with an additional IP address of the same subnet, e.g.: 192.168.50.X/24
    • Do not remove the mgmt IP address.
    • Add an additional IP address to the single interfaces using the command "ip addr add 192.168.50.X/24 dev eth0" and ping from one VNF to the other one.
  • If port security and security groups have been properly disabled, the ping between both VNFs using the added IP addresses should work. Port security can also be inspected at the VIM, as sketched below.
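
The VIM-side inspection, assuming OpenStack:

  # list the ports of the deployed VM, then inspect one of them
  openstack port list --server <server-id>
  openstack port show <port-id> -c port_security_enabled
  # expected for this test: port_security_enabled = False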

Images:

  • US1604

Descriptors:

  • ubuntu_1iface_noportsecurity_vnf
  • test5_ns

Test 6a. Assignment of public IP addresses to management interfaces of single-interface VNFs

Objective:

  • Testing the assignment of IP addresses from a pool to VNF management interfaces

Prerequisites:

  • Configure the VIM to allow the dynamic assignment of public addresses from a pool
  • Configure a VIM network (e.g. “public”) to use the appropriate pool, to allow external access via “public” IP addresses.
  • Configure the datacenter in the RO to assign public IP addresses to VNF management interfaces (use_floating_ip: true)

Steps:

  • Onboard a NS consisting of 2 Ubuntu VNFs interconnected by a single network (mgmt).
  • Instantiate the NS via UI, specifying that the NS network “mgmt” must be mapped to the VIM network name “public”, so that a “public” IP address will be assigned from the pool.
  • Check that the UI reports a successful deployment.
  • Connect to each VNF via SSH (user: “osm”, pwd: “osm4u”) using the public IP address (see the sketch below for locating it at the VIM).
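
Assuming an OpenStack VIM, the assigned public address can be located and tried as follows:

  # show which addresses were taken from the pool, then log in through one of them
  openstack floating ip list
  ssh osm@<public-ip>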

Images:

  • US1604

Descriptors:

  • ubuntu_1iface_userosm_vnf
  • test6a_ns

Test 6b. Assignment of public IP addresses to management interfaces of multi-interface VNFs

Objective:

  • Testing the assignment of IP addresses from a pool to VNF management interfaces in the case of multi-interface VNFs. The intention is to check that a single default route is injected.

Prerequisites:

  • Configure the VIM to allow the dynamic assignment of public addresses from a pool
  • Configure a VIM network (e.g. “public”) to use the appropriate pool, to allow external access via “public” IP addresses.
  • Configure the datacenter in the RO to assign public IP addresses to VNF management interfaces (use_floating_ip: true)

Steps:

  • Onboard a NS consisting of 2 Ubuntu VNFs interconnected by two networks (management and data).
  • Instantiate the NS via UI, specifying that the NS network “mgmt” must be mapped to the VIM network name “public”, so that a “public” IP address will be assigned from the pool.
  • Check that the UI reports a successful deployment.
  • Connect to each VNF via SSH (user: “osm”, pwd: “osm4u”) using the public IP address, and check that a single default route was injected (see the sketch below).
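
The single-default-route check can be scripted from the jump host (a sketch; <public-ip> is the address assigned from the pool):

  # exactly one "default via ..." line is expected, pointing at the mgmt interface
  ssh osm@<public-ip> 'ip route | grep ^default'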

Images:

  • US1604

Descriptors:

  • ubuntu_2ifaces_vnf
  • test6b_ns

Test 6c. Assignment of public IP addresses to management interfaces of multi-interface VNFs even when IP profiles are used

Objective:

  • Testing the assignment of IP addresses from a pool to VNF management interfaces in the case of multi-interface VNFs even when IP profiles are used. The intention is to check again that a single default route is injected and that IP profiles do not affect that single route.

Prerequisites:

  • Configure the VIM to allow the dynamic assignment of public addresses from a pool
  • Configure a VIM network (e.g. “public”) to use the appropriate pool, to allow external access via “public” IP addresses.
  • Configure the datacenter in the RO to assign public IP addresses to VNF management interfaces (use_floating_ip: true)

Steps:

  • Onboard the NS used in Test 3c: a star topology with 1 VNF in the middle and 5 instances of the other VNF connected to it (+ the corresponding management interfaces), where all inter-VNF networks have an IP profile with DHCP enabled but no default gateway.
  • Instantiate the NS via UI, specifying that the NS network “mgmt” must be mapped to the VIM network name “public”, so that a “public” IP address will be assigned from the pool. This is the only change with respect to Test 3c.
  • Check that the UI reports a successful deployment.
  • Connect to each VNF via SSH (user: “osm”, pwd: “osm4u”) using the public IP address.

Images:

  • US1604

Descriptors:

  • ubuntu_2ifaces_vnf
  • ubuntu_6ifaces_vnf
  • test3c_ns

Test 7a. EPA tests - phase 1

Objectives:

  • Testing that the VIM can properly map vCPUs to pairs of physical HW threads
  • Testing that the VIM can assign hugepages memory
  • Testing that the VIM can assign SRIOV interfaces
  • Testing that the order of interfaces is correct

Steps:

  • Onboard pktgen_4psriov VNF, which requires:
    • CPU pinning of paired threads
    • Hugepages
    • SRIOV interfaces
  • Onboard a NS with 5 instances of pktgen_4psriov, in a star topology, with one of the VNFs in the middle (Emitter) and 4 VNFs attached (Receiver1-4).
  • Check that all VNFs are accessible by SSH via mgmt interface (user: "pktgen", pwd: "pktgen")
  • Check (at the VIM or the host) that the CPU pinning is correct.
  • Check (at the VIM or the host) that hugepages have been assigned to the guest (host-side checks are sketched after this list).
  • Check with pktgen that the interfaces are correctly attached to SRIOV interfaces and in the right order:
    • Emitter port 0 -> Receiver1 port 0
    • Emitter port 1 -> Receiver2 port 1
    • Emitter port 2 -> Receiver3 port 2
    • Emitter port 3 -> Receiver4 port 3
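
On a libvirt/KVM compute node (an assumption; <domain> is the instance name reported by “virsh list”), the host-side checks can be done as follows:

  virsh vcpupin <domain>                             # vCPU -> pCPU pinning map
  virsh dumpxml <domain> | grep -A2 memoryBacking    # expect a <hugepages/> element
  virsh dumpxml <domain> | grep -c "type='hostdev'"  # SR-IOV ports appear as hostdev interfaces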

Images:

  • pktgen

Descriptors:

  • pktgen_4psriov_vnfd
  • test7a_ns

Test 7b. EPA tests - phase 2

Objectives:

  • Testing that the VIM can properly map vCPUs to physical cores
  • Testing that the VIM can assign passthrough interfaces
  • Testing that the order of interfaces is correct

Steps:

  • Equivalent to the previous test, but using a variant of the NS which requires:
    • Full cores (instead of HW threads)
    • Passthrough interfaces (instead of SR-IOV)

Images:

  • pktgen

Descriptors:

  • pktgen_4ppassthrough_vnfd
  • test7b_ns