OSM E2E tests

Note: VNF and NS packages, as well as VM images required for the tests can be found here: http://osm-download.etsi.org/ftp/e2e-tests/

Test 1. Sanity check with simple NS

Objectives:

  • Sanity check of correct E2E behaviour.
  • Validate VM image management.
  • Test access to the console from the OSM UI.

Steps:

Images:

  • cirros034

Descriptors:

  • cirros_vnf
  • cirros_2vnf_ns

Test 2a. Failed deployment of scenario when the checksum is invalid

Objective:

  • Testing that a wrong checksum prevents a successful deployment

Steps:

  • Modify the checksum in the VNF descriptor (using the UI VNF catalog) to add a wrong but format-valid checksum (e.g.: “aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa”). Same images are used.
  • Deploy the same NS as in test1
  • Check that the system refuses to deploy the NS due to a checksum error (“VIM Exception vimconnException Image not found at VIM with filter…”)

Images:

  • cirros034

Descriptors:

  • cirros_vnf
  • cirros_2vnf_ns

Test 2b. Successful deployment of scenario when the descriptor has a checksum

Objective:

  • Testing that a valid checksum in the VNF descriptor leads to a successful deployment

Steps:

  • Modify the checksum in the VNF descriptor (using the UI VNF catalog) to add the valid checksum for the image (“ee1eca47dc88f4879d8a229cc70a07c6” for the cirros034 image); the checksum can be recomputed locally as shown after this list.
  • Deploy the same NS as in test1
  • Check that the NS is successfully instantiated.
  • Access the console via the OSM UI (user: “cirros”, pwd: “cubswin:)”)
  • Check that the VMs are up and running and connected via the common link.
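
The valid checksum of an image can be recomputed locally before editing the descriptor. A minimal sketch, assuming the image has been downloaded from the FTP server above and saved as cirros034.img (the actual file name may differ):

    $ md5sum cirros034.img
    ee1eca47dc88f4879d8a229cc70a07c6  cirros034.img

The 32-character MD5 digest is the value expected in the checksum field of the VNF descriptor.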

Images:

  • cirros034

Descriptors:

  • cirros_vnf
  • cirros_2vnf_ns

Test 3a. Instantiation time of large NS based on Cirros images

Objective:

  • Check that instantiation time is bounded to avoid spurious timeouts.
  • Measure delay in the deployment, and evaluate potential issues in the connector.

Steps:

  • Onboard a “large NS” consisting of:
    • 2 types of VNFs based on Cirros VM. VNF#1 should have 5 interfaces (+management), while VNF#2 would only require 1 interface (+management)
    • Star topology, with 1 VNF in the middle and 5 instances of the other VNF connected to that one (+ the corresponding management interfaces)
  • Launch NS instantiation, specifying the right mgmt network to be used
  • Check that the UI reports a successful deployment
  • Connect to each VNF via SSH (user: “cirros”, pwd: “cubswin:)”)

Images:

  • cirros034

Descriptors:

  • cirros_2ifaces_vnf
  • cirros_6ifaces_vnf
  • test3a_ns

Test 3b. Instantiation time of large NS based on Cirros images using IP profiles

Objective:

  • Check that instantiation time is bounded to avoid spurious timeouts.
  • Measure delay in the deployment, and evaluate potential issues in the connector.
  • Check that IP profiles work properly in a large NS

Steps:

  • Onboard a “large NS” consisting of:
    • 2 types of VNFs based on Cirros VM. VNF#1 should have 5 interfaces (+management), while VNF#2 would only require 1 interface (+management)
    • Star topology, with 1 VNF in the middle and 5 instances of the other VNF connected to that one (+ the corresponding management interfaces)
    • Networks will have an IP profile so that DHCP is enabled
  • Launch NS instantiation, specifying the right mgmt network to be used
  • Check that the UI reports a successful deployment
  • Connect to each VNF via SSH (user: “cirros”, pwd: “cubswin:)”) and configure the interfaces to use DHCP, e.g. by changing /etc/network/interfaces and running “ifup ethX” (see the example after this list)
  • Check that connectivity is appropriate via ping from the different VMs
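
On CirrOS, DHCP can be enabled on an additional interface as sketched below (eth1 is only an example; the interface names to configure depend on each VM):

    $ sudo sh -c 'printf "auto eth1\niface eth1 inet dhcp\n" >> /etc/network/interfaces'
    $ sudo ifup eth1
    $ ifconfig eth1                  # the interface should now have an address obtained via DHCP
    $ ping -c 3 <IP_OF_PEER_VNF>     # connectivity check towards another VM of the NS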

Images:

  • cirros034

Descriptors:

  • cirros_2ifaces_vnf
  • cirros_6ifaces_vnf
  • test3b_ns

Test 3c. Instantiation time of large NS based on Ubuntu images using IP profiles

Objective:

  • Check that instantiation time is bounded to avoid spurious timeouts, even with large images (Ubuntu vs CirrOS).
  • Measure delay in the deployment, and evaluate potential issues in the connector.
  • Check that IP profiles work properly in a large NS

Steps:

  • Onboard a “large NS” consisting of:
    • 2 types of VNFs based on Ubuntu VM. VNF#1 should have 5 interfaces (+management), while VNF#2 would only require 1 interface (+management)
    • Star topology, with 1 VNF in the middle and 5 instances of the other VNF connected to that one (+ the corresponding management interfaces)
    • Networks will have an IP profile so that DHCP is enabled
  • Launch NS instantiation, specifying the right mgmt network to be used
  • Check that the UI reports a successful deployment
  • Connect to each VNF via SSH (user: “osm”, pwd: “osm4u”).
  • Check that connectivity is appropriate via ping from the different VMs

Images:

  • US1604

Descriptors:

  • ubuntu_2ifaces_vnf
  • ubuntu_6ifaces_vnf
  • test3c_ns

Test 4a. Day 0 configuration: SSH key injection to the default user

Objective:

  • Testing SSH key injection to the default user

Steps:

  • Onboard a variant of the NS of Test #1, but with Ubuntu VNFs.
  • Add an SSH key through the UI (Launchpad > SSH Keys), where the key to be added is the public key.
  • Instantiate the NS via UI, requesting the injection of a given SSH key for the default user. Specify also the right mgmt network to be used.
  • Check that the UI reports a successful deployment
  • Check that the VMs are accessible via SSH, using the private SSH key, from the management network.
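
A possible way to run this check from a Linux host is sketched below; the key file name is illustrative and the default user of the ubuntu1604 image is assumed to be “ubuntu”:

    $ ssh-keygen -t rsa -b 2048 -f osm_test_key -N ""     # osm_test_key.pub is the public key to add in Launchpad > SSH Keys
    $ ssh -i osm_test_key ubuntu@<VNF_MGMT_IP>            # should log in without asking for a password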

Images:

  • ubuntu1604

Descriptors:

  • ubuntu_1iface_vnf
  • test4a_ns

Test 4b. Day 0 configuration: user addition

Objective:

  • Testing creation of new user and SSH key injection to that user

Steps:

  • Onboard the variant of the NS of Test #4a (test4b_ns), where the NS includes a user “osm” and an SSH public key to be injected into every VNF.
  • Launch NS instantiation via UI, specifying the right mgmt network to be used.
  • Check that the UI reports a successful deployment.
  • Check that the VMs are accessible from the management network via the new user “osm” using its private SSH key (the private key is stored in the folder keys inside the NS package).
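
A sketch of the access check, assuming the NS package has been extracted locally (use whatever private key file the keys folder of the package actually contains):

    $ chmod 600 keys/<PRIVATE_KEY_FILE>
    $ ssh -i keys/<PRIVATE_KEY_FILE> osm@<VNF_MGMT_IP>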

Images:

  • ubuntu1604

Descriptors:

  • ubuntu_1iface_vnf
  • test4b_ns

Test 4c. Day 0 configuration: custom user script with cloud-init

Objective:

  • Testing injection of cloud-init custom user script

Steps:

  • Onboard a variant of the NS of Test #4a (test4c_ns), where the VNF includes a cloud-config custom script that creates a file in the VM and injects an SSH public key for the default user.
  • Launch NS instantiation via UI, specifying the right mgmt network to be used
  • Check that the UI reports a successful deployment.
  • Access the VM via SSH and check that the file has been successfully created.
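
A sketch of the verification, assuming the default user of the ubuntu1604 image is “ubuntu” and using a placeholder for the path of the file created by the cloud-config script:

    $ ssh -i <PRIVATE_KEY> ubuntu@<VNF_MGMT_IP> 'ls -l <PATH_OF_CREATED_FILE>'
    $ ssh -i <PRIVATE_KEY> ubuntu@<VNF_MGMT_IP> 'sudo cat /var/log/cloud-init-output.log'   # useful to troubleshoot if the file is missing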

Images:

  • ubuntu1604

Descriptors:

  • ubuntu_1iface_cloudinit_newfile_vnf
  • test4c_ns
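
Test 4d. Day 0 configuration: custom user script

Objective:

  • Testing injection of a custom user script

Steps:

  • Onboard a variant of the NS of Test #4a (test4d_ns), where the VNF includes a file to be copied into the VM.
  • Instantiate the NS via UI, requesting the injection of a given SSH key for the default user.
  • Check that the UI reports a successful deployment.
  • Access the VM via SSH and check that the file has been successfully created.

Images:

  • ubuntu1604

Descriptors:

  • ubuntu_1iface_newfile_vnf
  • test4d_ns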

Test 5. Port security disabled

Objective:

  • Testing the ability to disable port security on demand

Steps:

  • Onboard a variant of the NS of Test #4 (test5_ns), but with a VNF whose single interface has port security disabled.
  • Launch NS instantiation via UI, specifying the right mgmt network to be used
  • Check that the UI reports a successful deployment.
  • Connect to each VNF via SSH (user: “osm”, pwd: “osm4u”).
  • Check that port security is disabled between both VNFs, using netcat:
    • In one side (server): nc -l 3333
    • In the other side (client): nc <IP_SERVER> 3333
    • Then type something on the client side; it should appear on the server side, confirming that traffic between the VNFs is not filtered
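
In addition to the netcat check, if the VIM is OpenStack the setting can be verified directly on the corresponding port (a sketch, assuming the OpenStack CLI is available and the port ID of the interface is known):

    $ openstack port show <PORT_ID> -c port_security_enabled
    +-----------------------+-------+
    | Field                 | Value |
    +-----------------------+-------+
    | port_security_enabled | False |
    +-----------------------+-------+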

Images:

  • US1604

Descriptors:

  • ubuntu_1iface_noportsecurity_vnf
  • test5_ns

Test 6a. Assignment of public IP addresses to management interfaces of single-interface VNFs

Objective:

  • Testing the assignment of IP addresses from a pool to VNF management interfaces

Prerequisites:

  • Configure the VIM to allow the dynamic assignment of public addresses from a pool
  • Configure a VIM network (e.g. “public”) to use the appropriate pool, to allow external access via “public” IP addresses.
  • Configure the datacenter in the RO to assign public IP addresses to VNF management interfaces (use_floating_ip: true)

Steps:

  • Onboard and deploy a NS consisting of 2 Ubuntu VNFs interconnected by a single network (mgmt).
  • Instantiate the NS via UI, specifying that the NS network “mgmt” must be mapped to the VIM network name “public”, so that a “public” IP address will be assigned from the pool.
  • Check that the UI reports a successful deployment.
  • Connect to each VNF via SSH (user: “osm”, pwd: “osm4u”) using the public IP address.

Images:

  • US1604

Descriptors:

  • ubuntu_1iface_userosm_vnf
  • test6a_ns

Test 6b. Assignment of public IP addresses to management interfaces of multi-interface VNFs

Objective:

  • Testing the assignment of IP addresses from a pool to VNF management interfaces in the case of multi-interface VNFs. The intention is to check that a single default route is injected.

Prerequisites:

  • Configure the VIM to allow the dynamic assignment of public addresses from a pool
  • Configure a VIM network (e.g. “public”) to use the appropriate pool, to allow external access via “public” IP addresses.
  • Configure the datacenter in the RO to assign public IP addresses to VNF management interfaces (use_floating_ip: true)

Steps:

  • Onboard and deploy a NS consisting of 2 Ubuntu VNFs interconnected by two networks (management and data).
  • Instantiate the NS via UI, specifying that the NS network “mgmt” must be mapped to the VIM network name “public”, so that a “public” IP address will be assigned from the pool.
  • Check that the UI reports a successful deployment.
  • Connect to each VNF via SSH (user: “osm”, pwd: “osm4u”) using the public IP address.
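
Since the objective is to verify that only one default route is injected, it is also useful to inspect the routing table inside each VNF (a sketch; addresses and interface names depend on the deployment):

    $ ip route | grep default        # exactly one default route should be present, pointing to the “public” network gateway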

Images:

  • US1604

Descriptors:

  • ubuntu_2ifaces_vnf
  • test6b_ns

Test 6c. Assignment of public IP addresses to management interfaces of multi-interface VNFs even when IP profiles are used

Objective:

  • Testing the assignment of IP addresses from a pool to VNF management interfaces in the case of multi-interface VNFs even when IP profiles are used. The intention is to check again that a single default route is injected and that IP profiles do not affect that single route.

Prerequisites:

  • Configure the VIM to allow the dynamic assignment of public addresses from a pool
  • Configure a VIM network (e.g. “public”) to use the appropriate pool, to allow external access via “public” IP addresses.
  • Configure the datacenter in the RO to assign public IP addresses to VNF management interfaces (use_floating_ip: true)

Steps:

  • Onboard and deploy the NS used in Test 3c, consisting of a star topology, with 1 VNF in the middle and 5 instances of the other VNF connected to that one (+ the corresponding management interfaces), where all the inter-VNF networks have an IP profile so that DHCP is enabled, but with no default gateway.
  • Instantiate the NS via UI, specifying that the NS network “mgmt” must be mapped to the VIM network name “public”, so that a “public” IP address will be assigned from the pool. This is the only change with respect to Test 3c.
  • Check that the UI reports a successful deployment.
  • Connect to each VNF via SSH (user: “osm”, pwd: “osm4u”) using the public IP address.

Images:

  • US1604

Descriptors:

  • ubuntu_2ifaces_vnf
  • ubuntu_6ifaces_vnf
  • test3c_ns
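
Test 6d. Assignment of public IP addresses to a non-management interface

Objective:

  • Testing the assignment of IP addresses from a pool to a non-management interface.

Prerequisites:

  • Configure the VIM to allow the dynamic assignment of public addresses from a pool
  • Configure a VIM network (e.g. “public”) to use the appropriate pool, to allow external access via “public” IP addresses.
  • It is NOT required to configure the datacenter in the RO to assign public IP addresses to VNF management interfaces (use_floating_ip: true), since that feature is not used in this test.

Steps:

  • Onboard and deploy a NS consisting of a single VNF, composed of two VDUs, connected to two networks (mgmt and public). The VNF descriptor specifies that a public IP address should be assigned to the public external interface.
  • Instantiate the NS via UI, specifying that the NS network “public” must be mapped to the VIM network name “public”, so that a “public” IP address will be assigned from the pool.
  • Check that the UI reports a successful deployment.
  • Access the VNF via SSH using the assigned “public” IP address (user: “osm”, pwd: “osm4u”).

Images:

  • US1604

Descriptors:

  • tef_twovdus_publiciface_vnf
  • test6d_ns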

Test 7a. EPA tests - phase 1

Objectives:

  • Testing that the VIM can properly map vCPUs to pairs of physical HW threads
  • Testing that the VIM can assign hugepages memory
  • Testing that the VIM can assign SRIOV interfaces
  • Testing that the order of interfaces is correct

Steps:

  • Onboard pktgen_4psriov VNF, which requires:
    • CPU pinning of paired threads
    • Hugepages
    • SRIOV interfaces
  • Onboard a NS with 5 instances of pktgen_4psriov, in a star topology, with one of the VNFs in the middle (Emitter) and 4 VNFs attached (Receiver1-4).
  • Check that all VNFs are accessible by SSH via mgmt interface (user: "pktgen", pwd: "pktgen")
  • Check (at the VIM or the host) that the CPU pinning is correct.
  • Check (at the VIM or the host) that hugepages have been assigned to the guest (see the host-side command sketch after this list).
  • Check with pktgen that the interfaces are correctly attached to SRIOV interfaces and in the right order:
    • Emitter port 0 -> Receiver1 port 0
    • Emitter port 1 -> Receiver2 port 1
    • Emitter port 2 -> Receiver3 port 2
    • Emitter port 3 -> Receiver4 port 3
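
On a KVM/libvirt compute host, the pinning and hugepages checks can be done as sketched below (assuming access to the host; <DOMAIN> is the instance name reported by virsh list):

    $ virsh list --all                                    # identify the libvirt domain of each deployed VM
    $ virsh vcpupin <DOMAIN>                              # shows the physical CPUs each vCPU is pinned to
    $ virsh dumpxml <DOMAIN> | grep -A3 memoryBacking     # hugepages should appear in the memory backing section
    $ grep -i huge /proc/meminfo                          # host-wide hugepages counters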

Images:

  • pktgen

Descriptors:

  • pktgen_4psriov_vnfd
  • test7a_ns

Test 7b. EPA tests - phase 2

Objectives:

  • Testing that the VIM can properly map vCPUs to physical cores
  • Testing that the VIM can assign passthrough interfaces
  • Testing that the order of interfaces is correct

Steps:

  • Equivalent to the previous test, but using a variant of the NS which requires:
    • Full cores (instead of HW threads)
    • Passthrough interfaces (instead of SR-IOV)

Images:

  • pktgen

Descriptors:

  • pktgen_4ppassthrough_vnfd
  • test7b_ns