*** Comments ***
#   Licensed under the Apache License, Version 2.0 (the "License");
#   you may not use this file except in compliance with the License.
#   You may obtain a copy of the License at
#
#       http://www.apache.org/licenses/LICENSE-2.0
#
#   Unless required by applicable law or agreed to in writing, software
#   distributed under the License is distributed on an "AS IS" BASIS,
#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#   See the License for the specific language governing permissions and
#   limitations under the License.

*** Settings ***
Documentation    [HEAL-01] Healing of a multi-volume VDU
Library    OperatingSystem
Library    String
Library    Collections
Library    Process
Library    SSHLibrary
Resource    ../lib/vnfd_lib.resource
Resource    ../lib/vnf_lib.resource
Resource    ../lib/nsd_lib.resource
Resource    ../lib/ns_lib.resource
Resource    ../lib/ns_operation_lib.resource
Resource    ../lib/ssh_lib.resource
Resource    ../lib/openstack_lib.resource

Test Tags    heal_01    cluster_heal    daily    regression

Suite Teardown    Run Keyword And Ignore Error    Suite Cleanup

*** Variables ***
# NS and VNF descriptor package folder and ids
${VNFD_VOLUMES_PKG}    several_volumes_vnf
${VNFD_VOLUMES_NAME}    several_volumes-vnf
${VDU_VOLUMES_NAME}    several_volumes-VM
${VNF_SEVERAL_INDEX}    several_volumes_vnf
${VNFD_MANUALSCALE_PKG}    manual_scale_vnf
${VNFD_MANUALSCALE_NAME}    manual_scale-vnf
${NSD_PKG}    volumes_healing_ns
${NSD_NAME}    volumes_healing-ns
# NS instance name and configuration
${NS_NAME}    heal_01
${NS_CONFIG}    {vld: [ {name: mgmtnet, vim-network-name: %{VIM_MGMT_NET}} ] }
${NS_TIMEOUT}    6min
# SSH keys and username to be used
${PUBLICKEY}    %{HOME}/.ssh/id_rsa.pub
${PRIVATEKEY}    %{HOME}/.ssh/id_rsa
${USERNAME}    ubuntu
${PASSWORD}    ${EMPTY}
${SUCCESS_RETURN_CODE}    0
# Lists of VIM objects gathered during the test, used for final cleanup
@{VIM_VDUS}    @{EMPTY}
@{VIM_VOLUMES}    @{EMPTY}

*** Test Cases ***
Create VNF Descriptors
    [Documentation]    Upload the VNF packages for the testsuite.
    Create VNFD    '%{PACKAGES_FOLDER}/${VNFD_MANUALSCALE_PKG}'
    Create VNFD    '%{PACKAGES_FOLDER}/${VNFD_VOLUMES_PKG}'

Create NS Descriptor
    [Documentation]    Upload the NS package for the testsuite.
    Create NSD    '%{PACKAGES_FOLDER}/${NSD_PKG}'

Network Service Instance Test
    [Documentation]    Instantiate the NS for the testsuite.
    ${id}=    Create Network Service
    ...    ${NSD_NAME}    %{VIM_TARGET}    ${NS_NAME}    ${NS_CONFIG}    ${PUBLICKEY}    ${NS_TIMEOUT}
    Set Suite Variable    ${NS_ID}    ${id}

Get NS Id
    [Documentation]    Get the NS identifier, needed when the cleanup tests run standalone.
    [Tags]    cleanup
    ${variables}=    Get Variables
    IF    not "\${NS_ID}" in "${variables}"
        ${id}=    Get Ns Id    ${NS_NAME}
        Set Suite Variable    ${NS_ID}    ${id}
    END

Get VIM Objects
    [Documentation]    Retrieve all VMs and volumes of the NS and store them in the VIM_VDUS and VIM_VOLUMES lists.
    Variable Should Exist    ${NS_ID}    msg=NS is not available
    @{vnf_id_list}=    Get Ns Vnf List    ${NS_ID}
    Log    ${vnf_id_list}
    FOR    ${vnf_id}    IN    @{vnf_id_list}
        Log    ${vnf_id}
        ${id}=    Get VNF VIM ID    ${vnf_id}
        @{vdu_ids}=    Split String    ${id}
        Append To List    ${VIM_VDUS}    @{vdu_ids}
    END
    FOR    ${vdu_id}    IN    @{VIM_VDUS}
        ${volumes_attached}=    Get Server Property    ${vdu_id}    volumes_attached
        ${match}=    Get Regexp Matches    ${volumes_attached}    '([0-9a-f\-]+)'    1
        IF    $match and $match[0] not in $VIM_VOLUMES
            Append To List    ${VIM_VOLUMES}    ${match}[0]
        END
    END
    Log Many    @{VIM_VDUS}
    Log Many    @{VIM_VOLUMES}

Get Volume VNF Info
    [Documentation]    Get the VDU ID, IP addresses and volumes of the VNF and store them in suite variables for later use.
    Variable Should Exist    ${NS_ID}    msg=NS is not available
    ${ip_addr}=    Get Vnf Management Ip Address    ${NS_ID}    ${VNF_SEVERAL_INDEX}
    Log    ${ip_addr}
    Set Suite Variable    ${VNF_VOLUMES_IP_ADDR}    ${ip_addr}
    ${vnf_id}=    Get Vnf Id    ${NS_ID}    ${VNF_SEVERAL_INDEX}
    Set Suite Variable    ${VNF_VOLUMES_ID}    ${vnf_id}
    ${id}=    Get VNF VIM ID    ${vnf_id}
    Set Suite Variable    ${VDU_VOLUMES_ID}    ${id}
    Log    ${VDU_VOLUMES_ID}
    @{ip_list}=    Get Vnf Vdur IPs    ${VNF_VOLUMES_ID}
    Set Suite Variable    @{VOLUMES_IP_LIST}    @{ip_list}
    Log Many    @{VOLUMES_IP_LIST}

Get Volumes Info
    [Documentation]    Get the number of volumes from the VNF descriptor and the attached volumes from the VDU instance.
    ${rc}    ${stdout}=    Run And Return RC And Output
    ...    osm vnfpkg-show ${VNFD_VOLUMES_NAME} --literal | yq '.vdu[0]."virtual-storage-desc" | length'
    Should Be Equal As Integers    ${rc}    ${SUCCESS_RETURN_CODE}    msg=${stdout}    values=False
    ${num_virtual_storage}=    Convert To Integer    ${stdout}
    Set Suite Variable    ${VNF_NUM_VOLUMES}    ${num_virtual_storage}
    Log    ${VNF_NUM_VOLUMES}
    ${volumes_attached}=    Get Server Property    ${VDU_VOLUMES_ID}    volumes_attached
    ${match}=    Get Regexp Matches    ${volumes_attached}    '([0-9a-f\-]+)'    1
    Set Suite Variable    ${VOLUME_ID}    ${match}[0]

Check VDU Disks
    [Documentation]    Check that the number of disks in the VDU matches the expected one.
    Variable Should Exist    ${VNF_VOLUMES_IP_ADDR}    msg=VNF is not available
    Sleep    20 seconds    Wait for SSH daemon to be up
    ${stdout}=    Execute Remote Command Check Rc Return Output
    ...    ${VNF_VOLUMES_IP_ADDR}    ${USERNAME}    ${PASSWORD}    ${PRIVATEKEY}    sudo lsblk -l
    Log    ${stdout}
    ${lines}=    Get Lines Containing String    ${stdout}    disk
    ${num_lines}=    Get Line Count    ${lines}
    IF    ${num_lines} < ${VNF_NUM_VOLUMES}
        Fail    msg=Number of disks (${num_lines}) is less than specified in VDU (${VNF_NUM_VOLUMES})
    END

Delete Persistent Volume VDU
    [Documentation]    Manually delete the VM in OpenStack.
    Variable Should Exist    ${VDU_VOLUMES_ID}    msg=VDU is not available
    Delete Server    ${VDU_VOLUMES_ID}
    Sleep    20 seconds

Heal Persistent Volume VDU
    [Documentation]    Manually heal the VNF in order to re-create the deleted VM.
    Variable Should Exist    ${VNF_VOLUMES_ID}    msg=VNF is not available
    Heal Network Service
    ...    ${NS_ID}    --vnf ${VNF_VOLUMES_ID} --cause "Heal VM of volumes_vnf" --vdu ${VDU_VOLUMES_NAME}
Check VNF After Healing
    [Documentation]    Check that the VM has been re-created with a new VIM ID after healing, while its IP
    ...    addresses and volume ID are unchanged, and that the expected number of disks is still attached.
    Variable Should Exist    ${VNF_VOLUMES_ID}    msg=VNF is not available
    @{ip_list}=    Get Vnf Vdur IPs    ${VNF_VOLUMES_ID}
    Log Many    @{ip_list}
    Should Be Equal    ${ip_list}    ${VOLUMES_IP_LIST}    IP addresses have changed after healing
    ${id}=    Get VNF VIM ID    ${VNF_VOLUMES_ID}
    Log    ${id}
    Should Not Be Equal    ${id}    ${VDU_VOLUMES_ID}    VDU id has not changed after healing
    ${volumes_attached}=    Get Server Property    ${id}    volumes_attached
    ${match}=    Get Regexp Matches    ${volumes_attached}    '([0-9a-f\-]+)'    1
    Should Be Equal    ${match}[0]    ${VOLUME_ID}    Volume id has changed after healing
    Sleep    30 seconds    Wait for SSH daemon to be up
    ${stdout}=    Execute Remote Command Check Rc Return Output
    ...    ${VNF_VOLUMES_IP_ADDR}    ${USERNAME}    ${PASSWORD}    ${PRIVATEKEY}    sudo lsblk -l
    Log    ${stdout}
    ${lines}=    Get Lines Containing String    ${stdout}    disk
    ${num_lines}=    Get Line Count    ${lines}
    IF    ${num_lines} < ${VNF_NUM_VOLUMES}
        Fail    msg=Number of disks (${num_lines}) is less than specified in VDU (${VNF_NUM_VOLUMES})
    END

Update VIM Objects
    [Documentation]    Retrieve again all VMs and volumes of the NS and add any new ones to the VIM_VDUS and
    ...    VIM_VOLUMES lists. This guarantees that all objects are cleaned up in the VIM in case the heal
    ...    operation added new objects.
    Variable Should Exist    ${NS_ID}    msg=NS is not available
    @{vdu_updated}=    Create List
    @{vnf_id_list}=    Get Ns Vnf List    ${NS_ID}
    FOR    ${vnf_id}    IN    @{vnf_id_list}
        ${id}=    Get VNF VIM ID    ${vnf_id}
        @{vdu_ids}=    Split String    ${id}
        Append To List    ${vdu_updated}    @{vdu_ids}
        FOR    ${id}    IN    @{vdu_ids}
            IF    "${id}" not in $VIM_VDUS
                Append To List    ${VIM_VDUS}    ${id}
            END
        END
    END
    FOR    ${vdu_id}    IN    @{vdu_updated}
        ${volumes_attached}=    Get Server Property    ${vdu_id}    volumes_attached
        ${match}=    Get Regexp Matches    ${volumes_attached}    '([0-9a-f\-]+)'    1
        IF    $match and $match[0] not in $VIM_VOLUMES
            Append To List    ${VIM_VOLUMES}    ${match}[0]
        END
    END
    Log Many    @{VIM_VDUS}
    Log Many    @{VIM_VOLUMES}

Delete NS Instance
    [Documentation]    Delete the NS instance.
    [Tags]    cleanup
    Delete NS    ${NS_NAME}

Delete NS Descriptor
    [Documentation]    Delete the NS package from OSM.
    [Tags]    cleanup
    Delete NSD    ${NSD_NAME}

Delete VNF Descriptors
    [Documentation]    Delete the VNF packages from OSM.
    [Tags]    cleanup
    Delete VNFD    ${VNFD_VOLUMES_NAME}
    Delete VNFD    ${VNFD_MANUALSCALE_NAME}

Delete Remaining Objects in VIM
    [Documentation]    Delete any remaining objects (volumes, VMs, etc.) in the VIM.
    [Tags]    cleanup
    Delete Objects In VIM

*** Keywords ***
Suite Cleanup
- [Documentation] Test Suit Cleanup: Deleting Descriptor, instance and vim
-
- Run Keyword If Any Tests Failed Delete NS ${ns_name}
- Run Keyword If Any Tests Failed Delete NSD ${nsd_name}
- Run Keyword If Any Tests Failed Delete VNFD ${vnfd_volumes_name}
- Run Keyword If Any Tests Failed Delete VNFD ${vnfd_charm_name}
+ [Documentation] Test Suite Cleanup: Deleting Descriptor, instance and vim
+ Run Keyword If Any Tests Failed Delete NS ${NS_NAME}
+ Run Keyword If Any Tests Failed Delete NSD ${NSD_NAME}
+ Run Keyword If Any Tests Failed Delete VNFD ${VNFD_VOLUMES_NAME}
+ Run Keyword If Any Tests Failed Delete VNFD ${VNFD_MANUALSCALE_NAME}
+ Run Keyword If Any Tests Failed Delete Objects In VIM
+
Delete Objects In VIM
    [Documentation]    Clean up remaining VMs and volumes directly from the VIM.
    ${error}=    Set Variable    ${0}
    FOR    ${vol_id}    IN    @{VIM_VOLUMES}
        Log    Checking if volume ${vol_id} is still in VIM
        ${exists}=    Check If Volume Exists    ${vol_id}
        IF    ${exists}
            ${error}=    Set Variable    ${1}
            Log    Deleting volume ${vol_id}
            Run Keyword And Ignore Error    Delete Volume    ${vol_id}
        END
    END
    FOR    ${vdu_id}    IN    @{VIM_VDUS}
        Log    Checking if server ${vdu_id} is still in VIM
        ${status}=    Run Keyword And Ignore Error    Get Server Property    ${vdu_id}    id
        Log    ${status}[0]
        IF    '${status}[0]' == 'PASS'
            ${error}=    Set Variable    ${1}
            Log    Deleting server ${vdu_id}
            Run Keyword And Ignore Error    Delete Server    ${vdu_id}
        END
    END
    IF    ${error} == 1    Fail    msg=Some objects created by the test were not deleted in the VIM