17. Annex 9: LTS Upgrade
17.1. Introduction
Starting with version 10.1.0 of OSM, every even-numbered release will receive two years of community support. This document covers the steps needed to upgrade OSM to an LTS version. There are two upgrade procedures, one for each installation method.
17.2. Upgrade of 10.1.0 to 10.1.1 LTS
This procedure covers the upgrade from 10.1.0 to 10.1.1 LTS. There are two installation methods, each with its own set of procedures:
17.2.1. Kubernetes Installation to 10.1.1
17.2.1.1. Back up the Databases
If desired, the databases can be backed up using the following commands:
# Identify the MySQL pod and dump all databases to a compressed file
mysql_pod=$(kubectl get pod -n osm | grep -i mysql | tail -1 | awk -F" " '{print $1}')
kubectl exec -n osm -it $mysql_pod -- bash -c \
'mysqldump -u root -p$MYSQL_ROOT_PASSWORD --single-transaction --all-databases' \
| gzip > backup.sql.gz
# MongoDB runs as a charmed service: trigger its backup action, then copy
# the resulting archive out of the pod
mongodb_unit=$(juju status | grep -i mongodb | tail -1 | awk -F" " '{print $1}'| tr -d '[*]')
mongodb_pod=$(kubectl get pod -n osm | grep -i mongodb | grep -v operator | tail -1 | awk -F" " '{print $1}')
juju run-action $mongodb_unit backup --wait -m osm
kubectl cp osm/$mongodb_pod:/data/backup.archive backup.archive
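Before proceeding, it is worth confirming that both backup files are intact. These checks are a suggestion rather than part of the official procedure; the second one assumes mongorestore is available on the host.
# Verify the MySQL dump is a valid gzip file
gzip -t backup.sql.gz && echo "MySQL backup OK"
# Parse the MongoDB archive without writing anything (--dryRun)
mongorestore --gzip --archive=backup.archive --dryRun && echo "MongoDB backup OK"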
17.2.1.2. Upgrade Juju
The following commands will upgrade the OSM controller.
sudo snap refresh juju --channel 2.9/stable
juju upgrade-controller
Next, for any native or proxy charms, upgrade each model.
for model in $(juju models --format json | jq .models[].name | tr -d \") ; do
juju switch $model
juju upgrade-model
done
17.2.1.3. Upgrade OSM Application
OSM_VERSION="10.1.1"
for module in lcm mon nbi ng-ui pla pol ro; do
kubectl -n osm patch deployment ${module} --patch "{\"spec\": {\"template\": {\"spec\": {\"containers\": [{\"name\": \"${module}\", \"image\": \"opensourcemano/${module}:${OSM_VERSION}\"}]}}}}"
kubectl -n osm scale deployment ${module} --replicas=0
kubectl -n osm scale deployment ${module} --replicas=1
done
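To confirm that each deployment actually picked up the new image, you can optionally wait for the rollouts to finish and then list the image tags in use:
for module in lcm mon nbi ng-ui pla pol ro; do
kubectl -n osm rollout status deployment ${module}
done
# Print the image now used by each deployment
kubectl -n osm get deployments -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.template.spec.containers[0].image}{"\n"}{end}'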
# In order to make this change persistent after reboots,
# you will have to update the files under /etc/osm/docker/osm_pods to reflect the changes
for module in lcm mon nbi ng-ui pol ro; do
sudo sed -i "s/opensourcemano\/${module}:.*/opensourcemano\/${module}:${OSM_VERSION}/g" /etc/osm/docker/osm_pods/${module}.yaml
done
sudo sed -i "s/opensourcemano\/pla:.*/opensourcemano\/pla:${OSM_VERSION}/g" /etc/osm/docker/osm_pods/osm_pla/${module}.yaml
At this point, OSM has been upgraded.
17.2.2. Charmed Installation to 10.1.1
17.2.2.1. Back up the Databases
If desired, the databases can be backed up using the following commands:
mariadb_unit=$(juju status | grep -i mariadb | tail -1 | awk -F" " '{print $1}' | tr -d '[*]')
mariadb_pod=$(microk8s.kubectl get pod -n osm | grep -i mariadb | tail -1 | awk -F" " '{print $1}')
juju run-action $mariadb_unit backup --wait -m osm
microk8s.kubectl cp osm/$mariadb_pod:/var/lib/mysql/backup.sql.gz backup.sql.gz
mongodb_unit=$(juju status | grep -i mongodb | tail -1 | awk -F" " '{print $1}'| tr -d '[*]')
mongodb_pod=$(microk8s.kubectl get pod -n osm | grep -i mongodb | tail -1 | awk -F" " '{print $1}')
microk8s.kubectl exec -n osm -it $mongodb_pod -- mongodump --gzip --archive=/data/backup.archive
microk8s.kubectl cp osm/$mongodb_pod:/data/backup.archive backup.archive
17.2.2.2. Upgrade Juju
The following commands will upgrade the OSM controller.
sudo snap refresh juju --channel 2.9/stable
juju upgrade-controller
Next, for any native or proxy charms, upgrade each model.
for model in $(juju models --format json | jq .models[].name | tr -d \") ; do
juju switch $model
juju upgrade-model
done
17.2.2.3. Upgrade OSM Application
juju attach-resource -m osm lcm image=opensourcemano/lcm:10.1.1
juju attach-resource -m osm mon image=opensourcemano/mon:10.1.1
juju attach-resource -m osm nbi image=opensourcemano/nbi:10.1.1
juju attach-resource -m osm ng-ui image=opensourcemano/ng-ui:10.1.1
juju attach-resource -m osm pla image=opensourcemano/pla:10.1.1
juju attach-resource -m osm pol image=opensourcemano/pol:10.1.1
juju attach-resource -m osm ro image=opensourcemano/ro:10.1.1
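The same commands can be written as a loop, after which juju status can be watched until all units settle into an active state. This is an equivalent sketch, not an extra required step:
for module in lcm mon nbi ng-ui pla pol ro; do
juju attach-resource -m osm ${module} image=opensourcemano/${module}:10.1.1
done
# Watch until all units report active/idle
watch -c juju status -m osm --color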
At this point, OSM has been upgraded.
17.3. Upgrade of Pre-LTS to 10.1.0 LTS
This procedure covers the upgrade from either 9.1.5 or 10.0.3 to 10.1.0 LTS. Where necessary, additional steps for 9.1.5 are shown. There are two installation methods, each with its own set of procedures:
17.3.1. Kubernetes Installation Option
The following steps are to be followed for an upgrade to LTS:
17.3.1.1. Stop all OSM Services
kubectl -n osm scale deployment/grafana --replicas=0
kubectl -n osm scale statefulset/prometheus --replicas=0
kubectl -n osm scale statefulset/kafka --replicas=0
kubectl -n osm scale deployment/keystone --replicas=0
kubectl -n osm scale deployment/lcm --replicas=0
kubectl -n osm scale deployment/mon --replicas=0
kubectl -n osm scale deployment/nbi --replicas=0
kubectl -n osm scale deployment/ng-ui --replicas=0
kubectl -n osm scale deployment/pla --replicas=0
kubectl -n osm scale deployment/pol --replicas=0
kubectl -n osm scale deployment/ro --replicas=0
kubectl -n osm scale statefulset/zookeeper --replicas=0
Note: if PLA was not installed, you can ignore the error deployments.apps "pla" not found.
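Before taking the backup, you can confirm that everything has scaled down:
# All deployments and statefulsets should show 0/0 ready replicas
kubectl -n osm get deployments,statefulsets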
17.3.1.2. Backup the Databases
Once all the deployments and statefulsets show 0 replicas, proceed with the database backup.
mysql_pod=$(kubectl get pod -n osm | grep -i mysql | tail -1 | awk -F" " '{print $1}')
kubectl exec -n osm -it $mysql_pod -- bash -c \
'mysqldump -u root -p$MYSQL_ROOT_PASSWORD --single-transaction --all-databases' \
| gzip > backup.sql.gz
mongodb_unit=$(juju status | grep -i mongodb | tail -1 | awk -F" " '{print $1}'| tr -d '[*]')
mongodb_pod=$(kubectl get pod -n osm | grep -i mongodb | grep -v operator | tail -1 | awk -F" " '{print $1}')
juju run-action $mongodb_unit backup --wait -m osm
kubectl cp osm/$mongodb_pod:/data/backup.archive backup.archive
17.3.1.3. Backup existing OSM manifests
cp -bR /etc/osm/docker/osm_pods backup_osm_manifests
17.3.1.4. Remove Deployed Charmed Services
juju destroy-model osm --destroy-storage -y --force
17.3.1.5. Upgrade Juju
sudo snap refresh juju --channel 2.9/stable
juju switch osm-vca:admin/controller
juju upgrade-model
17.3.1.6. Upgrade Kubernetes
Documentation for how to upgrade Kubernetes can be found at https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
17.3.1.7. Deploy Charmed Services
juju add-model osm k8scloud
juju deploy ch:mongodb-k8s -m osm
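Wait for the mongodb-k8s unit to reach an active state before continuing, for example:
# Watch until the mongodb-k8s unit reports active/idle
watch -c juju status -m osm --color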
17.3.1.8. Upgrade OSM to 10.1.0 LTS
sudo apt-get update
sudo apt-get install -y osm-devops python3-osm-im python3-osmclient
sudo cp -R /usr/share/osm-devops/installers/docker/osm_pods /etc/osm/docker/osm_pods
sudo rm /etc/osm/docker/mongo.yaml
kubectl -n osm apply -f /etc/osm/docker/osm_pods
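The new pods will take a few minutes to start. As a quick check, watch them until they reach the Running state:
kubectl -n osm get pods -w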
17.3.1.9. Stop all OSM Services
As we need to restore the databases, we are going to stop all the OSM services once again.
kubectl -n osm scale deployment/grafana --replicas=0
kubectl -n osm scale statefulset/prometheus --replicas=0
kubectl -n osm scale statefulset/kafka --replicas=0
kubectl -n osm scale deployment/keystone --replicas=0
kubectl -n osm scale deployment/lcm --replicas=0
kubectl -n osm scale deployment/mon --replicas=0
kubectl -n osm scale deployment/nbi --replicas=0
kubectl -n osm scale deployment/ng-ui --replicas=0
kubectl -n osm scale deployment/pla --replicas=0
kubectl -n osm scale deployment/pol --replicas=0
kubectl -n osm scale deployment/ro --replicas=0
kubectl -n osm scale statefulset/zookeeper --replicas=0
17.3.1.10. Restore the Databases
# Copy the MySQL dump into the pod
mysql_pod=$(kubectl get pod -n osm | grep -i mysql | tail -1 | awk -F" " '{print $1}')
kubectl cp backup.sql.gz osm/$mysql_pod:/var/lib/mysql/backup.sql.gz
# Drop the keystone database created by the fresh install so the backup can replace it
kubectl exec -n osm -it $mysql_pod -- bash -c \
'mysql -uroot -p${MYSQL_ROOT_PASSWORD} --execute "DROP DATABASE keystone"'
kubectl exec -n osm -it $mysql_pod -- bash -c \
'zcat /var/lib/mysql/backup.sql.gz | mysql -uroot -p${MYSQL_ROOT_PASSWORD}'
# Restore the MongoDB archive taken earlier
mongodb_pod=$(kubectl get pod -n osm | grep -i mongodb | grep -v operator | tail -1 | awk -F" " '{print $1}')
kubectl cp backup.archive osm/$mongodb_pod:/data/backup.archive
kubectl exec -n osm -it $mongodb_pod -- mongorestore --drop --gzip --archive=/data/backup.archive
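If you want to sanity-check the restore before migrating, listing the databases in each store should show the restored content. These checks are optional, and the second assumes the mongo shell is present in the MongoDB pod:
# The keystone database should be present again in MySQL
kubectl exec -n osm -it $mysql_pod -- bash -c \
'mysql -uroot -p${MYSQL_ROOT_PASSWORD} --execute "SHOW DATABASES"'
# The OSM databases should be present in MongoDB
kubectl exec -n osm -it $mongodb_pod -- mongo --quiet --eval 'db.adminCommand("listDatabases")'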
17.3.1.11. Perform Database Migration
17.3.1.11.1. Keystone Database Updates
Update the database:
mysql_pod=$(kubectl get pod -n osm | grep -i mysql | tail -1 | awk -F" " '{print $1}')
kubectl exec -n osm -it $mysql_pod -- bash -c \
'mysql -uroot -p${MYSQL_ROOT_PASSWORD} -Dkeystone \
--execute "UPDATE endpoint SET url=\"http://osm-keystone:5000/v3/\" WHERE url=\"http://keystone:5000/v3/\"";'
17.3.1.11.2. OSM Application Database Updates
A helper charm has been created to assist in the database updates. Build, deploy and run the action as follows:
sudo snap install charmcraft --classic
git clone https://github.com/charmed-osm/osm-update-db-operator.git
cd osm-update-db-operator
charmcraft build
juju switch osm
juju deploy ./osm-update-db_ubuntu-20.04-amd64.charm
Run the upgrade as follows.
juju config osm-update-db mongodb-uri=mongodb://mongodb:27017
juju run-action --wait osm-update-db/0 apply-patch bug-number=1837
17.3.1.12. Restart all OSM Services
Now that the databases are migrated to the new version, we can restart the services.
kubectl -n osm scale deployment/grafana --replicas=1
kubectl -n osm scale statefulset/prometheus --replicas=1
kubectl -n osm scale statefulset/kafka --replicas=1
kubectl -n osm scale deployment/keystone --replicas=1
kubectl -n osm scale deployment/lcm --replicas=1
kubectl -n osm scale deployment/mon --replicas=1
kubectl -n osm scale deployment/nbi --replicas=1
kubectl -n osm scale deployment/ng-ui --replicas=1
kubectl -n osm scale deployment/pla --replicas=1
kubectl -n osm scale deployment/pol --replicas=1
kubectl -n osm scale deployment/ro --replicas=1
kubectl -n osm scale statefulset/zookeeper --replicas=1
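All pods should return to a Running state with their containers ready. As an optional check, the following waits for every deployment to become available:
kubectl -n osm get pods
kubectl -n osm wait --for=condition=available deployment --all --timeout=300s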
At this point, OSM LTS is operational and ready to use.
17.3.2. Charmed Installation Option
For the Charmed OSM installation, the procedure is to preserve the database content while redeploying the application with Juju. Rather than running a long series of commands to redeploy manually, we can simply download and run the LTS installer to recreate OSM after removing the non-LTS software.
The following steps will upgrade OSM to the LTS version:
17.3.2.1. Stop all OSM Services
17.3.2.1.1. Version 9.1.5
juju scale-application grafana-k8s 0
juju scale-application prometheus-k8s 0
juju scale-application kafka-k8s 0
juju scale-application keystone 0
juju scale-application lcm-k8s 0
juju scale-application mon-k8s 0
juju scale-application nbi 0
juju scale-application ng-ui 0
juju scale-application pla 0
juju scale-application pol-k8s 0
juju scale-application ro-k8s 0
juju scale-application zookeeper-k8s 0
17.3.2.1.2. Version 10.0.3
juju scale-application grafana 0
juju scale-application prometheus 0
juju scale-application kafka-k8s 0
juju scale-application keystone 0
juju scale-application lcm 0
juju scale-application mon 0
juju scale-application nbi 0
juju scale-application ng-ui 0
juju scale-application pla 0
juju scale-application pol 0
juju scale-application ro 0
juju scale-application zookeeper-k8s 0
Wait for all the applications to scale to 0. The output of juju status should look similar to the following, with only the mariadb-k8s and mongodb-k8s units left.
Model Controller Cloud/Region Version SLA Timestamp
osm osm-vca microk8s/localhost 2.8.13 unsupported 20:57:44Z
App Version Status Scale Charm Store Rev OS Address Notes
grafana docker.io/ubuntu/grafana@sh... active 0 grafana jujucharms 4 kubernetes 10.152.183.45
kafka-k8s rocks.canonical.com:443/wur... active 0 kafka-k8s jujucharms 21 kubernetes 10.152.183.248
keystone keystone:10.0.3 active 0 keystone jujucharms 9 kubernetes 10.152.183.114
lcm lcm:10.0.3 active 0 lcm jujucharms 8 kubernetes 10.152.183.70
mariadb-k8s rocks.canonical.com:443/mar... active 1 mariadb-k8s jujucharms 35 kubernetes 10.152.183.177
mon mon:10.0.3 active 0 mon jujucharms 5 kubernetes 10.152.183.227
mongodb-k8s mongo:latest active 1 mongodb-k8s jujucharms 29 kubernetes 10.152.183.63
nbi nbi:10.0.3 active 0 nbi jujucharms 12 kubernetes 10.152.183.163
ng-ui ng-ui:10.0.3 active 0 ng-ui jujucharms 21 kubernetes 10.152.183.180
pla pla:10.0.3 active 0 pla jujucharms 9 kubernetes 10.152.183.7
pol pol:10.0.3 active 0 pol jujucharms 4 kubernetes 10.152.183.104
prometheus docker.io/ed1000/prometheus... active 0 prometheus jujucharms 4 kubernetes 10.152.183.120
ro ro:10.0.3 active 0 ro jujucharms 4 kubernetes 10.152.183.159
zookeeper-k8s rocks.canonical.com:443/k8s... active 0 zookeeper-k8s jujucharms 37 kubernetes 10.152.183.201
Unit Workload Agent Address Ports Message
mariadb-k8s/0* active idle 10.1.244.152 3306/TCP ready
mongodb-k8s/0* active idle 10.1.244.156 27017/TCP ready
17.3.2.2. Backup the Databases
Once all the units show a scale of 0, proceed with the database backup.
mariadb_unit=$(juju status | grep -i mariadb | tail -1 | awk -F" " '{print $1}' | tr -d '[*]')
mariadb_pod=$(microk8s.kubectl get pod -n osm | grep -i mariadb | tail -1 | awk -F" " '{print $1}')
juju run-action $mariadb_unit backup --wait -m osm
microk8s.kubectl cp osm/$mariadb_pod:/var/lib/mysql/backup.sql.gz backup.sql.gz
mongodb_unit=$(juju status | grep -i mongodb | tail -1 | awk -F" " '{print $1}'| tr -d '[*]')
mongodb_pod=$(microk8s.kubectl get pod -n osm | grep -i mongodb | tail -1 | awk -F" " '{print $1}')
juju run-action $mongodb_unit backup --wait -m osm
microk8s.kubectl cp osm/$mongodb_pod:/data/backup.archive backup.archive
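As with the Kubernetes procedure, it is sensible to confirm that the backup files are intact before removing anything:
# Both files should exist and be non-trivial in size
ls -lh backup.sql.gz backup.archive
gzip -t backup.sql.gz && echo "MariaDB backup OK"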
17.3.2.3. Remove Deployed OSM Application
juju destroy-model osm --destroy-storage -y --force
microk8s.kubectl delete namespaces osm
17.3.2.3.1. Ingress for Upgrade from 9.1.5
If this is an upgrade from 9.1.5, the following commands should be run at this time to flush any legacy ingress descriptors.
microk8s disable ingress
microk8s enable ingress
17.3.2.4. Upgrade Juju
The following commands will upgrade the OSM controller.
sudo snap refresh juju --channel 2.9/stable
juju upgrade-controller
Next, for any native or proxy charms, upgrade each model.
for model in $(juju models --format json | jq .models[].name | tr -d \") ; do
juju switch $model
juju upgrade-model
done
17.3.2.5. Upgrade MicroK8s
sudo snap refresh microk8s --channel=1.23/stable
17.3.2.6. Install OSM 10.1.0 LTS
sudo apt remove -y --purge osm-devops
unset OSM_USERNAME
unset OSM_PASSWORD
wget https://osm-download.etsi.org/ftp/osm-10.0-ten/install_osm.sh
chmod +x ./install_osm.sh
./install_osm.sh --charmed --vca osm-vca --tag 10.1.0
17.3.2.7. Stop New OSM Services
As we need to restore the databases, we are going to stop all the OSM services once again.
juju scale-application keystone 0
juju scale-application lcm 0
juju scale-application mon 0
juju scale-application nbi 0
juju scale-application ng-ui 0
juju scale-application pla 0
juju scale-application pol 0
juju scale-application ro 0
17.3.2.8. Restore the Databases
mariadb_unit=$(juju status | grep -i mariadb | tail -1 | awk -F" " '{print $1}' | tr -d '[*]')
mariadb_pod=$(microk8s.kubectl get pod -n osm | grep -i mariadb | tail -1 | awk -F" " '{print $1}')
microk8s.kubectl cp backup.sql.gz osm/$mariadb_pod:/var/lib/mysql/backup.sql.gz
juju run --unit $mariadb_unit -- bash -c 'mysql -p${MARIADB_ROOT_PASSWORD} --execute "DROP DATABASE keystone"'
juju run-action --wait -m osm $mariadb_unit restore
mongodb_pod=$(microk8s.kubectl get pod -n osm | grep -i mongodb | tail -1 | awk -F" " '{print $1}')
microk8s.kubectl cp backup.archive osm/$mongodb_pod:/data/backup.archive
microk8s.kubectl exec -n osm -it $mongodb_pod -- mongorestore --drop --gzip --archive=/data/backup.archive
17.3.2.9. Perform Database Migration
17.3.2.9.1. Keystone Database Updates
Start the Keystone container.
juju scale-application keystone 1
Update the Keystone URL endpoint:
mariadb_unit=$(juju status | grep -i mariadb | tail -1 | awk -F" " '{print $1}' | tr -d '[*]')
juju run --unit $mariadb_unit -- bash -c 'mysql -p${MARIADB_ROOT_PASSWORD} -Dkeystone \
--execute "UPDATE endpoint SET url=\"http://osm-keystone:5000/v3/\" WHERE url=\"http://keystone:5000/v3/\"";'
Run Keystone's db-sync action to update the schema to the installed version:
keystone_unit=$(juju status | grep -i keystone | tail -1 | awk -F" " '{print $1}' | tr -d '[*]')
juju run-action --wait -m osm $keystone_unit db-sync
17.3.2.9.2. OSM Application Database Updates
A helper charm has been created to assist in the database updates. Build, deploy and run the action as follows:
sudo snap install charmcraft --classic
git clone https://github.com/charmed-osm/osm-update-db-operator.git
cd osm-update-db-operator
charmcraft build
juju switch osm
juju deploy ./osm-update-db_ubuntu-20.04-amd64.charm
Run the upgrade as follows.
juju config osm-update-db mongodb-uri=mongodb://mongodb:27017
juju run-action --wait osm-update-db/0 apply-patch bug-number=1837
17.3.2.9.3. Upgrading From v9.0 Versions
If the upgrade is from a v9.0 version to 10.1.0 LTS, the following must also be run to update the database.
juju run-action --wait osm-update-db/0 update-db \
current-version=9 \
target-version=10 \
mongodb-only=True
The charm can now be removed.
juju remove-application osm-update-db
17.3.2.10. Restart all OSM Services
Now that the databases are migrated to the new version, we can restart the services.
juju scale-application lcm 1
juju scale-application mon 1
juju scale-application nbi 1
juju scale-application ng-ui 1
juju scale-application pla 1
juju scale-application pol 1
juju scale-application ro 1
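Once all the applications are back at scale 1, juju status should show every unit as active/idle, and the NBI should answer requests again. For example, with OSM_HOSTNAME and OSM_PASSWORD set:
# Watch until all units report active/idle
watch -c juju status -m osm --color
# A successful listing confirms the NBI is reachable
osm ns-list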
At this point, OSM LTS is operational and ready to use.
17.4. Testing Upgrade
17.4.1. Changing Credentials
We will change some default passwords and create some additional users to ensure that RBAC still works.
osm user-update admin --password 'osm4u'
export OSM_PASSWORD=osm4u
osm project-create --domain-name default test_project_1
osm user-create test_admin_1 --projects admin --project-role-mappings 'test_project_1,project_user' --password testadmin --domain-name default
osm user-update test_admin_1 --remove-project-role 'admin,project_admin'
osm user-create test_member_1 --projects admin --project-role-mappings 'test_project_1,project_user' --password testmember --domain-name default
osm user-update test_member_1 --remove-project-role 'admin,project_admin'
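As a quick check that the new credentials and role mappings work, point the client at one of the test users; the --user, --password and --project options override the environment variables:
# Should succeed and list the network services visible to test_project_1
osm --user test_member_1 --password testmember --project test_project_1 ns-list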
17.4.2. Running Robot Test Suite
17.4.2.1. Install Docker (Charmed OSM Only)
Docker is not installed with Charmed OSM, so it needs to be installed first.
sudo snap install docker
sudo addgroup --system docker
sudo adduser $USER docker
newgrp docker
sudo snap disable docker
sudo snap enable docker
sudo iptables -I DOCKER-USER -j ACCEPT
17.4.2.2. Prepare to Run Tests
You should already have OSM_HOSTNAME and OSM_PASSWORD environment variables set, based on the output from the installer.
Source your openstack.rc file to set all of your OpenStack environment variables.
17.4.2.2.1. Set Environment Variables (K8s Installation)
export OSM_HOSTNAME=127.0.0.1
export PROMETHEUS_HOSTNAME=127.0.0.1
export PROMETHEUS_PORT=9091
export JUJU_PASSWORD=`juju gui 2>&1 | grep password | awk '{print $2}'`
export HOSTIP=127.0.1.1
17.4.2.2.2. Set Environment Variables (Charmed OSM)
export OSM_HOSTNAME=$(juju config -m osm nbi site_url | sed "s/http.*\?:\/\///"):443
export PROMETHEUS_HOSTNAME=$(juju config -m osm prometheus site_url | sed "s/http.*\?:\/\///")
export PROMETHEUS_PORT=80
export JUJU_PASSWORD=`juju gui 2>&1 | grep password | awk '{print $2}'`
export HOSTIP=$(echo $PROMETHEUS_HOSTNAME | sed "s/prometheus.//" | sed "s/.nip.io//")
17.4.2.3. Create robot-systest.cfg
cat << EOF > robot-systest.cfg
VIM_TARGET=osm
VIM_MGMT_NET=osm-ext
ENVIRONMENTS_FOLDER=environments
PACKAGES_FOLDER=/robot-systest/osm-packages
OS_CLOUD=openstack
LC_ALL=C.UTF-8
LANG=C.UTF-8
EOF
for line in `env | grep "^OS_" | sort` ; do echo $line >> robot-systest.cfg ; done
17.4.2.4. Create robot.etc.hosts (Charmed OSM)
cat << EOF > robot.etc.hosts
127.0.0.1 localhost
${HOSTIP} prometheus.${HOSTIP}.nip.io nbi.${HOSTIP}.nip.io
EOF
17.4.2.5. Create clouds.yaml
cat << EOF > clouds.yaml
clouds:
  openstack:
    auth:
      auth_url: $OS_AUTH_URL
      project_name: $OS_PROJECT_NAME
      username: $OS_USERNAME
      password: $OS_PASSWORD
      user_domain_name: $OS_USER_DOMAIN_NAME
      project_domain_name: $OS_PROJECT_DOMAIN_NAME
EOF
17.4.2.6. Create VIM
Create a VIM called osm. Be sure to add your specific configuration, such as floating IP addresses or untrusted SSL certificates.
osm vim-create --name osm --user "$OS_USERNAME" --password "$OS_PASSWORD" \
--auth_url "$OS_AUTH_URL" --tenant "$OS_USERNAME" --account_type openstack \
--config='{management_network_name: osm-ext}'
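You can confirm that the VIM was registered and that OSM can reach it:
# The new VIM should be listed with operational state ENABLED
osm vim-list
osm vim-show osm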
Provide a copy of your Kubernetes cluster configuration file to the Robot container.
export KUBECONFIG=/path/to/kubeconfig.yaml
17.4.2.7. Start Robot Tests Container
To keep a copy of the reports, create a directory for the container to store them.
mkdir reports
docker run -ti --entrypoint /bin/bash \
--env OSM_HOSTNAME=${OSM_HOSTNAME} \
--env PROMETHEUS_HOSTNAME=${PROMETHEUS_HOSTNAME} \
--env PROMETHEUS_PORT=${PROMETHEUS_PORT} \
--env JUJU_PASSWORD=${JUJU_PASSWORD} \
--env HOSTIP=${HOSTIP} \
--env OSM_PASSWORD=osm4u \
--env-file robot-systest.cfg \
-v "$(pwd)/robot.etc.hosts":/etc/hosts \
-v "${KUBECONFIG}":/root/.kube/config \
-v "$(pwd)/clouds.yaml":/etc/openstack/clouds.yaml \
-v "$(pwd)/reports":/robot-systest/reports \
opensourcemano/tests:10
17.4.2.8. Run Tests Pre-Upgrade
From the Robot tests container command line, execute the prepare step.
./run_test.sh -t prepare
After the run has completed successfully, there should be 5 network services present in OSM:
+---------------------------------+--------------------------------------+---------------------+----------+-------------------+---------------+
| ns instance name | id | date | ns state | current operation | error details |
+---------------------------------+--------------------------------------+---------------------+----------+-------------------+---------------+
| basic_07_secure_key_management | 1a8621ea-d51d-434c-90e0-e153701729dd | 2022-08-03T17:51:07 | READY | IDLE (None) | N/A |
| basic_09_manual_scaling_test | 7611ce54-bff2-480e-94fb-5a8b0549a6c4 | 2022-08-03T17:54:32 | READY | IDLE (None) | N/A |
| basic_21 | 8090754c-f49c-4891-a7c0-1e5750c7980b | 2022-08-03T17:55:30 | READY | IDLE (None) | N/A |
| k8s_06-nopasswd_k8s_proxy_charm | a5eb22d7-4a4f-4615-ad44-9f8957cf243c | 2022-08-03T18:17:01 | READY | IDLE (None) | N/A |
| ldap | be0f6e33-e4d9-463d-92c6-cc27f2f1d5eb | 2022-08-03T18:51:51 | READY | IDLE (None) | N/A |
+---------------------------------+--------------------------------------+---------------------+----------+-------------------+---------------+
Run the verify step before upgrading:
./run_test.sh -t verify
17.4.2.9. Run Tests Post-Upgrade
After completing the upgrade procedure, execute the verify step again to ensure the upgrade was successful.
./run_test.sh -t verify
This will only verify services that were already deployed in the prepare step.