17. Annex 9: LTS Upgrade
17.1. Introduction
Starting with version 10.1.0 of OSM, every even-numbered release receives two years of community support. This document covers the steps needed to upgrade OSM to an LTS version; there is a separate procedure for each of the two installation methods.
17.2. Upgrade of Pre-LTS to 10.1.0 LTS
This procedure covers upgrades from both 9.1.5 and 10.0.3 to 10.1.0 LTS. Where necessary, additional steps specific to 9.1.5 are shown. There are two installation methods, each with its own set of procedures:
17.2.1. Kubernetes Installation Option
Follow these steps to upgrade a Kubernetes-based installation to the LTS version:
17.2.1.1. Stop all OSM Services
kubectl -n osm scale deployment/grafana --replicas=0
kubectl -n osm scale statefulset/prometheus --replicas=0
kubectl -n osm scale statefulset/kafka --replicas=0
kubectl -n osm scale deployment/keystone --replicas=0
kubectl -n osm scale deployment/lcm --replicas=0
kubectl -n osm scale deployment/mon --replicas=0
kubectl -n osm scale deployment/nbi --replicas=0
kubectl -n osm scale deployment/ng-ui --replicas=0
kubectl -n osm scale deployment/pla --replicas=0
kubectl -n osm scale deployment/pol --replicas=0
kubectl -n osm scale deployment/ro --replicas=0
kubectl -n osm scale statefulset/zookeeper --replicas=0
Note: if PLA was not installed, you can ignore the error deployments.apps "pla" not found.
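Before moving on, confirm that everything has actually stopped. As a quick optional check (not part of the official procedure), list the deployments and statefulsets and wait until all of them report 0 replicas:
# Optional: all OSM deployments and statefulsets should report 0 replicas before continuing
kubectl -n osm get deployments,statefulsets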
17.2.1.2. Backup the Databases
Once all the deployments and statefulsets show 0 replicas, proceed with the database backup.
mysql_pod=$(kubectl get pod -n osm | grep -i mysql | tail -1 | awk -F" " '{print $1}')
kubectl exec -n osm -it $mysql_pod -- bash -c \
'mysqldump -u root -p$MYSQL_ROOT_PASSWORD --single-transaction --all-databases' \
| gzip > backup.sql.gz
mongodb_unit=$(juju status | grep -i mongodb | tail -1 | awk -F" " '{print $1}'| tr -d '[*]')
mongodb_pod=$(kubectl get pod -n osm | grep -i mongodb | grep -v operator | tail -1 | awk -F" " '{print $1}')
juju run-action $mongodb_unit backup --wait -m osm
kubectl cp osm/$mongodb_pod:/data/backup.archive backup.archive
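Both backup files should now be in the current directory. As an optional sanity check, which is not part of the official procedure, verify that they exist and that the MySQL dump is a valid gzip archive:
# Optional: confirm the backups exist and the gzipped dump is intact
ls -lh backup.sql.gz backup.archive
gzip -t backup.sql.gz && echo "MySQL dump OK"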
17.2.1.3. Backup existing OSM manifests
cp -bR /etc/osm/docker/osm_pods backup_osm_manifests
17.2.1.4. Remove Deployed Charmed Services
juju destroy-model osm --destroy-storage -y --force
17.2.1.5. Upgrade Juju
sudo snap refresh juju --channel 2.9/stable
juju switch osm-vca:admin/controller
juju upgrade-model
17.2.1.6. Upgrade Kubernetes
Documentation for how to upgrade Kubernetes can be found at https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
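For reference only, a kubeadm-based upgrade typically follows the pattern below. The package versions here are placeholders, not part of the official OSM procedure; always follow the linked Kubernetes documentation for your distribution and target release.
# Illustration only: replace 1.23.x with the patch version chosen from the linked guide
sudo apt-get update && sudo apt-get install -y kubeadm=1.23.x-00
sudo kubeadm upgrade plan
sudo kubeadm upgrade apply v1.23.x
# Then upgrade kubelet and kubectl on each node and restart the kubelet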
17.2.1.7. Deploy Charmed Services
juju add-model osm k8scloud
juju deploy ch:mongodb-k8s -m osm
17.2.1.8. Upgrade OSM to 10.1.0 LTS
sudo apt-get update
sudo apt-get install -y osm-devops python3-osm-im python3-osmclient
sudo cp -R /usr/share/osm-devops/installers/docker/osm_pods /etc/osm/docker/
sudo rm /etc/osm/docker/osm_pods/mongo.yaml
kubectl -n osm apply -f /etc/osm/docker/osm_pods
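The new manifests start pods immediately. You can confirm they were applied by listing the pods (they will be stopped again in the next step):
kubectl -n osm get pods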
17.2.1.9. Stop all OSM Services
Since the databases need to be restored, stop all the OSM services once again.
kubectl -n osm scale deployment/grafana --replicas=0
kubectl -n osm scale statefulset/prometheus --replicas=0
kubectl -n osm scale statefulset/kafka --replicas=0
kubectl -n osm scale deployment/keystone --replicas=0
kubectl -n osm scale deployment/lcm --replicas=0
kubectl -n osm scale deployment/mon --replicas=0
kubectl -n osm scale deployment/nbi --replicas=0
kubectl -n osm scale deployment/ng-ui --replicas=0
kubectl -n osm scale deployment/pla --replicas=0
kubectl -n osm scale deployment/pol --replicas=0
kubectl -n osm scale deployment/ro --replicas=0
kubectl -n osm scale statefulset/zookeeper --replicas=0
17.2.1.10. Restore the Databases
mysql_pod=$(kubectl get pod -n osm | grep -i mysql | tail -1 | awk -F" " '{print $1}')
kubectl cp backup.sql.gz osm/$mysql_pod:/var/lib/mysql/backup.sql.gz
kubectl exec -n osm -it $mysql_pod -- bash -c \
'mysql -uroot -p${MYSQL_ROOT_PASSWORD} --execute "DROP DATABASE keystone"'
kubectl exec -n osm -it $mysql_pod -- bash -c \
'zcat backup.sql.gz | mysql -uroot -p${MYSQL_ROOT_PASSWORD}'
mongodb_pod=$(kubectl get pod -n osm | grep -i mongodb | grep -v operator | tail -1 | awk -F" " '{print $1}')
kubectl cp backup.archive osm/$mongodb_pod:/data/backup.archive
kubectl exec -n osm -it $mongodb_pod -- mongorestore --drop --gzip --archive=/data/backup.archive
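As an optional sanity check, not part of the official procedure, list the restored MySQL databases before running the migration steps:
# Optional: the restored databases (including keystone) should be listed
kubectl exec -n osm -it $mysql_pod -- bash -c \
'mysql -uroot -p${MYSQL_ROOT_PASSWORD} --execute "SHOW DATABASES"'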
17.2.1.11. Perform Database Migration
17.2.1.11.1. Keystone Database Updates
In the LTS release the Keystone service is exposed as osm-keystone, so update the endpoint URL stored in the database:
mysql_pod=$(kubectl get pod -n osm | grep -i mysql | tail -1 | awk -F" " '{print $1}')
kubectl exec -n osm -it $mysql_pod -- bash -c \
'mysql -uroot -p${MYSQL_ROOT_PASSWORD} -Dkeystone \
--execute "UPDATE endpoint SET url=\"http://osm-keystone:5000/v3/\" WHERE url=\"http://keystone:5000/v3/\"";'
17.2.1.11.2. OSM Application Database Updates
A helper charm has been created to assist with the database updates. Build and deploy it as follows:
sudo snap install charmcraft --classic
git clone https://github.com/charmed-osm/osm-update-db-operator.git
cd osm-update-db-operator
charmcraft build
juju switch osm
juju deploy ./osm-update-db_ubuntu-20.04-amd64.charm
Point the charm at the MongoDB instance and apply the required patch:
juju config osm-update-db mongodb-uri=mongodb://mongodb:27017
juju run-action --wait osm-update-db/0 apply-patch bug-number=1837
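As in the charmed installation procedure, the helper charm can be removed once the patch has been applied:
juju remove-application osm-update-db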
17.2.1.12. Restart all OSM Services
Now that the databases are migrated to the new version, we can restart the services.
kubectl -n osm scale deployment/grafana --replicas=1
kubectl -n osm scale statefulset/prometheus --replicas=1
kubectl -n osm scale statefulset/kafka --replicas=1
kubectl -n osm scale deployment/keystone --replicas=1
kubectl -n osm scale deployment/lcm --replicas=1
kubectl -n osm scale deployment/mon --replicas=1
kubectl -n osm scale deployment/nbi --replicas=1
kubectl -n osm scale deployment/ng-ui --replicas=1
kubectl -n osm scale deployment/pla --replicas=1
kubectl -n osm scale deployment/pol --replicas=1
kubectl -n osm scale deployment/ro --replicas=1
kubectl -n osm scale statefulset/zookeeper --replicas=1
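All pods should eventually reach the Running state, which can be confirmed by listing them:
kubectl -n osm get pods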
At this point, OSM LTS is operational and ready to use.
17.2.2. Charmed Installation Option
For the Charmed OSM installation, the procedure preserves the database content while the application is redeployed using Juju. Rather than running a long series of manual redeployment commands, we simply download and run the LTS installer to recreate OSM after removing the non-LTS software.
The following steps will upgrade OSM to the LTS version:
17.2.2.1. Stop all OSM Services
17.2.2.1.1. Version 9.1.5
juju scale-application grafana-k8s 0
juju scale-application prometheus-k8s 0
juju scale-application kafka-k8s 0
juju scale-application keystone 0
juju scale-application lcm-k8s 0
juju scale-application mon-k8s 0
juju scale-application nbi 0
juju scale-application ng-ui 0
juju scale-application pla 0
juju scale-application pol-k8s 0
juju scale-application ro-k8s 0
juju scale-application zookeeper-k8s 0
17.2.2.1.2. Version 10.0.3
juju scale-application grafana 0
juju scale-application prometheus 0
juju scale-application kafka-k8s 0
juju scale-application keystone 0
juju scale-application lcm 0
juju scale-application mon 0
juju scale-application nbi 0
juju scale-application ng-ui 0
juju scale-application pla 0
juju scale-application pol 0
juju scale-application ro 0
juju scale-application zookeeper-k8s 0
Wait for all the applications to scale to 0. The output of juju status should look similar to the following, with only the mariadb-k8s and mongodb-k8s units left.
Model Controller Cloud/Region Version SLA Timestamp
osm osm-vca microk8s/localhost 2.8.13 unsupported 20:57:44Z
App Version Status Scale Charm Store Rev OS Address Notes
grafana docker.io/ubuntu/grafana@sh... active 0 grafana jujucharms 4 kubernetes 10.152.183.45
kafka-k8s rocks.canonical.com:443/wur... active 0 kafka-k8s jujucharms 21 kubernetes 10.152.183.248
keystone keystone:10.0.3 active 0 keystone jujucharms 9 kubernetes 10.152.183.114
lcm lcm:10.0.3 active 0 lcm jujucharms 8 kubernetes 10.152.183.70
mariadb-k8s rocks.canonical.com:443/mar... active 1 mariadb-k8s jujucharms 35 kubernetes 10.152.183.177
mon mon:10.0.3 active 0 mon jujucharms 5 kubernetes 10.152.183.227
mongodb-k8s mongo:latest active 1 mongodb-k8s jujucharms 29 kubernetes 10.152.183.63
nbi nbi:10.0.3 active 0 nbi jujucharms 12 kubernetes 10.152.183.163
ng-ui ng-ui:10.0.3 active 0 ng-ui jujucharms 21 kubernetes 10.152.183.180
pla pla:10.0.3 active 0 pla jujucharms 9 kubernetes 10.152.183.7
pol pol:10.0.3 active 0 pol jujucharms 4 kubernetes 10.152.183.104
prometheus docker.io/ed1000/prometheus... active 0 prometheus jujucharms 4 kubernetes 10.152.183.120
ro ro:10.0.3 active 0 ro jujucharms 4 kubernetes 10.152.183.159
zookeeper-k8s rocks.canonical.com:443/k8s... active 0 zookeeper-k8s jujucharms 37 kubernetes 10.152.183.201
Unit Workload Agent Address Ports Message
mariadb-k8s/0* active idle 10.1.244.152 3306/TCP ready
mongodb-k8s/0* active idle 10.1.244.156 27017/TCP ready
17.2.2.2. Backup the Databases
Once all the applications show a scale of 0, proceed with the database backup.
mariadb_unit=$(juju status | grep -i mariadb | tail -1 | awk -F" " '{print $1}' | tr -d '[*]')
mariadb_pod=$(microk8s.kubectl get pod -n osm | grep -i mariadb | tail -1 | awk -F" " '{print $1}')
juju run-action $mariadb_unit backup --wait -m osm
microk8s.kubectl cp osm/$mariadb_pod:/var/lib/mysql/backup.sql.gz backup.sql.gz
mongodb_unit=$(juju status | grep -i mongodb | tail -1 | awk -F" " '{print $1}'| tr -d '[*]')
mongodb_pod=$(microk8s.kubectl get pod -n osm | grep -i mongodb | tail -1 | awk -F" " '{print $1}')
juju run-action $mongodb_unit backup --wait -m osm
microk8s.kubectl cp osm/$mongodb_pod:/data/backup.archive backup.archive
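As in the Kubernetes procedure, it is worth verifying, as an optional check, that both backup files were copied out before the model is destroyed:
# Optional: both files must exist locally before destroying the model
ls -lh backup.sql.gz backup.archive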
17.2.2.3. Remove Deployed OSM Application
juju destroy-model osm --destroy-storage -y --force
microk8s.kubectl delete namespaces osm
17.2.2.3.1. Ingress for Upgrade from 9.1.5
If this is an upgrade from 9.1.5, run the following commands now to flush any legacy ingress descriptors.
microk8s disable ingress
microk8s enable ingress
17.2.2.4. Upgrade Juju
The following commands will upgrade the OSM controller.
sudo snap refresh juju --channel 2.9/stable
juju upgrade-controller
Next, for any native or proxy charms, upgrade each model.
for model in $(juju models --format json | jq .models[].name | tr -d \"); do
    juju switch $model
    juju upgrade-model
done
17.2.2.5. Upgrade MicroK8s
sudo snap refresh microk8s --channel=1.23/stable
17.2.2.6. Install OSM 10.1.0 LTS
sudo apt remove -y --purge osm-devops
unset OSM_USERNAME
unset OSM_PASSWORD
wget https://osm-download.etsi.org/ftp/osm-10.0-ten/install_osm.sh
chmod +x ./install_osm.sh
./install_osm.sh --charmed --vca osm-vca --tag 10.1.0
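Once the installer finishes, juju status should show the LTS applications deployed in the osm model before you continue:
juju status -m osm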
17.2.2.7. Stop New OSM Services
Since the databases need to be restored, stop all the OSM services once again.
juju scale-application keystone 0
juju scale-application lcm 0
juju scale-application mon 0
juju scale-application nbi 0
juju scale-application ng-ui 0
juju scale-application pla 0
juju scale-application pol 0
juju scale-application ro 0
17.2.2.8. Restore the Databases
mariadb_unit=$(juju status | grep -i mariadb | tail -1 | awk -F" " '{print $1}' | tr -d '[*]')
mariadb_pod=$(microk8s.kubectl get pod -n osm | grep -i mariadb | tail -1 | awk -F" " '{print $1}')
microk8s.kubectl cp backup.sql.gz osm/$mariadb_pod:/var/lib/mysql/backup.sql.gz
juju run --unit $mariadb_unit -- bash -c 'mysql -p${MARIADB_ROOT_PASSWORD} --execute "DROP DATABASE keystone"'
juju run-action --wait -m osm $mariadb_unit restore
mongodb_pod=$(microk8s.kubectl get pod -n osm | grep -i mongodb | tail -1 | awk -F" " '{print $1}')
microk8s.kubectl cp backup.archive osm/$mongodb_pod:/data/backup.archive
microk8s.kubectl exec -n osm -it $mongodb_pod -- mongorestore --drop --gzip --archive=/data/backup.archive
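An optional sanity check, mirroring the restore commands above, is to list the restored databases on the MariaDB unit:
# Optional: the restored databases (including keystone) should be listed
juju run --unit $mariadb_unit -- bash -c 'mysql -p${MARIADB_ROOT_PASSWORD} --execute "SHOW DATABASES"'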
17.2.2.9. Perform Database Migration
17.2.2.9.1. Keystone Database Updates
Start the Keystone container.
juju scale-application keystone 1
Update the Keystone endpoint URL to point at the new osm-keystone service:
mariadb_unit=$(juju status | grep -i mariadb | tail -1 | awk -F" " '{print $1}' | tr -d '[*]')
juju run --unit $mariadb_unit -- bash -c 'mysql -p${MARIADB_ROOT_PASSWORD} -Dkeystone \
--execute "UPDATE endpoint SET url=\"http://osm-keystone:5000/v3/\" WHERE url=\"http://keystone:5000/v3/\"";'
Run the db-sync action from Keystone to update the schema to the installed version:
keystone_unit=$(juju status | grep -i keystone | tail -1 | awk -F" " '{print $1}' | tr -d '[*]')
juju run-action --wait -m osm $keystone_unit db-sync
17.2.2.9.2. OSM Application Database Updates
A helper charm has been created to assist with the database updates. Build and deploy it as follows:
sudo snap install charmcraft --classic
git clone https://github.com/charmed-osm/osm-update-db-operator.git
cd osm-update-db-operator
charmcraft build
juju switch osm
juju deploy ./osm-update-db_ubuntu-20.04-amd64.charm
Point the charm at the MongoDB instance and apply the required patch:
juju config osm-update-db mongodb-uri=mongodb://mongodb:27017
juju run-action --wait osm-update-db/0 apply-patch bug-number=1837
17.2.2.9.3. Upgrading From v9.0 Versions
If the upgrade is from a v9.0 release to 10.1.0 LTS, the following must also be run to update the database.
juju run-action --wait osm-update-db/0 update-db \
current-version=9 \
target-version=10 \
mongodb-only=True
The charm can now be removed.
juju remove-application osm-update-db
17.2.2.10. Restart all OSM Services
Now that the databases are migrated to the new version, we can restart the services.
juju scale-application lcm 1
juju scale-application mon 1
juju scale-application nbi 1
juju scale-application ng-ui 1
juju scale-application pla 1
juju scale-application pol 1
juju scale-application ro 1
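All applications should return to active status, which can be confirmed with:
juju status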
At this point, OSM LTS is operational and ready to use.
17.3. Testing Upgrade
17.3.1. Changing Credentials
We will change some default passwords and create some additional users to ensure RBAC still works.
osm user-update admin --password 'osm4u'
export OSM_PASSWORD=osm4u
osm project-create --domain-name default test_project_1
osm user-create test_admin_1 --projects admin --project-role-mappings 'test_project_1,project_user' --password testadmin --domain-name default
osm user-update test_admin_1 --remove-project-role 'admin,project_admin'
osm user-create test_member_1 --projects admin --project-role-mappings 'test_project_1,project_user' --password testmember --domain-name default
osm user-update test_member_1 --remove-project-role 'admin,project_admin'
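As an optional check, confirm the new accounts work by issuing a command with their credentials (the osm client accepts per-command credentials):
# Optional: the new user should be able to authenticate and list its project's services
osm --user test_member_1 --password testmember --project test_project_1 ns-list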
17.3.2. Selection of Packages
In order to test that the upgrade does not impact existing operations, we will deploy a series of network services and slices prior to the upgrade and then verify all functionality post-upgrade. What follows is the list of packages and tests to run. All packages come from https://osm.etsi.org/gitlab/vnf-onboarding/osm-packages
git clone -j4 --recursive https://osm.etsi.org/gitlab/vnf-onboarding/osm-packages.git
17.3.2.1. Basic OpenStack VNF
17.3.2.1.1. Preparation
osm package-build hackfest_basic_ns
osm package-build hackfest_basic_vnf
osm upload-package hackfest_basic_vnf
osm upload-package hackfest_basic_ns
osm ns-create --ns_name hackfest_basic-ns \
--nsd_name hackfest_basic-ns \
--vim_account etsi-vim \
--config \
'{vld: [ {name: mgmtnet, vim-network-name: osm-ext} ] }'
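After each instantiation, track progress until the network service reaches the READY state; the same check applies to all the services deployed below:
# Wait until the NS reports READY before moving on
osm ns-list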
17.3.2.2. Basic OpenStack VNF with Monitoring
17.3.2.2.1. Preparation
osm package-build hackfest_basic_metrics_vnf
osm package-build hackfest_basic_metrics_ns
osm upload-package ./hackfest_basic_metrics_vnf.tar.gz
osm upload-package ./hackfest_basic_metrics_ns.tar.gz
osm ns-create --ns_name hackfest_basic_metrics_ns \
--nsd_name hackfest_basic-ns-metrics \
--vim_account etsi-vim \
--config '{vld: [ {name: mgmtnet, vim-network-name: osm-ext} ] }'
17.3.2.3. Basic OpenStack VNF with Proxy Charms
osm package-build charm-packages/ha_proxy_charm_vnf
osm package-build charm-packages/ha_proxy_charm_ns
osm upload-package charm-packages/ha_proxy_charm_vnf.tar.gz
osm upload-package charm-packages/ha_proxy_charm_ns.tar.gz
osm ns-create --ns_name ha_proxy_charm-ns \
--nsd_name ha_proxy_charm-ns \
--vim_account etsi-vim \
--config '{vld: [ {name: mgmtnet, vim-network-name: osm-ext} ] }'
osm ns-action ha_proxy_charm-ns \
--vnf_name 1 \
--action_name touch \
--params '{filename: file-001.dat}'
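As an optional verification, the status of the day-2 action can be checked in the operation list of the service:
# The touch action should appear as a completed operation
osm ns-op-list ha_proxy_charm-ns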
17.3.2.4. Basic OpenStack VNF with Native Charms
osm package-build charm-packages/native_charm_vnf
osm package-build charm-packages/native_charm_ns
osm upload-package charm-packages/native_charm_vnf.tar.gz
osm upload-package charm-packages/native_charm_ns.tar.gz
osm ns-create --ns_name native_charm-ns \
--nsd_name native_charm-ns \
--vim_account etsi-vim \
--config '{vld: [ {name: mgmtnet, vim-network-name: osm-ext} ] }'
osm ns-action native_charm-ns \
--vnf_name 1 \
--vdu_id mgmtVM \
--action_name touch \
--params '{filename: file-001.dat}'
17.3.2.5. KNF with Helm
cat << EOF > ~/openldap-params.yaml
vld:
- name: mgmtnet
  vim-network-name: osm-ext
additionalParamsForVnf:
- member-vnf-index: openldap
  additionalParamsForKdu:
  - kdu_name: ldap
    additionalParams:
      adminPassword: osm4u
      configPassword: osm4u
      env:
        LDAP_ORGANISATION: "Example Inc."
        LDAP_DOMAIN: "example.org"
        LDAP_BACKEND: "hdb"
        LDAP_TLS: "true"
        LDAP_TLS_ENFORCE: "false"
        LDAP_REMOVE_CONFIG_AFTER_SETUP: "true"
EOF
osm package-build openldap_knf
osm package-build openldap_ns
osm upload-package ./openldap_knf.tar.gz
osm upload-package ./openldap_ns.tar.gz
osm ns-create --ns_name openldap_ns \
--nsd_name openldap_ns \
--vim_account etsi-vim \
--config_file ~/openldap-params.yaml
17.3.2.6. KNF with Juju
osm package-build squid_metrics_cnf
osm package-build squid_metrics_cnf_ns
osm upload-package ./squid_metrics_cnf.tar.gz
osm upload-package ./squid_metrics_cnf_ns.tar.gz
osm ns-create --ns_name squid_cnf_ns \
--nsd_name squid_cnf_ns \
--vim_account dimension \
--config '{vld: [ {name: mgmtnet, vim-network-name: osm-ext} ] }'
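Before starting the upgrade, confirm that every service deployed above is healthy; re-running the same commands after the upgrade verifies that existing operations were not impacted:
# All test services should be READY both before and after the upgrade
osm ns-list
osm vnf-list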