# Annex 9: LTS Upgrade

## Introduction

Starting with version 10.1.0 of OSM, every even-numbered release will receive two years of community support.  This document covers the steps needed to upgrade OSM to an LTS version; the exact procedure depends on how OSM was originally installed.

## How to Upgrade OSM 12.x to 14.y LTS

### Kubernetes Installation to 14.y

#### Back up the Databases <a name="k8s-14-db"></a>

If desired, the databases can be backed up using the following commands:

```bash
mysql_pod=$(kubectl get pod -n osm | grep -i mysql | tail -1 | awk -F" " '{print $1}')
kubectl exec -n osm -it $mysql_pod -- bash -c \
     'mysqldump -u root -p$MYSQL_ROOT_PASSWORD --single-transaction --all-databases' \
     | gzip > backup.sql.gz

mongodb_unit=$(juju status | grep -i mongodb | tail -1 | awk -F" " '{print $1}'| tr -d '[*]')
mongodb_pod=$(kubectl get pod -n osm | grep -i mongodb | grep -v operator | tail -1 | awk -F" " '{print $1}')
juju run-action $mongodb_unit backup --wait -m osm
kubectl cp osm/$mongodb_pod:/data/backup.archive backup.archive
```
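The pipelines above only parse command output: they take the first column of the last matching line and strip Juju's `*` leader marker. On a canned sample line (not real cluster state) the extraction behaves like this:

```shell
# Demonstration of the unit-name extraction used above, applied to one
# sample line of `juju status` output (illustrative values only).
sample='mongodb-k8s/0*  active    idle   10.1.244.156  27017/TCP  ready'
unit=$(echo "$sample" | tail -1 | awk -F" " '{print $1}' | tr -d '[*]')
echo "$unit"
# → mongodb-k8s/0
```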

#### Upgrade Juju <a name="k8s-14-juju"></a>

The following commands will upgrade the Juju client and the OSM controller.

```bash
sudo snap refresh juju --channel 2.9/stable
juju upgrade-controller
```

Next, for any native or proxy charms, upgrade each model.

```bash
for model in $(juju models --format json | jq .models[].name | tr -d \") ; do 
    juju switch $model
    juju upgrade-model
done
```

#### Upgrade OSM Application <a name="k8s-14-osm"></a>

First, remove the Kubernetes objects created from the old manifests. Secrets will remain in place.

```bash
kubectl -n osm delete -f /etc/osm/docker/osm_pods/nbi.yaml
kubectl -n osm delete -f /etc/osm/docker/osm_pods/lcm.yaml
kubectl -n osm delete -f /etc/osm/docker/osm_pods/ro.yaml
kubectl -n osm delete -f /etc/osm/docker/osm_pods/grafana.yaml
kubectl -n osm delete -f /etc/osm/docker/osm_pods/ca_setup.yaml
kubectl -n osm delete -f /etc/osm/docker/osm_pods/zookeeper.yaml
kubectl -n osm delete -f /etc/osm/docker/osm_pods/kafka.yaml
kubectl -n osm delete -f /etc/osm/docker/osm_pods/mon.yaml
kubectl -n osm delete -f /etc/osm/docker/osm_pods/pol.yaml
kubectl -n osm delete -f /etc/osm/docker/osm_pods/keystone.yaml
kubectl -n osm delete -f /etc/osm/docker/osm_pods/mysql.yaml
kubectl -n osm delete -f /etc/osm/docker/osm_pods/prometheus.yaml
kubectl -n osm delete -f /etc/osm/docker/osm_pods/ng-ui.yaml
```

Then, update MongoDB using the charm:

```bash
# Build
sudo snap install charmcraft --classic
git clone https://osm.etsi.org/gerrit/osm/devops
cd devops/installers/charm/osm-update-db-operator
charmcraft pack

# Deploy
juju add-model update-db k8scloud
juju model-config default-series=kubernetes
juju deploy ./osm-update-db_ubuntu-20.04-amd64.charm
juju config osm-update-db mongodb-uri="mongodb://IP:27017"
juju run-action osm-update-db/0 update-db current-version=12 target-version=14 mongodb-only=True --wait

# Destroy model
juju destroy-model update-db --force -y
```

Create a new secret to be used by the OSM helm chart:

```bash
OSM_DATABASE_COMMONKEY=$(kubectl -n osm get secret/nbi-secret --template='{{.data.OSMNBI_DATABASE_COMMONKEY | base64decode}}')
OSM_SERVICE_PASSWORD=$(kubectl -n osm get secret/nbi-secret --template='{{.data.OSMNBI_AUTHENTICATION_SERVICE_PASSWORD | base64decode}}')
OSM_MYSQL_ROOT_PASSWORD=$(kubectl -n osm get secret/keystone-secret --template='{{.data.ROOT_DB_PASSWORD | base64decode}}')
OSM_KEYSTONE_DB_PASSWORD=$(kubectl -n osm get secret/keystone-secret --template='{{.data.KEYSTONE_DB_PASSWORD | base64decode}}')
echo "OSM_DATABASE_COMMONKEY=${OSM_DATABASE_COMMONKEY}" | sudo tee -a osm.env
echo "OSM_SERVICE_PASSWORD=${OSM_SERVICE_PASSWORD}" | sudo tee -a osm.env
echo "OSM_MYSQL_ROOT_PASSWORD=${OSM_MYSQL_ROOT_PASSWORD}" | sudo tee -a osm.env
echo "OSM_KEYSTONE_DB_PASSWORD=${OSM_KEYSTONE_DB_PASSWORD}" | sudo tee -a osm.env
kubectl -n osm create secret generic osm-secret --from-env-file=osm.env
```
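The `--template` expressions rely on kubectl's built-in `base64decode` function, since Kubernetes stores secret data base64-encoded. Its effect is equivalent to piping through `base64 -d`, as this standalone sketch with a placeholder value shows:

```shell
# Secret values are stored base64-encoded; decoding reverses the encoding.
# "my-common-key" is a placeholder, not a real secret.
encoded=$(printf 'my-common-key' | base64)
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$decoded"
# → my-common-key
```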

Finally, deploy OSM with the helm chart:

```bash
git clone "https://osm.etsi.org/gerrit/osm/devops"
cd devops
OSM_VERSION="14.0.1"
git checkout $OSM_VERSION
# Add your own helm options (--set ...)
OSM_HELM_OPTS=""
# Check that there are no errors in the manifests
helm -n osm template osm ./installers/helm/osm ${OSM_HELM_OPTS}
# Deploy
helm -n osm install osm ./installers/helm/osm ${OSM_HELM_OPTS}
helm -n osm status osm
```

At this point, OSM has been upgraded.

## How to Upgrade OSM 14.x to 14.y LTS in a Helm-based Installation

### Back up the Databases <a name="k8s-14-helm-db"></a>

If desired, the databases can be backed up using the following commands:

```bash
mysql_pod=$(kubectl get pod -n osm | grep -i mysql | tail -1 | awk -F" " '{print $1}')
kubectl exec -n osm -it $mysql_pod -- bash -c \
     'mysqldump -u root -p$MYSQL_ROOT_PASSWORD --single-transaction --all-databases' \
     | gzip > backup.sql.gz

mongodb_unit=$(juju status | grep -i mongodb | tail -1 | awk -F" " '{print $1}'| tr -d '[*]')
mongodb_pod=$(kubectl get pod -n osm | grep -i mongodb | grep -v operator | tail -1 | awk -F" " '{print $1}')
juju run-action $mongodb_unit backup --wait -m osm
kubectl cp osm/$mongodb_pod:/data/backup.archive backup.archive
```

### Upgrade OSM Application <a name="k8s-14-helm-osm"></a>

```bash
git clone "https://osm.etsi.org/gerrit/osm/devops"
cd devops
DESIRED_OSM_VERSION="14.0.0"
git checkout $DESIRED_OSM_VERSION

# Get the current values.yaml
helm -n osm get values osm > myvalues.yaml
# Compare the current values with the new values.yaml, and edit myvalues.yaml as needed
# diff myvalues.yaml installers/helm/osm/values.yaml

# Add your own helm options (--set ...)
# OSM_HELM_OPTS=""
# OSM_HELM_OPTS="-f myvalues.yaml"
# Check that there are no errors in the manifests
helm -n osm template osm ./installers/helm/osm ${OSM_HELM_OPTS}
# Upgrade OSM
helm -n osm upgrade osm ./installers/helm/osm ${OSM_HELM_OPTS}
helm -n osm status osm
```

At this point, OSM has been upgraded.

## How to Upgrade OSM 10.1.1 to 12.x LTS

### Kubernetes Installation to 12.x

#### Back up the Databases <a name="k8s-12-db"></a>

If desired, the databases can be backed up using the following commands:

```bash
mysql_pod=$(kubectl get pod -n osm | grep -i mysql | tail -1 | awk -F" " '{print $1}')
kubectl exec -n osm -it $mysql_pod -- bash -c \
     'mysqldump -u root -p$MYSQL_ROOT_PASSWORD --single-transaction --all-databases' \
     | gzip > backup.sql.gz

mongodb_unit=$(juju status | grep -i mongodb | tail -1 | awk -F" " '{print $1}'| tr -d '[*]')
mongodb_pod=$(kubectl get pod -n osm | grep -i mongodb | grep -v operator | tail -1 | awk -F" " '{print $1}')
juju run-action $mongodb_unit backup --wait -m osm
kubectl cp osm/$mongodb_pod:/data/backup.archive backup.archive
```

#### Upgrade Juju <a name="k8s-12-juju"></a>

The following commands will upgrade the Juju client and the OSM controller.

```bash
sudo snap refresh juju --channel 2.9/stable
juju upgrade-controller
```

Next, for any native or proxy charms, upgrade each model.

```bash
for model in $(juju models --format json | jq .models[].name | tr -d \") ; do 
    juju switch $model
    juju upgrade-model
done
```

#### Upgrade OSM Application <a name="k8s-12-osm"></a>

```bash
OSM_VERSION="12.0.6"
for module in lcm mon nbi ng-ui pla pol ro; do
    kubectl -n osm patch deployment ${module} --patch "{\"spec\": {\"template\": {\"spec\": {\"containers\": [{\"name\": \"${module}\", \"image\": \"opensourcemano/${module}:${OSM_VERSION}\"}]}}}}"
    kubectl -n osm scale deployment ${module} --replicas=0
    kubectl -n osm scale deployment ${module} --replicas=1
done
# In order to make this change persistent after reboots,
# you will have to update the files under /etc/osm/docker/osm_pods to reflect the changes
for module in lcm mon nbi ng-ui pol ro prometheus; do
    sudo sed -i "s/opensourcemano\/${module}:.*/opensourcemano\/${module}:${OSM_VERSION}/g" /etc/osm/docker/osm_pods/${module}.yaml
done
sudo sed -i "s/opensourcemano\/pla:.*/opensourcemano\/pla:${OSM_VERSION}/g" /etc/osm/docker/osm_pods/osm_pla/pla.yaml
```
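The `sed` substitutions above only rewrite the image tag in each manifest. Applied to a single sample line (illustrative, not taken from a real file), the rewrite behaves like this:

```shell
# Effect of the image-tag rewrite on one sample manifest line.
OSM_VERSION="12.0.6"
line='        image: opensourcemano/lcm:12.0.5'
out=$(echo "$line" | sed "s/opensourcemano\/lcm:.*/opensourcemano\/lcm:${OSM_VERSION}/g")
echo "$out"
# → "        image: opensourcemano/lcm:12.0.6"
```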

At this point, OSM has been upgraded.

## Upgrade of 10.1.0 to 10.1.1 LTS

This procedure covers the upgrade of 10.1.0 to 10.1.1 LTS.  There are two installation methods, each with its own set of procedures:

* [Kubernetes Installation Option](#kubernetes-installation-to-1011)
* [Charmed Installation Option](#charmed-installation-to-1011)

### Kubernetes Installation to 10.1.1

#### Back up the Databases <a name="k8s-10-1-1-db"></a>

If desired, the databases can be backed up using the following commands:

```bash
mysql_pod=$(kubectl get pod -n osm | grep -i mysql | tail -1 | awk -F" " '{print $1}')
kubectl exec -n osm -it $mysql_pod -- bash -c \
     'mysqldump -u root -p$MYSQL_ROOT_PASSWORD --single-transaction --all-databases' \
     | gzip > backup.sql.gz

mongodb_unit=$(juju status | grep -i mongodb | tail -1 | awk -F" " '{print $1}'| tr -d '[*]')
mongodb_pod=$(kubectl get pod -n osm | grep -i mongodb | grep -v operator | tail -1 | awk -F" " '{print $1}')
juju run-action $mongodb_unit backup --wait -m osm
kubectl cp osm/$mongodb_pod:/data/backup.archive backup.archive
```

#### Upgrade Juju <a name="k8s-10-1-1-juju"></a>

The following commands will upgrade the Juju client and the OSM controller.

```bash
sudo snap refresh juju --channel 2.9/stable
juju upgrade-controller
```

Next, for any native or proxy charms, upgrade each model.

```bash
for model in $(juju models --format json | jq .models[].name | tr -d \") ; do 
    juju switch $model
    juju upgrade-model
done
```

#### Upgrade OSM Application <a name="k8s-10-1-1-osm"></a>

```bash
OSM_VERSION="10.1.1"
for module in lcm mon nbi ng-ui pla pol ro; do
    kubectl -n osm patch deployment ${module} --patch "{\"spec\": {\"template\": {\"spec\": {\"containers\": [{\"name\": \"${module}\", \"image\": \"opensourcemano/${module}:${OSM_VERSION}\"}]}}}}"
    kubectl -n osm scale deployment ${module} --replicas=0
    kubectl -n osm scale deployment ${module} --replicas=1
done
# In order to make this change persistent after reboots,
# you will have to update the files under /etc/osm/docker/osm_pods to reflect the changes
for module in lcm mon nbi ng-ui pol ro; do
    sudo sed -i "s/opensourcemano\/${module}:.*/opensourcemano\/${module}:${OSM_VERSION}/g" /etc/osm/docker/osm_pods/${module}.yaml
done
sudo sed -i "s/opensourcemano\/pla:.*/opensourcemano\/pla:${OSM_VERSION}/g" /etc/osm/docker/osm_pods/osm_pla/pla.yaml
```

At this point, OSM has been upgraded.

### Charmed Installation to 10.1.1

#### Back up the Databases <a name="charm-10-1-1-db"></a>

If desired, the databases can be backed up using the following commands:

```bash
mariadb_unit=$(juju status |  grep -i mariadb | tail -1 | awk -F" " '{print $1}' | tr -d '[*]')
mariadb_pod=$(microk8s.kubectl get pod -n osm | grep -i mariadb | tail -1 | awk -F" " '{print $1}')
juju run-action $mariadb_unit backup --wait -m osm
microk8s.kubectl cp osm/$mariadb_pod:/var/lib/mysql/backup.sql.gz backup.sql.gz

mongodb_unit=$(juju status | grep -i mongodb | tail -1 | awk -F" " '{print $1}'| tr -d '[*]')
mongodb_pod=$(microk8s.kubectl get pod -n osm | grep -i mongodb | tail -1 | awk -F" " '{print $1}')
microk8s.kubectl exec -n osm -it $mongodb_pod -- mongodump --gzip --archive=/data/backup.archive
microk8s.kubectl cp osm/$mongodb_pod:/data/backup.archive backup.archive
```

#### Upgrade Juju <a name="charm-10-1-1-juju"></a>

The following commands will upgrade the Juju client and the OSM controller.

```bash
sudo snap refresh juju --channel 2.9/stable
juju upgrade-controller
```

Next, for any native or proxy charms, upgrade each model.

```bash
for model in $(juju models --format json | jq .models[].name | tr -d \") ; do 
    juju switch $model
    juju upgrade-model
done
```

#### Upgrade OSM Application <a name="charm-10-1-1-osm"></a>

```bash
juju attach-resource -m osm lcm image=opensourcemano/lcm:10.1.1
juju attach-resource -m osm mon image=opensourcemano/mon:10.1.1
juju attach-resource -m osm nbi image=opensourcemano/nbi:10.1.1
juju attach-resource -m osm ng-ui image=opensourcemano/ng-ui:10.1.1
juju attach-resource -m osm pla image=opensourcemano/pla:10.1.1
juju attach-resource -m osm pol image=opensourcemano/pol:10.1.1
juju attach-resource -m osm ro image=opensourcemano/ro:10.1.1
```

At this point, OSM has been upgraded.

## Upgrade of Pre-LTS to 10.1.0 LTS

This procedure covers upgrades from either 9.1.5 or 10.0.3 to 10.1.0 LTS.  Where necessary, additional steps for 9.1.5 are shown.  There are two installation methods, each with its own set of procedures:

* [Kubernetes Installation Option](#kubernetes-installation-option)
* [Charmed Installation Option](#charmed-installation-option)

### Kubernetes Installation Option

The following steps are to be followed for an upgrade to LTS:

* [Stop all OSM Services](#k8s-1)
* [Backup the Databases](#k8s-2)
* [Backup existing OSM manifests](#k8s-3)
* [Remove Deployed Charmed Services](#k8s-4)
* [Upgrade Juju](#k8s-5)
* [Upgrade Kubernetes](#k8s-6)
* [Deploy Charmed Services](#k8s-7)
* [Upgrade OSM to 10.1.0 LTS](#k8s-8)
* [Stop all OSM Services](#k8s-9)
* [Restore the Databases](#k8s-10)
* [Perform Database Migration](#k8s-11)
* [Restart all OSM Services](#k8s-12)

#### Stop all OSM Services <a name="k8s-1"></a>

```bash
kubectl -n osm scale deployment/grafana --replicas=0
kubectl -n osm scale statefulset/prometheus --replicas=0
kubectl -n osm scale statefulset/kafka --replicas=0
kubectl -n osm scale deployment/keystone --replicas=0
kubectl -n osm scale deployment/lcm --replicas=0
kubectl -n osm scale deployment/mon --replicas=0
kubectl -n osm scale deployment/nbi --replicas=0
kubectl -n osm scale deployment/ng-ui --replicas=0
kubectl -n osm scale deployment/pla --replicas=0
kubectl -n osm scale deployment/pol --replicas=0
kubectl -n osm scale deployment/ro --replicas=0
kubectl -n osm scale statefulset/zookeeper --replicas=0
```

__Note:__ if PLA was not installed, you can ignore the `deployments.apps "pla" not found` error.

#### Backup the Databases <a name="k8s-2"></a>

Once all the deployments and statefulsets show 0 replicas, proceed with the database backup.

```bash
mysql_pod=$(kubectl get pod -n osm | grep -i mysql | tail -1 | awk -F" " '{print $1}')
kubectl exec -n osm -it $mysql_pod -- bash -c \
     'mysqldump -u root -p$MYSQL_ROOT_PASSWORD --single-transaction --all-databases' \
     | gzip > backup.sql.gz

mongodb_unit=$(juju status | grep -i mongodb | tail -1 | awk -F" " '{print $1}'| tr -d '[*]')
mongodb_pod=$(kubectl get pod -n osm | grep -i mongodb | grep -v operator | tail -1 | awk -F" " '{print $1}')
juju run-action $mongodb_unit backup --wait -m osm
kubectl cp osm/$mongodb_pod:/data/backup.archive backup.archive
```

#### Backup existing OSM manifests <a name="k8s-3"></a>

```bash
cp -bR /etc/osm/docker/osm_pods backup_osm_manifests
```

#### Remove Deployed Charmed Services <a name="k8s-4"></a>

```bash
juju destroy-model osm --destroy-storage -y --force
```

#### Upgrade Juju <a name="k8s-5"></a>

```bash
sudo snap refresh juju --channel 2.9/stable
juju switch osm-vca:admin/controller
juju upgrade-model
```

#### Upgrade Kubernetes <a name="k8s-6"></a>

Documentation for how to upgrade Kubernetes can be found at <https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/>

#### Deploy Charmed Services <a name="k8s-7"></a>

```bash
juju add-model osm k8scloud
juju deploy ch:mongodb-k8s -m osm
```

#### Upgrade OSM to 10.1.0 LTS <a name="k8s-8"></a>

```bash
sudo apt-get update
sudo apt-get install -y osm-devops python3-osm-im python3-osmclient
sudo cp -R /usr/share/osm-devops/installers/docker/osm_pods /etc/osm/docker/osm_pods
sudo rm /etc/osm/docker/mongo.yaml
kubectl -n osm apply -f /etc/osm/docker/osm_pods
```

#### Stop all OSM Services <a name="k8s-9"></a>

As we need to restore the database, we are going to stop all the OSM services once again.

```bash
kubectl -n osm scale deployment/grafana --replicas=0
kubectl -n osm scale statefulset/prometheus --replicas=0
kubectl -n osm scale statefulset/kafka --replicas=0
kubectl -n osm scale deployment/keystone --replicas=0
kubectl -n osm scale deployment/lcm --replicas=0
kubectl -n osm scale deployment/mon --replicas=0
kubectl -n osm scale deployment/nbi --replicas=0
kubectl -n osm scale deployment/ng-ui --replicas=0
kubectl -n osm scale deployment/pla --replicas=0
kubectl -n osm scale deployment/pol --replicas=0
kubectl -n osm scale deployment/ro --replicas=0
kubectl -n osm scale statefulset/zookeeper --replicas=0
```

#### Restore the Databases <a name="k8s-10"></a>

```bash
mysql_pod=$(kubectl get pod -n osm | grep -i mysql | tail -1 | awk -F" " '{print $1}')
kubectl cp backup.sql.gz osm/$mysql_pod:/var/lib/mysql/backup.sql.gz
kubectl exec -n osm -it $mysql_pod -- bash -c \
    'mysql -uroot -p${MYSQL_ROOT_PASSWORD} --execute "DROP DATABASE keystone"'
kubectl exec -n osm -it $mysql_pod -- bash -c \
    'zcat backup.sql.gz | mysql -uroot -p${MYSQL_ROOT_PASSWORD}'

mongodb_pod=$(kubectl get pod -n osm | grep -i mongodb | grep -v operator | tail -1 | awk -F" " '{print $1}')
kubectl cp backup.archive osm/$mongodb_pod:/data/backup.archive
kubectl exec -n osm -it $mongodb_pod -- mongorestore --drop --gzip --archive=/data/backup.archive
```

#### Perform Database Migration <a name="k8s-11"></a>

##### Keystone Database Updates

Update the database:

```bash
mysql_pod=$(kubectl get pod -n osm | grep -i mysql | tail -1 | awk -F" " '{print $1}')
kubectl exec -n osm -it $mysql_pod -- bash -c \
    'mysql -uroot -p${MYSQL_ROOT_PASSWORD} -Dkeystone \
 --execute "UPDATE endpoint SET url=\"http://osm-keystone:5000/v3/\" WHERE url=\"http://keystone:5000/v3/\"";'
```

##### OSM Application Database Updates

A helper charm has been created to assist in the database updates. Build, deploy and run the action as follows:

```bash
sudo snap install charmcraft --classic

git clone https://github.com/charmed-osm/osm-update-db-operator.git
cd osm-update-db-operator
charmcraft build

juju switch osm
juju deploy ./osm-update-db_ubuntu-20.04-amd64.charm
```

Run the upgrade as follows.

```bash
juju config osm-update-db mongodb-uri=mongodb://mongodb:27017

juju run-action --wait osm-update-db/0 apply-patch bug-number=1837
```

#### Restart all OSM Services <a name="k8s-12"></a>

Now that the databases are migrated to the new version, we can restart the services.

```bash
kubectl -n osm scale deployment/grafana --replicas=1
kubectl -n osm scale statefulset/prometheus --replicas=1
kubectl -n osm scale statefulset/kafka --replicas=1
kubectl -n osm scale deployment/keystone --replicas=1
kubectl -n osm scale deployment/lcm --replicas=1
kubectl -n osm scale deployment/mon --replicas=1
kubectl -n osm scale deployment/nbi --replicas=1
kubectl -n osm scale deployment/ng-ui --replicas=1
kubectl -n osm scale deployment/pla --replicas=1
kubectl -n osm scale deployment/pol --replicas=1
kubectl -n osm scale deployment/ro --replicas=1
kubectl -n osm scale statefulset/zookeeper --replicas=1
```

At this point, OSM LTS is operational and ready to use.

### Charmed Installation Option

For the Charmed OSM installation, the procedure is to preserve the database content while redeploying the application using Juju.  Rather than scripting a series of commands to manually redeploy, we can simply download and run the LTS installer to recreate OSM after removing the non-LTS software.

The following steps will upgrade OSM to the LTS version:

* [Stop all OSM Services](#charm-1)
* [Backup the Databases](#charm-2)
* [Remove Deployed OSM Application](#charm-3)
* [Upgrade Juju](#charm-4)
* [Upgrade MicroK8s](#charm-5)
* [Install OSM 10.1.0 LTS](#charm-6)
* [Stop New OSM Services](#charm-7)
* [Restore the Databases](#charm-8)
* [Perform Database Migration](#charm-9)
* [Restart all OSM Services](#charm-10)

#### Stop all OSM Services <a name="charm-1"></a>

##### Version 9.1.5

```bash
juju scale-application grafana-k8s 0
juju scale-application prometheus-k8s 0
juju scale-application kafka-k8s 0
juju scale-application keystone 0
juju scale-application lcm-k8s 0
juju scale-application mon-k8s 0
juju scale-application nbi 0
juju scale-application ng-ui 0
juju scale-application pla 0
juju scale-application pol-k8s 0
juju scale-application ro-k8s 0
juju scale-application zookeeper-k8s 0
```

##### Version 10.0.3

```bash
juju scale-application grafana 0
juju scale-application prometheus 0
juju scale-application kafka-k8s 0
juju scale-application keystone 0
juju scale-application lcm 0
juju scale-application mon 0
juju scale-application nbi 0
juju scale-application ng-ui 0
juju scale-application pla 0
juju scale-application pol 0
juju scale-application ro 0
juju scale-application zookeeper-k8s 0
```

Wait for all the applications to scale to 0.  The output of `juju status` should look similar to the following, with only `mariadb-k8s` and `mongodb-k8s` units left.

```
Model  Controller  Cloud/Region        Version  SLA          Timestamp
osm    osm-vca     microk8s/localhost  2.8.13   unsupported  20:57:44Z

App            Version                         Status  Scale  Charm          Store       Rev  OS          Address         Notes
grafana        docker.io/ubuntu/grafana@sh...  active      0  grafana        jujucharms    4  kubernetes  10.152.183.45    
kafka-k8s      rocks.canonical.com:443/wur...  active      0  kafka-k8s      jujucharms   21  kubernetes  10.152.183.248   
keystone       keystone:10.0.3                 active      0  keystone       jujucharms    9  kubernetes  10.152.183.114   
lcm            lcm:10.0.3                      active      0  lcm            jujucharms    8  kubernetes  10.152.183.70    
mariadb-k8s    rocks.canonical.com:443/mar...  active      1  mariadb-k8s    jujucharms   35  kubernetes  10.152.183.177   
mon            mon:10.0.3                      active      0  mon            jujucharms    5  kubernetes  10.152.183.227   
mongodb-k8s    mongo:latest                    active      1  mongodb-k8s    jujucharms   29  kubernetes  10.152.183.63    
nbi            nbi:10.0.3                      active      0  nbi            jujucharms   12  kubernetes  10.152.183.163   
ng-ui          ng-ui:10.0.3                    active      0  ng-ui          jujucharms   21  kubernetes  10.152.183.180   
pla            pla:10.0.3                      active      0  pla            jujucharms    9  kubernetes  10.152.183.7     
pol            pol:10.0.3                      active      0  pol            jujucharms    4  kubernetes  10.152.183.104   
prometheus     docker.io/ed1000/prometheus...  active      0  prometheus     jujucharms    4  kubernetes  10.152.183.120   
ro             ro:10.0.3                       active      0  ro             jujucharms    4  kubernetes  10.152.183.159   
zookeeper-k8s  rocks.canonical.com:443/k8s...  active      0  zookeeper-k8s  jujucharms   37  kubernetes  10.152.183.201   

Unit            Workload  Agent  Address       Ports      Message
mariadb-k8s/0*  active    idle   10.1.244.152  3306/TCP   ready
mongodb-k8s/0*  active    idle   10.1.244.156  27017/TCP  ready
```

#### Backup the Databases <a name="charm-2"></a>

Once all the units show a scale of 0, proceed with the database backup.

```bash
mariadb_unit=$(juju status |  grep -i mariadb | tail -1 | awk -F" " '{print $1}' | tr -d '[*]')
mariadb_pod=$(microk8s.kubectl get pod -n osm | grep -i mariadb | tail -1 | awk -F" " '{print $1}')
juju run-action $mariadb_unit backup --wait -m osm
microk8s.kubectl cp osm/$mariadb_pod:/var/lib/mysql/backup.sql.gz backup.sql.gz

mongodb_unit=$(juju status | grep -i mongodb | tail -1 | awk -F" " '{print $1}'| tr -d '[*]')
mongodb_pod=$(microk8s.kubectl get pod -n osm | grep -i mongodb | tail -1 | awk -F" " '{print $1}')
juju run-action $mongodb_unit backup --wait -m osm
microk8s.kubectl cp osm/$mongodb_pod:/data/backup.archive backup.archive
```

#### Remove Deployed OSM Application <a name="charm-3"></a>

```bash
juju destroy-model osm --destroy-storage -y --force
microk8s.kubectl delete namespaces osm
```

##### Ingress for Upgrade from 9.1.5

If this is an upgrade from 9.1.5, the following commands should be run at this time to flush any legacy ingress descriptors.

```bash
microk8s disable ingress
microk8s enable ingress
```

#### Upgrade Juju <a name="charm-4"></a>

The following commands will upgrade the Juju client and the OSM controller.

```bash
sudo snap refresh juju --channel 2.9/stable
juju upgrade-controller
```

Next, for any native or proxy charms, upgrade each model.

```bash
for model in $(juju models --format json | jq .models[].name | tr -d \") ; do 
    juju switch $model
    juju upgrade-model
done
```

#### Upgrade MicroK8s <a name="charm-5"></a>

```bash
sudo snap refresh microk8s --channel=1.23/stable
```

#### Install OSM 10.1.0 LTS <a name="charm-6"></a>

```bash
sudo apt remove -y --purge osm-devops
unset OSM_USERNAME
unset OSM_PASSWORD

wget https://osm-download.etsi.org/ftp/osm-10.0-ten/install_osm.sh
chmod +x ./install_osm.sh
./install_osm.sh --charmed --vca osm-vca --tag 10.1.0
```

#### Stop New OSM Services <a name="charm-7"></a>

As we need to restore the database, we are going to stop all the OSM services once again.

```bash
juju scale-application keystone 0
juju scale-application lcm 0
juju scale-application mon 0
juju scale-application nbi 0
juju scale-application ng-ui 0
juju scale-application pla 0
juju scale-application pol 0
juju scale-application ro 0
```

#### Restore the Databases <a name="charm-8"></a>

```bash
mariadb_unit=$(juju status | grep -i mariadb | tail -1 | awk -F" " '{print $1}' | tr -d '[*]')
mariadb_pod=$(microk8s.kubectl get pod -n osm | grep -i mariadb | tail -1 | awk -F" " '{print $1}')
microk8s.kubectl cp backup.sql.gz osm/$mariadb_pod:/var/lib/mysql/backup.sql.gz
juju run --unit $mariadb_unit -- bash -c 'mysql -p${MARIADB_ROOT_PASSWORD} --execute "DROP DATABASE keystone"'
juju run-action --wait -m osm $mariadb_unit restore

mongodb_pod=$(microk8s.kubectl get pod -n osm | grep -i mongodb | tail -1 | awk -F" " '{print $1}')
microk8s.kubectl cp backup.archive osm/$mongodb_pod:/data/backup.archive 
microk8s.kubectl exec -n osm -it $mongodb_pod -- mongorestore --drop --gzip --archive=/data/backup.archive
```

#### Perform Database Migration <a name="charm-9"></a>

##### Keystone Database Updates

Start the Keystone container.

```bash
juju scale-application keystone 1
```

1. Keystone URL Endpoint update
```bash
mariadb_unit=$(juju status | grep -i mariadb | tail -1 | awk -F" " '{print $1}' | tr -d '[*]')
juju run --unit $mariadb_unit -- bash -c 'mysql -p${MARIADB_ROOT_PASSWORD} -Dkeystone \
 --execute "UPDATE endpoint SET url=\"http://osm-keystone:5000/v3/\" WHERE url=\"http://keystone:5000/v3/\"";'
```

2. DB Sync command from Keystone to update schema to installed version

```bash
keystone_unit=$(juju status | grep -i keystone | tail -1 | awk -F" " '{print $1}' | tr -d '[*]')
juju run-action --wait -m osm $keystone_unit db-sync
```

##### OSM Application Database Updates

A helper charm has been created to assist in the database updates. Build, deploy and run the action as follows:

```bash
sudo snap install charmcraft --classic

git clone https://github.com/charmed-osm/osm-update-db-operator.git
cd osm-update-db-operator
charmcraft build

juju switch osm
juju deploy ./osm-update-db_ubuntu-20.04-amd64.charm
```

Run the upgrade as follows.

```bash
juju config osm-update-db mongodb-uri=mongodb://mongodb:27017

juju run-action --wait osm-update-db/0 apply-patch bug-number=1837
```

##### Upgrading From v9.0 Versions

If the upgrade is from v9.0 to 10.1.0 LTS, the following must also be run to update the database.

```bash
juju run-action --wait osm-update-db/0 update-db \
     current-version=9 \
     target-version=10 \
     mongodb-only=True
```

The charm can now be removed.

```bash
juju remove-application osm-update-db
```

#### Restart all OSM Services <a name="charm-10"></a>

Now that the databases are migrated to the new version, we can restart the services.

```bash
juju scale-application lcm 1
juju scale-application mon 1
juju scale-application nbi 1
juju scale-application ng-ui 1
juju scale-application pla 1
juju scale-application pol 1
juju scale-application ro 1
```

At this point, OSM LTS is operational and ready to use.

## Testing Upgrade

### Changing Credentials

We will change some default passwords, and create some additional users to ensure RBAC still works.

```bash
osm user-update admin --password 'osm4u'
export OSM_PASSWORD=osm4u
```

```bash
osm project-create --domain-name default test_project_1
```

```bash
osm user-create test_admin_1 --projects admin --project-role-mappings 'test_project_1,project_user' --password testadmin --domain-name default
osm user-update test_admin_1 --remove-project-role 'admin,project_admin'
osm user-create test_member_1 --projects admin --project-role-mappings 'test_project_1,project_user' --password testmember --domain-name default
osm user-update test_member_1 --remove-project-role 'admin,project_admin'
```

### Running Robot Test Suite

#### Install Docker (Charmed OSM Only)

Docker is not installed with charmed OSM, so it needs to be installed first.

```bash
sudo snap install docker
sudo addgroup --system docker
sudo adduser $USER docker
newgrp docker
sudo snap disable docker
sudo snap enable docker
sudo iptables -I DOCKER-USER -j ACCEPT
```

#### Prepare to Run Tests

You should already have the `OSM_HOSTNAME` and `OSM_PASSWORD` environment variables set, based on the output from the installer.

Source your `openstack.rc` file to set all of your OpenStack environment variables.

##### Set Environment Variables (K8s Installation)

```bash
export OSM_HOSTNAME=127.0.0.1
export PROMETHEUS_HOSTNAME=127.0.0.1
export PROMETHEUS_PORT=9091
export JUJU_PASSWORD=`juju gui 2>&1 | grep password | awk '{print $2}'`
export HOSTIP=127.0.1.1
```

##### Set Environment Variables (Charmed OSM)

```bash
export OSM_HOSTNAME=$(juju config -m osm nbi site_url | sed "s/http.*\?:\/\///"):443
export PROMETHEUS_HOSTNAME=$(juju config -m osm prometheus site_url | sed "s/http.*\?:\/\///")
export PROMETHEUS_PORT=80
export JUJU_PASSWORD=`juju gui 2>&1 | grep password | awk '{print $2}'`
export HOSTIP=$(echo $PROMETHEUS_HOSTNAME | sed "s/prometheus.//" | sed "s/.nip.io//")
```
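The `sed` filters above strip the URL scheme and then peel off the `prometheus.`/`.nip.io` affixes to recover the host IP. On a sample URL (illustrative, not a real deployment) they behave as follows:

```shell
# Scheme strip, then prefix/suffix strip, on a placeholder nip.io URL.
host=$(echo "https://prometheus.172.21.1.5.nip.io" | sed "s/http.*\?:\/\///")
ip=$(echo "$host" | sed "s/prometheus.//" | sed "s/.nip.io//")
echo "$host"   # → prometheus.172.21.1.5.nip.io
echo "$ip"     # → 172.21.1.5
```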

#### Create robot-systest.cfg

```bash
cat << EOF > robot-systest.cfg
VIM_TARGET=osm
VIM_MGMT_NET=osm-ext
ENVIRONMENTS_FOLDER=environments
PACKAGES_FOLDER=/robot-systest/osm-packages
OS_CLOUD=openstack
LC_ALL=C.UTF-8
LANG=C.UTF-8
EOF
for line in `env | grep "^OS_" | sort` ; do echo $line >> robot-systest.cfg ; done
```
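The final loop appends every `OS_*` variable from your environment to the config file. Run against a cleared environment with placeholder credentials (demo values only), it yields:

```shell
# Behaviour of the OS_* append loop in a controlled environment.
out=$(env -i OS_USERNAME=demo OS_PASSWORD=secret sh -c \
    'for line in `env | grep "^OS_" | sort` ; do echo $line ; done')
echo "$out"
# → OS_PASSWORD=secret
# → OS_USERNAME=demo
```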

#### Create robot.etc.hosts (Charmed OSM)

```bash
cat << EOF > robot.etc.hosts
127.0.0.1           localhost
${HOSTIP}      prometheus.${HOSTIP}.nip.io nbi.${HOSTIP}.nip.io
EOF
```

#### Create clouds.yaml

```bash
cat << EOF > clouds.yaml
clouds:
  openstack:
    auth:
      auth_url: $OS_AUTH_URL
      project_name: $OS_PROJECT_NAME
      username: $OS_USERNAME
      password: $OS_PASSWORD
      user_domain_name: $OS_USER_DOMAIN_NAME
      project_domain_name: $OS_PROJECT_DOMAIN_NAME
EOF
```

#### Create VIM

Create a VIM called `osm`.  Be sure to add your specific configuration options, such as floating IP addresses or untrusted SSL certificates.

```bash
osm vim-create --name osm --user "$OS_USERNAME" --password "$OS_PASSWORD" \
               --auth_url "$OS_AUTH_URL" --tenant "$OS_USERNAME" --account_type openstack \
               --config='{management_network_name: osm-ext}'
```

Provide a copy of your Kubernetes cluster configuration file to the Robot container.

```bash
export KUBECONFIG=/path/to/kubeconfig.yaml
```

#### Start Robot Tests Container

To keep a copy of the reports, create a directory for the container to store them.

```bash
mkdir reports
```

```bash
docker run -ti --entrypoint /bin/bash \
        --env OSM_HOSTNAME=${OSM_HOSTNAME} \
        --env PROMETHEUS_HOSTNAME=${PROMETHEUS_HOSTNAME} \
        --env PROMETHEUS_PORT=${PROMETHEUS_PORT} \
        --env JUJU_PASSWORD=${JUJU_PASSWORD} \
        --env HOSTIP=${HOSTIP} \
        --env OSM_PASSWORD=osm4u \
        --env-file robot-systest.cfg \
        -v "$(pwd)/robot.etc.hosts":/etc/hosts \
        -v "${KUBECONFIG}":/root/.kube/config \
        -v "$(pwd)/clouds.yaml":/etc/openstack/clouds.yaml \
        -v "$(pwd)/reports":/robot-systest/reports \
        opensourcemano/tests:10
```

#### Run Tests Pre-Upgrade

From the Robot tests container command line, execute the prepare step.

```bash
./run_test.sh -t prepare
```

After the run has completed successfully, there should be 5 network services present
in OSM:

```
+---------------------------------+--------------------------------------+---------------------+----------+-------------------+---------------+
| ns instance name                | id                                   | date                | ns state | current operation | error details |
+---------------------------------+--------------------------------------+---------------------+----------+-------------------+---------------+
| basic_07_secure_key_management  | 1a8621ea-d51d-434c-90e0-e153701729dd | 2022-08-03T17:51:07 | READY    | IDLE (None)       | N/A           |
| basic_09_manual_scaling_test    | 7611ce54-bff2-480e-94fb-5a8b0549a6c4 | 2022-08-03T17:54:32 | READY    | IDLE (None)       | N/A           |
| basic_21                        | 8090754c-f49c-4891-a7c0-1e5750c7980b | 2022-08-03T17:55:30 | READY    | IDLE (None)       | N/A           |
| k8s_06-nopasswd_k8s_proxy_charm | a5eb22d7-4a4f-4615-ad44-9f8957cf243c | 2022-08-03T18:17:01 | READY    | IDLE (None)       | N/A           |
| ldap                            | be0f6e33-e4d9-463d-92c6-cc27f2f1d5eb | 2022-08-03T18:51:51 | READY    | IDLE (None)       | N/A           |
+---------------------------------+--------------------------------------+---------------------+----------+-------------------+---------------+
```

Run the verify step before upgrading:
```bash
./run_test.sh -t verify
```

#### Run Tests Post-Upgrade

After completing the upgrade procedure, execute the verify step again to ensure the upgrade was successful:

```bash
./run_test.sh -t verify
```

This will only verify services that were already deployed in the prepare step.