NBI_IP=$(juju status --format yaml | yq r - applications.nbi-k8s.address)
echo "export OSM_HOSTNAME=$NBI_IP" >> ~/.bashrc
```
#### Scaling OSM Components
##### Scaling OSM Charms
Scaling, i.e. increasing the number of container replicas each OSM component runs, helps both with distributing workloads (for some components) and with high availability, keeping the service up if one of the replicas fails.
In the High Availability scenario, the charms automatically apply anti-affinity rules to distribute the component pods across different Kubernetes worker nodes. Therefore, for _real_ High Availability, a Kubernetes cluster with multiple worker nodes is needed.
To scale a charm the following command needs to be executed:
```bash
juju scale-application lcm-k8s 3 # 3 being the desired number of replicas
```
If the application is already scaled to the number stated in the scale-application command, nothing will change. If the number is lower than the current replica count, the application will scale down.
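As a sketch, the effect of scaling can be verified with `juju status` and by listing the pods. The `osm` namespace and the `microk8s kubectl` invocation are assumptions of this example; adjust them to your deployment:

```bash
# Scale the LCM charm to 3 units.
juju scale-application lcm-k8s 3

# Check that three lcm-k8s units are reported by Juju.
juju status lcm-k8s

# "osm" is an assumed namespace; use the one of your deployment.
# The wide output shows the node each replica landed on,
# which makes the anti-affinity distribution visible.
microk8s kubectl get pods -n osm -o wide | grep lcm
```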
##### Scaling OSM VCA
For more detailed information about setting up a highly available controller please consult the official [documentation](https://juju.is/docs/controller-high-availability).
Nevertheless, one way of manually setting up an HA Juju Controller which will act as the VCA is demonstrated below.
First of all, three machines with the latest Ubuntu LTS and at least 4GB of RAM each are needed. The machine from which the controller will be created needs SSH access to those three machines.
Afterwards, the manual cloud will be added by executing `juju add-cloud` and following its interactive prompts.
Once the add-cloud command has finished, the following commands will be executed to create the controller, add the remaining machines and enable HA.
```bash
juju bootstrap my-manual manual-controller
juju switch controller
juju add-machine ssh:ubuntu@<ip-second-machine>
juju add-machine ssh:ubuntu@<ip-third-machine>
juju enable-ha --to 1,2
```
Once `juju status` shows all machines in a "started" state, the HA controller is initialized.
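The HA state can be checked from the controller model; a minimal sketch (the exact columns printed vary between Juju versions):

```bash
# List controllers; once enable-ha settles, the HA column
# should report three controller machines.
juju controllers

# Detailed view of the controller model: all three machines
# should reach the "started" state.
juju status -m controller
```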
To install Charmed OSM with the HA controller the following argument will be passed:
```bash
./install_osm.sh --charmed --vca manual-controller
```
### Installation from source
TODO: Under elaboration.
Or to use an old version of MON:
```bash
./install_osm.sh -c k8s -m MON -b tags/v6.0.3
```
### How to upgrade OSM when using the Charmed Installation
#### Upgrading only a specific component
There are two main components which can be upgraded: the OSM charms and the OSM images.
In general, for a complete upgrade of an OSM component, the following steps are recommended in this order:
1. Upgrade the OSM Charm to the latest version.
2. Upgrade the OSM version by passing the latest OSM image.
##### Upgrading OSM Charms
Upgrading the OSM charms can enable new operational features or add support for a new OSM version that the previous charm revision was not able to handle. A new charm revision is typically compatible with several OSM versions, so the charm can often be upgraded without updating the Docker image version.
To update a charm to its latest stable version the following command will be executed:
```bash
juju upgrade-charm ui-k8s --channel stable
```
There is also the possibility to upgrade to a specific revision with the following command:
```bash
juju upgrade-charm ui-k8s --revision 43 # 43 being the revision number of the new charm version.
```
##### Upgrading OSM version
OSM is distributed as Docker images; therefore, when a new version is released, a new tag is created for it. To update to this new tag, the following command needs to be executed:
```bash
juju config lcm-k8s image=opensourcemano/lcm:8.0.1 # 8.0.1 being the new version tag
```
This will restart the pod with the new image version.
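The upgrade can be verified by reading back the charm configuration and watching the pod being recreated. The `osm` namespace is an assumption of this sketch:

```bash
# Confirm the image tag the charm is now configured with.
juju config lcm-k8s image

# Watch the LCM pod terminate and restart with the new image.
# "osm" is an assumed namespace; adjust to your deployment.
microk8s kubectl get pods -n osm -w | grep lcm
```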
##### Upgrading from source version
First, the MicroK8s built-in registry has to be enabled; for more information, please consult the official [documentation](https://microk8s.io/docs/registry-built-in).
```bash
microk8s enable registry
```
Afterwards, the Docker image of the module that needs to be upgraded has to be built and pushed. It is important that the tag starts with `localhost:32000`, so that the image lands in the MicroK8s registry.
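A minimal sketch of this flow, assuming the LCM module is rebuilt from a local OSM source checkout (the build path and the `dev` tag are illustrative, not prescribed):

```bash
# Build the module image from its source directory (path assumed)
# and tag it for the MicroK8s registry at localhost:32000.
docker build -t localhost:32000/opensourcemano/lcm:dev ./LCM

# Push the image into the MicroK8s registry.
docker push localhost:32000/opensourcemano/lcm:dev

# Point the charm at the locally built image;
# this restarts the pod with the new image.
juju config lcm-k8s image=localhost:32000/opensourcemano/lcm:dev
```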