Commit 99fcc474 authored by garciadeblas

Merge branch 'feature8170' into 'master'

Feature 8170: Helm-based OSM installation

See merge request !133
In order to install OSM, you will need, at least, a single server or VM with the following requirements:
- Ubuntu20.04 cloud image (64-bit variant required) (<https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img>)
- Ubuntu20.04 server image (64-bit variant required) (<http://releases.ubuntu.com/20.04/>)
**Reminder**: Although OSM could work with other base images, the only recommended images are the ones above, since they are the images used in our daily tests.
In addition, you will need a Virtual Infrastructure Manager available so that OSM can orchestrate workloads on it. The following figure illustrates OSM interaction with VIMs and the VNFs to be deployed there:
- OSM communicates with the VIM for the deployment of VNFs.
You can include optional components in your installation by adding the following flags:
- **Kubernetes Monitor:** `--k8s_monitor` (install an add-on to monitor the Kubernetes cluster and OSM running on top of it, through prometheus and grafana)
- **PLA:** `--pla` (install the PLA module for placement support)
- **Old Service Assurance:** `--old-sa` (install the old Service Assurance framework with MON and POL; do not install Airflow and Pushgateway)
Example:
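A representative invocation enabling both optional components above might look like this (illustrative only; adjust the flags to your needs):
```bash
./install_osm.sh --k8s_monitor --pla
```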
The OSM installer includes a larger number of install options. The general usage is the following:
```bash
./install_osm.sh [OPTIONS]
```
With no options, it will install OSM from binaries.
**Options:**
```text
-R <release>: use specified release for osm binaries (deb packages, lxd images, ...)
-u <repo base>: use specified repository url for osm packages
-k <repo key>: use specified repository public key url
--k8s_monitor: install the OSM kubernetes monitoring with prometheus and grafana
-m <MODULE>: install OSM but only rebuild the specified docker images (NG-UI, NBI, LCM, RO, MON, POL, KAFKA, MONGO, PROMETHEUS, PROMETHEUS-CADVISOR, KEYSTONE-DB, NONE)
-o <ADDON>: do not install OSM, but ONLY one of the addons (vimemu, elk_stack) (assumes OSM is already installed)
--showopts: print chosen options and exit (only for debugging)
--uninstall: uninstall OSM: remove the containers and delete NAT rules
-D <devops path> use local devops installation path
-h / --help: prints help
```
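As an illustration of these options, the following sketch shows a run that rebuilds only one module's Docker image (the module name is chosen arbitrarily from the list above) and a run that removes an existing installation:
```bash
# Install OSM but rebuild only the NBI image locally
./install_osm.sh -m NBI
# Uninstall OSM (removes the containers and deletes NAT rules)
./install_osm.sh --uninstall
```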
## Other installation methods
### How to install Charmed OSM
Some cases where the Charmed installer might be more suitable:
```bash
juju config keystone mysql_port="<MySQL Port>"
juju config keystone mysql_root_password="<MySQL Root Password>"
```
### How to install OSM in a remote OpenStack infrastructure
OSM can be installed on a remote OpenStack infrastructure using the standard OSM installer. The installation is based on Ansible, which takes care of configuring the OpenStack infrastructure before deploying a VM with OSM. The Ansible playbook performs the following steps:
1. Creation of a new VM flavour (4 CPUs, 8 GB RAM, 40 GB disk)
2. Download of Ubuntu 20.04 image and upload it to OpenStack Glance
3. Generation of a new SSH private and public key pair
4. Setup of a new security group to allow external SSH and HTTP access
5. Deployment of a clean Ubuntu 20.04 VM and installation of OSM to it
**Important note:** The OpenStack user needs Admin rights or similar to perform those operations.
The installation can be performed with the following command:
```bash
wget https://osm-download.etsi.org/ftp/osm-13.0-thirteen/install_osm.sh
chmod +x install_osm.sh
./install_osm.sh -O <openrc file/cloud name> -N <OpenStack public network name/ID> [--volume] [OSM installer options]
```
The options `-O` and `-N` are mandatory. The `-O` option accepts either a path to an OpenStack openrc file or a cloud name. If a cloud name is used, the `clouds.yaml` file should be under `~/.config/openstack/` or `/etc/openstack/`. More information about the `clouds.yaml` file can be found [here](https://docs.openstack.org/python-openstackclient/latest/configuration/index.html).
The `-N` option requires an external network name or ID. This is the OpenStack network to which the OSM VM will be attached.
The `--volume` option instructs OpenStack to create an external volume attached to the VM instead of using a local one, which may be suitable for production environments. It requires OpenStack Cinder to be configured on the OpenStack infrastructure.
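For example, assuming a cloud named `mycloud` defined in `clouds.yaml` and an external network called `public` (both names are illustrative), the installation could be launched as:
```bash
./install_osm.sh -O mycloud -N public --volume
```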
Some OSM installer options are supported, in particular the following: `-r -k -u -R -t`. Other options will be supported in the future.
## How to upgrade components from daily images in standard deployment
**Upgrading a specific OSM component without upgrading the others accordingly may lead to potential inconsistencies.** Unless you are really sure about what you are doing, please use this procedure with caution.
One of the most common reasons for this type of upgrade is using your own cloned repository of a module for development purposes.
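The per-module procedures below follow the same general pattern: build a Docker image from your cloned repository, point the corresponding Kubernetes deployment at it, and restart the pod. The following is only a generic sketch of that pattern; the deployment/container names and the image tag are placeholders, not the exact commands of the official procedure:
```bash
# <module> is the deployment/container name (e.g. ro, lcm); <tag> is a locally built image tag
docker build -t opensourcemano/<module>:<tag> .
kubectl -n osm set image deployment/<module> <module>=opensourcemano/<module>:<tag>
kubectl -n osm scale deployment <module> --replicas=0
kubectl -n osm scale deployment <module> --replicas=1
```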
### Upgrading RO in K8s
This involves upgrading the RO module (`ro`):
```bash
kubectl -n osm scale deployment ro --replicas=1
# kubectl -n osm apply -f /etc/osm/docker/osm_pods/ro.yaml
```
### Upgrading LCM in K8s
```bash
git clone https://osm.etsi.org/gerrit/osm/LCM
kubectl -n osm scale deployment lcm --replicas=1
# kubectl -n osm apply -f /etc/osm/docker/osm_pods/lcm.yaml
```
### Upgrading MON in K8s
```bash
git clone https://osm.etsi.org/gerrit/osm/MON
kubectl -n osm scale deployment mon --replicas=1
# kubectl -n osm apply -f /etc/osm/docker/osm_pods/mon.yaml
```
### Upgrading POL in K8s
```bash
git clone https://osm.etsi.org/gerrit/osm/POL
kubectl -n osm scale deployment pol --replicas=1
# kubectl -n osm apply -f /etc/osm/docker/osm_pods/pol.yaml
```
### Upgrading NBI in K8s
```bash
git clone https://osm.etsi.org/gerrit/osm/NBI
kubectl -n osm scale deployment nbi --replicas=1
# kubectl -n osm apply -f /etc/osm/docker/osm_pods/nbi.yaml
```
### Upgrading Next Generation UI in K8s
```bash
git clone https://osm.etsi.org/gerrit/osm/NG-UI
```
To use the OSM client against a remote OSM host, point the `OSM_HOSTNAME` environment variable at it:
```bash
export OSM_HOSTNAME="10.80.80.5"
```
For additional options, run `osm --help` and check the OSM client reference guide [here](10-osm-client-commands-reference.md).
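As a quick, purely illustrative smoke test of the client once `OSM_HOSTNAME` is set:
```bash
osm ns-list
osm vim-list
```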
## Reference. Helm-based OSM installation
With Release FOURTEEN, the deployment of OSM services (LCM, RO, NBI, NG-UI, etc.) in the community installer is done with a Helm chart.
When OSM is installed, the following steps are performed behind the scenes:
- Installation of a local LXD server (required for LXD-based proxy charms)
- Installation of Docker CE
- Installation and initialization of a local Kubernetes cluster, including a CNI (Flannel), container storage (OpenEBS) and a load balancer (MetalLB)
- Installation of the Juju client
- Bootstrapping of a Juju controller to allow the deployment of Execution Environments in the local LXD server and the local Kubernetes cluster
- Deployment of OSM:
  - Deployment of the MongoDB charm with Juju
  - Deployment of OSM services with the OSM Helm chart, which includes the following components:
    - NBI
    - LCM
    - RO
    - NG-UI
    - MON
    - Webhook translator
    - Others (MySQL, Keystone, Zookeeper, Kafka, Prometheus, Grafana)
  - Deployment of NG-SA (new Service Assurance), which includes the Airflow, Prometheus Alertmanager and Prometheus Pushgateway Helm charts
- Installation of the OSM client
Once OSM is installed, the following helm releases can be seen in namespace `osm`:
```bash
$ helm -n osm ls
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
airflow osm 1 2023-06-07 15:08:48.613039036 +0000 UTC deployed airflow-1.9.0 2.5.3
alertmanager osm 1 2023-06-07 15:10:23.448079581 +0000 UTC deployed alertmanager-0.22.0 v0.24.0
osm osm 1 2023-06-07 15:08:43.421836769 +0000 UTC deployed osm-0.0.1 14
pushgateway osm 1 2023-06-07 15:10:19.507304535 +0000 UTC deployed prometheus-pushgateway-1.18.2 1.4.2
```
The helm release `osm` corresponds to the OSM Helm chart.
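To inspect the values that the `osm` release was deployed with, you can run, for instance:
```bash
helm -n osm get values osm
```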
The following pods can be seen in the `osm` namespace:
```bash
$ kubectl -n osm get pods
NAME READY STATUS RESTARTS AGE
airflow-postgresql-0 1/1 Running 2 (2d20h ago) 5d22h
airflow-redis-0 1/1 Running 1 (2d20h ago) 5d22h
airflow-scheduler-5f7dbdc4f5-54x9c 2/2 Running 4 (2d20h ago) 5d22h
airflow-statsd-d8c8f886c-vt7xq 1/1 Running 4 (2d20h ago) 5d22h
airflow-triggerer-6668bd965c-n6snh 2/2 Running 3 (2d20h ago) 5d22h
airflow-webserver-5fb957dcf7-bcgzw 1/1 Running 1 (2d20h ago) 5d22h
airflow-worker-0 2/2 Running 2 (2d20h ago) 5d22h
alertmanager-0 1/1 Running 6 (2d20h ago) 5d22h
grafana-69c9c55dfb-jtwfl 2/2 Running 2 (2d20h ago) 5d22h
kafka-0 1/1 Running 1 (2d20h ago) 5d22h
keystone-7dbf4b7796-rqwg4 1/1 Running 1 (2d20h ago) 5d22h
lcm-6d97b88675-4m77j 1/1 Running 2 (2d20h ago) 5d22h
modeloperator-7dd8bf6c79-wx49m 1/1 Running 1 (2d20h ago) 5d22h
mon-ccb965d54-drvmr 1/1 Running 1 (2d20h ago) 5d22h
mongodb-k8s-0 1/1 Running 3 (2d20h ago) 5d22h
mongodb-k8s-operator-0 1/1 Running 1 (2d20h ago) 3d11h
mysql-0 1/1 Running 1 (2d20h ago) 5d22h
nbi-64b4f6ffd9-jtbf5 1/1 Running 5 (2d20h ago) 5d22h
ngui-78d9bd66dc-xbff6 1/1 Running 3 (2d19h ago) 5d22h
prometheus-0 2/2 Running 4 (2d20h ago) 5d22h
pushgateway-prometheus-pushgateway-6f9dc6cb4d-4sp4x 1/1 Running 1 (2d20h ago) 5d22h
ro-86cf9d4b55-z6ls7 1/1 Running 5 (2d20h ago) 5d22h
webhook-translator-57b75fc797-j9s7w 1/1 Running 1 (2d20h ago) 5d22h
zookeeper-0 1/1 Running 1 (2d20h ago) 5d22h
```
## How to install OSM using OSM helm chart
Assuming that you have a Kubernetes cluster and a Juju controller bootstrapped on it, it is possible to deploy OSM on top of that cluster.
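If the Juju controller does not exist yet, it could be bootstrapped along these lines (a sketch; it assumes your current kubectl context points at the target cluster and uses the cloud and controller names expected by the snippets below):
```bash
# Register the current kubectl context as a Juju cloud named "k8scloud"
juju add-k8s k8scloud --client
# Bootstrap a controller named "osm" on that cloud
juju bootstrap k8scloud osm
```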
### Deploy MongoDB charm
```bash
# The following instructions assume that juju client is installed,
# the cloud "k8scloud" pointing to the K8s cluster has been added to juju,
# and a Juju controller "osm" has been bootstrapped there
juju add-model osm k8scloud
juju deploy ch:mongodb-k8s -m osm
```
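The charm takes a while to settle; its progress can be checked with, for example:
```bash
juju status -m osm
```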
### Deploy OSM helm chart
Get Juju host, secret, public key and CA certificate:
```bash
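# parse_juju_password: extracts the password of the given Juju controller
# from ~/.local/share/juju/accounts.yaml (helper adapted from the OSM installer)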
function parse_juju_password {
[ -z "${DEBUG_INSTALL}" ] || DEBUG beginning of function
password_file="${HOME}/.local/share/juju/accounts.yaml"
local controller_name=$1
local s='[[:space:]]*' w='[a-zA-Z0-9_-]*' fs=$(echo @|tr @ '\034')
sed -ne "s|^\($s\):|\1|" \
-e "s|^\($s\)\($w\)$s:$s[\"']\(.*\)[\"']$s\$|\1$fs\2$fs\3|p" \
-e "s|^\($s\)\($w\)$s:$s\(.*\)$s\$|\1$fs\2$fs\3|p" $password_file |
awk -F$fs -v controller=$controller_name '{
indent = length($1)/2;
vname[indent] = $2;
for (i in vname) {if (i > indent) {delete vname[i]}}
if (length($3) > 0) {
vn=""; for (i=0; i<indent; i++) {vn=(vn)(vname[i])("_")}
if (match(vn,controller) && match($2,"password")) {
printf("%s",$3);
}
}
}'
[ -z "${DEBUG_INSTALL}" ] || DEBUG end of function
}
OSM_VCA_HOST=$(juju show-controller osm |grep api-endpoints|awk -F\' '{print $2}'|awk -F\: '{print $1}')
OSM_VCA_SECRET=$(parse_juju_password osm)
OSM_VCA_CACERT=$(juju controllers --format json | jq -r --arg controller osm '.controllers[$controller]["ca-cert"]' | base64 | tr -d \\n)
OSM_VCA_PUBKEY=$(cat $HOME/.local/share/juju/ssh/juju_id_rsa.pub)
```
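If you want, you can quickly verify that the four variables were populated, for instance:
```bash
for v in OSM_VCA_HOST OSM_VCA_SECRET OSM_VCA_CACERT OSM_VCA_PUBKEY; do
  [ -n "${!v}" ] || echo "WARNING: $v is empty"
done
```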
Deploy OSM Helm chart:
```bash
git clone "https://osm.etsi.org/gerrit/osm/devops"
cd devops
# Optionally check out a specific version
# DESIRED_OSM_VERSION="14.0.0"
#git checkout $DESIRED_OSM_VERSION
# Check default values
helm -n osm show values installers/helm/osm
# Generate helm values to be passed with -f osm-values.yaml
sudo bash -c "cat << EOF > osm-values.yaml
vca:
pubkey: \"${OSM_VCA_PUBKEY}\"
EOF"
# Customize your own helm options (--set ...)
OSM_HELM_OPTS=""
OSM_HELM_OPTS="${OSM_HELM_OPTS} --set vca.host=${OSM_VCA_HOST}"
OSM_HELM_OPTS="${OSM_HELM_OPTS} --set vca.secret=${OSM_VCA_SECRET}"
OSM_HELM_OPTS="${OSM_HELM_OPTS} --set vca.cacert=${OSM_VCA_CACERT}"
# Specify the repository base and the version that you want for the docker images
# OSM_HELM_OPTS="${OSM_HELM_OPTS} --set global.image.repositoryBase=opensourcemano"
# OSM_HELM_OPTS="${OSM_HELM_OPTS} --set global.image.tag=14"
# Check that there are no errors
helm -n osm template osm installers/helm/osm -f osm-values.yaml ${OSM_HELM_OPTS}
# Deploy OSM helm chart
helm -n osm install osm installers/helm/osm -f osm-values.yaml ${OSM_HELM_OPTS}
helm -n osm status osm
```
### Deploy NG-SA
```bash
AIRFLOW_HELM_VERSION=1.9.0
PROMPUSHGW_HELM_VERSION=1.18.2
ALERTMANAGER_HELM_VERSION=0.22.0
# Update installers/helm/values/airflow-values.yaml if needed
# update defaultAirflowTag if needed (e.g. "14")
# update defaultAirflowRepository if needed (e.g. "opensourcemano/airflow")
# Deploy Apache Airflow
helm repo add apache-airflow https://airflow.apache.org
helm repo update
# Check that there are no errors and deploy
helm -n osm template airflow apache-airflow/airflow -f installers/helm/values/airflow-values.yaml --version ${AIRFLOW_HELM_VERSION}
helm -n osm install airflow apache-airflow/airflow -f installers/helm/values/airflow-values.yaml --version ${AIRFLOW_HELM_VERSION}
# Deploy Prometheus Pushgateway and Alert Manager
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
# Check that there are no errors and deploy
helm -n osm template pushgateway prometheus-community/prometheus-pushgateway --version ${PROMPUSHGW_HELM_VERSION}
helm -n osm install pushgateway prometheus-community/prometheus-pushgateway --version ${PROMPUSHGW_HELM_VERSION}
helm -n osm template alertmanager prometheus-community/alertmanager -f installers/helm/values/alertmanager-values.yaml --version ${ALERTMANAGER_HELM_VERSION}
helm -n osm install alertmanager prometheus-community/alertmanager -f installers/helm/values/alertmanager-values.yaml --version ${ALERTMANAGER_HELM_VERSION}
```
### Check the status of helm releases and pods
Run the following commands to check the status of the Helm releases and the pods. All pods should have started properly:
```bash
helm -n osm ls
kubectl -n osm get pods
```
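To quickly spot pods that are not running, something like the following can also help (illustrative):
```bash
kubectl -n osm get pods --field-selector=status.phase!=Running,status.phase!=Succeeded
```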