Commit cf2ac543 authored by garciadeblas

Update all docs for Release SIXTEEN



This change updates all references of the installer, IM, figures,
etc. to point to release SIXTEEN links.

In addition, the change includes a summary of the release and minor
changes regarding OSM installation and deprecated components.

Signed-off-by: garciadeblas <gerardo.garciadeblas@telefonica.com>
parent c2a0511a
@@ -2,23 +2,24 @@

Open Source MANO (OSM) is an ETSI-hosted open source community delivering a production-quality MANO stack for NFV, capable of consuming openly published information models, available to everyone, suitable for all VNFs, operationally significant and VIM-independent. OSM is aligned to NFV ISG information models while providing first-hand feedback based on its implementation experience.

OSM follows a regular cadence of two releases per year, alternating between Long Term Support (LTS) releases such as Release SIXTEEN or Release FOURTEEN (2 years support) and Standard releases (6 months support).

This release, **Release SIXTEEN**, brings a revolution in OSM’s functionality, positioning OSM as a generalized cloud-native orchestrator for infrastructure, platforms and services, which significantly extends its former scope. Full cloud-native management of Kubernetes clusters in public clouds, together with the applications or software units running on them, is now possible with Release SIXTEEN. Every operation related to cluster management (creation, upgrading, scaling, deletion) or to the applications running on the clusters is reflected in Git repositories, following the GitOps model. This has been possible thanks to a major change in the internal architecture of OSM.

OSM Release SIXTEEN includes significant improvements in the following key areas:

- __Cloud-native operations in OSM__. Release SIXTEEN incorporates the provision of a management cluster for remote cloud-native management of infrastructure and applications. In addition, ad-hoc Git repositories are automatically created during OSM installation to support Continuous Deployment operations. Release SIXTEEN has added the logic to define and execute workflows in a declarative way for all the new operations. These workflows are responsible for committing the appropriate intents into the Git repositories, and the OSM management cluster is in charge of synchronizing this state into different target clouds, thanks to new capabilities added to VIM/Cloud account registration.
- __Management of Kubernetes clusters__. This release includes full life-cycle management of Kubernetes clusters from OSM. Azure, AWS and GCP PaaS-based clusters can be created, upgraded, scaled and deleted from OSM. In addition, applications can be deployed and fully managed (upgraded, deleted) in those clusters. Finally, Release SIXTEEN incorporates the concept of “profiles” as a way of grouping sets of software units to be deployed to a distributed fleet of Kubernetes clusters, as in Edge scenarios.
- __Enhanced operational capabilities__. Release SIXTEEN incorporates a whole new set of operational capabilities for Network Services (NS), including the following: NS config templates as first-class citizens in OSM, support for deletion of multiple NS instances, new options to reset or reuse values when upgrading CNFs, the addition of labels to Kubernetes objects created by OSM, and improved integration of the vertical scaling and KPI-based scaling of VNFs introduced in previous releases.
- __Security enhancements__. Release SIXTEEN incorporates important enhancements such as password recovery based on One-Time Password (OTP) and improved audit logs for password-related and NS life-cycle operations.
- __OSM installation__. This release introduces relevant changes in the Kubernetes cluster where OSM is installed, such as the support of K3s as the default Kubernetes distro for OSM installation, and the inclusion of an ingress controller to expose all web services in OSM more conveniently, including the Graphical User Interface and the North-Bound Interface. In addition, the OSM helm chart introduced in previous releases continues evolving in Release SIXTEEN to include upstream helm charts for Prometheus and Grafana. By using upstream helm charts, those components become much easier to maintain and upgrade, while benefiting from upstream built-in features such as replication and persistent storage. The OSM helm chart has also been adapted to work with pre-existing MongoDB deployments, instead of the default one shipped with OSM, which enables alternative deployments in production. Finally, other dependencies such as Zookeeper have been removed, making use of the built-in replication mechanism in Kafka.

![Release SIXTEEN - Feature summary](assets/rel16-features.png)

For a comprehensive overview of OSM functionalities, you can also refer to the [OSM White Papers and Release Notes of previous releases](https://osm.etsi.org/wikipub/index.php/Release_notes_and_whitepapers).

<!--
For the full list of new features, please refer to the [Release Notes](https://osm-download.etsi.org/ftp/osm-16.0-sixteen/OSM_Release_SIXTEEN_Release_Notes.pdf).
-->

**OSM in Practice**:
@@ -54,7 +55,7 @@ All you need to run OSM is a single server or VM with the following requirements
Once you have prepared the host with the previous requirements, all you need to do is:

```bash
wget https://osm-download.etsi.org/ftp/osm-16.0-sixteen/install_osm.sh
chmod +x install_osm.sh
./install_osm.sh
```
@@ -64,28 +65,13 @@ This will install a standalone Kubernetes on a single host, and OSM on top of it
**TIP:** In order to facilitate potential troubleshooting later, it is recommended to save the full log of your installation process:

```bash
wget https://osm-download.etsi.org/ftp/osm-16.0-sixteen/install_osm.sh
chmod +x install_osm.sh
./install_osm.sh 2>&1 | tee osm_install_log.txt
```

You will be asked if you want to proceed with the installation and configuration of LXD, Juju, Docker CE and the initialization of a local Kubernetes cluster, as pre-requirements. Please answer "y".

For other special installation options, please refer to the [specific chapter on installation options](03-installing-osm.md).

### Checking your installation
@@ -29,7 +29,7 @@ Hence, it is assumed that:
Once you have one host available with the characteristics above, you just need to trigger the OSM installation by:

```bash
wget https://osm-download.etsi.org/ftp/osm-16.0-sixteen/install_osm.sh
chmod +x install_osm.sh
./install_osm.sh
```
@@ -39,33 +39,38 @@ This will install a standalone Kubernetes on a single host, and OSM on top of it
**TIP:** In order to facilitate potential troubleshooting later, it is recommended to save the full log of your installation process:

```bash
wget https://osm-download.etsi.org/ftp/osm-16.0-sixteen/install_osm.sh
chmod +x install_osm.sh
./install_osm.sh 2>&1 | tee osm_install_log.txt
```

You will be asked if you want to proceed with the installation and configuration of LXD, Juju, Docker CE and the initialization of a local Kubernetes cluster, as pre-requirements. Please answer `y`.

### How to control installation of management and auxiliary cluster

Release SIXTEEN includes new operations and workflows for cluster management (creation, upgrading, scaling, deletion). Every operation related to the cluster or the applications running on them is reflected in Git repositories, following the GitOps model.

For that reason, Release SIXTEEN incorporates the provision of a management cluster for remote cloud-native management of infrastructure and applications. In addition, ad-hoc Git repositories are automatically created during OSM installation to support Continuous Deployment operations.

By default, the management and auxiliary clusters are provisioned in the same Kubernetes cluster where OSM is deployed. However, it is possible to control the provisioning of those clusters with the following options in the installer:

```bash
--no-mgmt-cluster: Do not provision a mgmt cluster for cloud-native gitops operations in OSM (NEW in Release SIXTEEN) (by default, it is installed)
--no-aux-cluster: Do not provision an auxiliary cluster for cloud-native gitops operations in OSM (NEW in Release SIXTEEN) (by default, it is installed)
-M <KUBECONFIG_FILE>: Kubeconfig of an existing cluster to be used as mgmt cluster instead of OSM cluster
-G <KUBECONFIG_FILE>: Kubeconfig of an existing cluster to be used as auxiliary cluster instead of OSM cluster
```
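For instance, to install OSM without provisioning either cluster, or to reuse pre-existing clusters through their kubeconfig files, the installer could be invoked as follows (the kubeconfig paths below are illustrative examples, not fixed locations):

```shell
# Skip the provisioning of both the management and the auxiliary cluster
./install_osm.sh --no-mgmt-cluster --no-aux-cluster

# Reuse pre-existing clusters as mgmt and auxiliary cluster
# (kubeconfig paths are hypothetical examples)
./install_osm.sh -M ~/.kube/mgmt-cluster.yaml -G ~/.kube/aux-cluster.yaml
```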

### How to install optional components

There are some components that were part of OSM and maintained by the project in previous releases, but are no longer maintained. Note that those components are provided as-is and can be optionally added to your OSM installation. If you are interested in contributing and leading their evolution, please contact the TSC.

You can include those optional components in your installation by adding the following flags:

- **Kubernetes Monitor:** `--k8s_monitor` (install an add-on to monitor the Kubernetes cluster and OSM running on top of it, through prometheus and grafana)
- **PLA:** `--pla` (install the PLA module for placement support)
- **Old Service Assurance:** `--old-sa` (install old Service Assurance framework with MON and POL; do not install Airflow and Pushgateway)
- **Juju** and **LXD**: `--juju --lxd` (install Juju controller, required for VNFs that use Execution Environments based on Juju charms)

Example:

@@ -248,18 +253,6 @@ The **OSM Client** is a client library and a command-line tool (based on Python)

Although the OSM Client is always available in the host machine after installation, it is sometimes convenient to install an OSM Client in another location, different from the OSM host, so that access to the OSM services does not require OS-level/SSH credentials. Thus, in those cases where you have an OSM already installed in a remote server, you can still operate it from your local computer using the OSM Client.

### How to install standalone OSM Client using debian packages

In order to install the OSM Client in your local Linux machine, you should follow this procedure:
@@ -267,8 +260,8 @@ In order to install the OSM Client in your local Linux machine, you should follo
```bash
# Clean the previous repos that might exist
sudo sed -i "/osm-download.etsi.org/d" /etc/apt/sources.list
wget -qO - https://osm-download.etsi.org/repository/osm/debian/ReleaseSIXTEEN/OSM%20ETSI%20Release%20Key.gpg | sudo apt-key add -
sudo add-apt-repository -y "deb [arch=amd64] https://osm-download.etsi.org/repository/osm/debian/ReleaseSIXTEEN stable devops IM osmclient"
sudo apt-get update
sudo apt-get install -y python3-pip
sudo apt-get install -y python3-osm-im python3-osmclient
```
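Once the client is installed, you can point it at the remote OSM host and run a quick sanity check. The snippet below follows standard OSM client usage; the IP address is a placeholder you must replace with your own:

```shell
# Placeholder address: replace with the IP or hostname of your OSM host
export OSM_HOSTNAME=192.0.2.10

# Basic sanity checks against the remote NBI
osm version    # shows client and server versions if the NBI is reachable
osm ns-list    # lists network service instances (possibly an empty list)
```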
@@ -488,7 +481,7 @@ helm -n osm install alertmanager prometheus-community/alertmanager -f installers

### Check the status of helm releases and pods

Run the following commands to check the status of helm releases and the pods. All pods should have started properly.

```bash
helm -n osm ls
```
@@ -558,36 +551,46 @@ echo "export OSM_PASSWORD=$NBI_PASSWORD" >> ~/.bashrc
The following instructions show how to retrieve usernames and passwords of OSM modules in Charmed installations.

##### OSM UI

The following commands return the username and password for logging into OSM UI as administrator:

```bash
juju config -m osm keystone admin-username
juju config -m osm keystone admin-password
```

If you also need the exposed IP address for the UI, you can issue the following command:

```bash
microk8s.kubectl describe -n osm ingress | grep -E "ui.*\.io" | xargs
```

##### Grafana

The following commands return the username and password for logging into Grafana dashboard:

```bash
juju config -m osm mon grafana-user
juju config -m osm mon grafana-password
```

##### Prometheus

The following commands return the username and password for logging into Prometheus dashboard:

```bash
juju config -m osm prometheus web_config_username
juju config -m osm prometheus web_config_password
```
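As an illustration, the retrieved Prometheus credentials can be used directly, for example to query the Prometheus HTTP API with basic authentication. The endpoint below is an assumption and depends on how Prometheus is exposed in your deployment:

```shell
# Store the retrieved credentials (same juju commands as above) in variables
PROM_USER=$(juju config -m osm prometheus web_config_username)
PROM_PASS=$(juju config -m osm prometheus web_config_password)

# Hypothetical endpoint: adjust host/port to your exposed Prometheus service
curl -u "$PROM_USER:$PROM_PASS" "http://<prometheus-host>:9090/-/healthy"
```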

##### Databases

**Disclaimer**: manual access to the databases is usually not required and we strongly advise against performing operations on them directly. However, if there is a particular reason to access and/or manually modify them, here you can find the steps to retrieve the login data for Keystone and MariaDB.

###### Keystone

The following commands return the username and password for logging into Keystone:

```bash
juju config -m osm keystone admin-username
juju config -m osm keystone admin-password
juju config -m osm keystone service-password
```

###### MariaDB

The following commands return the username and password for logging into MariaDB:

```bash
juju config -m osm mariadb root_password
juju config -m osm mariadb password
```
@@ -688,8 +693,8 @@ juju config keystone mysql_root_password="<MySQL Root Password>"

OSM can be installed on a remote OpenStack infrastructure with the OSM standard installer. It is based on Ansible and takes care of configuring the OpenStack infrastructure before deploying a VM with OSM. The Ansible playbook performs the following steps:

1. Creation of a new VM flavour (4 CPUs, 16 GB RAM, 80 GB disk)
2. Download of Ubuntu 22.04 image and upload it to OpenStack Glance
3. Generation of a new SSH private and public key pair
4. Setup of a new security group to allow external SSH and HTTP access
5. Deployment of a clean Ubuntu 22.04 VM and installation of OSM to it
@@ -699,7 +704,7 @@ OSM could be installed to a remote OpenStack infrastructure from the OSM standar
The installation can be performed with the following command:

```bash
wget https://osm-download.etsi.org/ftp/osm-16.0-sixteen/install_osm.sh
chmod +x install_osm.sh
./install_osm.sh -O <openrc file/cloud name> -N <OpenStack public network name/ID> [--volume] [OSM installer options]
```
@@ -359,7 +359,7 @@ osm ns-create --ns_name hf-basic --nsd_name hackfest_basic-ns --vim_account open

### Specify IP profile information and IP for a NS VLD <a name="specify-ip-profile-information-and-ip-for-a-ns-vld">

In a generic way, the mapping can be specified in the following way, where `datanet` is the name of the network in the NS descriptor, ip-profile is where you have to fill the associated parameters from the data model ( [NS data model](https://osm-download.etsi.org/repository/osm/debian/ReleaseSIXTEEN/docs/osm-im/osm_im_trees/etsi-nfv-nsd.html) ), and vnfd-connection-point-ref is the reference to the connection point:

```yaml
--config '{vld: [ {name: datanet, ip-profile: {...}, vnfd-connection-point-ref: {...} } ] }'
```
@@ -373,7 +373,7 @@ osm ns-create --ns_name hf-multivdu --nsd_name hackfest_multivdu-ns --vim_accoun

### Specify IP profile information for an internal VLD of a VNF

In this scenario, the mapping can be specified in the following way, where `vnf1` is the member vnf index of the constituent vnf in the NS descriptor, `internal` is the name of internal-vld in the VNF descriptor and ip-profile is where you have to fill the associated parameters from the data model ([VNF data model](https://osm-download.etsi.org/repository/osm/debian/ReleaseSIXTEEN/docs/osm-im/osm_im_trees/etsi-nfv-vnfd.html)):

```yaml
--config '{vnf: [ {member-vnf-index: vnf1, internal-vld: [ {name: internal, ip-profile: {...} } ] } ] }'
```
@@ -390,7 +390,7 @@ osm ns-create --ns_name hf-multivdu --nsd_name hackfest_multivdu-ns --vim_accoun

#### Specify IP address for an interface

In this scenario, the mapping can be specified in the following way, where `vnf1` is the member vnf index of the constituent vnf in the NS descriptor, `internal` is the name of internal-vld in the VNF descriptor, ip-profile is where you have to fill the associated parameters from the data model ([VNF data model](https://osm-download.etsi.org/repository/osm/debian/ReleaseSIXTEEN/docs/osm-im/osm_im_trees/etsi-nfv-vnfd.html)), `id1` is the internal-connection-point id and `a.b.c.d` is the IP that you have to specify for this scenario:

```yaml
--config '{vnf: [ {member-vnf-index: vnf1, internal-vld: [ {name: internal, ip-profile: {...}, internal-connection-point: [{id-ref: id1, ip-address: "a.b.c.d"}] } ] } ] }'
```
@@ -471,7 +471,7 @@ You can try it using one of the examples of the hackfest (**packages: [hackfest_
```bash
osm ns-create --ns_name hf-basic --nsd_name hackfest_basic-ns
```

With the previous hackfest example, according to [VNF data model](https://osm-download.etsi.org/repository/osm/debian/ReleaseSIXTEEN/docs/osm-im/osm_im_trees/etsi-nfv-vnfd.html) you will add in VNF Descriptor:

```yaml
     volumes:
```
@@ -1388,7 +1388,7 @@ The diagram below shows the `slice_basic_ns` and `slice_basic_middle_ns`, its co

### Creating a Network Slice Template (NST)

Based on the OSM information model for Network slice templates [here](http://osm-download.etsi.org/repository/osm/debian/ReleaseSIXTEEN/docs/osm-im/osm_im_trees/nst.html) it is possible to start writing the YAML descriptor for the NST.

```yaml
nst:
```
@@ -1059,7 +1059,7 @@ CEF:Version|Device Vendor|Device Product|Device Version|Name|Severity|Extension
A sample CEF log for User login would be as below:

```text
CEF:0|OSM|OSM|16.0.0|User Login|1|msg=User Logged In, Project\=admin Outcome\=Success suser=admin
```

### Audit Logs Prefixes