Commit 7f80af27 authored by garciadeblas's avatar garciadeblas

Merge branch 'rel11' into 'master'

Update docs for Release ELEVEN

See merge request !77
parents fe46ba64 3e5db0c8

Open Source MANO (OSM) is an ETSI-hosted open source community delivering a production-quality MANO stack for NFV, capable of consuming openly published information models, available to everyone, suitable for all VNFs, operationally significant and VIM-independent. OSM is aligned to NFV ISG information models while providing first-hand feedback based on its implementation experience.

Release ELEVEN brings a number of improvements over previous releases. For the full list of new features, please refer to the [Release Notes](https://osm.etsi.org/wikipub/images/0/01/OSM_Release_ELEVEN_-_Release_Notes.pdf). For a comprehensive overview of OSM functionalities, you can also refer to the [OSM White Papers and Release Notes of previous releases](https://osm.etsi.org/wikipub/index.php/Release_notes_and_whitepapers).

**OSM in Practice**:

All you need to run OSM is a single server or VM with the following requirements:

- MINIMUM: 2 CPUs, 6 GB RAM, 40 GB disk and a single interface with Internet access
- RECOMMENDED: 2 CPUs, 8 GB RAM, 40 GB disk and a single interface with Internet access
- Base image: [Ubuntu20.04 (64-bit variant required)](http://releases.ubuntu.com/20.04/)
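A quick way to confirm a host meets these numbers before launching the installer is a small pre-flight script. This is a hedged sketch using standard Linux tooling only (no OSM components involved); the thresholds mirror the MINIMUM row above:

```bash
# Pre-flight check against the minimum requirements (Linux, GNU coreutils).
cpus=$(nproc)
mem_gb=$(( $(awk '/MemTotal/ {print $2}' /proc/meminfo) / 1024 / 1024 ))
disk_gb=$(df -BG --output=avail / | tail -1 | tr -dc '0-9')
echo "CPUs: $cpus, RAM: ${mem_gb} GB, free disk on /: ${disk_gb} GB"
if [ "$cpus" -lt 2 ] || [ "$mem_gb" -lt 6 ]; then
  echo "WARNING: below the minimum requirements for OSM"
fi
```

Note that integer division makes an 8 GB host report 7 GB; the check is indicative, not exact.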

Once you have prepared the host with the previous requirements, all you need to do is:

```bash
wget https://osm-download.etsi.org/ftp/osm-11.0-eleven/install_osm.sh
chmod +x install_osm.sh
./install_osm.sh
```
This will install a standalone Kubernetes on a single host, and OSM on top of it.
**TIP:** To facilitate potential troubleshooting later, it is recommended to save the full log of your installation process:

```bash
wget https://osm-download.etsi.org/ftp/osm-11.0-eleven/install_osm.sh
chmod +x install_osm.sh
./install_osm.sh 2>&1 | tee osm_install_log.txt
```
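The `2>&1 | tee` idiom above merges the installer's stderr into its stdout and duplicates the combined stream to a file while still printing it on screen. A minimal self-contained demonstration:

```bash
# '2>&1' redirects stderr into stdout; 'tee' writes the merged stream to a
# file AND passes it through to the terminal, so nothing is lost either way.
{ echo "to stdout"; echo "to stderr" 1>&2; } 2>&1 | tee demo_install_log.txt
```

Both lines end up in `demo_install_log.txt` and on screen, which is exactly what you want from an installation log.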
You can include optional components in your installation by adding the following options:

- **Kubernetes Monitor:** `--k8s_monitor` (install an add-on to monitor the Kubernetes cluster and OSM running on top of it, through Prometheus and Grafana)
- **PLA:** `--pla` (install the PLA module for placement support)

Example:

./install_osm.sh --k8s_monitor
```
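Once an installation with `--k8s_monitor` finishes, the monitoring add-on should be visible as its own set of pods. A hedged check (the namespace name `monitoring` is an assumption here; verify the actual name with `kubectl get ns`):

```bash
# List the monitoring add-on pods if the namespace is reachable; otherwise
# degrade gracefully so the script is harmless on hosts without a cluster.
if command -v kubectl >/dev/null 2>&1 && kubectl get ns monitoring >/dev/null 2>&1; then
  kubectl -n monitoring get pods
else
  echo "monitoring namespace not reachable from this shell"
fi
```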

#### Other installation options

- An additional installation option is the [Charmed Installation](03-installing-osm.md#charmed-installation) which will install OSM on Kubernetes with charms.
```bash
kubectl logs -n osm deployments/nbi           # for NBI
kubectl logs -n osm deployments/pol           # for POL
kubectl logs -n osm deployments/ro            # for RO
kubectl logs -n osm deployments/keystone      # for Keystone
kubectl logs -n osm deployments/grafana       # for Grafana
kubectl logs -n osm statefulset/kafka         # for Kafka
kubectl logs -n osm statefulset/mongodb-k8s   # for MongoDB
kubectl logs -n osm statefulset/mysql         # for MySQL
kubectl logs -n osm statefulset/prometheus    # for Prometheus
kubectl logs -n osm statefulset/zookeeper     # for Zookeeper
```
OSM client, a python-based CLI for OSM, will be available as well in the host machine:

```bash
osm --help
```
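Beyond `--help`, a couple of read-only commands give a quick sanity check of the client and its connection to OSM. A guarded sketch (`osm version` and `osm vim-list` are standard osmclient verbs; the guard keeps the script harmless where the client is not installed):

```bash
# Read-only sanity checks against the local OSM installation.
if command -v osm >/dev/null 2>&1; then
  osm version    # client and server versions
  osm vim-list   # VIM accounts registered in this OSM
else
  echo "osm client not found in PATH"
fi
```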

## Adding VIM accounts

Before proceeding, make sure that you have a site with a VIM configured to run with OSM. Different kinds of VIMs are currently supported by OSM:

- **OpenStack.** Check the following link to learn how to configure OpenStack to be used by OSM: [Openstack configuration](04-vim-setup.md#openstack)
- **Microsoft Azure.** Check the following link to learn how to configure Microsoft Azure to be used by OSM: [Configuring Microsoft Azure for OSM](04-vim-setup.md#microsoft-azure)
- **Google Cloud Platform (GCP).** Check the following link to learn how to configure Google Cloud Platform to be used by OSM: [Configuring Google Cloud Platform for OSM](04-vim-setup.md#google-cloud-platform)
- **Amazon Web Services (AWS).** Check the following link to learn how to configure AWS (EC2 and Virtual Private Cloud) to be used by OSM: [Configuring AWS for OSM](04-vim-setup.md#amazon-web-services-aws)
- **VMware vCloud Director.** Check the following link to learn how to configure VMware VCD to be used by OSM: [Configuring VMware vCloud Director](04-vim-setup.md#vmwares-vcloud-director)
- **OpenVIM.** Check the following link to know how to install and use openvim for OSM: [OpenVIM installation](13-openvim-installation.md). OpenVIM must run in 'normal' mode (not test or fake) to have real virtual machines reachable from OSM.
- **Eclipse fog05.** Check the following link to learn how to configure Eclipse fog05 to be used by OSM: [Configuring Eclipse fog05 for OSM](04-vim-setup.md#fog05)

OSM can manage external SDN controllers to perform the dataplane underlay network connectivity on behalf of the VIM. See [EPA and SDN assist](04-vim-setup.md#advanced-setups-for-high-io-performance-epa-and-sdn-assist)

### Adding VIMs through OSM client

#### Openstack site

Execute the following command, using the appropriate parameters (e.g. site name: "openstack-site", IP address: 10.10.10.11, VIM tenant: "admin", user: "admin", password: "userpwd")
```bash
osm vim-create --name openstack-site --user admin --password userpwd \
    --auth_url http://10.10.10.11:5000/v3 --tenant admin --account_type openstack
```

For advanced options, please refer to the [OpenStack Setup Guide](04-vim-setup.md#openstack).

#### VMware Integrated Openstack (VIO) site

Execute the following command, using the appropriate parameters (e.g. site name: "openstack-site-vio4", IP address: 10.10.10.12, VIM tenant: `admin`, user: `admin`, password: `passwd`)
#### Microsoft Azure site

Execute the following command, replacing the angle-bracket placeholders with the values of your Azure subscription:

```bash
osm vim-create --name azure --account_type azure --auth_url http://www.azure.com \
    --tenant "<tenant_id>" --user "<client_id>" --password "<client_secret>" \
    --config "{region_name: <region>, resource_group: <resource_group>, subscription_id: <subscription_id>, vnet_name: <vnet_name>}"
```

For advanced options, please refer to the [Microsoft Azure setup guide](04-vim-setup.md#microsoft-azure).

#### OpenVIM site

Execute the following command, using the appropriate parameters (e.g. site name: "openvim-site", IP address: 10.10.10.10, VIM tenant: "osm")

```bash
osm vim-create --name openvim-site --auth_url http://10.10.10.10:9080/openvim --account_type openvim \
   --description "Openvim site" --tenant osm --user dummy --password dummy
```

#### Eclipse fog05 site

Execute the following command, using the appropriate parameters (e.g. runtime supported: "hypervisor", cpu architecture: "arch", user: "XXX", password: "YYY"):

```bash
osm vim-create --name fos --auth_url <rest proxy ip>:8080 --account_type fos --tenant fos \
    --user XXX --password YYY --config '{hypervisor: hypervisor, arch: arch}'
```

For advanced options, please refer to the [Configuring Eclipse fog05 for OSM](04-vim-setup.md#fog05).

#### VMware vCloud Director site

Execute the following command, using the appropriate parameters (e.g. site name: "vmware-site", IP address: 10.10.10.12, VIM tenant: "vmware-tenant", user: "osm", password: "osm4u", admin user: "admin", admin password: "adminpwd", organization: "orgVDC")

```bash
osm vim-create --name vmware-site --user osm --password osm4u --auth_url https://10.10.10.12 \
    --tenant vmware-tenant  --account_type vmware \
    --config '{admin_username: user, admin_password: passwd, orgname: organization, nsx_manager: "http://10.10.10.12",
    nsx_user: user, nsx_password: userpwd,"vcenter_port": port, "vcenter_user":user, "vcenter_password":password,
    "vcenter_ip": 10.10.10.14}'
```

For advanced options, please refer to the [Configuring VMware vCloud Director](04-vim-setup.md#vmwares-vcloud-director).

### Adding VIMs through GUI

Just access the *VIM Accounts* tab, click the *New VIM* button and fill in the parameters.

Before going on, download the required VNF and NS packages from this URL: <https://osm-download.etsi.org/ftp/Packages/examples/>

You can also clone VNF and NS packages from [Gitlab](https://osm.etsi.org/gitlab/vnf-onboarding/osm-packages) and build them with `osmclient`.

### Onboarding a VNF

The onboarding of a VNF in OSM involves preparing and adding the corresponding VNF package to the system. This process also assumes, as a pre-condition, that the corresponding VM images are available in the VIM(s) where it will be instantiated.

#### Uploading VM image(s) to the VIM(s)

In this example, only a vanilla Ubuntu20.04 image is needed. It can be obtained from the following link: <https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img>

You will need to upload the image into the VIM. Instructions differ from one VIM to another (please check the documentation for your VIM type).

For instance, this is the OpenStack command for uploading images:

```bash
openstack image create --file="./focal-server-cloudimg-amd64.img" --container-format=bare --disk-format=qcow2 ubuntu20.04
```
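Recording a checksum of the downloaded image makes it easy to verify later that the copy registered in the VIM is intact. A hedged sketch (the placeholder branch only exists so the example is runnable without the real, large download):

```bash
# Compute and store a SHA-256 checksum of the cloud image before uploading.
IMG=focal-server-cloudimg-amd64.img
# Placeholder so the snippet runs without the real image; with the actual
# download in place, this line is a no-op.
[ -f "$IMG" ] || echo "placeholder image contents" > "$IMG"
sha256sum "$IMG" | tee "${IMG}.sha256"
```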

#### Onboarding a VNF Package
Additional optional configuration:

**NOTE for VNF Onboarding:** You need to make sure that your VNF packages include a reference to an appropriate alternative image in Microsoft Azure's image repository. In case you are creating a VNF Package from scratch, please note you should use the full Azure image name: `publisher:offer:sku:version` (e.g. `Canonical:UbuntuServer:18.04-LTS:18.04.201809110`).

## Google Cloud Platform

### Preparation for using GCP in OSM

Eclipse fog05 (can be read as _fog-O-five_ or _fog-O-S_) is a different kind of VIM.

It stores information in a distributed key-value store, which provides location transparency to the user; all state information is kept in that store.


#### Upload Images

Image registration can be done by using the python rest API. First generate the descriptor of your image:
The previous configuration is based on the reference documents in the links below:
- <https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/10/html-single/network_functions_virtualization_configuration_guide/>
- <https://docs.openstack.org/newton/networking-guide/config-sriov.html>


## Distributed VCA

You can configure the VIM to use a Distributed VCA instead of using the default one. **[This section](06-osm-platform-configuration.html#distributed-vca)** explains the details on how to set everything up.
+1 −1
Original line number Diff line number Diff line
# What to read next

[latest-hackfest]: https://osm.etsi.org/wikipub/index.php/Latest_OSM_Hackfest_Material#Hackfest_Material
[developer-guide]: https://osm.etsi.org/docs/developer-guide/

If you want to learn more, these additional contents are highly recommended:
```bash
kubectl -n osm get all
```

All the deployments and statefulsets should have 1 replica: 1/1
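To spot the exceptions quickly, you can filter for pods whose READY column is not `n/n`. A hedged sketch (the `awk` logic is self-contained; the guard covers hosts without cluster access):

```bash
# Print pods in the 'osm' namespace that are not fully ready.
if command -v kubectl >/dev/null 2>&1 && kubectl -n osm get pods >/dev/null 2>&1; then
  # READY is column 2 with the form ready/desired; print rows where they differ.
  kubectl -n osm get pods --no-headers | awk '{split($2, a, "/"); if (a[1] != a[2]) print $0}'
else
  echo "kubectl or the osm namespace is not reachable from this shell"
fi
```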

### Issues on standard installation

#### Juju
```bash
kubectl -n osm logs -f statefulset/zookeeper --all-containers=true 2>&1 | tee zookeeper-log.txt
kubectl -n osm logs -f statefulset/prometheus --all-containers=true 2>&1 | tee prometheus-log.txt
```
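The per-component commands above can also be run in one pass, collecting everything into a single directory. A hedged sketch (assumes `kubectl` access to the `osm` namespace; the component names follow the deployments listed earlier, and the fallback branch keeps the loop harmless elsewhere):

```bash
# Collect logs of the main OSM deployments into ./osm-logs, one file each.
mkdir -p osm-logs
for d in lcm nbi pol ro mon keystone; do
  if command -v kubectl >/dev/null 2>&1 && kubectl -n osm get "deployments/$d" >/dev/null 2>&1; then
    kubectl -n osm logs "deployments/$d" > "osm-logs/$d.log" 2>&1
  else
    echo "kubectl not available; skipped $d" > "osm-logs/$d.log"
  fi
done
ls osm-logs
```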

### Changing the log level

You can change the log level of any container by updating the container with the appropriate `LOG_LEVEL` environment variable.
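For example, for LCM the variable is assumed here to be `OSMLCM_GLOBAL_LOGLEVEL` (check each component's deployment spec for the exact name). A guarded sketch that prints the command instead of failing when no cluster is reachable:

```bash
# Raise LCM logging to DEBUG via its environment variable (name assumed;
# verify it in the deployment spec first).
CMD="kubectl -n osm set env deployment/lcm OSMLCM_GLOBAL_LOGLEVEL=DEBUG"
if command -v kubectl >/dev/null 2>&1 && kubectl -n osm get deployment/lcm >/dev/null 2>&1; then
  $CMD
else
  echo "OSM not reachable; would run: $CMD"
fi
```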