Commit 6e5d52bd authored by garciadeblas

Updated documentation for Rel NINE, added figures for NGUI


Signed-off-by: garciadeblas <gerardo.garciadeblas@telefonica.com>
parent b07ae4be
# OSM Quickstart
<!---
[![8th OSM Hack.png](https://osm.etsi.org/wikipub/images/thumb/f/f5/Upcoming_Hack.png/320px-Upcoming_Hack.png)](https://osm.etsi.org/wikipub/index.php/Next_OSM_Hackfest)
--->
[![Next OSM Hack.png](https://osm.etsi.org/wikipub/images/thumb/f/f5/Upcoming_Hack.png/320px-Upcoming_Hack.png)](https://osm.etsi.org/wikipub/index.php/Next_OSM_Hackfest)
Open Source MANO (OSM) is an ETSI-hosted open source community delivering a production-quality MANO stack for NFV, capable of consuming openly published information models, available to everyone, suitable for all VNFs, operationally significant and VIM-independent. OSM is aligned to NFV ISG information models while providing first-hand feedback based on its implementation experience.
Release NINE brings a number of improvements over previous releases. For the full list of new features, please refer to the [Release Notes](https://osm.etsi.org/wikipub/images/0/01/OSM_Release_NINE_-_Release_Notes.pdf). For a comprehensive overview of OSM functionalities, you can also refer to the [OSM White Papers and Release Notes of previous releases](https://osm.etsi.org/wikipub/index.php/Release_notes_and_whitepapers).
@@ -31,22 +31,24 @@ In order for OSM to work, it is assumed that:
All you need to run OSM is a single server or VM with the following requirements:
- MINIMUM: 2 CPUs, 4 GB RAM, 20GB disk and a single interface with Internet access
- MINIMUM: 2 CPUs, 6 GB RAM, 40GB disk and a single interface with Internet access
- RECOMMENDED: 2 CPUs, 8 GB RAM, 40GB disk and a single interface with Internet access
- Base image: [Ubuntu18.04 (64-bit variant required)](http://releases.ubuntu.com/18.04/)
Once you have prepared the host with the previous requirements, all you need to do is:
```bash
wget https://osm-download.etsi.org/ftp/osm-8.0-eight/install_osm.sh
wget https://osm-download.etsi.org/ftp/osm-9.0-nine/install_osm.sh
chmod +x install_osm.sh
./install_osm.sh
```
This will install a standalone Kubernetes on a single host, and OSM on top of it.
**TIP:** In order to facilitate potential troubleshooting later, it is recommended to save the full log of your installation process:
```bash
wget https://osm-download.etsi.org/ftp/osm-8.0-eight/install_osm.sh
wget https://osm-download.etsi.org/ftp/osm-9.0-nine/install_osm.sh
chmod +x install_osm.sh
./install_osm.sh 2>&1 | tee osm_install_log.txt
```
@@ -57,27 +59,23 @@ You will be asked if you want to proceed with the installation and configuration
You can include optional components in your installation by adding the following flags:
- **Kubernetes Monitor:** `--k8s_monitor` (install an add-on to monitor the Kubernetes cluster and OSM running on top of it, through Prometheus and Grafana)
- **PLA:** `--pla` (install the PLA module for placement support)
- **VIM Emulator:** `--vimemu` (more information [here](04-vim-setup.md#vim-emulator))
- **Fault Management features with ELK:** `--elk_stack` (more information [here](05-osm-usage.md#fault-management))
- **Fault Management features with ELK:** `--elk_stack` (only available with docker stack, more information [here](05-osm-usage.md#fault-management))
Example:
```bash
./install_osm.sh --elk_stack --vimemu
./install_osm.sh --k8s_monitor
```
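The options above can be combined in a single run. As a sketch (a hypothetical invocation, flags as documented above), this adds the Kubernetes monitoring add-on and PLA while keeping a full log of the installation:

```bash
./install_osm.sh --k8s_monitor --pla 2>&1 | tee osm_install_log.txt
```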
#### Installation on a standalone Kubernetes environment
OSM can be deployed on a single host running a Kubernetes cluster. Although the default option is to use docker swarm, you can now tell the installer to use K8s as the container framework. The installer will install the required packages to run a single-node K8s cluster and will deploy the different K8s objects on it.
```bash
./install_osm.sh -c k8s
```
#### Installation on a docker swarm environment
In addition, you can use the option `--k8s_monitor` to install an add-on to monitor the K8s cluster and OSM running on top of it.
Although the default option is to use Kubernetes, you can optionally tell the installer to use docker swarm as the container framework. The installer will install the required packages to run a single-node docker swarm and will deploy the different objects on it.
```bash
./install_osm.sh -c k8s --k8s_monitor
./install_osm.sh -c swarm
```
#### Other installation options
@@ -90,70 +88,74 @@ In addition, you can use the option `--k8s_monitor` to install an add-on to moni
After some time, you will get a fresh OSM installation with its latest pre-built docker images, which are built daily. You can access the UI at the following URL (user: `admin`, password: `admin`): [http://1.2.3.4](http://1.2.3.4/), replacing 1.2.3.4 with the IP address of your host.
![OSM home](assets/600px-Osm_lwb_ui_login.png)
![OSM home](assets/600px-Osm_ng_ui_login.png)
![OSM installation result](assets/600px-Osm_lwb_ui.png)
![OSM installation result](assets/600px-Osm_ng_ui.png)
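If you are unsure which IP address to use in the URL above, a quick way to list the addresses of the host (plain Linux tooling, not OSM-specific) is:

```bash
hostname -I       # IP addresses assigned to the host
ip -4 addr show   # more detailed view, per interface
```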
As a result of the installation, fourteen docker containers are created in the host (without considering optional stacks). You can check they are running by issuing the following commands:
As a result of the installation, different K8s objects (deployments, statefulsets, etc.) are created in the host. You can check their status by running the following commands:
```bash
docker stack ps osm |grep -i running
docker service ls
kubectl get all -n osm
```
If the previous docker commands do not work, you might need to either reload the shell (logout and login) or run the following command to add your user to the 'docker' group in the running shell:
To check the logs of any container:
```bash
newgrp docker
kubectl logs -n osm deployments/lcm # for LCM
kubectl logs -n osm deployments/light-ui # for LW-UI
kubectl logs -n osm deployments/mon # for MON
kubectl logs -n osm deployments/nbi # for NBI
kubectl logs -n osm deployments/pol # for POL
kubectl logs -n osm deployments/ro # for RO
kubectl logs -n osm deployments/keystone # for Keystone
kubectl logs -n osm statefulset/kafka # for Kafka
kubectl logs -n osm statefulset/mongo # for Mongo
kubectl logs -n osm statefulset/mysql # for Mysql
kubectl logs -n osm statefulset/prometheus # for Prometheus
kubectl logs -n osm statefulset/zookeeper # for Zookeeper
```
![OSM Docker containers](assets/600px-Osm_containers_rel5.png)
At any time, you can quickly relaunch your deployment by using the pre-built docker images, like this:
Finally, if you used the option `--k8s_monitor` to install an add-on to monitor the K8s cluster and OSM, you can check its status as follows:
```bash
docker stack rm osm
docker stack deploy -c /etc/osm/docker/docker-compose.yaml osm
kubectl get all -n monitoring
```
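As a sketch, assuming the monitoring add-on exposes Grafana as a Kubernetes service in the `monitoring` namespace (the exact service name and port may differ in your installation), you could reach its dashboard with a port-forward:

```bash
# Find the Grafana service name exposed by the add-on
kubectl -n monitoring get svc
# Forward it to a local port, then browse http://localhost:3000
kubectl -n monitoring port-forward svc/<grafana-service-name> 3000:<service-port>
```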
To check the logs of any container:
The OSM client, a Python-based CLI for OSM, will also be available in the host machine. Via the OSM client, you can manage the complete lifecycle of descriptors, NS instances and VIM accounts.
```bash
docker service logs osm_lcm # shows the logs of all containers (including dead containers) associated with the LCM component
docker logs $(docker ps -aqf "name=osm_lcm" -n 1) # shows the logs of the last existing LCM container
osm --help
```
The OSM client, a Python-based CLI for OSM, will also be available in the host machine. Via the OSM client, you can manage the complete lifecycle of descriptors, NS instances and VIM accounts.
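For instance, a few commonly used OSM client commands to inspect what is registered in the system:

```bash
osm vim-list      # VIM accounts registered in OSM
osm vnfd-list     # onboarded VNF packages
osm nsd-list      # onboarded NS packages
osm ns-list       # Network Service instances and their status
```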
#### Checking your installation when installing on docker swarm
#### Checking your installation when installing on K8s
As a result of the installation, fourteen docker containers are created in the host (without considering optional stacks). You can check they are running by issuing the following commands:
As a result of the installation, different K8s objects (deployments, statefulsets, etc.) are created in the host. You can check their status by running the following commands:
```bash
docker stack ps osm |grep -i running
docker service ls
```
If the previous docker commands do not work, you might need to either reload the shell (logout and login) or run the following command to add your user to the 'docker' group in the running shell:
```bash
kubectl get all -n osm
newgrp docker
```
To check the logs of any container:
![OSM Docker containers](assets/600px-Osm_containers_rel5.png)
At any time, you can quickly relaunch your deployment by using the pre-built docker images, like this:
```bash
kubectl logs -n osm deployments/lcm # for LCM
kubectl logs -n osm deployments/light-ui # for LW-UI
kubectl logs -n osm deployments/mon # for MON
kubectl logs -n osm deployments/nbi # for NBI
kubectl logs -n osm deployments/pol # for POL
kubectl logs -n osm deployments/ro # for RO
kubectl logs -n osm deployments/keystone # for Keystone
kubectl logs -n osm statefulset/kafka # for Kafka
kubectl logs -n osm statefulset/mongo # for Mongo
kubectl logs -n osm statefulset/mysql # for Mysql
kubectl logs -n osm statefulset/prometheus # for Prometheus
kubectl logs -n osm statefulset/zookeeper # for Zookeeper
docker stack rm osm
docker stack deploy -c /etc/osm/docker/docker-compose.yaml osm
```
Finally, if you used the option `--k8s_monitor` to install an add-on to monitor the K8s cluster and OSM, you can check its status as follows:
To check the logs of any container:
```bash
kubectl get all -n monitoring
docker service logs osm_lcm # shows the logs of all containers (including dead containers) associated with the LCM component
docker logs $(docker ps -aqf "name=osm_lcm" -n 1) # shows the logs of the last existing LCM container
```
## Adding VIM accounts
@@ -265,93 +267,88 @@ For advanced options, please refer to the [Configuring Eclipse fog05 for OSM](04
Just access the *VIM Accounts* tab, click the *New VIM* button and fill the parameters accordingly.
![AddingVIMUI](assets/600px-Osmvim.png)
![AddingVIMUI](assets/600px-Osmvim_r9.png)
## Deploying your first Network Service
In this example we will deploy the following Network Service, consisting of two simple VNFs based on CirrOS connected by a simple VLD.
![NS with 2 CirrOS VNF](assets/500px-Cirros_2vnf_ns.png)
Before going on, download the required VNF and NS packages from this URL: <https://osm-download.etsi.org/ftp/osm-3.0-three/examples/cirros_2vnf_ns/>
Before going on, download the required VNF and NS packages from this URL: <https://osm-download.etsi.org/ftp/Packages/examples/>
### Onboarding a VNF
The onboarding of a VNF in OSM involves adding the corresponding VNF package to the system. This process also assumes, as a pre-condition, that the corresponding VM images are available in the VIM(s) where it will be instantiated.
The onboarding of a VNF in OSM involves preparing and adding the corresponding VNF package to the system. This process also assumes, as a pre-condition, that the corresponding VM images are available in the VIM(s) where it will be instantiated.
#### Uploading VM image(s) to the VIM(s)
In this example, only a vanilla CirrOS 0.3.4 image is needed. It can be obtained from the following link: <http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img>
In this example, only a vanilla Ubuntu16.04 image is needed. It can be obtained from the following link: <https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img>
If not available, it would be required to upload the image into the VIM. Instructions differ from one VIM to another:
You will need to upload the image to the VIM. Instructions differ from one VIM to another (please check the reference documentation for your VIM type).
- In Openstack:
For instance, this is the OpenStack command for uploading images:
```bash
openstack image create --file="./cirros-0.3.4-x86_64-disk.img" --container-format=bare --disk-format=qcow2 cirros034
openstack image create --file="./xenial-server-cloudimg-amd64-disk1.img" --container-format=bare --disk-format=qcow2 ubuntu16.04
```
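Once uploaded, you can verify that the image is available in the VIM with the standard OpenStack CLI:

```bash
openstack image list | grep ubuntu16.04   # the image should appear with status 'active'
openstack image show ubuntu16.04          # detailed view of the uploaded image
```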
- In OpenVIM:
And this one is the appropriate command in OpenVIM:
```bash
#copy your image to the NFS shared folder (e.g. /mnt/openvim-nfs)
cp ./cirros-0.3.4-x86_64-disk.img /mnt/openvim-nfs/
openvim image-create --name cirros034 --path /mnt/openvim-nfs/cirros-0.3.4-x86_64-disk.img
cp ./xenial-server-cloudimg-amd64-disk1.img /mnt/openvim-nfs/
openvim image-create --name ubuntu16.04 --path /mnt/openvim-nfs/xenial-server-cloudimg-amd64-disk1.img
```
#### VNF package onboarding
#### Onboarding a VNF Package
- From the UI:
- Go to Projects --> Admin --> VNF Packages (*Open List*)
- Click on the Onboard VNFD button
- Drag and drop the VNF package file cirros_vnf.tar.gz in the importing area.
- Go to 'VNF Packages' on the 'Packages' menu to the left
- Drag and drop the VNF package file `hackfest_basic_vnf.tar.gz` in the importing area.
![Onboarding a VNF](assets/600px-Vnfd_onboard_r4.png)
![Onboarding a VNF](assets/600px-Vnfd_onboard_r9.png)
- From OSM client:
```bash
osm vnfd-create cirros_vnf.tar.gz
osm vnfd-list
osm nfpkg-create hackfest_basic_vnf.tar.gz
osm nfpkg-list
```
### Onboarding a NS
### Onboarding a NS Package
- From the UI:
- Go to Projects --> Admin --> NS Packages (*Open List*)
- Click on the Onboard NSD button
- Drag and drop the NS package file cirros_2vnf_ns.tar.gz in the importing area.
- Go to 'NS Packages' on the 'Packages' menu to the left
- Drag and drop the NS package file `hackfest_basic_ns.tar.gz` in the importing area.
![Onboarding a NS](assets/600px-Nsd_onboard_r4.png)
![Onboarding a NS](assets/600px-Nsd_onboard_r9.png)
- From OSM client:
```bash
osm nsd-create cirros_2vnf_ns.tar.gz
osm nsd-list
osm nspkg-create hackfest_basic_ns.tar.gz
osm nspkg-list
```
### Instantiating the NS
- From the UI:
- Go to Projects --> Admin --> NS Packages (*Open List*)
- Next to the NS descriptor to be instantiated, click on Launch
#### Instantiating a NS from the UI
![Instantiating a NS (assets/600px-Nsd_list.png)](assets/600px-Nsd_list.png)
- Go to 'NS Packages' on the 'Packages' menu to the left
- Next to the NS descriptor to be instantiated, click on the 'Instantiate NS' button.
- Fill in the form, adding at least a name and selecting the VIM
![Instantiating a NS (assets/600px-Nsd_list_r9.png)](assets/600px-Nsd_list_r9.png)
![Instantiating a NS (assets/600px-New_ns.png)](assets/600px-New_ns.png)
- Fill in the form, adding at least a name and a description, and selecting the VIM:
- From OSM client:
![Instantiating a NS (assets/600px-New_ns_r9.png)](assets/600px-New_ns_r9.png)
**Instantiation parameters can be specified using both CLI and UI. You can find a thorough explanation with examples in this page: [OSM instantiation parameters](05-osm-usage.md#advanced-instantiation-using-instantiation-parameters).**
#### Instantiating a NS from the OSM client
```bash
osm ns-create --nsd_name cirros_2vnf_ns --ns_name <ns-instance-name> --vim_account <data-center-name>
osm ns-create --ns_name <ns-instance-name> --nsd_name hackfest_basic-ns --vim_account <vim-target-name>
osm ns-list
```
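After launching the instantiation, you can follow its progress from the OSM client; for example:

```bash
osm ns-list                        # overall status of all NS instances
osm ns-show <ns-instance-name>     # detailed record of a given instance
osm ns-op-list <ns-instance-name>  # operations performed on the instance and their result
```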
**Instantiation parameters can be specified using both CLI and UI. You can find a thorough explanation with examples in this page: [OSM instantiation parameters](05-osm-usage.md#advanced-instantiation-using-instantiation-parameters).**
## What's next?
If you want to learn more, you can refer to the rest of **[OSM documentation](index.md)**.
@@ -7,7 +7,7 @@ The goal of ETSI OSM (Open Source MANO) is the development of a community-driven
OSM's approach aims to minimize integration efforts thanks to four key aspects:
1. **A well-known [Information Model (IM)][OSM-IM-PAGE]**, aligned with ETSI NFV, that is capable of modelling and automating the full lifecycle of Network Functions (virtual, physical or hybrid), Network Services (NS), and Network Slices (NSI), from their initial deployment (instantiation, Day-0, and Day-1) to their daily operation and monitoring (Day-2).
1. **A well-known [Information Model (IM)][OSM-IM-PAGE]**, aligned with ETSI NFV SOL006, that is capable of modelling and automating the full lifecycle of Network Functions (virtual, physical or hybrid), Network Services (NS), and Network Slices (NST/NSI), from their initial deployment (instantiation, Day-0, and Day-1) to their daily operation and monitoring (Day-2). OSM IM augments ETSI NFV SOL006 by adding the support of Network Slices, day-1 and day-2 primitives at VNF and NS level, and Enhanced Platform Awareness for dataplane workloads.
- Actually, OSM's IM is completely infrastructure-agnostic, so that the same model can be used to instantiate a given element (e.g. VNF) in a large variety of VIM types and transport technologies, enabling an ecosystem of VNF models ready for their deployment everywhere.
......
@@ -2,11 +2,7 @@
## Deploying your first Network Service
In this example we will deploy the following Network Service, consisting of two simple VNFs based on CirrOS connected by a simple VLD.
![NS with 2 CirrOS VNF](assets/500px-Cirros_2vnf_ns.png)
Before going on, download the required VNF and NS packages from this URL: <https://osm-download.etsi.org/ftp/osm-3.0-three/examples/cirros_2vnf_ns/>
Before going on, download the required VNF and NS packages from this URL: <https://osm-download.etsi.org/ftp/Packages/examples/>
### Onboarding a VNF
@@ -14,52 +10,52 @@ The onboarding of a VNF in OSM involves preparing and adding the corresponding V
#### Uploading VM image(s) to the VIM(s)
In this example, only a vanilla CirrOS 0.3.4 image is needed. It can be obtained from the following link: <http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img>
In this example, only a vanilla Ubuntu16.04 image is needed. It can be obtained from the following link: <https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img>
If not available, it would be required to upload the image into the VIM. Instructions differ from one VIM to another (please check the reference of your type of VIM).
You will need to upload the image to the VIM. Instructions differ from one VIM to another (please check the reference documentation for your VIM type).
For instance, this is the OpenStack command for uploading images:
```bash
openstack image create --file="./cirros-0.3.4-x86_64-disk.img" --container-format=bare --disk-format=qcow2 cirros034
openstack image create --file="./xenial-server-cloudimg-amd64-disk1.img" --container-format=bare --disk-format=qcow2 ubuntu16.04
```
And this one is the appropriate command in OpenVIM:
```bash
#copy your image to the NFS shared folder (e.g. /mnt/openvim-nfs)
cp ./cirros-0.3.4-x86_64-disk.img /mnt/openvim-nfs/
openvim image-create --name cirros034 --path /mnt/openvim-nfs/cirros-0.3.4-x86_64-disk.img
cp ./xenial-server-cloudimg-amd64-disk1.img /mnt/openvim-nfs/
openvim image-create --name ubuntu16.04 --path /mnt/openvim-nfs/xenial-server-cloudimg-amd64-disk1.img
```
#### Onboarding a VNF Package
- From the UI:
- Go to 'VNF Packages' on the 'Packages' menu to the left
- Drag and drop the VNF package file `cirros_vnf.tar.gz` in the importing area.
- Drag and drop the VNF package file `hackfest_basic_vnf.tar.gz` in the importing area.
![Onboarding a VNF](assets/600px-Vnfd_onboard_r4.png)
![Onboarding a VNF](assets/600px-Vnfd_onboard_r9.png)
- From OSM client:
```bash
osm vnfd-create cirros_vnf.tar.gz
osm vnfd-list
osm nfpkg-create hackfest_basic_vnf.tar.gz
osm nfpkg-list
```
### Onboarding a NS Package
- From the UI:
- Go to 'NS Packages' on the 'Packages' menu to the left
- Drag and drop the NS package file `cirros_2vnf_ns.tar.gz` in the importing area.
- Drag and drop the NS package file `hackfest_basic_ns.tar.gz` in the importing area.
![Onboarding a NS](assets/600px-Nsd_onboard_r4.png)
![Onboarding a NS](assets/600px-Nsd_onboard_r9.png)
- From OSM client:
```bash
osm nsd-create cirros_2vnf_ns.tar.gz
osm nsd-list
osm nspkg-create hackfest_basic_ns.tar.gz
osm nspkg-list
```
### Instantiating the NS
@@ -69,16 +65,16 @@ osm nsd-list
- Go to 'NS Packages' on the 'Packages' menu to the left
- Next to the NS descriptor to be instantiated, click on the 'Instantiate NS' button.
![Instantiating a NS (assets/600px-Nsd_list.png)](assets/600px-Nsd_list.png)
![Instantiating a NS (assets/600px-Nsd_list_r9.png)](assets/600px-Nsd_list_r9.png)
- Fill in the form, adding at least a name and a description, and selecting the VIM:
![Instantiating a NS (assets/600px-New_ns.png)](assets/600px-New_ns.png)
![Instantiating a NS (assets/600px-New_ns_r9.png)](assets/600px-New_ns_r9.png)
#### Instantiating a NS from the OSM client
```bash
osm ns-create --nsd_name cirros_2vnf_ns --ns_name <ns-instance-name> --vim_account <vim-target-name>
osm ns-create --ns_name <ns-instance-name> --nsd_name hackfest_basic-ns --vim_account <vim-target-name>
osm ns-list
```
@@ -96,10 +92,10 @@ In a generic way, the mapping can be specified in the following way, where `vldn
--config '{vld: [ {name: vldnet, vim-network-name: netVIM1} ] }'
```
You can try it using one of the examples of the hackfest (**descriptors: [hackfest-basic_vnfd](https://osm-download.etsi.org/ftp/osm-6.0-six/8th-hackfest/packages/hackfest-basic_vnfd.tar.gz), [hackfest-basic_nsd](https://osm-download.etsi.org/ftp/osm-6.0-six/8th-hackfest/packages/hackfest-basic_nsd.tar.gz); images: [ubuntu1604](https://osm-download.etsi.org/ftp/osm-3.0-three/1st-hackfest/images/US1604.qcow2); presentation: [creating a basic VNF and NS](https://osm-download.etsi.org/ftp/osm-6.0-six/8th-hackfest/presentations/8th%20OSM%20Hackfest%20-%20Session%202.1%20-%20Creating%20a%20basic%20VNF%20and%20NS.pdf)**) in the following way:
You can try it using one of the examples of the hackfest (**packages: [hackfest_basic_vnf](https://osm.etsi.org/gitlab/vnf-onboarding/osm-packages/-/tree/master/hackfest_basic_vnf), [hackfest_basic_ns](https://osm.etsi.org/gitlab/vnf-onboarding/osm-packages/-/tree/master/hackfest_basic_ns); images: [ubuntu16.04](https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img)**) in the following way:
```bash
osm ns-create --ns_name hf-basic --nsd_name hackfest-basic_nsd --vim_account openstack1 --config '{vld: [ {name: mgmtnet, vim-network-name: mgmt} ] }'
osm ns-create --ns_name hf-basic --nsd_name hackfest_basic-ns --vim_account openstack1 --config '{vld: [ {name: mgmtnet, vim-network-name: mgmt} ] }'
```
### Specify a VIM network name for an internal VLD of a VNF
@@ -110,17 +106,29 @@ In this scenario, the mapping can be specified in the following way, where `"1"`
--config '{vnf: [ {member-vnf-index: "1", internal-vld: [ {name: internal, vim-network-name: netVIM1} ] } ] }'
```
TODO: update example with latest Hackfest
You can try it using one of the examples of the hackfest (**packages: [hackfest_multivdu_vnf](https://osm.etsi.org/gitlab/vnf-onboarding/osm-packages/-/tree/master/hackfest_multivdu_vnf), [hackfest_multivdu_ns](https://osm.etsi.org/gitlab/vnf-onboarding/osm-packages/-/tree/master/hackfest_multivdu_ns); images: [US1604](https://osm-download.etsi.org/ftp/images/tests/US1604.qcow2)**) in the following way:
You can try it using one of the examples of the hackfest (**descriptors: [hackfest2-vnf](https://osm-download.etsi.org/ftp/osm-4.0-four/3rd-hackfest/packages/hackfest_2_vnfd.tar.gz), [hackfest2-ns](https://osm-download.etsi.org/ftp/osm-4.0-four/3rd-hackfest/packages/hackfest_2_nsd.tar.gz); images:[ubuntu1604](https://osm-download.etsi.org/ftp/osm-3.0-three/1st-hackfest/images/US1604.qcow2), presentation: [modeling multi-VDU VNF](https://osm-download.etsi.org/ftp/osm-4.0-four/3rd-hackfest/presentations/20180626%20OSM%20Hackfest%20-%20Session%203%20-%20Modeling%20multi-VDU%20VNF%20v2.pdf)**) in the following way:
```bash
osm ns-create --ns_name hf-multivdu --nsd_name hackfest_multivdu-ns --vim_account openstack1 --config '{vnf: [ {member-vnf-index: "1", internal-vld: [ {name: internal, vim-network-name: mgmt} ] } ] }'
```
### Specify a VIM network (provider network) to be created with specific parameters (physnet label, encapsulation type, segmentation id) for a NS VLD
The mapping can be specified in the following way, where `vldnet` is the name of the network in the NS descriptor, `physnet1` is the physical network label in the VIM, `vlan` is the encapsulation type and `400` is the segmentation ID that you want to use:
```yaml
--config '{vld: [ {name: vldnet, provider-network: {physical-network: physnet1, network-type: vlan, segmentation-id: 400} } ] }'
```
You can try it using one of the examples of the hackfest (**packages: [hackfest_basic_vnf](https://osm.etsi.org/gitlab/vnf-onboarding/osm-packages/-/tree/master/hackfest_basic_vnf), [hackfest_basic_ns](https://osm.etsi.org/gitlab/vnf-onboarding/osm-packages/-/tree/master/hackfest_basic_ns); images: [ubuntu16.04](https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img)**) in the following way:
```bash
osm ns-create --ns_name hf2 --nsd_name hackfest2-ns --vim_account openstack1 --config '{vnf: [ {member-vnf-index: "1", internal-vld: [ {name: internal, vim-network-name: mgmt} ] } ] }'
osm ns-create --ns_name hf-basic --nsd_name hackfest_basic-ns --vim_account openstack1 --config '{vld: [ {name: mgmtnet, provider-network: {physical-network: physnet1, network-type: vlan, segmentation-id: 400} } ] }'
```
### Specify IP profile information and IP for a NS VLD
In a generic way, the mapping can be specified in the following way, where `datanet` is the name of the network in the NS descriptor, ip-profile is where you have to fill the associated parameters from the data model ( [NS data model](http://osm-download.etsi.org/ftp/osm-doc/nsd.html#) ), and vnfd-connection-point-ref is the reference to the connection point:
In a generic way, the mapping can be specified in the following way, where `datanet` is the name of the network in the NS descriptor, `ip-profile` is where you fill in the associated parameters from the data model ([NS data model](http://osm-download.etsi.org/ftp/osm-doc/etsi-nfv-nsd.html)), and `vnfd-connection-point-ref` is the reference to the connection point:
```yaml
--config '{vld: [ {name: datanet, ip-profile: {...}, vnfd-connection-point-ref: {...} } ] }'
@@ -136,7 +144,7 @@ osm ns-create --ns_name hf2 --nsd_name hackfest2-ns --vim_account openstack1 --c
### Specify IP profile information for an internal VLD of a VNF
In this scenario, the mapping can be specified in the following way, where `"1"` is the member vnf index of the constituent vnf in the NS descriptor, `internal` is the name of internal-vld in the VNF descriptor and ip-profile is where you have to fill the associated parameters from the data model ([VNF data model](http://osm-download.etsi.org/ftp/osm-doc/vnfd.html)):
In this scenario, the mapping can be specified in the following way, where `"1"` is the member vnf index of the constituent vnf in the NS descriptor, `internal` is the name of internal-vld in the VNF descriptor and ip-profile is where you have to fill the associated parameters from the data model ([VNF data model](http://osm-download.etsi.org/ftp/osm-doc/etsi-nfv-vnfd.html)):
```yaml
--config '{vnf: [ {member-vnf-index: "1", internal-vld: [ {name: internal, ip-profile: {...} ] } ] }'
@@ -154,7 +162,7 @@ osm ns-create --ns_name hf2 --nsd_name hackfest2-ns --vim_account openstack1 --c
#### Specify IP address for an interface
In this scenario, the mapping can be specified in the following way, where `"1"` is the member vnf index of the constituent vnf in the NS descriptor, 'internal' is the name of internal-vld in the VNF descriptor, ip-profile is where you have to fill the associated parameters from the data model ([VNF data model](http://osm-download.etsi.org/ftp/osm-doc/vnfd.html)), `id1` is the internal-connection-point id and `a.b.c.d` is the IP that you have to specify for this scenario:
In this scenario, the mapping can be specified in the following way, where `"1"` is the member vnf index of the constituent vnf in the NS descriptor, 'internal' is the name of internal-vld in the VNF descriptor, ip-profile is where you have to fill the associated parameters from the data model ([VNF data model](http://osm-download.etsi.org/ftp/osm-doc/etsi-nfv-vnfd.html)), `id1` is the internal-connection-point id and `a.b.c.d` is the IP that you have to specify for this scenario:
```yaml
--config '{vnf: [ {member-vnf-index: "1", internal-vld: [ {name: internal, ip-profile: {...}, internal-connection-point: [{id-ref: id1, ip-address: "a.b.c.d"}] ] } ] }'
@@ -243,7 +251,7 @@ TODO: update example with latest Hackfest
You can try it using one of the examples of the hackfest (**descriptors: [hackfest1-vnf](https://osm-download.etsi.org/ftp/osm-4.0-four/3rd-hackfest/packages/hackfest_1_vnfd.tar.gz), [hackfest1-ns](https://osm-download.etsi.org/ftp/osm-4.0-four/3rd-hackfest/packages/hackfest_1_nsd.tar.gz); images: [ubuntu1604](https://osm-download.etsi.org/ftp/osm-3.0-three/1st-hackfest/images/US1604.qcow2), presentation: [creating a basic VNF and NS](https://osm-download.etsi.org/ftp/osm-4.0-four/3rd-hackfest/presentations/20180626%20OSM%20Hackfest%20-%20Session%202%20-%20Creating%20a%20basic%20VNF%20and%20NS.pdf)**) in the following way:
With the previous hackfest example, according to the [VNF data model](http://osm-download.etsi.org/ftp/osm-doc/vnfd.html), you will add the following in the VNF Descriptor:
With the previous hackfest example, according to the [VNF data model](http://osm-download.etsi.org/ftp/osm-doc/etsi-nfv-vnfd.html), you will add the following in the VNF Descriptor:
```yaml
volumes:
......
@@ -39,6 +39,14 @@ It is highly recommended saving a log of your installation:
### Recommended checks after installation
#### Checking whether all processes/services are running in K8s
```bash
kubectl -n osm get all
```
All the deployments and statefulsets should have 1 replica: 1/1
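To quickly spot anything that is not fully up, the following commands (assuming the default `osm` namespace) may help:

```bash
kubectl -n osm get deployments,statefulsets   # READY column should show 1/1 for each
kubectl -n osm get pods | grep -vi running    # pods not in Running state (plus the header line)
```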
#### Checking whether all processes/services are running in docker swarm
```bash
@@ -67,39 +75,8 @@ m1cxap6wkxmf osm_ro replicated 1/1
97r6t2zrs4ho osm_zookeeper replicated 1/1 wurstmeister/zookeeper:latest
```
#### Checking whether all processes/services are running in K8s
```bash
kubectl -n osm get all
```
### Issues on standard installation
#### Docker Swarm
##### `network netosm could not be found`
The error is `network "netosm" is declared as external, but could not be found. You need to create a swarm-scoped network before the stack is deployed`
It usually happens when a `docker system prune` is done with the stack stopped. The following script will create it:
```bash
#!/bin/bash
# Create OSM Docker Network ...
[ -z "$OSM_STACK_NAME" ] && OSM_STACK_NAME=osm
OSM_NETWORK_NAME=net${OSM_STACK_NAME}
echo Creating OSM Docker Network
DEFAULT_INTERFACE=$(route -n | awk '$1~/^0.0.0.0/ {print $8}')
DEFAULT_MTU=$(ip addr show $DEFAULT_INTERFACE | perl -ne 'if (/mtu\s(\d+)/) {print $1;}')
echo \# OSM_STACK_NAME = $OSM_STACK_NAME
echo \# OSM_NETWORK_NAME = $OSM_NETWORK_NAME
echo \# DEFAULT_INTERFACE = $DEFAULT_INTERFACE
echo \# DEFAULT_MTU = $DEFAULT_MTU
sg docker -c "docker network create --driver=overlay --attachable \
--opt com.docker.network.driver.mtu=${DEFAULT_MTU} \
${OSM_NETWORK_NAME}"
```
#### Juju
##### Juju bootstrap hangs
@@ -207,6 +184,31 @@ When dialog messages related to LXD configuration are shown, please answer in th
- << Default values apply for next questions >>
- **Do you want to setup an IPv6 subnet? No**
#### Docker Swarm
##### `network netosm could not be found`
The error is `network "netosm" is declared as external, but could not be found. You need to create a swarm-scoped network before the stack is deployed`
It usually happens when a `docker system prune` is done with the stack stopped. The following script will create it:
```bash
#!/bin/bash
# Create OSM Docker Network ...
[ -z "$OSM_STACK_NAME" ] && OSM_STACK_NAME=osm
OSM_NETWORK_NAME=net${OSM_STACK_NAME}
echo Creating OSM Docker Network
DEFAULT_INTERFACE=$(route -n | awk '$1~/^0.0.0.0/ {print $8}')
DEFAULT_MTU=$(ip addr show $DEFAULT_INTERFACE | perl -ne 'if (/mtu\s(\d+)/) {print $1;}')
echo \# OSM_STACK_NAME = $OSM_STACK_NAME
echo \# OSM_NETWORK_NAME = $OSM_NETWORK_NAME
echo \# DEFAULT_INTERFACE = $DEFAULT_INTERFACE
echo \# DEFAULT_MTU = $DEFAULT_MTU
sg docker -c "docker network create --driver=overlay --attachable \
--opt com.docker.network.driver.mtu=${DEFAULT_MTU} \
${OSM_NETWORK_NAME}"
```
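After running the script, you can verify that the swarm-scoped network exists again (the default name is `netosm` unless `OSM_STACK_NAME` was changed):

```bash
docker network ls | grep netosm
```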
### Issues on advanced installation (manual build of docker images)
#### Manual build of images. Were all docker images successfully built?
@@ -283,19 +285,22 @@ Error: "VIM Exception vimmconnConnectionException ConnectFailure: Unable to esta
- In order to debug potential issues with the connection, in the case of an OpenStack VIM, you can install the OpenStack client in the OSM VM and run some basic tests. For example:
```bash
$ # Install the OpenStack client
$ sudo apt-get install python-openstackclient
$ # Load your OpenStack credentials. For instance, if your credentials are saved in a file named 'myVIM-openrc.sh', you can load them with:
$ source myVIM-openrc.sh
$ # Test if the VIM API is operational with a simple command. For instance:
$ openstack image list
# Install the OpenStack client
sudo apt-get install python-openstackclient
# Load your OpenStack credentials. For instance, if your credentials are saved in a file named 'myVIM-openrc.sh', you can load them with:
source myVIM-openrc.sh
# Test if the VIM API is operational with a simple command. For instance:
openstack image list
```
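A raw reachability test of the VIM authentication endpoint from the host can also help; this sketch assumes your openrc file exported the standard `OS_AUTH_URL` variable:

```bash
curl -sk "$OS_AUTH_URL"   # should return a JSON document describing the identity API version(s)
```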
If the openstack client works, then make sure that you can reach the VIM from the RO docker:
If the openstack client works, then make sure that you can reach the VIM from the RO container:
```bash
$ docker exec -it osm_ro.1.xxxxx bash
$ curl <URL_CONTROLLER>
# If running OSM on top of docker swarm, go to the container in docker swarm
docker exec -it osm_ro.1.xxxxx bash
# If running OSM on top of K8s, go to the RO deployment in kubernetes
kubectl -n osm exec -it deployment/ro bash
curl <URL_CONTROLLER>
```
_In some cases, the errors come from the fact that the VIM was added to OSM using names in the URL that are not Fully Qualified Domain Names (FQDN)._
@@ -307,8 +312,11 @@ Think of an NFV infrastructure with tens of VIMs, first you will have to use dif
However, it is useful to have a means to work with lab environments using non-FQDN names. There are three options here. Probably you are looking for the third one, but we recommend the first one:
- Option 1. Change the admin URL and/or public URL of the endpoints to use an IP address or an FQDN. You might find this interesting if you want to bring your Openstack setup to production.
- Option 2. Modify `/etc/hosts` in the docker RO container. This is not persistent after reboots or restarts of the osm docker stack.
- Option 3. Modify `/etc/osm/docker/docker-compose.yaml` in the host, adding extra_hosts in the ro section with the entries that you want to add to `/etc/hosts` in the RO docker:
- Option 2. Modify `/etc/hosts` in the docker RO container. This is not persistent after reboots or restarts.
- Option 3a (for docker swarm). Modify `/etc/osm/docker/docker-compose.yaml` in the host, adding extra_hosts in the ro section with the entries that you want to add to `/etc/hosts` in the RO docker:
- Option 3b (for kubernetes). Modify `/etc/osm/docker/osm_pods/ro.yaml` in the host, adding extra_hosts in the ro section with the entries that you want to add to `/etc/hosts` in the RO docker:
With docker swarm, the modification of `/etc/osm/docker/docker-compose.yaml` would be:
```yaml
ro:
@@ -316,14 +324,33 @@ ro:
controller: 1.2.3.4
```
Then restart the stack:
Then:
```bash
docker stack rm osm
docker stack deploy -c /etc/osm/docker/docker-compose.yaml osm
```
This is persistent after reboots and restarts of the osm docker stack.
With kubernetes, the procedure is very similar. The modification of `/etc/osm/docker/osm_pods/ro.yaml` would be:
```yaml
...
spec:
...
hostAliases:
- ip: "1.2.3.4"
hostnames:
- "controller"
...
```
Then:
```bash
kubectl -n osm apply -f /etc/osm/docker/osm_pods/ro.yaml
```
This is persistent after reboots and restarts.
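To confirm that the alias is actually visible inside the RO pod after applying the change (a quick sanity check, using the deployment name from the commands above):

```bash
kubectl -n osm exec deployment/ro -- cat /etc/hosts
```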
### VIM authentication
@@ -374,7 +401,7 @@ If this does not work, typically it is due to one of these issues:
## Common issues with VCA/Juju
### Status is not coherent with running NS
### Juju status shows pending objects after deleting a NS
In extraordinary situations, the output of `juju status` could show pending units that should have been removed when deleting a NS. In those situations, you can clean up VCA by following the procedure below:
@@ -544,7 +571,45 @@ docker stack deploy -c /etc/osm/docker/osm_metrics/docker-compose.yml osm_metric
## Logs
### Checking the logs
### Checking the logs of OSM in Kubernetes
You can check the logs of any container with the following commands:
```bash
kubectl -n osm logs deployment/mon --all-containers=true
kubectl -n osm logs deployment/pol --all-containers=true
kubectl -n osm logs deployment/lcm --all-containers=true
kubectl -n osm logs deployment/nbi --all-containers=true
kubectl -n osm logs deployment/ng-ui --all-containers=true
kubectl -n osm logs deployment/ro --all-containers=true
kubectl -n osm logs deployment/grafana --all-containers=true
kubectl -n osm logs deployment/keystone --all-containers=true
kubectl -n osm logs statefulset/mysql --all-containers=true
kubectl -n osm logs statefulset/mongo --all-containers=true
kubectl -n osm logs statefulset/kafka --all-containers=true
kubectl -n osm logs statefulset/zookeeper --all-containers=true
kubectl -n osm logs statefulset/prometheus --all-containers=true
```
For live debugging, the following commands can be useful to save the log output to a file and show it on the screen:
```bash
kubectl -n osm logs -f deployment/mon --all-containers=true 2>&1 | tee mon-log.txt
kubectl -n osm logs -f deployment/pol --all-containers=true 2>&1 | tee pol-log.txt
kubectl -n osm logs -f deployment/lcm --all-containers=true 2>&1 | tee lcm-log.txt
kubectl -n osm logs -f deployment/nbi --all-containers=true 2>&1 | tee nbi-log.txt
kubectl -n osm logs -f deployment/ng-ui --all-containers=true 2>&1 | tee ng-ui-log.txt
kubectl -n osm logs -f deployment/ro --all-containers=true 2>&1 | tee ro-log.txt
kubectl -n osm logs -f deployment/grafana --all-containers=true 2>&1 | tee grafana-log.txt
kubectl -n osm logs -f deployment/keystone --all-containers=true 2>&1 | tee keystone-log.txt
kubectl -n osm logs -f statefulset/mysql --all-containers=true 2>&1 | tee mysql-log.txt
kubectl -n osm logs -f statefulset/mongo --all-containers=true 2>&1 | tee mongo-log.txt
kubectl -n osm logs -f statefulset/kafka --all-containers=true 2>&1 | tee kafka-log.txt
kubectl -n osm logs -f statefulset/zookeeper --all-containers=true 2>&1 | tee zookeeper-log.txt
kubectl -n osm logs -f statefulset/prometheus --all-containers=true 2>&1 | tee prometheus-log.txt
```
### Checking the logs in Docker Swarm
You can check the logs of any container with the following commands:
@@ -553,7 +618,7 @@ docker logs $(docker ps -aqf "name=osm_mon.1" -n 1)
docker logs $(docker ps -aqf "name=osm_pol" -n 1)
docker logs $(docker ps -aqf "name=osm_lcm" -n 1)
docker logs $(docker ps -aqf "name=osm_nbi" -n 1)
docker logs $(docker ps -aqf "name=osm_light-ui" -n 1)
docker logs $(docker ps -aqf "name=osm_ng-ui" -n 1)
docker logs $(docker ps -aqf "name=osm_ro.1" -n 1)
docker logs $(docker ps -aqf "name=osm_ro-db" -n 1)
docker logs $(docker ps -aqf "name=osm_mongo" -n 1)
@@ -571,7 +636,7 @@ docker logs -f $(docker ps -aqf "name=osm_mon.1" -n 1) 2>&1 | tee mon-log.txt
docker logs -f $(docker ps -aqf "name=osm_pol" -n 1) 2>&1 | tee pol-log.txt
docker logs -f $(docker ps -aqf "name=osm_lcm" -n 1) 2>&1 | tee lcm-log.txt
docker logs -f $(docker ps -aqf "name=osm_nbi" -n 1) 2>&1 | tee nbi-log.txt
docker logs -f $(docker ps -aqf "name=osm_light-ui" -n 1) 2>&1 | tee light-log.txt
docker logs -f $(docker ps -aqf "name=osm_ng-ui" -n 1) 2>&1 | tee light-log.txt
docker logs -f $(docker ps -aqf "name=osm_ro.1" -n 1) 2>&1 | tee ro-log.txt
docker logs -f $(docker ps -aqf "name=osm_ro-db" -n 1) 2>&1 | tee rodb-log.txt
docker logs -f $(docker ps -aqf "name=osm_mongo" -n 1) 2>&1 | tee mongo-log.txt
@@ -605,16 +670,16 @@ Log levels are:
- INFO
- DEBUG
For instance, to increase the log level to DEBUG for the NBI in a deployment of OSM over docker swarm:
For instance, to set the log level to INFO for the MON in a deployment of OSM over K8s:
```bash
docker service update --env-add OSMNBI_LOG_LEVEL=DEBUG osm_nbi
kubectl -n osm set env deployment mon OSMMON_GLOBAL_LOGLEVEL=INFO
```
For instance, to set the log level to INFO for the MON in a deployment of OSM over K8s:
For instance, to increase the log level to DEBUG for the NBI in a deployment of OSM over docker swarm:
```bash
kubectl -n osm set env deployment mon OSMMON_GLOBAL_LOGLEVEL=INFO
docker service update --env-add OSMNBI_LOG_LEVEL=DEBUG osm_nbi
```
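To double-check that the variable was applied, you can inspect the resulting configuration (a sketch, using the same resource names as above):

```bash
# K8s: show the environment configured for the MON deployment
kubectl -n osm describe deployment mon | grep -i loglevel
# Docker swarm: show the environment of the NBI service definition
docker service inspect osm_nbi --format '{{.Spec.TaskTemplate.ContainerSpec.Env}}'
```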
## How to report an issue
......
@@ -153,8 +153,8 @@ In order to install the OSM Client in your local Linux machine, you should follo
```bash
# Clean the previous repos that might exist
sudo sed -i "/osm-download.etsi.org/d" /etc/apt/sources.list
wget -qO - https://osm-download.etsi.org/repository/osm/debian/ReleaseEIGHT/OSM%20ETSI%20Release%20Key.gpg | sudo apt-key add -
sudo add-apt-repository -y "deb [arch=amd64] https://osm-download.etsi.org/repository/osm/debian/ReleaseEIGHT stable devops IM osmclient"
wget -qO - https://osm-download.etsi.org/repository/osm/debian/ReleaseNINE/OSM%20ETSI%20Release%20Key.gpg | sudo apt-key add -
sudo add-apt-repository -y "deb [arch=amd64] https://osm-download.etsi.org/repository/osm/debian/ReleaseNINE stable devops IM osmclient"
sudo apt-get update
sudo apt-get install -y python3-pip
sudo -H python3 -m pip install -U pip
......