How to configure your environment to develop with OSM

The aim of this chapter is to provide a guide for developers to set up their environment, so that the different OSM modules can be run and debugged locally (rather than inside a Docker container).

Alternatively, modified code can be tested by building a new Docker image with the code under test and applying it to the Docker container. These links explain how to do it:

Introduction

Modules run in separate Docker containers (except the juju controller, which uses an LXD container). The current installation uses Docker Swarm by default, but Kubernetes is also available as an installer option (with the switch -c k8s). The modules are:

  • kafka: Provides a Kafka bus used for OSM communication. This module relies on zookeeper.

  • zookeeper: Used by kafka.

  • nbi: North Bound Interface of OSM. RESTful server that follows the ETSI SOL005 interface. Relies on the mongo database and the kafka bus. For authentication it can optionally use keystone.

  • keystone: Used for NBI authentication and RBAC. It stores the users, projects and role permissions. It relies on mysql.

  • lcm: Provides the Life Cycle Management. It uses ro for resource orchestration and juju for configuration. It also relies on mongo.

  • ro: Performs the Resource Orchestration, i.e. the deployment at the VIM. Relies on mysql.

  • light-ui: Web user interface. It communicates with nbi.

  • mon: Performs OSM monitoring. Relies on mongo and mysql.

  • mongo: Common non-relational database for OSM modules.

  • mysql: Relational database server used for ro, keystone, mon and pol.

  • pol: Policy Manager for OSM.

  • prometheus: Used for monitoring.

In addition, LCM and NBI share a common file system where packages are stored. For Docker Swarm it is a shared Docker volume called osm_osm_packages. For Kubernetes it is the host mount path /var/lib/osm/osm/osm_osm_packages/_data.
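
To check where this shared storage lives on the host, the commands below should help (the volume name and path are the defaults mentioned above and may differ in customized installations):

# For Docker Swarm: show the mount point of the shared volume
docker volume inspect osm_osm_packages --format '{{ .Mountpoint }}'
# For Kubernetes: list the packages stored in the host mount path
ls /var/lib/osm/osm/osm_osm_packages/_data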

The following diagram shows the modules, the ports they expose, and their relationships, labeled with the names of the environment variables that control them. For clarity not all dependencies (kafka, mysql, …) are depicted.

     __________                                                              ________
    |          |                                                            |        |
    | light-ui |OSM_SERVER               _______                            |keystone|
    | :80      |----------------------> |       |-------------------------> |:5000   |
    |__________|                        | nbi   |                           |________|
                     OSMNBI_STORAGE_PATH| :9999 |OSMNBI_DATABASE_HOST        _______ 
    .............. <--------------------|_______|-------------------------> |       |
    . volume:    .                                                          |       |
    . osm_osm_   .                                                          | mongo |
    . packages   .   OSMLCM_STORAGE_PATH _______ OSMLCM_DATABASE_HOST       | :27017|
    .............. <--------------------|       |-------------------------> |_______|
                                        | lcm   |
    **************       OSMLCM_VCA_HOST|       |OSMLCM_RO_HOST
    * lxd: juju  * <--------------------|_______|--------------|
    * controller *                                             |
    **************                       _______               |             _______
                                        |       | <-------------            |       |
                                        | ro    |                           | mysql |  
                                        | :9090 |RO_DB_HOST                 | :3306 |
                                        |_______|-------------------------> |_______|

     _______     _______                 _______                             _________
    |       |   |       |               |       |                           |         |
    | mon   |   | pm    |               | kafka |KAFKA_ZOOKEEPER_CONNECT    |zookeeper|  
    | :8662 |   |       |               | :9092 |-------------------------> | :2181   |
    |_______|   |_______|               |_______|                           |_________|

For debugging, it is convenient to run the target module directly on the host while keeping the rest of the dependent modules in containers. The following sections describe how to achieve that.

General steps

1 Shut down the container you want to debug

First, you need to stop the module you want to debug. As OSM runs it as a Docker service, do not stop the container manually, because it will be relaunched automatically. Instead, scale the service to 0 to stop it and back to 1 to run it again.

# For Docker Swarm:
docker service scale osm_lcm=0
# For Kubernetes:
kubectl -n osm scale deployment lcm --replicas=0
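
To confirm that the module is actually stopped (using lcm as in the example above), check the replica count:

# For Docker Swarm:
docker service list | grep lcm
# For Kubernetes:
kubectl -n osm get deployment lcm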

2 Clone the module

git clone https://osm.etsi.org/gerrit/osm/LCM

3 Install the module

Inside the folder where the module is cloned, type the following command:

python3 -m pip install -e <code-folder>
# use option '--user' if you get permission errors.
# use option '-U' (upgrade) if you have already installed it from a different folder

Note: pip3 can be installed with sudo apt-get install python3-pip

Note: it is recommended to upgrade pip3 with python3 -m pip install -U pip

4 Setup the IDE

For this tutorial we will use PyCharm as the IDE. First, we will set “Python3” as the default Python interpreter:

PythonInterpreter.jpg

Next we will configure a new debug environment. For that, go to the “Run” menu and select “Edit Configurations”. In the window that appears, configure the script and the environment parameters.

PyCharmConfiguration.jpg

5 Configure it to interact with other modules

You need to provide the IP addresses of the modules it is going to communicate with. For that, you can use your “/etc/hosts” file.

If the module under development is running on the same server as the rest of the modules, use “127.0.0.1” (localhost) as the IP address of those modules. For instance, in the following example the module host names have been added to the line containing “127.0.0.1”:

127.0.0.1 localhost kafka kafka-0.kafka.osm.svc.cluster.local keystone nbi mongo mysql ro

If the module under development is running on a different server from the rest of the modules, you will need to provide the IP address of that server. For instance, in the following example we have added a new line resolving the module host names to the IP address “a.b.c.d”:

a.b.c.d kafka kafka-0.kafka.osm.svc.cluster.local keystone nbi mongo mysql ro

Note: the host ‘kafka-0.kafka.osm.svc.cluster.local’ is needed only for Kubernetes. Check that this is the proper name with the command kubectl -n osm exec -ti statefulset/kafka cat /etc/hosts
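
As a quick sanity check (assuming standard Linux name-resolution tools), verify that the names resolve to the expected address:

getent hosts kafka mongo ro
# or simply:
ping -c 1 mongo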

6 Install needed packages

It is possible that you will need to install some additional packages on your server. If needed, use python3 -m pip install or apt-get install for that.

Some modules import other OSM modules. The modules needed are:

  • n2vc: git clone https://osm.etsi.org/gerrit/osm/N2VC

  • common: git clone https://osm.etsi.org/gerrit/osm/common

  • IM: git clone https://osm.etsi.org/gerrit/osm/IM

Install them with:

python3 -m pip install -e common --user  # needed for LCM, NBI
python3 -m pip install -e N2VC --user  # needed for LCM
python3 -m pip install IM --user -U  # needed for NBI and RO. Option -e does not work
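
To verify that the installed modules are visible to your interpreter, a quick import check can be used; the Python package names osm_common, n2vc and osm_im are the usual ones, but double-check them in each repository's setup.py:

python3 -c "import osm_common, n2vc, osm_im; print('OSM libraries found')"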

7 Expose needed ports of Docker services

Expose services for Docker Swarm

To expose the services needed by the module under debug (mongo and kafka are needed for NBI, LCM and MON; keystone, mysql and ro may also be needed depending on the module):

docker service update osm_kafka --publish-add 9092:9092
docker service update osm_keystone --publish-add 5000:5000
docker service update osm_mongo --publish-add 27017:27017
docker service update osm_mysql --publish-add 3306:3306
docker service update osm_ro --publish-add 9090:9090
# check the exposed ports with:
docker service list

Alternatively, you can modify the osm Docker stack by editing the file /etc/osm/docker/docker-compose.yaml, adding/uncommenting the exposed ports, and restarting the stack:

sudo vi /etc/osm/docker/docker-compose.yaml
# add/uncomment at section mongo:
#    ports:
#    - "27017:27017"
# same for kafka
#    ports:
#    - "9092:9092"
docker stack rm osm && sleep 60
docker stack deploy -c /etc/osm/docker/docker-compose.yaml osm
# scale the service to 0 again, as in step 1 of this section
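
Once the ports are published, you can check that they are reachable from the host, for instance with netcat (assuming it is installed):

nc -zv 127.0.0.1 27017   # mongo
nc -zv 127.0.0.1 9092    # kafka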

Expose services for Kubernetes

Download the file ‘debug_k8s_services.yaml’ from https://osm.etsi.org/gerrit/#/c/osm/devops/+/9592/2/installers/docker/debug_k8s_services.yaml. Optionally remove the sections that are not needed, and apply it with:

kubectl -n osm apply -f debug_k8s_services.yaml
# check the exposed ports with (those ending in -debug are the ones added):
kubectl -n osm get service
# Undo with:
kubectl -n osm delete service/mongo-debug  # mongo-debug, mysql-debug, kafka-debug, ro-debug and/or keystone-debug

NBI

Ensure the services for mongo, kafka and keystone expose their ports. See Developer_HowTo - Expose needed ports of Docker services

Additionally, you may want the light-ui Docker container to use your local copy of NBI:

For Docker Swarm use one of:

  • Update the light-ui Docker service:

docker service update osm_light-ui --force --env-add "OSM_SERVER=172.17.0.1"
# Get the needed address at your setup by 'ip a | grep docker0'
  • … Or edit the file /etc/osm/docker/docker-compose.yaml and set, in the light-ui section, the required IP address from Docker to the VM:

OSM_SERVER: 172.17.0.1    # nbi
# Get the needed address at your setup by 'ip a | grep docker0'

Then restart stack to apply this change with:

docker stack rm osm
docker stack deploy -c /etc/osm/docker/docker-compose.yaml osm

Alternatively, for Kubernetes, patch the deployment:

kubectl -n osm patch deployment light-ui --patch '{"spec": {"template": {"spec": {"containers": [{"name": "light-ui", "env": [{"name": "OSM_SERVER", "value": "172.17.0.1"}] }]}}}}'
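
You can verify that the patch took effect by inspecting the deployment environment (standard kubectl usage; the jsonpath assumes light-ui is the first container in the pod spec):

kubectl -n osm get deployment light-ui -o jsonpath='{.spec.template.spec.containers[0].env}'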

Clone and install the needed IM and NBI packages:

# Install Information Model IM
git clone https://osm.etsi.org/gerrit/osm/IM
python3 -m pip install IM --user

git clone https://osm.etsi.org/gerrit/osm/NBI
# configure gerrit commit-msg hook
curl -Lo NBI/.git/hooks/commit-msg http://osm.etsi.org/gerrit/tools/hooks/commit-msg
chmod u+x NBI/.git/hooks/commit-msg
cp NBI/.gitignore-common NBI/.gitignore
python3 -m pip install -e NBI  # try with --user if you get permission errors

Python interpreter: Python3
Script: $INSTALLATION_FOLDER/NBI/osm_nbi/nbi.py

Environment variables:

  • OSMNBI_DATABASE_COMMONKEY: must be the same value used by the deployed OSM modules. Get it with cat /etc/osm/docker/nbi.env or kubectl -n osm exec -ti deployment/nbi env.

  • OSMNBI_AUTHENTICATION_SERVICE_PASSWORD: obtain in the same way as before

  • OSMNBI_STORAGE_PATH: Path of the Docker volume for file storage. Both LCM and NBI must share the same path. You can either:

    • Create a folder and debug both NBI and LCM at the same time (needed if you develop on a different server than OSM); or

    • Use the Docker volume.

      • For Docker Swarm, discover the local path (mount point) with docker volume inspect osm_osm_packages and grant PyCharm write permissions on it (the path may differ in your environment): sudo chmod o+rx /var/lib/docker /var/lib/docker/volumes; sudo chmod -R o+w /var/lib/docker/volumes/osm_osm_packages/_data.

      • For Kubernetes, use the path ‘/var/lib/osm/osm/osm_osm_packages/_data’ and run sudo chmod -R o+w /var/lib/osm/osm/osm_osm_packages/_data

  • OSMNBI_DATABASE_HOST: Mongo IP, in case the host mongo is not in the /etc/hosts file

  • OSMNBI_STATIC_DIR: <Absolute path of NBI>/osm_nbi/html_public
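
If you prefer to launch NBI from a plain terminal instead of the IDE, the same variables can be exported in a shell. A minimal sketch, with placeholder values that you must replace with the ones obtained above (paths shown assume Docker Swarm):

export OSMNBI_DATABASE_COMMONKEY=<commonkey-from-nbi.env>
export OSMNBI_AUTHENTICATION_SERVICE_PASSWORD=<password-from-nbi.env>
export OSMNBI_STORAGE_PATH=/var/lib/docker/volumes/osm_osm_packages/_data
export OSMNBI_DATABASE_HOST=mongo
export OSMNBI_STATIC_DIR=$PWD/NBI/osm_nbi/html_public
python3 NBI/osm_nbi/nbi.py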

Finally, stop the running Docker service. For Docker Swarm, run docker service scale osm_nbi=0 (scale back to 1 to undo). For Kubernetes:

kubectl -n osm scale deployment nbi --replicas=0
kubectl -n osm delete service/nbi
# undo both with:
# kubectl -n osm apply -f /etc/osm/docker/osm_pods/nbi.yaml

LCM

Ensure the services for mongo and kafka expose their ports. See Developer_HowTo - Expose needed ports of Docker services

Install the needed OSM packages if you intend to modify or develop them:

# N2VC:
git clone https://osm.etsi.org/gerrit/osm/N2VC
python3 -m pip install -e N2VC  # try with --user if you get permission errors

# osm_common:
git clone https://osm.etsi.org/gerrit/osm/common
python3 -m pip install -e common  # try with --user if you get permission errors

Clone LCM

docker service scale osm_lcm=0
git clone https://osm.etsi.org/gerrit/osm/LCM
# configure gerrit commit-msg hook
curl -Lo LCM/.git/hooks/commit-msg http://osm.etsi.org/gerrit/tools/hooks/commit-msg
chmod u+x LCM/.git/hooks/commit-msg
cp LCM/.gitignore-common LCM/.gitignore
python3 -m pip install -e LCM  # try with --user if you get permission errors

Python interpreter: Python3
Script: $INSTALLATION_FOLDER/LCM/osm_lcm/lcm.py

Environment variables:

The values are stored in /etc/osm/docker/lcm.env, but here it is explained how to obtain them.

  • OSMLCM_DATABASE_COMMONKEY: must be the same value used by NBI. Get it with cat /etc/osm/docker/lcm.env or kubectl -n osm exec -ti deployment/lcm env.

  • OSMLCM_STORAGE_PATH: Path of the Docker volume for file storage. Both LCM and NBI must share the same path. You can either:

    • Create a folder and debug both NBI and LCM at the same time (needed if you develop on a different server than OSM); or

    • Use the Docker volume.

      • For Docker Swarm, discover the local path (mount point) with docker volume inspect osm_osm_packages and grant PyCharm write permissions on it (the path may differ in your environment): sudo chmod o+rx /var/lib/docker /var/lib/docker/volumes; sudo chmod -R o+w /var/lib/docker/volumes/osm_osm_packages/_data.

      • For Kubernetes, use the path ‘/var/lib/osm/osm/osm_osm_packages/_data’ and run sudo chmod -R o+w /var/lib/osm/osm/osm_osm_packages/_data

  • OSMLCM_DATABASE_HOST: Mongo IP, in case the host ‘mongo’ is not in the /etc/hosts file. See Developer_HowTo#5 Configure it to interact with other modules

  • OSMLCM_RO_HOST: RO IP, in case the host ‘ro’ is not in the /etc/hosts file

  • OSMLCM_VCA_CACERT: To get this value run the following command in the OSM host:

    • juju controllers --format json | jq -r '.controllers["osm"]["ca-cert"]'

  • OSMLCM_VCA_PUBKEY: To get this value run the following command in the OSM host:

    • cat $HOME/.local/share/juju/ssh/juju_id_rsa.pub

  • OSMLCM_VCA_SECRET: To get this value run the following command in the OSM host:

    • grep password /home/ubuntu/.local/share/juju/accounts.yaml |awk '{print $2}'

  • OSMLCM_VCA_HOST: It will be different depending on where your development environment is running:

    • In case you run it on the same server as OSM, use the following command to get the IP (<VCA_IP>):

      • juju show-controller|grep api-endpoints|awk -F\' '{print $2}'|awk -F\: '{print $1}'

    • In case you use a different server than OSM, use the IP address of the OSM host (<OSM_IP>). In addition, you need to redirect port 17070 inside the OSM host to the VCA container by one of:

      • Configure the following iptables rule in the OSM host (not persistent across reboots):

        • sudo iptables -t nat -A PREROUTING -p tcp -d <OSM_IP> --dport 17070 -j DNAT --to <VCA_IP>:17070

      • or create an ssh tunnel inside the OSM host (temporary, only until the session is closed):

        • ssh -L 0.0.0.0:17070:<VCA_IP>:17070 root@<VCA_IP>

  • OSMLCM_GLOBAL_LOGLEVEL: DEBUG
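
As with NBI, the variables can be exported in a shell to run LCM from a terminal. A minimal sketch with placeholder values (replace them with the values obtained above; paths shown assume Docker Swarm):

export OSMLCM_DATABASE_COMMONKEY=<commonkey-from-lcm.env>
export OSMLCM_STORAGE_PATH=/var/lib/docker/volumes/osm_osm_packages/_data
export OSMLCM_DATABASE_HOST=mongo
export OSMLCM_RO_HOST=ro
export OSMLCM_VCA_HOST=<VCA_IP>
export OSMLCM_VCA_SECRET=<vca-password>
export OSMLCM_VCA_PUBKEY="$(cat $HOME/.local/share/juju/ssh/juju_id_rsa.pub)"
export OSMLCM_VCA_CACERT=<ca-cert-obtained-with-juju>
export OSMLCM_GLOBAL_LOGLEVEL=DEBUG
python3 LCM/osm_lcm/lcm.py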

Finally, stop the running Docker service.

  • For Docker Swarm, run docker service scale osm_lcm=0. To undo, just scale it to 1.

  • For Kubernetes, run kubectl -n osm scale deployment lcm --replicas=0. To undo, just scale it to 1.

RO

Ensure the mysql service exposes its port. See Developer_HowTo - Expose needed ports of Docker services

Additionally, you may want the lcm Docker container to use your local copy of RO.

For Docker Swarm, it can be done by one of:

  • Modify the Docker service:

  docker service update osm_lcm --env-add OSMLCM_RO_HOST=172.17.0.1
  # check the right IP address to use in your system with:
  ip a | grep docker0
  • Edit the file /etc/osm/docker/docker-compose.yaml and set, in the lcm section, the required IP address from Docker to the VM (restart the stack afterwards):

     OSMLCM_RO_HOST: 172.17.0.1    # ro
     # Get the needed address at your setup by 'ip a | grep docker0'

Then restart stack to apply this change with:

docker stack rm osm
docker stack deploy -c /etc/osm/docker/docker-compose.yaml osm
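
A quick way to confirm that the lcm service now points to your local RO is to look for the variable in the service definition:

docker service inspect osm_lcm | grep OSMLCM_RO_HOST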

Alternatively, for Kubernetes, patch the deployment:

kubectl -n osm patch deployment lcm --patch '{"spec": {"template": {"spec": {"containers": [{"name": "lcm", "env": [{"name": "OSMLCM_RO_HOST", "value": "172.17.0.1"}] }]}}}}'

Install needed packages and clone code:

# clone and configure gerrit commit-msg hook
git clone https://osm.etsi.org/gerrit/osm/RO
curl -Lo RO/.git/hooks/commit-msg http://osm.etsi.org/gerrit/tools/hooks/commit-msg
chmod u+x RO/.git/hooks/commit-msg
cp RO/.gitignore-common RO/.gitignore

Install the needed system packages and install RO with python3-pip:

sudo DEBIAN_FRONTEND=noninteractive apt-get -y install libssl-dev libmysqlclient-dev mysql-client
python3 -m pip install --user -e ./RO/RO
# client
python3 -m pip install --user -e ./RO/RO-client

Install additional plugins for VIMs and SDN controllers:

# VMware VIM
sudo DEBIAN_FRONTEND=noninteractive apt-get -y install genisoimage
python3 -m pip install --user -U progressbar pyvmomi pyvcloud==19.1.1
python3 -m pip install --user -e ./RO/RO-VIM-vmware

# openstack VIM
python3 -m pip install -U --user networking-l2gw
python3 -m pip install --user -e ./RO/RO-VIM-openstack

# other VIMs
python3 -m pip install --user -e ./RO/RO-VIM-openvim
python3 -m pip install --user -e ./RO/RO-VIM-aws
python3 -m pip install --user -e ./RO/RO-VIM-azure
python3 -m pip install --user -e ./RO/RO-VIM-fos

# SDN plugins
python3 -m pip install --user -e ./RO/RO-SDN-dynpac
python3 -m pip install --user -e ./RO/RO-SDN-tapi
python3 -m pip install --user -e ./RO/RO-SDN-onos_vpls
python3 -m pip install --user -e ./RO/RO-SDN-onos_openflow
python3 -m pip install --user -e ./RO/RO-SDN-floodlight_openflow
python3 -m pip install --user -e ./RO/RO-SDN-arista

Python interpreter: Python3
Script: $INSTALLATION_FOLDER/RO/RO/openmanod.py

Environment variables:

  • RO_DB_HOST, RO_DB_OVIM_HOST: mysql or <OSM_IP>, depending on whether you are running on the same server as OSM or not.

  • RO_LOG_LEVEL: DEBUG
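
As with the other modules, RO can be started from a terminal by exporting the variables first. A minimal sketch with placeholder values (additional RO_DB_* credentials from the RO env file under /etc/osm/docker/ may also be needed):

export RO_DB_HOST=mysql
export RO_DB_OVIM_HOST=mysql
export RO_LOG_LEVEL=DEBUG
python3 RO/RO/openmanod.py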

Finally, stop the running Docker service.

For Docker Swarm, run docker service scale osm_ro=0; scale back to 1 to undo.

For Kubernetes, you should follow this procedure:

kubectl -n osm scale deployment ro --replicas=0
kubectl -n osm delete service/ro
# undo both with:
# kubectl -n osm apply -f /etc/osm/docker/osm_pods/ro.yaml