Developer Guides for Specific OSM Modules

Developer Guide for RO Module

Getting Started

The recommended Linux distribution for the RO module is Ubuntu 18.04 LTS Server. PyCharm is a convenient tool for development and debugging.

The community installer installs all OSM modules as containers. However, for development, a virtual machine may be more suitable. To install the RO module, follow Developer on RO. There is an installation script that installs RO as a separate component, including the required MySQL database:

wget -O install-openmano.sh "https://osm.etsi.org/gitweb/?p=osm/RO.git;a=blob_plain;f=scripts/install-openmano.sh"
chmod +x install-openmano.sh
sudo ./install-openmano.sh -q --develop   #-h for help

Prepare your git environment to push with a proper user/email, push to gerrit, etc. See and configure:

Workflow_with_OSM_tools#Clone_your_project

Workflow_with_OSM_tools#Configure_your_Git_environment

Workflow_with_OSM_tools#Commit_changes_to_your_local_project

Workflow_with_OSM_tools#Push_your_contribution_to_Gerrit

Generate a .gitignore, you can use the .gitignore-common example that skips PyCharm and Eclipse files:

cp RO/.gitignore-common RO/.gitignore
#edit to include your local files to ignore

Programming Language

The RO module uses Python3. Descriptors can be YAML (preferred, as it is more readable and allows comments) or JSON.

Code Style

Please follow the PEP8 style guide for all Python code. Lines may be up to 120 characters long.

Logging

Use the appropriate logging levels when logging the messages. An example is shown below:

   self.logger.debug("Changing state to %s", next_state)

Logging levels (global and per module) are specified in openmanod.cfg.

Prefer a few informative log messages over many verbose ones. For example, if fetching a server fails, include the complete URL in the message.

Avoid emitting several log lines in a row.

WRONG:

 self.logger.debug("Entering method A")
 self.logger.debug("Contacting server")

RIGHT:

 self.logger.debug("method A, contacting server %s", url)

When the traceback is needed (the call stack that generated the exception), use the exc_info=True parameter:

 self.logger.error("Exception %s when ...", exception, exc_info=True)
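The pattern above can be exercised end to end. A minimal sketch (the logger name, URL and get_server function are illustrative, not taken from the RO code base):

```python
import io
import logging

# Configure a module logger with an in-memory handler just for this sketch;
# in RO the levels come from openmanod.cfg instead.
logger = logging.getLogger("ro.example")
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("%(levelname)s %(name)s %(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)

def get_server(url):
    # stand-in for a real VIM call that fails
    raise ConnectionError("cannot reach {}".format(url))

url = "http://10.0.0.1:9999/v2.1/servers"
try:
    get_server(url)
except ConnectionError as exception:
    # one informative line: the complete URL plus the traceback (exc_info=True)
    logger.error("Exception %s when getting server %s", exception, url, exc_info=True)
```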

Exceptions

Code must be written so that functions and methods raise an exception when something goes wrong, instead of returning a negative or false value.

Example

WRONG:

   def get_ip_address():
       ...
       if fail:
           return False, "Fail because xxx"
       return True, ip

   ...
   result, ip = get_ip_address()
   if not result:
       return False, "Cannot get ip address..."

RIGHT:

   def get_ip_address():
       ...
       if fail:
           raise customException("Fail because ...")
       return ip

   ...
   try:
       ip = get_ip_address()
       ...
   except customException as e:
       raise customException2(str(e))
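In Python 3 the re-raise can also chain the original exception with raise ... from, which keeps the first traceback attached. A sketch with illustrative exception names:

```python
class VimConnException(Exception):
    pass

class NsException(Exception):
    pass

def get_ip_address(fail=False):
    if fail:
        raise VimConnException("Fail because the VIM returned no IP")
    return "10.0.0.5"

def deploy():
    try:
        return get_ip_address(fail=True)
    except VimConnException as e:
        # re-raise at the caller's abstraction level; 'from e' preserves the cause
        raise NsException("Cannot get ip address: {}".format(e)) from e
```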

Directory Organization

The code is organized into the following high-level directories:

  • RO-plugin/ contains the base classes for VIM and SDN plugins, as well as dummy plugins

  • NG-RO/ the new-generation RO, the main server engine

  • RO-SDN-*/ contains the SDN plugins

  • RO-VIM-*/ contains the VIM plugins

  • RO/ (Deprecated) the old RO server engine

  • RO-client/ (Deprecated) contains RO's own client CLI

RO Architecture

NG-RO-Architecture.png

NG-RO Server modules

The NG-RO module contains the following Python modules:

ro_main.py Starting point. Loads the configuration from the ro.cfg file and overrides it with environment variables of the form “OSMRO_XXX_YYY”. Receives HTTP requests and calls ns.py methods. The accepted URLs are:

  • /ro/version

    • GET: To get the version

  • /ro/ns/v1/deploy

    • GET: just to get list of Network Services (debugging)

  • /ro/ns/v1/deploy/(nsrs_id)

    • POST: To create a NS action, that is, a modification of the current deployment

    • GET: Return the list of actions registered over this NS

    • DELETE: Removes all the database entries for this NS

  • /ro/ns/v1/deploy/(nsrs_id)/(action_id)

    • GET: Obtain status over an action or incremental deployment
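A small helper that builds the deploy URLs listed above can clarify how they fit together. The base URL (host and port 9090) is an assumption; adjust it to your deployment:

```python
# Sketch only: URL construction for the NG-RO deploy endpoints.
BASE = "http://localhost:9090/ro"

def deploy_url(nsr_id, action_id=None):
    """URL for POST/GET/DELETE on a NS, or GET on one action if action_id is given."""
    url = "{}/ns/v1/deploy/{}".format(BASE, nsr_id)
    if action_id is not None:
        url += "/{}".format(action_id)
    return url
```

An HTTP library such as requests can then POST the NS action content to these URLs and poll the per-action URL for status.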

An example of the content for creating a NS action is the following:

action_id: proposed id for the action. Normally equals the nslcmop_id. RO will append a suffix if the id already exists, to make it unique.
cloud_init_content: {'(vnfr_id):file:cloud-init.cfg': cloud_init_text}
flavor:
  - id: '0'  # unique in this list
    guest-epa: {'cpu-pinning-policy': 'DEDICATED', 'cpu-thread-pinning-policy': 'PREFER', 'mempage-size': 'LARGE', 'numa-node-policy': {'mem-policy': 'STRICT', 'node-cnt': 1}}
    name: 'ubuntu_cloudinit-vnf-VM-flv'
    storage-gb: 10
    vim_info: {'vim:a9adcb0b-ae70-4e09-9b40-e78b94655829': {}}
image:
  - id: '0'  # unique in this list
    image: ubuntu16.04
    vim_info: {'vim:a9adcb0b-ae70-4e09-9b40-e78b94655829': {}}
name: 'NS_NAME'
ns:
    vld:
      - id: 'mgmtnet'
        mgmt-network: True
        name: mgmtnet
        vim_info:  # vims/sdnc where this vld must be deployed, and parameters for creation
            vim:a9adcb0b-ae70-4e09-9b40-e78b94655829:
                vim_network_name: 'internal'  # look for a network with this name
    vnf: # vnfs to deploy
      - id: 2434c189-7569-4cad-8bf4-67b2e6bbe4b7  # vnf record id
        additionalParamsForVnf: {}  # parameters used for jinja2 cloud init
        member-vnf-index-ref: '1'
        vld:
        - id: internal vld_id
          vim_info: {vim:a9adcb0b-ae70-4e09-9b40-e78b94655829: {}}
        vdur:
          - id: ebac7ccb-82f0-49e1-b81a-c03760b6cc58
            additionalParams:  # parameters used for jinja2 cloud init
            cloud-init: e015e1ef-86ad-4daf-8b42-603c97e6127b:file:cloud-init.cfg  # must be at cloud_init_content
            count-index: 0   # every instance will have a separate vdur entry
            ns-flavor-id: '0'  # must exist at flavor.id
            ns-image-id: '0'   # must exist at image.id
            ssh-keys: list of ssh-keys to inject with cloud-init
            vim_info: {'vim:a9adcb0b-ae70-4e09-9b40-e78b94655829': {}}
            interfaces:
              - mgmt-interface: True
                mgmt-vnf: True
                name: eth0
                ns-vld-id: mgmtnet  # connect to ns vld. This id must exist at ns.vld.id or...
                vnf-vld-id: mgmtnet  # connect to vnf vld. This id must exist at ns.vnf.vld.id

ns.py Main engine. Computes the requests. Main methods are:

  • method deploy: Used to create/delete/modify a NS. It computes the differences between the current deployment (read from the database collections nsrs and vnfrs) and the target deployment given in the body of the HTTP request. It creates tasks (database collection ro_tasks) to reach the target. The same method is used for a new instantiation, a scale operation or a termination (empty target). An action id can be provided (normally equal to the nslcmop_id); otherwise a new id is created and returned for later polling.

  • method status: Gets the status of a previously created action. The action_id must be supplied.

  • method delete: Removes the database entries of a NS when they are no longer needed. Used when the NS is deleted.

This module also manages the creation of ns_threads and tracks the VIMs/SDNCs/WIMs that they are processing.
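The diff idea behind the deploy method can be sketched as follows (illustrative code, not the real ns.py implementation):

```python
def diff_tasks(current_ids, target_ids):
    """Derive CREATE/DELETE tasks from current vs target element ids."""
    to_create = sorted(set(target_ids) - set(current_ids))
    to_delete = sorted(set(current_ids) - set(target_ids))
    return ([{"action": "CREATE", "item_id": i} for i in to_create] +
            [{"action": "DELETE", "item_id": i} for i in to_delete])

# a termination is just a deploy with an empty target
tasks = diff_tasks(["net-a", "vm-1"], [])
```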

ns_thread.py Manages and performs all the ro_tasks of one or several VIM/SDNC/WIM. Each thread has a queue to receive orders:

  • load_vim/reload_vim: to load or reload a VIM/SDNC/WIM and start processing ro_tasks of this element

  • check_vim: Load if needed a VIM/SDNC/WIM, check connectivity and credentials and update its status on database

  • unload_vim: Stop processing ro_tasks of this VIM/SDNC/WIM

  • terminate: finish thread

Apart from reading this queue, its main work is reading and processing pending ro_tasks. It performs lock/unlock by writing into the key locked_at, for exclusive processing in HA deployments:

  • It looks for ro_tasks where to_check_at is less than the current time, target_id is one of its VIMs, and locked_at is less than the current time minus LOCKED_TIME

  • It locks the ro_task by writing the current time into locked_at. If this fails, it abandons the ro_task

  • It processes all the tasks, using the VimInteraction classes, and then updates the database (target_record) with the result

  • It updates and unlocks the ro_task (by setting locked_at to the current time minus LOCKED_TIME)
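The lock semantics can be sketched in plain Python. This is illustrative only: the real code performs the read-and-write atomically in the database, and the LOCKED_TIME value here is an assumption:

```python
import time

LOCKED_TIME = 600  # seconds a lock is honoured before it expires (illustrative value)

def try_lock(ro_task, now=None):
    """Return True and take the lock only if the previous lock has expired."""
    now = time.time() if now is None else now
    if ro_task.get("locked_at", 0) > now - LOCKED_TIME:
        return False  # another worker holds a fresh lock: abandon this ro_task
    ro_task["locked_at"] = now
    return True

def unlock(ro_task, now=None):
    now = time.time() if now is None else now
    # writing a time in the past makes the entry immediately lockable again
    ro_task["locked_at"] = now - LOCKED_TIME
```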

vim_admin.py Reads the kafka topics for VIM/SDNC/WIM (broadcast mode over all RO workers) and orders to load/check/unload them. It also detects unattended ro_tasks (for example, after a crash of a previous RO worker) and orders the corresponding VIM/SDNC/WIM to be loaded so that these ro_tasks are processed (done by the vim_watcher method).

html_out.py Serves a simple UI for developers.

validation.py Contains the jsonschema definitions used to validate the input content of HTTP requests.

Database content

NG-RO manages two collections in a non-relational database:

ro_tasks There is one ro_task entry per element in a VIM/SDNC/WIM (a network, flavor, image, VM). It contains a list of tasks. A task is an action such as CREATE (find with a filter, or create with params), EXEC (e.g. inject a key) or DELETE. Each task contains the target (nsrs, vnfrs) with the path where the status must be updated upon completion. It also stores internal plugin information, used by the plugin itself for later deletion.

    _id: unique id
    locked_by: just for debugging
    locked_at: Time (with up to 6 decimals) when this entry was locked. After some time it is automatically considered unlocked, in case the locking worker crashed
    target_id: has the format "vim:vim_id", "sdn:sdnc_id" or "wim:wim_id"
    vim_info:  # object with vim_information needed to maintain and delete:
      created: whether the item has been created by OSM (and therefore needs to be deleted)
      created_items: extra information returned by plugins and used by them to delete
      vim_id: internal VIM id
      vim_name: internal VIM name
      vim_status: internal VIM status
      vim_details: text with error message
      refresh_at: when this information needs to be refreshed
    modified_at: when it has been modified
    created_at: when it has been created
    to_check_at: when it needs to be processed/refreshed
    tasks: # list with
    - action_id: all tasks have an action id, normally the nslcmop_id of the instantiate, scale or terminate operation
      nsr_id: NS id this task belongs to
      task_id: action_id:task_index
      status: SCHEDULED, DONE, ERROR
      action: CREATE, EXEC, DELETE
      item: vdu, net, flavor, image, sdn-net
      target_record: nsrs:<nsrs_id>:path.to.entry.to.update or vnfrs:<vnfrs_id>:path.to.update
      target_record_id: the identity of the element, with the format nsrs:<nsrs_id>:path.id
      common_id: used to identify the same element across different NSs. Used for netslice VLDs common to several NSs

Note that the same ro_task can contain tasks for several NSs. It can also contain several “CREATE” tasks, each one with a different target_record (the place where the result is updated).

ro_nsrs This collection stores important internal information for a NS. In the current version it stores the private and public ssh-keys (a different pair per NS) used to inject keys into the VMs. The public key is injected using cloud-init; other keys are injected using the private key.

public_key: public_ssh_key
private_key: private_ssh_key_encrypted
actions: [] # list of ids for every action. Used to avoid duplications

Tests

There are dummy VIM and SDN plugins that can be used for system tests without needing real facilities.

Creating a new VIM plugin

Choose a name, e.g. XXX.

Create a new plugin folder RO-VIM-XXX. Copy one of the existing ones; there are several connectors already created that can be used as examples, such as openstack. Create a class derived from vimconn.py. Implement the relevant functions.

DO NOT change the method names, parameters or parameter content. RO uses the same methods for all the VIMs and they cannot be changed to accommodate VIM specifics. VIM specifics must be solved inside the connector.

The new module may need specific configuration for the VIM; this is passed as a dictionary in the config variable of the constructor. For example, in the case of openstack, the config variable is used for enabling/disabling port_security_enable, specifying the name of the physical network used for the dataplane, regions, etc. The config variable is the right place for those parameters that are part of the VIM configuration and are not provided by RO in the method calls. See Openstack configuration#Add openstack to OSM and Configuring AWS for OSM Release TWO#Add AWS to OSM for examples.

Create methods must return a unique 'id', and they can also return an object with extra information that will be available to the delete method. The content is VIM dependent. It admits nested dictionaries, but must never use a forbidden mongo character (no key may start with '$' or contain '.').
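A minimal skeleton of such a connector (hypothetical names; the authoritative method names and signatures live in the vimconn base class and must be copied from there unchanged):

```python
class VimConnectorXxx:
    """Sketch of a VIM connector; a real one derives from the vimconn base class."""

    def __init__(self, uuid, name, tenant_id, tenant_name, url, config=None, **kwargs):
        # illustrative constructor: copy the exact parameter list from vimconn.py
        self.id = uuid
        self.url = url
        self.config = config or {}  # VIM-specific options arrive here

    def new_network(self, net_name, net_type):
        # must return a unique id plus extra info kept for the later delete call;
        # keys must not start with '$' nor contain '.' (mongo restriction)
        net_id = "xxx-net-" + net_name
        created_items = {"network:{}".format(net_id): True}
        return net_id, created_items
```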

Creating a new SDN/WIM plugin

Choose a name, e.g. XXX. Create a new plugin folder RO-SDN-XXX. Copy one of the existing examples. Modify the Makefile and setup.py files with the chosen name. Create a class derived from sdnconn.py. Implement the relevant functions. DO NOT change the method names, parameters or parameter content. RO uses the same methods for all SDNs, and they cannot be changed to accommodate SDN specifics. SDN specifics must be solved inside the connector.

The new module may need specific configuration for the SDN; this is passed as a dictionary in the config variable of the constructor. The config variable is the right place for those parameters that are part of the SDN configuration and are not provided by RO in the method calls.

If the plugin depends on any third-party library, add the Python dependencies to requirements.txt and setup.py, and the debian package dependencies to the stdeb.cfg file.

Arista SDN assist module

The plugin uses the Arista CloudVision controller to manage Arista switches. Arista CloudVision must be installed in order to centralize the management and operation of the deployed Arista switches. The main features of Arista CloudVision are to serve as a configuration repository, change control, and operation and monitoring tool. Communication between the OSM plugin and Arista CloudVision happens through a REST API, using the python library cvprac (https://github.com/aristanetworks/cvprac).

All the switch configuration is defined as a set of snippets of configuration lines, called configLets. The switches (CloudVision calls a switch a ‘device’) are associated in a hierarchy containing all the devices of the setup. CloudVision can assign a configLet to all the elements within the hierarchy, or directly to a single switch.

In the Arista plugin, the switches to configure are obtained from the port-mapping if supplied, and/or from the switches read from the config parameter. If neither port-mapping nor switches are provided, they are obtained from the Arista CloudVision inventory, looking for switches whose topology_type tag is set to ‘leaf’.

The topology used in the deployment can be defined with the topology entry of the config parameter (VXLAN-MLAG by default; also VXLAN, VLAN-MLAG, VLAN). The configLets are built by the class AristaSDNConfigLet in the file aristaConfigLet.py; the switch configuration varies depending on the topology used.

It is possible to provide specific parameters for each switch using the switches field in the config:

  • Loopback (Lo0) and BGP autonomous system number (AS)

  • ip (not used): if it is not present, the Arista CloudVision inventory is used to obtain it

The Arista CloudVision API workflow is the following:

  • When the create_connectivity_service or edit_connectivity_service methods are invoked, all the connection points are processed in order to create the configuration to apply to each switch, calling the internal method __processConnection, where:

    • The configLets are associated to the devices (leaf switches) in the method __updateConnection, calling the __configlet_modify and __device_modify methods:

      • A task (in ‘Pending’ state) is automatically created when this association is done.

      • By calling the method __exec_task, tasks are executed and the configuration is applied to the switch. Internally, Arista CloudVision associates the task with a change control (with the possibility of rolling back that operation)

      • In case of error, all the operations applied in CloudVision are rolled back.

    • The service information is returned in the response of the creation and edition calls, so that OSM saves it, and supplies in other calls for the same connectivity service.

    • The identifiers of all created services are stored in a generic configLet, ‘OSM_metadata’, to keep track of the resources managed by OSM through the Arista plugin.

    • When editing a service connection, if a switch no longer has a port in the service, the configLet is unassigned from the switch and deleted.

  • When the delete_connectivity_service method is invoked, it calls __rollbackConnection, which calls the __configlet_modify and __device_modify methods. (The same __exec_task operations described previously apply.)

The cvprac API calls used in the plugin are:

  • in check_credentials and __get_Connection

    • get_cvp_info: obtains general information about CloudVision

  • in get_connectivity_service_status

    • get_task_by_id

  • in method __updateConnection, __configlet_modify and __device_modify are invoked

    • get_configlet_by_name:

    • add_note_to_configlet

    • execute_task

  • in __rollbackConnection, __configlet_modify and __device_modify are invoked

    • execute_task

  • in __device_modify

    • get_configlets_by_device_id

    • remove_configlets_from_device

    • apply_configlets_to_device

  • in __configlet_modify

    • get_configlet_by_name

    • delete_configlet

    • update_configlet

    • add_configlet

  • in __get_configletsDevices

    • get_applied_devices

  • in __addMetadata, __removeMetadata, __get_srvVLANs, __get_srvUUIDs and __get_serviceConfigLets

    • get_configlet_by_name

  • in __load_inventory

    • get_inventory

This SDN plugin needs the following external libraries: requests, uuid, cvprac

Developer Guide for OpenVIM

Getting Started

OpenVIM is a lightweight VIM that was born with the idea of providing underlay dataplane connectivity with guaranteed bandwidth. The recommended Linux distribution for development is Ubuntu 16.04 LTS Server. PyCharm is a convenient tool for development.

You can install it following OpenVIM_installation_(Release_TWO)#Installation, but using the option “--develop”: sudo ./install-openvim.sh -q --develop

git clone https://osm.etsi.org/gerrit/osm/openvim
sudo ./scripts/install-openvim.sh --noclone --develop
# the --develop option avoids installing openvim as a service, so that it can be run from pycharm
# the --noclone option avoids fetching the sources from the repository, as they are already cloned

See also and follow Workflow with OSM tools - Clone your project

New code features must be incorporated into master:

git checkout master

Generate a .gitignore; you can use the .gitignore-common example, which skips PyCharm and Eclipse files:

cp openvim/.gitignore-common openvim/.gitignore
#edit to include your local files to ignore

Prepare your git environment to push with a proper user/email, push to gerrit. See and configure:

Workflow_with_OSM_tools#Configure_your_Git_environment

Workflow_with_OSM_tools#Commit_changes_to_your_local_project

Workflow_with_OSM_tools#Push_your_contribution_to_Gerrit

Programming Language

The OpenVIM module uses Python2. However, Python3 conventions should be followed as far as possible, to ease a possible future migration. For example:

BAD (Python2 only)                                 OK (Python2, compatible with Python3)
"format test string %s number %d" % (st, num)      "format test string {} number {}".format(st, num)
print a, b, c                                      print(a, b, c)
except Exception, e:                               except Exception as e:
if type(x) == X:                                   if isinstance(x, X):
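The table above in runnable form (the from __future__ import line makes print() behave the same on both interpreters):

```python
from __future__ import print_function  # print() behaves as in Python 3

st, num = "test", 3
line = "format test string {} number {}".format(st, num)  # instead of % formatting
print(line)

x = 5
assert isinstance(x, int)  # instead of: if type(x) == int

try:
    raise ValueError("boom")
except Exception as e:     # instead of: except Exception, e
    caught = str(e)
```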

Descriptors can be YAML (preferred, as it is more readable and allows comments) or JSON.

Code Style

Please follow PEP8 style guide for all the Python code.

Logging

Use the appropriate logging levels when logging the messages. An example is shown below:

   self.logger.debug("Changing state to %s", next_state)

Logging levels (general and per module) are specified at openmanod.cfg

Prefer a few informative log messages over many verbose ones. For example, if fetching a server fails, include the complete URL in the message.

Avoid emitting several log lines in a row.

WRONG:

self.logger.debug("Entering method A")
self.logger.debug("Contacting server")

RIGHT:

self.logger.debug("method A, contacting server %s", url)

When the traceback is needed (the call stack that generated the exception), use the exc_info=True parameter:

 self.logger.error("Exception %s when ...", exception, exc_info=True)

Exceptions

Code must be written so that functions and methods raise an exception when something goes wrong, instead of returning a negative or false value.

Example

WRONG:

   def get_ip_address():
       ...
       if fail:
           return False, "Fail because xxx"
       return True, ip

   ...
   result, ip = get_ip_address()
   if not result:
       return False, "Cannot get ip address..."

RIGHT:

   def get_ip_address():
       ...
       if fail:
           raise customException("Fail because ...")
       return ip

   ...
   try:
       ip = get_ip_address()
       ...
   except customException as e:
       raise customException2(str(e))

Directory Organization

(Draft) The code is organized into the following high-level directories:

  • /osm_openvim contains the openvim code. ovim.py is the main engine module

  • /test contains scripts and code for testing

  • /database_utils contains scripts for database creation, dumping and migration

  • /scripts general scripts, as installation, execution, reporting

  • /scenarios examples and templates of network scenario descriptors

  • / contains the entry point server openvimd and the client openvim

Openvim Architecture

(TBC)

Database changes

The database schema can be changed if needed. It is recommended to use a graphical tool (e.g. HeidiSQL) to change the database, change it back, and copy the SQL commands. Follow these steps:

  1. osm_openvim/ovim.py: increment ‘version’, ‘version_date’ and ‘database_version’

  2. database_utils/migrate_vim_db.sh: see the three “TODO” marks

    • 2a. Increment LAST_DB_VERSION

    • 2b. Add a comment to keep track of versions: [ $OPENVIM_VER_NUM -ge VSSS ] && DATABASE_TARGET_VER_NUM=DD #0.V.SS => DD

  3. Generate the new functions upgrade_to_XX() and downgrade_from_XX() and insert the SQL commands there. The last SQL command, over schema_version, is quite important to detect the database version.

Test several upgrades/downgrades, e.g. with migrate_vim_db.sh (latest version) and migrate_vim_db.sh 15 (version 15).

CLI client

The openvim code contains a python CLI (openvim) that allows friendly command execution. This CLI client can run on a machine different from the one where the openvimd server is running; openvim config shows where the server is (localhost by default).

Northbound Interface

openvim uses a REST API with YAML/JSON content. TBC

Running Unit Tests

Launching openvim

OpenVIM can run as a systemd service (it can be installed with ./scripts/install-openvim-service.sh -f openvim).

Or it can be run inside a screen (the preferred method for developers). The script to easily launch/remove it is ./scripts/service-openvim.sh. Execute it with -h to see the options.

Tests

TBC

Creating a new SDN plugin

(DRAFT)

OpenVIM installs proactive openflow rules for the known ports that must be connected among them. A port carries the information of the physical switch port, mac address (if known), and VLAN tag (if any). It creates both E-LINE (two ports, mac is not used) and E-LAN (several ports, mac is used to compute the destination) connections.

The openflow rules are computed in the file openflow_thread.py. Each openflow rule is added/removed using a plugin that interacts with the concrete SDN controller. Currently available plugins are ODL.py (for OpenDaylight), onos.py (for ONOS) and floodlight.py (for Floodlight). All of them create a class OF_conn derived from the base class OpenflowConn (in the file openflow_conn.py).

The method update_of_flows obtains the different ports that the network must connect. Creating a new SDN plugin is quite easy following these steps:

  1. Create a new osm_openvim/<name>.py file (copy ODL.py or onos.py). Choose an appropriate name, which will be used in the SDN-create command with the option --<name>. Another example is the file openvim/floodlight.py, but it is a bit more complicated because it contains extra code for auto-detecting the floodlight version.

  2. Rewrite the content of the functions, but do not modify the class and function names.

  3. First, test it with the osm_openvim/openflow tool. Execute ./openflow config to see the variables that need to be defined in the operating system environment. It is convenient to create a script to be run with source to set these variables, or to put them in .bashrc. Put the right values for the ip, port, etc. depending on your environment. For OF_CONTROLLER_TYPE put <name>. At this step the OPENVIM_* variables can be ignored.

  4. Test your file with this tool while you modify the functions. Follow these steps:

    1. Make __init__: use ‘vim.OF.<name>’ at logging.getLogger. Ignore user/password if not needed.

    2. Make get_of_switches and test with openflow switches

    3. Make obtain_port_correspondence and test with openflow port-list

    4. Make get_of_rules and test with openflow list --no-translate and openflow list

    5. Make new_flow and test with openflow add <rule-name> ... and ensure it is really inserted with openflow list

    6. Make del_flow and test with openflow delete <rule-name>

    7. Make clear_all_flows and test with openflow clear

  5. Run the automatic test test/test_openflow.sh

  6. Test with openvim with fake compute nodes and a real openflow controller

    1. Add a new SDN controller. Modify the openvim/openvimd.cfg with mode: OF only and the proper values for of_*** (e.g. of_controller: <name>)

    2. In order to put the right switch port names where the NIC interfaces of the fake compute hosts are connected, do one of the following two options:

      1. (Suggested): Modify the value of switch_port and switch_dpid at every fake compute nodes descriptors at /test/hosts/host-example?.json and insert real values

      2. or: Modify the file openvim/database_utils/of_ports_pci_correspondence.sql and change the names port?/? by valid names obtained with openflow port-list (the physical port names)

    3. Run the script (to be completed) for testing. Openflow rules must appear at the openflow controller, and all the networks must be ACTIVE (not ERROR) (see with openvim net-list)
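The connector skeleton implied by the steps above (the method names come from this guide; the signatures are illustrative, so copy the exact ones from the OpenflowConn base class in osm_openvim/openflow_conn.py):

```python
import logging

class OF_conn:
    """Sketch of a new openflow connector for osm_openvim/<name>.py."""

    def __init__(self, params):
        # params typically carries the controller ip, port, user and password
        self.name = "xxx"
        self.logger = logging.getLogger("vim.OF.xxx")  # 'vim.OF.<name>'

    def get_of_switches(self):
        raise NotImplementedError("test with: openflow switches")

    def obtain_port_correspondence(self):
        raise NotImplementedError("test with: openflow port-list")

    def get_of_rules(self, translate_of_ports=True):
        raise NotImplementedError("test with: openflow list")

    def new_flow(self, data):
        raise NotImplementedError("test with: openflow add <rule-name>")

    def del_flow(self, flow_name):
        raise NotImplementedError("test with: openflow delete <rule-name>")

    def clear_all_flows(self):
        raise NotImplementedError("test with: openflow clear")
```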

Developer Guide for OSM client

The OSM client is installed by default on the same host where OSM is installed (i.e. it does not run in a docker container).

However, in order to have a local installation ready for development, you will need to clone the osmclient repo and install it using python pip3.

Remove debian package (optional)

This is optional: you can remove the osmclient debian package to ensure that the only osmclient is the one from the cloned repo. In principle, this step is not required, because the installation of the client with the instructions below creates an executable file in $HOME/.local/bin, and that location is included in the PATH in Ubuntu 18.04 and later.

# check if already present, and other osm packages
dpkg -l | grep python3-osm
# Remove debian package
sudo apt-get remove -y python3-osmclient

Installation procedure

To install it:

# Ubuntu 18.04 pre-requirements
sudo apt-get install python3-pip libcurl4-openssl-dev libssl-dev
# Centos pre-requirements:
# sudo yum install python3-pip libcurl-devel gnutls-devel

# Upgrade pip and install dependencies (python-magic, osm-im)
# Next instructions install the dependencies at system level with sudo -H
sudo -H python3 -m pip install -U pip
sudo -H python3 -m pip install python-magic
sudo -H python3 -m pip install git+https://osm.etsi.org/gerrit/osm/IM --upgrade

# Clone the osmclient repo and install OSM client from the git repo.
git clone https://osm.etsi.org/gerrit/osm/osmclient
curl -Lo osmclient/.git/hooks/commit-msg http://osm.etsi.org/gerrit/tools/hooks/commit-msg
chmod u+x osmclient/.git/hooks/commit-msg
# Install osmclient using pip3 with the --user and -e options
python3 -m pip install --user -e osmclient
# Note: You may need to logout and login in order to have "/home/ubuntu/.local/bin" at PATH before executing osm

Any change done in your cloned osmclient folder applies to the ‘osm’ command. Relevant code files are:

  • osmclient/scripts/osm.py: main entry when using CLI

  • osmclient/client.py: main entry when used as library

  • osmclient/sol005: main folder with a module per command set

To uninstall, just:

python3 -m pip uninstall osmclient

kafka messages

OSM modules exchange messages via kafka. Each message is composed of a topic, a key and a value.

The kafka topics used in OSM are (in brackets, the modules that produce and consume them, with the following legend: [P: producer] [Ca: consumed by all instances, i.e. broadcast] [C: consumed by only one instance, i.e. load balanced]):

  • admin: [P: NBI,LCM] [Ca: NBI,LCM] administrative or OSM internal issues

  • vnfd: [P: NBI] related to vnf descriptor

  • nsd: [P: NBI] related to ns descriptor

  • nst: [P: NBI] related to netslice template

  • pdu: [P: NBI] related to physical data units

  • ns: [P: NBI] [C: NBI,LCM] related to network service

  • nsi: [P: NBI] [C: NBI,LCM] related to netslice instance

  • vim_account: [P: NBI] [C: LCM] related to VIM target

  • wim_account: [P: NBI] [C: LCM] related to WIM target

  • sdn: [P: NBI] [C: LCM] related to Software Define Network Controller

  • k8scluster: [P: NBI] [C: LCM] related to kubernetes cluster

  • k8srepo: [P: NBI] [C: LCM] related to kubernetes repository

  • pla: [P: LCM] [C: LCM] related to placement location algorithm

  • user: [P: NBI] related to OSM user

  • project: [P: NBI] related to OSM project

The kafka keys used are topic dependent, but common keys include an action to do (infinitive) or an action already done (past participle):

  • create/created: to create/has been created

  • delete/deleted: to delete/has been deleted

  • edit/edited: to update or change/has been updated or changed

  • instantiate/instantiated: to instantiate a ns or nsi

  • terminate/terminated: to remove the instantiation of a ns or nsi

  • scale/scaled: to scale a ns or nsi

  • action/actioned: to perform a day-2 operation over a ns or nsi

  • ping: used in the admin topic just to send a keep-alive

  • echo: used in the admin topic for debugging and kafka initialization. Receivers just log the content

  • revoke_token: used in the admin topic by NBI to inform other peers to clean the token cache of one or all tokens

  • get_placement/placement: used for pla to order a new placement calculation/receive a placement result

The content of kafka values is a dictionary, except for the echo key, whose value is plain text. The dictionary contains at least the _id key (OSM identifier), needed to identify the target element in the database. In the case of ns/nsi it also contains the operation id (nslcmop_id/nsilcmop_id).
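The constraints on keys and values described above can be captured in a small validator. This is a sketch only: the real modules use a kafka client library, and the nslcmop_id shown here is made up (the _id is the vnf record id used earlier in this guide):

```python
def make_message(topic, key, value):
    """Build a (topic, key, value) message honouring the conventions above."""
    if key == "echo":
        if not isinstance(value, str):
            raise ValueError("echo values are plain text")
    else:
        if not isinstance(value, dict) or "_id" not in value:
            raise ValueError("values must be a dict carrying the target _id")
    return {"topic": topic, "key": key, "value": value}

# illustrative ids only
msg = make_message("ns", "instantiate",
                   {"_id": "2434c189-7569-4cad-8bf4-67b2e6bbe4b7",
                    "nslcmop_id": "0f3a7d6c-1b2c-4d5e-8f90-123456789abc"})
```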