OpenVIM installation


Infrastructure

In order to run openvim in normal mode (see the available modes below) and deploy dataplane VNFs, an appropriate infrastructure is required. Below is a reference architecture for an openvim-based DC deployment.

Openvim Datacenter infrastructure

Openvim needs to be accessible from the Resource Orchestrator (openmano). In particular, openvim needs:

  • To make its API accessible from the Resource Orchestrator (openmano). That's the purpose of the VIM mgmt network in the figure.
  • To be connected to all compute servers through a network, the DC infrastructure network in the figure.
  • To offer management IP addresses to VNFs for VNF configuration from the Configuration Manager (Juju server). That's the purpose of the Telco/VNF management network.

Compute nodes, besides being connected to the DC infrastructure network, must also be connected to two additional networks:

  • Telco/VNF management network, used by Configuration Manager (Juju Server) to configure the VNFs
  • Inter-DC network, optionally required to interconnect this datacenter to other datacenters (e.g. in MWC'16 demo, to interconnect the two sites).

VMs will be connected to these two networks at deployment time if requested by openmano.

VM creation (openvim server)

  • Requirements:
    • 1 vCPU (2 recommended)
    • 4 GB RAM (4 GB are required to run OpenDaylight controller; if the ODL controller runs outside the VM, 2 GB RAM are enough)
    • 40 GB disk
    • 3 network interfaces to:
      • OSM network (to interact with RO)
      • DC infrastructure network (to interact with the compute servers and switches)
      • Telco/VNF management network (to provide IP addresses via DHCP to the VNFs)
  • Base image: ubuntu-16.04-server-amd64

Installation

Openvim is installed using a script:

wget -O install-openvim.sh "https://osm.etsi.org/gitweb/?p=osm/openvim.git;a=blob_plain;f=scripts/install-openvim.sh;hb=1ff6c02ecff38378a4d7366e223cefd30670602e"
chmod +x install-openvim.sh
sudo ./install-openvim.sh -q   # --help  for help on options
# NOTE: you can optionally provide the admin user (normally 'root') and the password of the database.

Once installed, manage it with sudo service osm-openvim start|stop|restart

Logs are at /var/log/osm/openvim.log

Configuration file is at /etc/osm/openvimd.cfg

There is a CLI client called openvim. Type "openvim config" to see the configuration bash variables.
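
For example, a quick check after installation, using the commands described above:

sudo service osm-openvim restart      # manage the service
tail -n 20 /var/log/osm/openvim.log   # inspect the logs
openvim config                        # show the CLI client configuration variables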


Openflow controller

For 'normal' or 'OF only' openvim modes you will need an openflow controller. The following openflow controllers are supported:

Floodlight version 0.90

You can install, for example, floodlight-0.90. The script openvim/scripts/install-floodlight.sh performs this installation for you, and the script service-floodlight can be used to start/stop it in a screen session with logs.

$ sudo openvim/scripts/install-floodlight.sh
$ service-floodlight start
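
When the controller is no longer needed, the same helper script can stop it:

$ service-floodlight stop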

ONOS

NOTE: This tutorial assumes you are developing ONOS in DevelVM and deploying it on DeployVM (which is the one in which OpenVIM runs)

System requirements

  • 2GB or more RAM (I personally recommend at least 4GB)
  • 2 or more processors
  • Ubuntu 14.04 LTS or 16.04 LTS (Checked with both distros)

Software requirements

Maven

Install Maven 3.3.9 in your Apps directory

$ cd ~
$ mkdir Apps
$ wget http://archive.apache.org/dist/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.tar.gz
$ tar -zxvf apache-maven-3.3.9-bin.tar.gz -C ./Apps/

NOTE: Although ONOS has been migrated to Buck, maven was used in earlier releases.

Karaf

Install Karaf 3.0.5 in your Apps directory

$ cd ~
$ wget http://archive.apache.org/dist/karaf/3.0.5/apache-karaf-3.0.5.tar.gz
$ tar -zxvf apache-karaf-3.0.5.tar.gz -C ./Apps/

Java 8

Install Java 8

$ sudo apt-get install software-properties-common -y
$ sudo add-apt-repository ppa:webupd8team/java -y
$ sudo apt-get update
$ sudo apt-get install oracle-java8-installer oracle-java8-set-default -y

Set your JAVA_HOME

export JAVA_HOME=/usr/lib/jvm/java-8-oracle

Verify it with the following command

$ env | grep JAVA_HOME

JAVA_HOME=/usr/lib/jvm/java-8-oracle

Download latest ONOS

$ git clone https://gerrit.onosproject.org/onos
$ cd onos
$ git checkout master

Edit onos/tools/dev/bash_profile and set the correct paths for ONOS_ROOT, MAVEN and KARAF_ROOT

# Please note that I am using my absolute paths here, yours may be different
export ONOS_ROOT=${ONOS_ROOT:-~/onos}
export MAVEN=${MAVEN:-~/Apps/apache-maven-3.3.9}
export KARAF_ROOT=${KARAF_ROOT:-~/Apps/apache-karaf-$KARAF_VERSION}

Edit ~/.bashrc and add the following line at the end:

#Please note that I am specifying here the absolute path of the bash_profile file in my machine, it may be different in yours
. ~/onos/tools/dev/bash_profile

Reload .bashrc or log out and log in again to apply the changes

. ~/.bashrc

Build and deploy ONOS

If you are using a stable release below 1.7, please use Maven; otherwise, use Buck. Depending on which tool you use to build ONOS, the deployment procedure is also different.

Build with maven

$ mci    # alias for 'mvn clean install'
$ op     # alias for 'onos-package'

Build with Buck

NOTE: ONOS currently uses a modified version of Buck, which has been packaged with ONOS. Please use this version until our changes have been upstreamed and released as part of an official Buck release.

$ sudo apt-get install zip unzip 
$ cd $ONOS_ROOT
$ tools/build/onos-buck build onos --show-output
Updating Buck...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 43.8M  100 43.8M    0     0   172k      0  0:04:20  0:04:20 --:--:--  230k
Archive:  cache/buck-v2016.11.12.01.zip
  inflating: buck                    
 extracting: .buck_version           
   creating: plugins/
  inflating: plugins/onos.jar        
  inflating: plugins/yang.jar        
Successfully updated Buck in /home/alaitz/Code/onos/bin/buck to buck-v2016.11.12.01.zip

Not using buckd because watchman isn't installed.
[-] PROCESSING BUCK FILES...FINISHED 3.1s [100%] 🐳  New buck daemon
[+] DOWNLOADING... (0.00 B/S, TOTAL: 0.00 B, 0 Artifacts)
[+] BUILDING...1m47.9s [99%] (720/721 JOBS, 720 UPDATED, 720 [99.9%] CACHE MISS)
 |=> IDLE
 |=> IDLE
 |=> IDLE
 |=> //tools/package:onos-package...  9.9s (checking local cache)
 |=> IDLE
 |=> IDLE
 |=> IDLE
 |=> IDLE
The outputs are:
//tools/package:onos-package buck-out/gen/tools/package/onos-package/onos.tar.gz


Run ONOS

$ cd $ONOS_ROOT
$ tools/build/onos-buck run onos-local -- clean debug

OpenDaylight

OpenDaylight integration has been tested with the Beryllium-SR4 release. The steps to integrate this version are the following:

Download the Beryllium SR4 release and extract it:

$ wget https://nexus.opendaylight.org/content/repositories/opendaylight.release/org/opendaylight/integration/distribution-karaf/0.4.4-Beryllium-SR4/distribution-karaf-0.4.4-Beryllium-SR4.tar.gz
$ tar xvf distribution-karaf-0.4.4-Beryllium-SR4.tar.gz

Then, configure the features you want to run with ODL (add the following features: odl-restconf-all, odl-dlux-core and odl-openflowplugin-flow-services-ui), and finally start the controller:

$ vi distribution-karaf-0.4.4-Beryllium-SR4/etc/org.apache.karaf.features.cfg
#
# Comma separated list of features to install at startup
#
featuresBoot=config,standard,region,package,kar,ssh,management,odl-restconf-all,odl-dlux-core,odl-openflowplugin-flow-services-ui

$ distribution-karaf-0.4.4-Beryllium-SR4/bin/start
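
Optionally, you can check that the controller is listening for OpenFlow connections before pointing openvim at it. This is just a sanity-check sketch, assuming the openflowplugin listens on the standard OpenFlow ports (6633/6653):

$ ss -tln | grep -E ':(6633|6653) '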

In the openvim config file (/etc/osm/openvimd.cfg) you need to configure the information about the SDN controller:

$ cat /etc/osm/openvimd.cfg
...
# Default openflow controller information
of_controller:      opendaylight           # Type of controller to be used.
                                                 # Valid controllers are 'opendaylight', 'floodlight' or <custom>
#of_controller_module:                           # Only needed for <custom>.  Python module that implement
                                                 # this controller. By default a file with the name  <custom>.py is used
# of_<other>:           value                    # Other parameters required by <custom> controller. Consumed by __init__
of_user:            admin                        # User credentials for the controller if needed
of_password:        admin                        # Password credentials for the controller if needed
of_controller_ip:   10.0.0.0                     # IP address where the Openflow controller is listening
of_controller_port: 8080                         # TCP port where the Openflow controller is listening (REST API server)
of_controller_dpid: 'XX:XX:XX:XX:XX:XX:XX:XX'    # Openflow Switch identifier (put here the right number)
# This option is used for those openflow switch that cannot deliver one packet to several output with different vlan tags
# When set to true, it fails when trying to attach different vlan tagged ports to the same net
of_controller_nets_with_same_vlan: false         # (by default, true)

Then, export the following variables:

export OF_CONTROLLER_TYPE=opendaylight
export OF_CONTROLLER_USER=admin
export OF_CONTROLLER_PASSWORD=admin
export OF_CONTROLLER_IP=10.0.0.0
export OF_CONTROLLER_PORT=8080
export OF_CONTROLLER_DPID=XX:XX:XX:XX:XX:XX:XX:XX

Finally, restart openvim:

service osm-openvim restart

DHCP server (Bridge)

Openvim has two options for overlay network management: 'bridge' and 'ovs' (network_type at openvimd.cfg). For the 'bridge' type, openvim relies on pre-created bridges at the compute nodes that have L2 connectivity, e.g. using a switch in trunk mode. In this mode you should provide an external DHCP server for the management network. This section describes how to install such a DHCP server based on the isc-dhcp-server package.
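
As an illustration only (the interface, VLAN and bridge names below are placeholders; virbrMan1 just matches the bridge name used in the dhcp_server example later in this section), a pre-created bridge on an Ubuntu 16.04 compute node could be declared in /etc/network/interfaces as follows:

# hypothetical /etc/network/interfaces fragment on a compute node
# (requires the bridge-utils and vlan packages); adapt names to your setup
auto virbrMan1
iface virbrMan1 inet manual
    bridge_ports eth1.2001    # VLAN 2001 on physical interface eth1, trunked on the switch
    bridge_stp off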

It can be installed on the same machine where openvim is running or on a different one, as long as it has L2 connectivity with the compute node bridges and ssh access from openvim (in case it is installed on a different machine).

Install the package:

Ubuntu 14.04:  sudo apt-get install dhcp3-server
Ubuntu 16.04:  sudo apt install isc-dhcp-server

Configure it by editing the file /etc/default/isc-dhcp-server to enable the DHCP server on the appropriate interface, the one with L2 connectivity (e.g. eth1).

$ sudo vi /etc/default/isc-dhcp-server
INTERFACES="eth1"

Edit file /etc/dhcp/dhcpd.conf to specify the subnet, netmask and range of IP addresses to be offered by the server.

$ sudo vi /etc/dhcp/dhcpd.conf
ddns-update-style none;
default-lease-time 86400;
max-lease-time 86400;
log-facility local7;
option subnet-mask 255.255.0.0;
option broadcast-address 10.210.255.255;
subnet 10.210.0.0 netmask 255.255.0.0 {
 range 10.210.1.2 10.210.1.254;
}

Restart the service:

sudo service isc-dhcp-server restart

Create a script called "get_dhcp_lease.sh" accessible from the PATH (e.g. at /usr/local/bin) with this content:

#!/bin/bash
awk '
 ($1=="lease" && $3=="{"){ lease=$2; active="no"; found="no" }
 ($1=="binding" && $2=="state" && $3=="active;"){ active="yes" }
 ($1=="hardware" && $2=="ethernet" && $3==tolower("'$1';")){ found="yes" }
 ($1=="client-hostname"){ name=$2 }
 ($1=="}"){ if (active=="yes" && found=="yes"){ target_lease=lease; target_name=name}}
 END{printf("%s", target_lease)} #print target_name
' /var/lib/dhcp/dhcpd.leases

Give execution rights to this file:

chmod +x /usr/local/bin/get_dhcp_lease.sh
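
For a quick manual check, you can run the script with a MAC address as its only argument (the value below is just an example); it prints the IP address of the active lease registered for that MAC in /var/lib/dhcp/dhcpd.leases:

get_dhcp_lease.sh 52:54:00:aa:bb:cc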

Finally configure openvimd.cfg with the location and credentials of the installed dhcp_server:

dhcp_server:
  host:     host-ip-or-name
  provider: isc-dhcp-server  #dhcp-server type
  user:     user
  #provide password, or key if needed
  password: passwd
  #keyfile:     ssh-access-key
  #list of the previous bridges interfaces attached to this dhcp server
  bridge_ifaces:   [ virbrMan1 ]

OVS controller

Openvim has two options for overlay network management: 'bridge' and 'ovs' (network_type at openvimd.cfg). For the 'ovs' type, openvim creates OVS VXLAN tunnels and launches a DHCP server on the ovs_controller. The ovs_controller can be on the same machine where openvim is running or on a different one.

Some preparation is needed to configure the ovs_controller:

Execute scripts/configure-dhcp-server-UBUNTU16.0.4.sh on the machine where the ovs_controller will run. It can be placed in the same openvim VM or in a new one.

$ sudo ./openvim/scripts/configure-dhcp-server-UBUNTU16.0.4.sh <user-name>

Modify openvimd.cfg and add net controller connection details:

network_type : ovs
ovs_controller_ip:        <net controller ip>    # dhcp controller IP address; must be changed
                                                 # in order to reach the compute nodes
ovs_controller_user:      <net controller user>  # User of the dhcp controller for OVS networks
ovs_controller_file_path: '/var/lib/openvim'     # Net controller path for dhcp daemon
                                                 # configuration, by default '/var/lib/openvim'

Ensure that automatic login from openvim to the ovs_controller works without any prompt, and that openvim can run commands there with root privileges. It is recommended to add the openvim public ssh key to the authorized_keys at the ovs_controller and to set the authentication key to use in openvimd.cfg:

ovs_controller_keyfile:   /path/to/ssh-key-file  # ssh-access-key file to connect host
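
A minimal sketch of that key setup, run from the machine where openvim is installed (user and address placeholders as in the configuration above):

ssh-keygen -t rsa                                          # only if the openvim user has no key yet
ssh-copy-id <net controller user>@<net controller ip>      # adds the key to authorized_keys
ssh <net controller user>@<net controller ip> sudo true    # should complete without prompting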

Configuration

- Edit the file /etc/osm/openvimd.cfg. Note: by default it runs in test mode, where neither real hosts nor an openflow controller are needed. You can use other modes:

mode        Compute hosts   Openflow controller   Observations
test        fake            X                     No real deployment. Just for API test
normal      needed          needed                Normal behavior
host only   needed          X                     No PT/SRIOV connections
develop     needed          X                     Forces cloud-type deployment without EPA
OF only     fake            needed                To test the openflow controller without the need of compute hosts
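
For example, to switch from the default test mode to normal mode, edit the mode field in the configuration file:

$ sudo vi /etc/osm/openvimd.cfg
mode: normal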

The service must then be restarted:

sudo service osm-openvim restart

NOTE: if openvim runs in test mode, the following steps are done automatically by this script:

/opt/openvim/scripts/initopenvim.sh --insert-bashrc --force

- Let's configure the openvim CLI client. This is needed if you have changed the /opt/openvim/openvimd.cfg file (WARNING: not the ./openvim/openvimd.cfg file).

openvim config                           # show openvim related variables
#To change variables run
export OPENVIM_HOST=<http_host of openvimd.cfg>
export OPENVIM_PORT=<http_port of openvimd.cfg>
export OPENVIM_ADMIN_PORT=<http_admin_port of openvimd.cfg>
#You can insert these in .bashrc for automatic loading at login:
echo "export OPENVIM_HOST=<...>" >> ${HOME}/.bashrc
...

Adding compute nodes

- Let's attach compute nodes

In test mode we need to provide fake compute nodes with all the necessary information:

openvim host-add /opt/openvim/test/hosts/host-example0.yaml 
openvim host-add /opt/openvim/test/hosts/host-example1.yaml 
openvim host-add /opt/openvim/test/hosts/host-example2.yaml 
openvim host-add /opt/openvim/test/hosts/host-example3.yaml 
openvim host-list                        #-v,-vv,-vvv for verbosity levels

In normal or host only mode, the process is a bit more complex. First, you need to configure the host appropriately following these guidelines. The current process is manual, although we are working on an automated process. For the moment, follow these instructions:

# copy /opt/openvim/scripts/host-add.sh and run it at the compute host to gather all the information
./host_add.sh <user> <ip_name> >> host.yaml
#NOTE: If the host contains interfaces connected to the openflow switch for dataplane,
# the switch port where the interfaces are connected must be provided manually, 
# otherwise these interfaces cannot be used. Follow one of two methods:
#   1) Fill openvim/database_utils/of_ports_pci_correspondence.sql ...
#   ... and load with mysql -uvim -p vim_db < openvim/database_utils/of_ports_pci_correspondence.sql
#   2) or manually add this information to the generated host.yaml with a 'switch_port: <whatever>'
#   ... entry at 'host-data':'numas': 'interfaces' (an illustrative fragment is shown after this block)
# copy this generated file host.yaml to the openvim server, and add the compute host with the command:
openvim host-add host.yaml
# copy the openvim ssh key to the compute node. If the openvim user doesn't have an ssh key, generate one using ssh-keygen
ssh-copy-id <compute node user>@<IP address of the compute node>
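
As an illustration of method 2 above, the switch_port entry sits in the generated host.yaml roughly as sketched below. The port name is a placeholder and the other fields produced by the script are omitted ('...'); check the generated file and the examples under /opt/openvim/test/hosts/ for the real layout.

host-data:
  ...
  numas:
    - ...
      interfaces:
        - ...
          switch_port: <switch-port-name>    # added manually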
        

Note: Openvim has been tested with servers based on Intel Xeon E5 processors with the Ivy Bridge architecture. No tests have been carried out with the Intel Core i3, i5 and i7 families, so there are no guarantees that the integration will be seamless.

Adding external networks

- Let's list the external networks:

openvim net-list

- Let's create some external networks in openvim. These networks are public and can be used by any VNF. In order to create external networks, use 'openvim net-create', specifying a file with the network information. To create a management network:

openvim net-create /opt/openvim/test/networks/net-example4.yaml

- Let's list the external networks:

openvim net-list
2c386a58-e2b5-11e4-a3c9-52540032c4fa   mgmt

You can build your own networks using the template 'templates/network.yaml'. Alternatively, you can use 'openvim net-create' without a file and answer the questions:

openvim net-create

You can delete a network, e.g. "mgmt", using the command:

openvim net-delete mgmt

Creating a new tenant

- Now let's create a new tenant "osm":

$ openvim tenant-create --name osm --description osm
<uuid>   osm Created

- Take the uuid of the tenant and update the environment variables used by openvim client:

export OPENVIM_TENANT=<obtained uuid>
#echo "export OPENVIM_TENANT=<obtained uuid>" >> /home/${USER}/.bashrc
openvim config                             #show openvim env variables

Additional information

Your feedback is most welcome!
You can send us your comments and questions to OSM_TECH@list.etsi.org
Or join the OpenSourceMANO Slack Workplace
See hereafter some best practices to report issues on OSM