OpenVIM installation to be used with OSM release 0



NOTE: This is obsolete documentation for OSM Release 0. The latest documentation, for OSM Release One, is at OpenVIM_installation_(Release_One)


Infrastructure

In order to run openvim and deploy dataplane VNFs, an appropriate infrastructure is required. Details on the required infrastructure can be found here. Below is a reference architecture for an openvim-based DC deployment.

Openvim Datacenter infrastructure

Openvim must be reachable from the Resource Orchestrator (openmano). In particular, openvim needs:

  • To make its API accessible from the Resource Orchestrator (openmano). That's the purpose of the VIM mgmt network in the figure.
  • To be connected to all compute servers through a network, the DC infrastructure network in the figure.
  • To offer management IP addresses to VNFs for VNF configuration from CM (Juju server). That's the purpose of the Telco/VNF management network.

Compute nodes, besides being connected to the DC infrastructure network, must also be connected to two additional networks:

  • Telco/VNF management network, used by Configuration Manager (Juju Server) to configure the VNFs
  • Inter-DC network, optionally required to interconnect this datacenter to other datacenters (e.g. in MWC'16 demo, to interconnect the two sites).

VMs will be connected to these two networks at deployment time if requested by openmano.
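
On each compute node, these networks typically appear as Linux bridges to which openvim attaches VM interfaces at deployment time. Below is a minimal sketch of such a bridge in /etc/network/interfaces; the physical interface name (eth1) is an assumption for illustration, and virbrMan1 is the bridge name used later in this guide:

#/etc/network/interfaces fragment on a compute node (illustrative)
auto virbrMan1
iface virbrMan1 inet manual
    bridge_ports eth1        # NIC attached to the Telco/VNF management network
    bridge_stp off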

VM creation (openvim server)

  • Requirements:
    • 1 vCPU (2 recommended)
    • 4 GB RAM (4 GB are required to run OpenDaylight controller; if the ODL controller runs outside the VM, 2 GB RAM are enough)
    • 40 GB disk
    • 3 network interfaces to:
      • OSM network (to interact with RO)
      • DC infrastructure network (to interact with the compute servers and switches)
      • Telco/VNF management network (to provide IP addresses via DHCP to the VNFs)
  • Base image: ubuntu-14.04.4-server-amd64
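
As an illustration, a VM meeting these requirements could be created on a KVM host with virt-install; the VM name, disk path, ISO location and bridge names below are assumptions for this sketch, not fixed by this guide:

sudo virt-install --name openvim --vcpus 2 --ram 4096 \
     --disk path=/var/lib/libvirt/images/openvim.qcow2,size=40 \
     --network bridge=virbrOSM \
     --network bridge=virbrInfra \
     --network bridge=virbrMan1 \
     --cdrom ubuntu-14.04.4-server-amd64.iso \
     --graphics vnc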

Installation

The installation process is detailed in the Github openvim repo. Follow it to install the required packages, the openvim software, an Openflow controller (either OpenDaylight or Floodlight) to control the underlay switch, and a DHCP server (isc-dhcp-server) to assign management IP addresses to VNFs.
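
As a rough sketch, the process amounts to cloning the repository and running its installer script. Both the repository URL and the script name below are assumptions; take the authoritative steps from the repo's README:

#illustrative only; check the openvim repo README for the exact URL and script
git clone https://github.com/nfvlabs/openvim.git
./openvim/scripts/install-openvim.sh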

Configuration

Configuration details can be found here. For readers' convenience, they are also shown below:

Configuring Openflow Controller

Depending on the chosen controller, use the appropriate mechanism to start it.

OpenDaylight configuration

  • Start OpenDaylight
service-opendaylight start
#it creates a screen named "flow" and starts the openflow controller in it
screen -x flow                             # goes into screen
[Ctrl+a , d]                               # goes out of the screen (detaches the screen)
less openvim/logs/openflow.log

Floodlight configuration

  • Go to the scripts folder and edit the file flow.properties, setting the appropriate port values
  • Start FloodLight
service-floodlight start
#it creates a screen named "flow" and starts the openflow controller in it
screen -x flow                             # goes into screen
[Ctrl+a , d]                               # goes out of the screen (detaches the screen)
less openvim/logs/openflow.log
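
Whichever controller you use, you can verify that it is up and accepting switch connections on the OpenFlow port (6633 is the usual default for both OpenDaylight and Floodlight; adjust if you configured a different port):

netstat -tlnp | grep 6633                  # the controller should be listening here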

Configuring DHCP server

  • Edit the file /etc/default/isc-dhcp-server to enable the DHCP server on the appropriate interface, the one attached to the Telco/VNF management network (e.g. eth1).
$ sudo vi /etc/default/isc-dhcp-server
INTERFACES="eth1"
  • Edit the file /etc/dhcp/dhcpd.conf to specify the subnet, netmask and range of IP addresses to be offered by the server.
$ sudo vi /etc/dhcp/dhcpd.conf
ddns-update-style none;

default-lease-time 86400;
max-lease-time 86400;
log-facility local7;
option subnet-mask 255.255.0.0;
option broadcast-address 10.210.255.255;
subnet 10.210.0.0 netmask 255.255.0.0 {
  range 10.210.1.2 10.210.1.254;
}
  • Restart the service:
sudo service isc-dhcp-server restart
  • In case of error messages (e.g. "Job failed to start"), double-check the configuration, since it is easy to forget ";" characters. Check the file /var/log/syslog for log entries labeled dhcpd.
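
To confirm that the server is running and handing out addresses, the standard isc-dhcp-server files can be inspected:

sudo service isc-dhcp-server status
grep dhcpd /var/log/syslog | tail          # recent DHCP log entries
cat /var/lib/dhcp/dhcpd.leases             # leases granted so far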

Openvim server configuration

  • Go to the openvim folder and edit openvimd.cfg. By default openvim runs in "test" mode, where neither real hosts nor an openflow controller are needed. Change it to "normal" mode to use real hosts and the openflow controller (an illustrative excerpt of the file is shown after the commands below).
  • Start openvim server
service-openvim start
#it creates a screen named "vim" and starts the "./openvim/openvimd.py" program inside it
screen -x vim                             # goes into openvim screen
[Ctrl+a , d]                              # goes out of the screen (detaches the screen)
less openvim/logs/openvim.log
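
For orientation, the openvimd.cfg parameters referenced in this guide look roughly as follows. The values are illustrative assumptions; check the comments in the file itself for the authoritative defaults:

#excerpt of openvimd.cfg (illustrative values)
mode: normal              #"test" needs no hosts/controller; "normal" uses both
http_host: 0.0.0.0        #IP address where the openvim API listens
http_port: 9080           #openvim API port
http_admin_port: 9085     #openvim administrative API port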

Openvim client configuration

  • Show openvim CLI client environment variables
openvim config                           # show openvim related variables
  • Set the environment variables appropriately
#To change variables run
export OPENVIM_HOST=<http_host of openvimd.cfg>
export OPENVIM_PORT=<http_port of openvimd.cfg>
export OPENVIM_ADMIN_PORT=<http_admin_port of openvimd.cfg>
       
#You can add them to .bashrc for automatic loading at login:
echo "export OPENVIM_HOST=<...>" >> /home/${USER}/.bashrc
...
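
For example, assuming openvimd.cfg keeps values like those in the illustrative excerpt above (listening on all interfaces, ports 9080/9085), the variables would be set as:

export OPENVIM_HOST=localhost
export OPENVIM_PORT=9080
export OPENVIM_ADMIN_PORT=9085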

Create a tenant in openvim to be used by OSM

  • Create a tenant that will be used by OSM:
openvim tenant-create --name osm --description "Tenant for OSM"
  • Take the uuid of the tenant and update the environment variables used by the openvim client:
export OPENVIM_TENANT=<obtained uuid>
#echo "export OPENVIM_TENANT=<obtained uuid>" >> /home/${USER}/.bashrc
openvim config                             #show openvim env variables
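
The tenant and its uuid can be checked at any time with the openvim client:

openvim tenant-list                        # should show the "osm" tenant and its uuid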

Additional configuration

Finally, compute nodes as well as external networks must be added to openvim. Details on how to do this can be found here.
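
As a sketch of the compute-node step, a host is registered by passing a descriptor file to the openvim client. The file name below is a placeholder, and the required contents of the descriptor are covered in the linked documentation:

openvim host-add compute-node-1.json       # descriptor file name is illustrative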

Configure the telco/VNF management network:

  • In the MWC'16 demo setup, this network uses the bridge virbrMan1, specified in openvimd.cfg
openvim net-create '{"network": {"name":"mgmt", "type":"bridge_man", "provider:physical":"bridge:virbrMan1", "provider:vlan:":2101, "shared":true}}'

Configure another network for inter-DC communication. This is required, for instance, to connect VNFs in the OpenVIM datacenter to VNFs in other datacenters. A similar configuration is required in all datacenters. The physical interconnection of these datacenters is out of the scope of this document.

  • In the MWC'16 demo setup, this network uses a specific port in the switching infrastructure ("eth4/20").
openvim net-create '{"network": {"name":"interDC", "type":"data", "provider:physical":"openflow:eth4/20", "shared":true}}'

As a side note, if you plan to test the network scenarios from MWC'16, the network names must be preserved (mgmt, interDC). If different names are used, the NS packages must be updated accordingly.

