OpenVIM installation to be used with OSM release 0

__TOC__


'''NOTE: This is obsolete documentation for OSM Release 0. The latest documentation, for OSM Release One, is at''' [[OpenVIM_installation_(Release_One)]]
 
 
=Infrastructure=
In order to run openvim and deploy dataplane VNFs, an appropriate infrastructure is required. Details on the required infrastructure can be found [https://github.com/nfvlabs/openvim/wiki/Getting-started#requirements here]. Below is a reference architecture for an openvim-based DC deployment.
 
[[File:OpenvimDC.png|500px|Openvim Datacenter infrastructure]]
 
Openvim must be accessible from the Resource Orchestrator (openmano). In particular, openvim needs:
* To make its API accessible to the Resource Orchestrator (openmano). That is the purpose of the VIM mgmt network in the figure.
* To be connected to all compute servers through a network, the DC infrastructure network in the figure.
* To offer management IP addresses to VNFs for VNF configuration from the CM (Juju server). That is the purpose of the Telco/VNF management network.
 
Compute nodes, besides being connected to the DC infrastructure network, must also be connected to two additional networks:
*Telco/VNF management network, used by the Configuration Manager (Juju server) to configure the VNFs
*Inter-DC network, optionally required to interconnect this datacenter to other datacenters (e.g. in MWC'16 demo, to interconnect the two sites).
 
VMs will be connected to these two networks at deployment time if requested by openmano.
 
=VM creation (openvim server)=
* Requirements:
** 1 vCPU (2 recommended)
** 4 GB RAM (4 GB are required to run the OpenDaylight controller; if the ODL controller runs outside the VM, 2 GB are enough)
** 40 GB disk
** 3 network interfaces to:
*** OSM network (to interact with RO)
*** DC infrastructure network (to interact with the compute servers and switches)
*** Telco/VNF management network (to provide IP addresses via DHCP to the VNFs)
* Base image: ubuntu-14.04.4-server-amd64
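A VM meeting these requirements can be created in many ways; the following '''virt-install''' sketch is only illustrative (the VM name, bridge names and ISO path are assumptions, not part of this guide):
 #create a VM with 2 vCPUs, 4 GB RAM, 40 GB disk and 3 NICs on the three networks above
 virt-install --name openvim --vcpus 2 --ram 4096 --disk size=40 \
   --network bridge=osm-br --network bridge=dc-br --network bridge=vnf-mgmt-br \
   --cdrom ubuntu-14.04.4-server-amd64.iso --os-variant ubuntu14.04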


=Installation=
The installation process is detailed in the [https://github.com/nfvlabs/openvim/wiki/Getting-started#installation-endusers GitHub openvim repo]. That process must be followed to install the required packages, the openvim SW, an Openflow controller (either OpenDaylight or Floodlight) to control the underlay switch, and a DHCP server (isc-dhcp-server) to assign mgmt IP addresses to VNFs.
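For reference, a condensed sketch of the manual steps, taken from an earlier revision of this page (the linked guide is authoritative, and the package list may vary with the openvim version):
 #install the required packages
 sudo apt-get install mysql-server git screen wget python-yaml python-libvirt python-bottle \
   python-mysqldb python-jsonschema python-paramiko python-argcomplete python-requests
 #clone the repository
 git clone https://github.com/nfvlabs/openmano.git openmano
 #create the database, grant access to a 'vim' user and initialize the schema
 mysqladmin -u root -p create vim_db
 echo "CREATE USER 'vim'@'localhost' IDENTIFIED BY 'vimpw'; GRANT ALL PRIVILEGES ON vim_db.* TO 'vim'@'localhost';" | mysql -u root -p
 openmano/openvim/database_utils/init_vim_db.sh -u vim -p vimpw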
=Configuration=
Configuration details can be found [https://github.com/nfvlabs/openvim/wiki/Getting-started#configuration here]. For the reader's convenience, the steps are also shown below.
==Configuring Openflow Controller==
Depending on the chosen controller, use the appropriate mechanism to start it.
===OpenDaylight configuration===
*Start OpenDaylight
 service-opendaylight start
 #it creates a screen named "flow" and starts the openflow controller in it
 screen -x flow                            # goes into the screen
 [Ctrl+a , d]                              # goes out of the screen (detaches it)
 less openvim/logs/openflow.log
===Floodlight configuration===
*Go to the scripts folder and edit the file '''flow.properties''', setting the appropriate port values
*Start Floodlight
 service-floodlight start
 #it creates a screen named "flow" and starts the openflow controller in it
 screen -x flow                            # goes into the screen
 [Ctrl+a , d]                              # goes out of the screen (detaches it)
 less openvim/logs/openflow.log
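Either way, a quick sanity check is to verify that the controller is listening on its OpenFlow port (6633 is the usual default; adapt if your controller is configured differently):
 netstat -ltn | grep 6633                  #an open listening socket means the controller is up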
 
==Configuring DHCP server==
*Edit the file '''/etc/default/isc-dhcp-server''' to enable the DHCP server on the appropriate interface, the one attached to the Telco/VNF management network (e.g. eth1).
 $ sudo vi /etc/default/isc-dhcp-server
 INTERFACES="eth1"
*Edit the file '''/etc/dhcp/dhcpd.conf''' to specify the subnet, netmask and range of IP addresses to be offered by the server.
 $ sudo vi /etc/dhcp/dhcpd.conf
 
 ddns-update-style none;
 default-lease-time 86400;
 max-lease-time 86400;
 
 log-facility local7;
 
 option subnet-mask 255.255.0.0;
 option broadcast-address 10.210.255.255;
 
 subnet 10.210.0.0 netmask 255.255.0.0 {
   range 10.210.1.2 10.210.1.254;
 }
*Restart the service:
 sudo service isc-dhcp-server restart
*In case of error messages (e.g. "Job failed to start"), review the configuration: it is easy to forget a ";" character. Check the file '''/var/log/syslog''' for logs with the label '''dhcpd'''.
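A quick way to confirm that the server came up and is handing out leases (standard Ubuntu 14.04 commands; the exact log lines shown by grep will vary with your setup):
 sudo service isc-dhcp-server status       #should report the process as running
 grep dhcpd /var/log/syslog | tail         #shows recent DHCPDISCOVER/DHCPOFFER activity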


==Openvim server configuration==
*Go to the openvim folder and edit '''openvimd.cfg'''. By default openvim runs in '''mode: test''', where neither real hosts nor an openflow controller are needed. Change it to '''mode: normal''' to use both hosts and an openflow controller.
*Start the openvim server
 service-openvim start
 #it creates a screen named "vim" and starts the "./openvim/openvimd.py" program in it
 screen -x vim                             # goes into the openvim screen
 [Ctrl+a , d]                              # goes out of the screen (detaches it)
 less openvim/logs/openvim.log
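The relevant fragment of '''openvimd.cfg''' looks as follows (a minimal sketch; the file contains many more options, only the mode switch is discussed here):
 #openvimd.cfg (fragment)
 mode: normal                              #'test' by default; 'normal' uses real hosts and an openflow controller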
 
==Openvim client configuration==
*Show the openvim CLI client environment variables
 openvim config                            # show openvim related variables
*Set the environment variables appropriately
 #To change the variables, run:
 export OPENVIM_HOST=<http_host of openvimd.cfg>
 export OPENVIM_PORT=<http_port of openvimd.cfg>
 export OPENVIM_ADMIN_PORT=<http_admin_port of openvimd.cfg>
 
 #You can add them to .bashrc for automatic loading at login:
 echo "export OPENVIM_HOST=<...>" >> /home/${USER}/.bashrc
 ...
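A worked example, assuming illustrative values for '''openvimd.cfg''' (the host and ports below are assumptions; always check your own file):
 export OPENVIM_HOST=localhost
 export OPENVIM_PORT=9080
 export OPENVIM_ADMIN_PORT=9085
 openvim config                            #verify that the client picked up the new values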
 
==Create a tenant in openvim to be used by OSM==
*Create a tenant that will be used by OSM:
 openvim tenant-create --name osm --description "Tenant for OSM"
*Take the uuid of the tenant and update the environment variables used by the openvim client:
 export OPENVIM_TENANT=<obtained uuid>
 #echo "export OPENVIM_TENANT=<obtained uuid>" >> /home/${USER}/.bashrc
 openvim config                            #show openvim env variables
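For convenience, the uuid can be captured in one step. A hedged sketch (it assumes the uuid is printed as the first field of the command output; adjust the awk expression to your openvim version):
 export OPENVIM_TENANT=$(openvim tenant-create --name osm --description "Tenant for OSM" | awk '{print $1}')
 echo "export OPENVIM_TENANT=${OPENVIM_TENANT}" >> /home/${USER}/.bashrc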
 
==Additional configuration==
Finally, compute nodes and external networks must be added to openvim. Details on how to do this can be found [https://github.com/nfvlabs/openvim/wiki/Getting-started#configuration here]; a sketch of attaching a compute node is shown below.
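Compute nodes are described to openvim in a descriptor file. A minimal sketch, assuming the '''openvim host-add''' subcommand and a hypothetical descriptor '''host-example.yaml''' (the file name and its fields are illustrative; see the linked guide for the exact format expected by your openvim version):
 #host-example.yaml describes the compute node: name, management IP, user and resources
 openvim host-add host-example.yaml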
 
Configure a network for the telco/VNF management network:
*In the MWC'16 demo setup, this network uses the bridge virbrMan1, specified in '''openvimd.cfg'''
 openvim net-create '{"network": {"name":"mgmt", "type":"bridge_man", "provider:physical":"bridge:virbrMan1", "provider:vlan":2101, "shared":true}}'
 
Configure another network for inter-DC communication. This is required, for instance, to connect VNFs in the OpenVIM datacenter to VNFs in other datacenters. A similar configuration is required in all datacenters. The physical interconnection of these datacenters is outside the scope of this document.
*In the MWC'16 demo setup, this network uses a specific port in the switching infrastructure ("eth4/20").
 openvim net-create '{"network": {"name":"interDC", "type":"data", "provider:physical":"openflow:eth4/20", "shared":true}}'
 
As a side note, if you plan to test the network scenarios from MWC'16, the names of the networks must be preserved (mgmt, interDC). If different network names are used, NS packages must be updated accordingly.
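To verify that both networks were created:
 openvim net-list                          #should list 'mgmt' and 'interDC'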
 
 
{{Feedback}}
