13. ANNEX 5: OpenVIM installation
13.1. Required infrastructure
In order to run OpenVIM in normal mode (see the available modes below) and deploy dataplane VNFs, an appropriate infrastructure is required. Below is a reference architecture for an OpenVIM-based DC deployment.
OpenVIM needs to be accessible from OSM, requiring:
Its API to be accessible from OSM (more precisely, from the RO module). That is the purpose of the VIM mgmt network in the figure.
To be connected to all compute servers through a network, the DC infrastructure network in the figure.
To offer management IP addresses to VNFs for VNF configuration from OSM (more precisely, for VCA module). That is the purpose of the VNF management network.
Besides being connected to the DC infrastructure network, Compute nodes must also be connected to two additional networks:
VNF management network, used by OSM to configure the VNFs
Inter-DC network, optionally required to interconnect this VIM to other VIMs/datacenters.
VMs will be connected to these two networks at deployment time if requested by OSM.
13.2. Requirements for OpenVIM controller
Minimal requirements:
1 vCPU (2 recommended)
4 GB RAM (4 GB are required to run OpenDaylight controller; if the ODL controller runs outside the VM, 2 GB RAM are enough)
40 GB disk
3 network interfaces, connected to:
OSM network (to interact with RO)
DC infrastructure network (to interact with the compute servers and switches)
Telco/VNF management network (to provide IP addresses via DHCP to the VNFs)
Base image:
ubuntu-16.04-server-amd64
13.3. Installation of the OpenVIM controller
OpenVIM controller is installed using a script:
wget -O install-openvim.sh "https://osm.etsi.org/gitweb/?p=osm/openvim.git;a=blob_plain;f=scripts/install-openvim.sh;hb=1ff6c02ecff38378a4d7366e223cefd30670602e"
chmod +x install-openvim.sh
sudo ./install-openvim.sh -q   # use --help for help on options
# NOTE: you can optionally provide the admin user (normally 'root') and password of the database.
Once installed, you can manage the service with the regular procedures for services, e.g. sudo service osm-openvim start|stop|restart.
Logs are at /var/log/osm/openvim.log
Configuration file is at /etc/osm/openvimd.cfg
There is a CLI client called openvim. Type openvim config to see the configuration as bash variables.
13.3.1. Openflow controller
For normal or OF only openvim modes you will need an OpenFlow controller as well. The following OpenFlow controllers are supported.
13.3.1.1. Floodlight v0.90
You can install e.g. floodlight-0.90. The script openvim/scripts/install-floodlight.sh performs this installation for you, and the script service-floodlight can be used to start/stop it in a screen with logs.
$ sudo openvim/scripts/install-floodlight.sh
$ service-floodlight start
13.3.1.2. ONOS
NOTE: This tutorial assumes you are developing ONOS in DevelVM and deploying it on DeployVM (which is the one in which OpenVIM runs)
13.3.1.2.1. System requirements
2GB or more RAM (I personally recommend at least 4GB)
2 or more processors
Ubuntu 14.04 LTS or 16.04 LTS (Checked with both distros)
13.3.1.2.2. Software requirements
13.3.1.2.2.1. Maven
Install Maven 3.3.9 in your Apps directory
$ cd ~
$ mkdir Apps
$ wget http://archive.apache.org/dist/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.tar.gz
$ tar -zxvf apache-maven-3.3.9-bin.tar.gz -C ./Apps/
NOTE: Although ONOS has been migrated to Buck, maven was used in earlier releases.
13.3.1.2.2.2. Karaf
Install Karaf 3.0.5 in your Apps directory
$ cd ~
$ wget http://archive.apache.org/dist/karaf/3.0.5/apache-karaf-3.0.5.tar.gz
$ tar -zxvf apache-karaf-3.0.5.tar.gz -C ./Apps/
13.3.1.2.2.3. Java 8
Install Java 8
$ sudo apt-get install software-properties-common -y
$ sudo add-apt-repository ppa:webupd8team/java -y
$ sudo apt-get update
$ sudo apt-get install oracle-java8-installer oracle-java8-set-default -y
Set your JAVA_HOME
export JAVA_HOME=/usr/lib/jvm/java-8-oracle
Verify it with the following command
$ env | grep JAVA_HOME
JAVA_HOME=/usr/lib/jvm/java-8-oracle
13.3.1.2.3. Download latest ONOS
$ git clone https://gerrit.onosproject.org/onos
$ cd onos
$ git checkout master
Edit onos/tools/dev/bash_profile and set the correct paths for ONOS_ROOT, MAVEN and KARAF_ROOT:
# Please note that I am using my absolute paths here, yours may be different
export ONOS_ROOT=${ONOS_ROOT:-~/onos}
export MAVEN=${MAVEN:-~/Apps/apache-maven-3.3.9}
export KARAF_ROOT=${KARAF_ROOT:-~/Apps/apache-karaf-$KARAF_VERSION}
Edit ~/.bashrc and add the following line at the end:
# Please note that I am specifying here the absolute path of the bash_profile file in my machine, it may be different in yours
. ~/onos/tools/dev/bash_profile
Reload .bashrc (or log out and log in again) to apply the changes:
. ~/.bashrc
13.3.1.2.4. Build and deploy ONOS
If you are using a stable release below 1.7, please use Maven; otherwise, use Buck. The deployment procedure also differs depending on which tool you use to build ONOS.
13.3.1.2.4.1. Build with maven
$ mci   # alias for 'mvn clean install'
$ op    # package ONOS for deployment
13.3.1.2.4.2. Build with Buck
NOTE: ONOS currently uses a modified version of Buck, which has been packaged with ONOS. Please use this version until our changes have been upstreamed and released as part of an official Buck release.
$ sudo apt-get install zip unzip
$ cd $ONOS_ROOT
$ tools/build/onos-buck build onos --show-output
Updating Buck...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 43.8M 100 43.8M 0 0 172k 0 0:04:20 0:04:20 --:--:-- 230k
Archive: cache/buck-v2016.11.12.01.zip
inflating: buck
extracting: .buck_version
creating: plugins/
inflating: plugins/onos.jar
inflating: plugins/yang.jar
Successfully updated Buck in /home/alaitz/Code/onos/bin/buck to buck-v2016.11.12.01.zip
Not using buckd because watchman isn't installed.
[-] PROCESSING BUCK FILES...FINISHED 3.1s [100%] 🐳 New buck daemon
[+] DOWNLOADING... (0.00 B/S, TOTAL: 0.00 B, 0 Artifacts)
[+] BUILDING...1m47.9s [99%] (720/721 JOBS, 720 UPDATED, 720 [99.9%] CACHE MISS)
|=> IDLE
|=> IDLE
|=> IDLE
|=> //tools/package:onos-package... 9.9s (checking local cache)
|=> IDLE
|=> IDLE
|=> IDLE
|=> IDLE
The outputs are:
//tools/package:onos-package buck-out/gen/tools/package/onos-package/onos.tar.gz
13.3.1.2.5. Run ONOS
$ cd $ONOS_ROOT
$ tools/build/onos-buck run onos-local -- clean debug
13.3.1.3. OpenDayLight
OpenDayLight integration has been tested with the Beryllium-SR4 release. The steps to integrate this version are the following:
Download the Beryllium release and extract it:
$ wget https://nexus.opendaylight.org/content/repositories/opendaylight.release/org/opendaylight/integration/distribution-karaf/0.4.4-Beryllium-SR4/distribution-karaf-0.4.4-Beryllium-SR4.tar.gz
$ tar xvf distribution-karaf-0.4.4-Beryllium-SR4.tar.gz
Then, configure the features you want to run with ODL (add the following features: odl-restconf-all, odl-dlux-core and odl-openflowplugin-flow-services-ui), and finally start the controller:
$ vi distribution-karaf-0.4.4-Beryllium-SR4/etc/org.apache.karaf.features.cfg
#
# Comma separated list of features to install at startup
#
featuresBoot=config,standard,region,package,kar,ssh,management,odl-restconf-all,odl-dlux-core,odl-openflowplugin-flow-services-ui
$ distribution-karaf-0.4.4-Beryllium-SR4/bin/start
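Once the controller has started, you can verify that it is listening on the OpenFlow port before pointing OpenVIM at it (a quick check; 6633 is the classic OpenFlow port and 6653 the IANA-assigned one):
$ ss -ltn | grep -E ':(6633|6653)'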
In the OpenVIM config file (/etc/osm/openvimd.cfg) you need to configure the information about the SDN controller:
$ cat /etc/osm/openvimd.cfg
...
# Default openflow controller information
#of_controller: opendaylight # Type of controller to be used.
# Valid controllers are 'opendaylight', 'floodlight' or <custom>
#of_controller_module: # Only needed for <custom>. Python module that implement
# this controller. By default a file with the name <custom>.py is used
# of_<other>: value # Other parameters required by <custom> controller. Consumed by __init__
#of_user: admin # User credentials for the controller if needed
#of_password: admin # Password credentials for the controller if needed
#of_controller_ip: 10.0.0.0 # IP address where the Openflow controller is listening
#of_controller_port: 8080 # TCP port where the Openflow controller is listening (REST API server)
#of_controller_dpid: 'XX:XX:XX:XX:XX:XX:XX:XX' # Openflow Switch identifier (put here the right number)
# This option is used for those openflow switch that cannot deliver one packet to several output with different vlan tags
# When set to true, it fails when trying to attach different vlan tagged ports to the same net
#of_controller_nets_with_same_vlan: false # (by default, true)
Then, export the following variables:
export OF_CONTROLLER_TYPE=opendaylight
export OF_CONTROLLER_USER=admin
export OF_CONTROLLER_PASSWORD=admin
export OF_CONTROLLER_IP=10.0.0.0
export OF_CONTROLLER_PORT=8080
export OF_CONTROLLER_DPID=XX:XX:XX:XX:XX:XX:XX:XX
Finally, restart openvim:
service osm-openvim restart
13.3.2. DHCP server (Bridge)
OpenVIM offers two options for overlay network management: bridge and ovs (network_type in openvimd.cfg). For the bridge type, openvim relies on pre-created bridges at the compute nodes that have L2 connectivity, e.g. via a switch in trunk mode. In this mode you should provide an external DHCP server for the management network. This section describes how to install such a DHCP server based on the isc-dhcp-server package.
It can be installed on the same machine where openvim is running or on a different one, as long as it has L2 connectivity with the compute node bridges and (if installed on a different machine) ssh access from OpenVIM.
Install the package:
Ubuntu 14.04:
sudo apt-get install dhcp3-server
Ubuntu 16.04:
sudo apt install isc-dhcp-server
Configure it by editing the file /etc/default/isc-dhcp-server to enable the DHCP server on the appropriate interface, the one with L2 connectivity (e.g. eth1).
$ sudo vi /etc/default/isc-dhcp-server
INTERFACES="eth1"
Edit the file /etc/dhcp/dhcpd.conf to specify the subnet, netmask and range of IP addresses to be offered by the server.
$ sudo vi /etc/dhcp/dhcpd.conf
ddns-update-style none;
default-lease-time 86400;
max-lease-time 86400;
log-facility local7;
option subnet-mask 255.255.0.0;
option broadcast-address 10.210.255.255;
subnet 10.210.0.0 netmask 255.255.0.0 {
range 10.210.1.2 10.210.1.254;
}
Restart the service:
sudo service isc-dhcp-server restart
Create a script called get_dhcp_lease.sh, accessible from the PATH (e.g. at /usr/local/bin), with this content:
#!/bin/bash
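# Usage: get_dhcp_lease.sh <mac-address>
# Scans /var/lib/dhcp/dhcpd.leases and prints the IP address of the active
# lease whose hardware address matches the given MAC (lowercased).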
awk '
($1=="lease" && $3=="{"){ lease=$2; active="no"; found="no" }
($1=="binding" && $2=="state" && $3=="active;"){ active="yes" }
($1=="hardware" && $2=="ethernet" && $3==tolower("'$1';")){ found="yes" }
($1=="client-hostname"){ name=$2 }
($1=="}"){ if (active=="yes" && found=="yes"){ target_lease=lease; target_name=name}}
END{printf("%s", target_lease)} #print target_name
' /var/lib/dhcp/dhcpd.leases
Give execution rights to this file:
chmod +x /usr/local/bin/get_dhcp_lease.sh
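As a quick sanity check, you can call the script with the MAC address of a VM known to the DHCP server (the MAC below is a hypothetical example); it prints the IP address of the active lease, or nothing if no active lease matches:
$ get_dhcp_lease.sh 52:54:00:12:34:56
10.210.1.5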
Finally, configure openvimd.cfg with the location and credentials of the installed DHCP server:
dhcp_server:
host: host-ip-or-name
provider: isc-dhcp-server #dhcp-server type
user: user
#provide password, or key if needed
password: passwd
#keyfile: ssh-access-key
#list of the previous bridges interfaces attached to this dhcp server
bridge_ifaces: [ virbrMan1 ]
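Before restarting openvim, it is worth checking that the machine where openvim runs can reach the DHCP server host over ssh without any prompt, using the credentials configured above (a sketch using the placeholder values of the example):
# run as the user that executes openvim; it should print the status without prompting
ssh user@host-ip-or-name "service isc-dhcp-server status"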
13.3.3. OVS controller
OpenVIM offers two options for overlay network management: bridge and ovs (network_type in openvimd.cfg). For the ovs type, OpenVIM creates an OVS VXLAN tunnel and launches a DHCP server on the ovs_controller. The ovs_controller can be on the same machine where openvim is running or on a different one.
Some preparation is needed to configure the ovs_controller:
Execute scripts/configure-dhcp-server-UBUNTU16.0.4.sh on the machine where the ovs_controller will run (it can be the same VM where OpenVIM runs or a new one):
$ sudo ./openvim/scripts/configure-dhcp-server-UBUNTU16.0.4.sh <user-name>
Modify openvimd.cfg and add the net controller connection details:
network_type : ovs
#ovs_controller_ip: <net controller ip> # dhcp controller IP address, must be change in
# order to reach computes.
#ovs_controller_user: <net controller user> # User of the dhcp controller for OVS networks
#ovs_controller_file_path: '/var/lib/openvim' # Net controller Path for dhcp daemon
# configuration, by default '/var/lib/openvim
Ensure that automatic login from openvim to the ovs_controller works without any prompt, and that openvim can run commands there with root privileges. It is recommended to add the openvim public ssh key to the authorized_keys at the ovs_controller and to set the authentication key to use in openvimd.cfg:
#ovs_controller_keyfile: /path/to/ssh-key-file # ssh-access-key file to connect host
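A minimal check for both conditions, assuming the user, IP address and key file configured in openvimd.cfg:
# must succeed without any password or host-key prompt
ssh -o BatchMode=yes -i /path/to/ssh-key-file <user>@<ovs-controller-ip> "sudo -n true" && echo "ovs_controller access OK"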
13.4. Setting up compute nodes for OpenVIM
13.4.1. Introduction
This article contains the general guidelines to configure a compute node for NFV, based on a 64-bit Linux OS with KVM, qemu and libvirt (e.g. RHEL7.1, RHEL7.0, CentOS 7.1, Ubuntu Server 16.04).
This article is general for all Linux systems and tries to gather all the configuration steps. These steps have not been thoroughly tested in all Linux distros, so there are no guarantees that they will be 100% accurate if your distro is not included in the list above.
For additional details of the installation procedure for a specific distro, you might also want to check the distro-specific sections below (13.4.13–13.4.15).
Note: OpenVIM Controller has been tested with servers based on Xeon E5-based Intel processors with Ivy Bridge architecture, and with Intel X520 NICs based on Intel 82599 controller. No tests have been carried out with Intel Core i3, i5 and i7 families, so there are no guarantees that the integration will be seamless.
The configuration that must be applied to the compute node is the following:
BIOS setup
Install virtualization packages (kvm, qemu, libvirt, etc.)
Use a kernel with support of huge page TLB cache in IOMMU
Enable IOMMU
Enable 1G hugepages, and reserve enough hugepages for running the VNFs
Isolate CPUs so that the host OS is restricted to run on the first core of each NUMA node.
Enable SR-IOV
Enable all processor virtualization features in the BIOS
Enable hyperthreading in the BIOS (optional)
Deactivate KSM
Pre-provision Linux bridges
Additional configuration to allow access from Openvim Controller, including the configuration to access the image repository and the creation of appropriate folders for image on-boarding
A full description of this configuration is detailed below.
13.4.2. BIOS setup
Ensure that virtualization options are active. If they are active, the following command should give a non empty output:
egrep "(vmx|svm)" /proc/cpuinfo
It is also recommended to activate hyper-threading. If it is active, the following command should give a non empty output:
egrep ht /proc/cpuinfo
Ensure no power saving option is enabled.
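The checks of this section can be combined into a short script (a sketch; the cpufreq path may not exist on every system):
#!/bin/bash
# Verify hardware virtualization support (VT-x/AMD-V)
egrep -q "(vmx|svm)" /proc/cpuinfo && echo "virtualization: OK" || echo "virtualization: MISSING"
# Verify hyper-threading
egrep -qw "ht" /proc/cpuinfo && echo "hyper-threading: OK" || echo "hyper-threading: not detected"
# Show the CPU frequency governor ('performance' avoids power-saving throttling)
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor 2>/dev/null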
13.4.3. Installation of virtualization packages
Install the following packages in your host OS:
qemu-kvm libvirt-bin bridge-utils virt-viewer virt-manager
13.4.4. IOMMU TLB cache support
Use a kernel with support for huge page TLB cache in IOMMU, e.g. RHEL7.1, Ubuntu 14.04, or a vanilla kernel 3.14 or higher. In case you are using a kernel without this support, you should update your kernel. For instance, you can use the following kernel for RHEL7.0 (not needed for RHEL7.1):
wget http://people.redhat.com/~mtosatti/qemu-kvm-take5/kernel-3.10.0-123.el7gig2.x86_64.rpm
rpm -Uvh kernel-3.10.0-123.el7gig2.x86_64.rpm --oldpackage
13.4.5. Enabling IOMMU
Enable IOMMU by adding the following to the grub command line:
intel_iommu=on
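On most distros, "adding to the grub command line" means editing GRUB_CMDLINE_LINUX in /etc/default/grub, regenerating the grub configuration and rebooting; the same procedure applies to the kernel options of the following sections (a sketch; the regeneration command depends on the distro):
$ sudo vi /etc/default/grub
GRUB_CMDLINE_LINUX="... intel_iommu=on"
$ sudo update-grub                              # Ubuntu/Debian
$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg   # RHEL/CentOS
$ sudo reboot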
13.4.6. Enabling 1G hugepages
Enable 1G hugepages by adding the following to the grub command line:
default_hugepagesz=1G hugepagesz=1G
There are several options to indicate the memory to reserve:
At boot, adding hugepages=24 to the grub command line (this reserves 24 GB).
With a hugetlb-gigantic-pages.service, for modern kernels. For a RHEL-based Linux system you need to create the configuration file /usr/lib/systemd/system/hugetlb-gigantic-pages.service with this content:
[Unit]
Description=HugeTLB Gigantic Pages Reservation
DefaultDependencies=no
Before=dev-hugepages.mount
ConditionPathExists=/sys/devices/system/node
ConditionKernelCommandLine=hugepagesz=1G
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/lib/systemd/hugetlb-reserve-pages
[Install]
WantedBy=sysinit.target
Then set the huge pages at each NUMA node. For instance, in a system with 2 NUMA nodes, in case we want to reserve 4GB for the host OS (2GB on each NUMA node), and all remaining memory for hugepages:
totalmem=`dmidecode --type 17|grep Size |grep MB |gawk '{suma+=$2} END {print suma/1024}'`
hugepages=$(($totalmem-4))
echo $((hugepages/2)) > /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages
echo $((hugepages/2)) > /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages
Copy the last two lines into the /usr/lib/systemd/hugetlb-reserve-pages file for automatic execution after boot.
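Remember that the file called by ExecStart must be executable (and should start with a shebang line such as #!/bin/sh); then enable the service so the reservation runs at boot:
sudo chmod +x /usr/lib/systemd/hugetlb-reserve-pages
sudo systemctl enable hugetlb-gigantic-pages.service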
13.4.7. CPU isolation
Isolate CPUs so that the host OS is restricted to run on the first core of each NUMA node, by adding the isolcpus field to the grub command line. For instance:
isolcpus=1-9,11-19,21-29,31-39
The exact CPU numbers might differ depending on the CPU numbers presented by the host OS. In the previous example, CPUs 0, 10, 20 and 30 are excluded because CPU 0 and its sibling 20 correspond to the first core of NUMA node 0, and CPU 10 and its sibling 30 correspond to the first core of NUMA node 1. Running this awk script will suggest the value to use in your compute node:
gawk 'BEGIN{pre=-2;} ($1=="processor"){pro=$3;} ($1=="core" && $4!=0){ if (pre+1==pro){endrange="-" pro} else{cpus=cpus endrange sep pro; sep=","; endrange="";}; pre=pro;} END{printf("isolcpus=%s\n",cpus endrange);}' /proc/cpuinfo
13.4.8. Deactivating KSM
KSM enables the kernel to examine two or more already running programs and compare their memory. If any memory regions or pages are identical, KSM reduces multiple identical memory pages to a single page. This page is then marked copy-on-write. If the contents of the page are modified by a guest virtual machine, a new page is created for that guest virtual machine.
KSM has a performance overhead which may be too large for certain environments or host physical machine systems.
KSM can be deactivated by stopping the ksmtuned and ksm services. Stopping the services deactivates KSM, but the change does not persist across restarts.
# service ksmtuned stop
Stopping ksmtuned: [ OK ]
# service ksm stop
Stopping ksm: [ OK ]
Persistently deactivate KSM with the chkconfig command. To turn off the services, run the following commands:
# chkconfig ksm off
# chkconfig ksmtuned off
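On systemd-based distros (e.g. RHEL7/CentOS7 or Ubuntu 16.04), where the ksm/ksmtuned units exist, the equivalent commands are:
sudo systemctl stop ksm ksmtuned
sudo systemctl disable ksm ksmtuned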
Check the RHEL 7 documentation on the KSM tuning service for more information.
13.4.9. Enabling SR-IOV
We assume that you are using Intel X520 NICs (based on Intel 82599 controller) or Intel Fortville NICs. In case you are using other NICs, the configuration might be different.
Configure several virtual functions (e.g. 8 is an appropriate value) on each 10G network interface. A larger number can be configured if desired. (Note: this procedure is provisional, since it does not work for all NIC cards.)
for iface in `ifconfig -a | grep ": " | cut -f 1 -d":" | grep -v -e "_" -e "\." -e "lo" -e "virbr" -e "tap"`
do
driver=`ethtool -i $iface| awk '($0~"driver"){print $2}'`
if [ "$driver" == "i40e" -o "$driver" == "ixgbe" ]
#Create 8 SR-IOV per PF
echo 0 > /sys/bus/pci/devices/`ethtool -i $iface | awk '($0~"bus-info"){print $2}'`/sriov_numvfs
echo 8 > /sys/bus/pci/devices/`ethtool -i $iface | awk '($0~"bus-info"){print $2}'`/sriov_numvfs
fi
done
For Niantic X520 NICs the parameter max_vfs must be set to work around a bug in the ixgbe driver when managing VFs through the sysfs interface:
echo "options ixgbe max_vfs=8" >> /etc/modprobe.d/ixgbe.conf
Blacklist the ixgbevf module by adding the following to the grub command line. This driver is blacklisted because, with it loaded, the VLAN tag of broadcast packets is not properly removed when they are received on an SR-IOV port.
modprobe.blacklist=ixgbevf
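After rebooting, you can check that the virtual functions were created and that ixgbevf is indeed not loaded:
# list the SR-IOV virtual functions exposed on the PCI bus
lspci | grep -i "Virtual Function"
# confirm the blacklisted module is not loaded
lsmod | grep ixgbevf || echo "ixgbevf not loaded: OK"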
13.4.10. Pre-provision of Linux bridges
Openvim relies on Linux bridges to interconnect VMs when there are no high performance requirements for I/O. This is the case of control plane VNF interfaces that are expected to carry a small amount of traffic.
A set of Linux bridges must be pre-provisioned on every host. Every Linux bridge must be attached to a physical host interface with a specific VLAN. In addition, an external switch must be used to interconnect those physical host interfaces. Bear in mind that the host interfaces used for data plane VM interfaces will be different from the host interfaces used for control plane VM interfaces.
For example, in RHEL7.0, to create a bridge associated to the physical “em1” interface, two files per bridge must be added in the /etc/sysconfig/network-scripts folder:
A file named ifcfg-virbrManX with this content:
DEVICE=virbrManX
TYPE=Bridge
ONBOOT=yes
DELAY=0
NM_CONTROLLED=no
USERCTL=no
A file named em1.200X (using VLAN tag 200X) with this content:
DEVICE=em1.200X
ONBOOT=yes
NM_CONTROLLED=no
USERCTL=no
VLAN=yes
BOOTPROTO=none
BRIDGE=virbrManX
The name of the bridge and the VLAN tag can be different. In case you use a different name for the bridge, you should take it into account in openvimd.cfg.
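After creating both files, bring the interfaces up and verify that the VLAN interface is attached to the bridge (a sketch for RHEL7, assuming bridge-utils is installed):
sudo ifup virbrManX
sudo ifup em1.200X
brctl show virbrManX   # should list em1.200X as an attached interface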
13.4.11. Additional configuration to allow access from OpenVIM
Uncomment the following lines of /etc/libvirt/libvirtd.conf to allow external connections to libvirtd:
unix_sock_group = "libvirt"
unix_sock_rw_perms = "0770"
unix_sock_dir = "/var/run/libvirt"
auth_unix_rw = "none"
Create and configure a user to access the compute node from openvim. The user must belong to group libvirt.
#create a new user
useradd -m -G libvirt <user>
#or modify an existing user
usermod -a -G libvirt <user>
Allow <user> to get root privileges without password; for example, grant it to all members of the libvirt group:
sudo visudo # add the line: %libvirt ALL=(ALL) NOPASSWD: ALL
Copy the ssh key of openvim into the compute node. From the machine where OpenVIM is running (not from the compute node), run:
ssh-keygen #needed to generate ssh keys if not done before
ssh-copy-id <user>@<compute host>
After that, ensure that you can access directly without password prompt from openvim to compute host:
ssh <user>@<compute host>
Configure access to the image repository
The way openvim deals with images is a bit different from other CMSs. Instead of copying the images when doing the on-boarding, openvim assumes that images are locally accessible on each compute node in a local folder that is identical on all compute nodes. This does not mean that the images are forced to be copied to each compute node disk.
Typically this is achieved by storing all images in a remote shared location accessible by all compute nodes through a NAS file system, and mounting the shared folder via NFS on the same local folder on each compute node.
VNF descriptors contain image paths pointing to a location on that folder. When doing the on-boarding, the image will be copied from the image path (accessible through NFS) to the on-boarding folder, whose configuration is described next.
Create a local folder for image on-boarding and grant access from openvim. A local folder for image on-boarding must be created on each compute node (in the default configuration, we assume that the folder is /opt/VNF/images). This folder must be created on a disk with enough space to store the images of the active VMs. If there is only a root partition in the server, the recommended procedure is to link the openvim required folder to the standard libvirt folder for holding images:
mkdir -p /opt/VNF/
ln -s /var/lib/libvirt/images /opt/VNF/images
chown -R <user>:nfvgroup /opt/VNF
chown -R root:nfvgroup /var/lib/libvirt/images
chmod g+rwx /var/lib/libvirt/images
In case there is a partition (e.g. /home) that has more disk space than the / partition, we suggest using that partition, although a soft link can be created anywhere else. As an example, this is what our script for automatic installation in RHEL7.0 does:
mkdir -p /home/<user>/VNF_images
rm -f /opt/VNF/images
mkdir -p /opt/VNF/
ln -s /home/<user>/VNF_images /opt/VNF/images
chown -R <user> /opt/VNF
Besides, on a SELinux system, access to that folder must be granted to the libvirt group.
# SElinux management
semanage fcontext -a -t virt_image_t "/home/<user>/VNF_images(/.*)?"
cat /etc/selinux/targeted/contexts/files/file_contexts.local |grep virt_image
restorecon -R -v /home/<user>/VNF_images
13.4.12. Compute node configuration in special cases
13.4.12.1. Datacenter with different types of compute nodes
In a datacenter with different types of compute nodes, it might happen that compute nodes use different interface naming schemes. In that case, you can take the most used interface naming scheme as the default one, and make an additional configuration in the compute nodes that do not follow the default naming scheme.
In order to do that, you should create a hostinfo.yaml file inside the local image folder (typically /opt/VNF/images). It contains entries of the form:
openvim-expected-name: local-iface-name
For example, if openvim contains a network using macvtap to the physical interface em1 (macvtap:em1) but in this compute node the interface is called eth1, create a local-image-folder/hostinfo.yaml file with this content:
em1: eth1
13.4.12.2. Configure compute node in ‘developer’ mode
In order to test a VM, it is not really required to have a full NFV environment with 10G data plane interfaces and Openflow switches. If the VM is able to run with virtio interfaces, you can configure a compute node in a simpler way and use the ‘developer mode’ in openvim. In that mode, during the instantiation phase, VMs are deployed without hugepages and with all data plane interfaces changed to virtio interfaces. Note that openvim flavors do not change and remain identical (including all EPA attributes), but openvim performs an intelligent translation during the instantiation phase.
The configuration of a compute node to be used in ‘developer mode’ removes the configuration that is not needed for testing purposes, that is:
IOMMU configuration is not required since no passthrough or SR-IOV interfaces will be used
Huge pages configuration is unnecessary. All memory will be assigned in 4KB pages, allowing oversubscription (as in traditional clouds).
No configuration of data plane interfaces (e.g. SR-IOV) is required.
A VNF developer will typically use the developer mode to test their VNF on their own computer. Although part of the configuration is not required, the rest of the compute node configuration is still necessary. In order to prepare your own computer (or a separate one) as a compute node for development purposes, you can use the script configure-compute-node-develop.sh, which can be found in the OSM/openvim repo under the scripts folder.
In order to execute the script, just run this command:
sudo ./configure-compute-node-develop.sh <user> <iface>
13.4.13. RHEL7.2 and CentOS7.2
You can apply the configuration automatically by using a script that performs all actions apart from BIOS configuration and user key sharing. This script is only for RHEL7.2 server and CentOS7.2 server.
wget -O configure-compute-node-RHEL7.2.sh "https://osm.etsi.org/gitweb/?p=osm/openvim.git;a=blob_plain;f=scripts/configure-compute-node-RHEL7.2.sh;hb=1ff6c02ecff38378a4d7366e223cefd30670602e"
chmod +x ./configure-compute-node-RHEL7.2.sh
sudo ./configure-compute-node-RHEL7.2.sh <user> <iface>
The variable <user> is the host user used by openvim to connect via ssh (libvirt admin rights will be granted to that user). The variable <iface> is the host interface where Linux bridges will be provisioned.
Of course, it is also possible to complete this process manually as described in the previous sections.
13.4.14. RHEL7.1 and CentOS7.1
You can apply the configuration automatically by using a script that performs all actions apart from BIOS configuration and user key sharing. This script is only for RHEL7.1 and CentOS7.1.
wget https://github.com/nfvlabs/openvim/raw/master/scripts/configure-compute-node-RHEL7.1.sh
chmod +x ./configure-compute-node-RHEL7.1.sh
sudo ./configure-compute-node-RHEL7.1.sh <user> <iface>
The variable <user> is the host user used by openvim to connect via ssh (libvirt admin rights will be granted to that user). The variable <iface> is the host interface where Linux bridges will be provisioned.
Of course, it is also possible to complete this process manually as described in the previous sections.
13.4.15. Ubuntu 16.04 LTS
TODO: Under elaboration.
13.5. Configuration of the OpenVIM controller
In order to configure OpenVIM, you will need to edit the file /etc/osm/openvimd.cfg.
NOTE: In a default installation, it is pre-configured to run in test mode (i.e. for development), where neither real hosts nor an openflow controller are needed. You should enable other modes for running specific tests, or use the normal mode to use it for real:
mode | Compute hosts | Openflow controller | Observations
---|---|---|---
test | fake | not needed | No real deployment; just for API testing
normal | needed | needed | Normal behavior
host only | needed | not needed | No PT/SR-IOV connections
develop | needed | not needed | Forces cloud-type deployment without EPA
OF only | fake | needed | To test the openflow controller without the need of compute hosts
After a change, the service should be restarted:
sudo service osm-openvim restart
NOTE: The following steps are done automatically by the script ONLY if OpenVIM is running in test mode. For the other modes, they must be done manually.
/opt/openvim/scripts/initopenvim.sh --insert-bashrc --force
Let’s configure the openvim CLI client. This is needed if you have changed the /opt/openvim/openvimd.cfg file (WARNING: not the ./openvim/openvimd.cfg):
#openvim config # show openvim related variables
#To change variables run
export OPENVIM_HOST=<http_host of openvimd.cfg>
export OPENVIM_PORT=<http_port of openvimd.cfg>
export OPENVIM_ADMIN_PORT=<http_admin_port of openvimd.cfg>
#You can add them to .bashrc for automatic loading at login:
echo "export OPENVIM_HOST=<...>" >> ${HOME}/.bashrc
...
13.5.1. Adding compute nodes
Let’s attach compute nodes:
In test mode we need to provide fake compute nodes with all the necessary information:
openvim host-add /opt/openvim/test/hosts/host-example0.yaml
openvim host-add /opt/openvim/test/hosts/host-example1.yaml
openvim host-add /opt/openvim/test/hosts/host-example2.yaml
openvim host-add /opt/openvim/test/hosts/host-example3.yaml
openvim host-list #-v,-vv,-vvv for verbosity levels
In normal or host only mode, the process is a bit more complex. First, you need to configure the host appropriately following the guidelines in section 13.4 above. The current process is manual, although we are working on an automated process. For the moment, follow these instructions:
#copy /opt/openvim/scripts/host_add.sh and run it at the compute host to gather all the information
./host_add.sh <user> <ip_name> >> host.yaml
#NOTE: If the host contains interfaces connected to the openflow switch for dataplane,
# the switch port where the interfaces are connected must be provided manually,
# otherwise these interfaces cannot be used. Follow one of two methods:
# 1) Fill openvim/database_utils/of_ports_pci_correspondence.sql ...
# ... and load with mysql -uvim -p vim_db < openvim/database_utils/of_ports_pci_correspondence.sql
# 2) or add manually this information at generated host.yaml with a 'switch_port: <whatever>'
# ... entry at 'host-data':'numas': 'interfaces'
# copy this generated file host.yaml to the openvim server, and add the compute host with the command:
openvim host-add host.yaml
# copy the openvim ssh key to the compute node. If the openvim user doesn't have an ssh key, generate it using ssh-keygen
ssh-copy-id <compute node user>@<IP address of the compute node>
Note: Openvim has been tested with servers based on Xeon E5 Intel processors with Ivy Bridge architecture. No tests have been carried out with the Intel Core i3, i5 and i7 families, so there are no guarantees that the integration will be seamless.
13.5.2. Adding external networks
Let’s list the external networks:
openvim net-list
Let’s create some external networks in openvim. These networks are public and can be used by any VNF. In order to create external networks, use openvim net-create, specifying a file with the network information. To create a management network:
openvim net-create /opt/openvim/test/networks/net-example4.yaml
Let’s list the external networks:
openvim net-list
2c386a58-e2b5-11e4-a3c9-52540032c4fa mgmt
You can build your own networks using the template templates/network.yaml. Alternatively, you can use openvim net-create without a file and answer the questions:
openvim net-create
You can delete a network, e.g. “mgmt”, using the command:
openvim net-delete mgmt
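For reference, a network descriptor looks roughly like this (an illustrative sketch; check templates/network.yaml for the exact fields supported by your openvim version):
network:
  name: mgmt                  # network name
  type: bridge_man            # management network mapped to a pre-provisioned Linux bridge
  provider: bridge:virbrMan1  # compute-node bridge this network maps to
  shared: true                # usable by any tenant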
13.5.3. Creating a new tenant
Now let’s create a new tenant “osm”:
$ openvim tenant-create --name osm --description osm
<uuid> osm Created
Take the uuid of the tenant and update the environment variables used by the openvim client:
export OPENVIM_TENANT=<obtained uuid>
#echo "export OPENVIM_TENANT=<obtained uuid>" >> /home/${USER}/.bashrc
openvim config #show openvim env variables
13.6. OpenVIM Logs and Troubleshooting
13.6.1. Service and Logs
The service is called osm-openvim, and you can manage it with regular Linux procedures:
sudo service osm-openvim status #restart start stop
OpenVIM logs are in the file /var/log/osm/openvim.log.
The configuration is in the file /etc/osm/openvimd.cfg. OpenVIM running modes and the log level (debug by default) can be set there.
13.6.2. Troubleshooting
13.6.2.1. OpenVIM status
The status of the openvimd process can be checked by running the following command as root:
sudo service osm-openvim status
● osm-openvim.service - openvim server
Loaded: loaded (/etc/systemd/system/osm-openvim.service; enabled; vendor preset: enabled)
Active: active (running) since jue 2017-06-08 15:41:34 CEST; 3 days ago
Main PID: 1995 (python)
Tasks: 8
Memory: 32.6M
CPU: 25.295s
CGroup: /system.slice/osm-openvim.service
└─1995 python /opt/openvim/openvimd -c /etc/osm/openvimd.cfg --log-file=/var/log/osm/openvim.log
In case it is not running, check the last logs at /var/log/osm/openvim.log.
13.6.2.2. Known error messages in openvim and their solution
13.6.2.2.1. Internal Server Error at host-add
SYMPTOM: an error is raised when trying to add a compute node in “normal” mode
CAUSE: invalid credentials or invalid ip address at compute node, net controller or both
SOLUTION:
Check that the credentials you have provided to access the compute node are correct. You can use a user without a password (not recommended) or a user with an ssh key file. The ssh key file can be set either in the configuration file (host_ssh_keyfile in /etc/osm/openvimd.cfg) or individually in each compute-node.yaml file (keyfile); the latter takes precedence.
Ensure the known_hosts entry is already set. Try to execute ssh compute-user@compute-node as the same user that the osm-openvim service runs as (normally root); you must be able to enter without any prompt for host authentication confirmation or password.
Use the real IP address (not the name) for the ip_name field in compute-node.yaml, because setting up an ovs connection will fail if you use the name.
Use an IP address for ovs_controller_ip in /etc/osm/openvimd.cfg (the current version fails with the default ‘localhost’). Ensure you can enter this host (even if it is localhost) with the ovs_controller_user and the ovs_controller_keyfile or ovs_controller_password. You must be able to ssh without any prompt. Also ensure that, once logged in, you can run sudo commands without any password.
13.6.2.2.2. Wrong database version
SYMPTOM: osm-openvim service fails. At openvim logs it appears:
2017-06-12T10:56:30 CRITICAL openvim openvimd:278 DATABASE wrong version '19'. Try to upgrade/downgrade to version '20' with '/home/atierno/OSM/osm/openvim/osm_openvim/../database_utils/migrate_vim_db.sh 20'
CAUSE: OpenVIM has been upgraded to a new version that requires a new database version.
SOLUTION: To upgrade the database version, run the command shown in the logs; provide credentials if needed (by default the database user is vim and the database password is vimpw).
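For example, to move to database version 20 as the log message above suggests (run the exact path printed in your own logs and provide the credentials if prompted):
/opt/openvim/database_utils/migrate_vim_db.sh 20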
13.6.3. Software upgrade (source code)
openvim is upgraded periodically to fix reported bugs. The latest version corresponds to the tag v2.0.1.
Execute:
service osm-openvim stop
#git -C /opt/openvim stash #required if the original config file has changed
git -C /opt/openvim pull --rebase
git -C /opt/openvim checkout tags/v2.0.1
#git -C /opt/openvim stash pop #required if the original file has changed
/opt/openvim/database_utils/migrate_vim_db.sh
service osm-openvim start
13.6.4. Software upgrade (binaries)
TODO: Under elaboration.