OSM Release THREE

Open Source MANO (OSM) is the open source community that aims to deliver a production-quality MANO stack for NFV: capable of consuming openly published information models, available to everyone, suitable for all VNFs, operationally significant, and VIM-independent. OSM is aligned with the NFV ISG information models while providing first-hand feedback based on its implementation experience.

Interaction with VIMs and VNFs

The following figure shows how OSM interacts with the VIM and the VNFs.

Figure: OSM Release 1 connectivity 1


In simpler setups, OSM requires only a single interface, as long as both the VIM and the VNF IP addresses are reachable from it.

Figure: OSM Release 1 connectivity 2
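A quick way to verify this reachability from the OSM host is sketched below; the addresses are placeholders for your own VIM endpoint and VNF management network.

# Reachability sanity check from the OSM host (10.10.10.10 and 192.168.0.5 are placeholders)
ping -c 3 10.10.10.10    # the VIM API endpoint
ping -c 3 192.168.0.5    # a VNF management address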

Install OSM

Install from binaries (Recommended)

All you need to run OSM Release THREE is a single server or VM with the following requirements:

  • MINIMUM: 4 CPUs, 8 GB RAM, 40 GB disk and a single interface with Internet access
  • RECOMMENDED: 8 CPUs, 16 GB RAM, 80 GB disk and a single interface with Internet access
  • Ubuntu 16.04 as base image (http://releases.ubuntu.com/16.04/), configured to run LXD containers. If you don't have LXD configured, you can follow the instructions here (LXD configuration).

Note: If you wish to install OSM Release THREE from inside an LXD container, you will need to enable nested containers following the instructions here (Nested containers).
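For reference, a minimal LXD setup on a fresh Ubuntu 16.04 host might look like the sketch below; the linked LXD configuration page remains the authoritative guide, and the right storage and network answers depend on your environment.

# Sketch: minimal LXD setup on Ubuntu 16.04, assuming the defaults are acceptable
sudo apt-get update
sudo apt-get install -y lxd     # LXD is packaged with Ubuntu 16.04
sudo lxd init --auto            # non-interactive init with default storage/network
newgrp lxd                      # pick up the lxd group in the current shell
lxc list                        # should succeed and print an empty container table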

Once you have prepared the host with the previous requirements, all you need to do is:

wget https://osm-download.etsi.org/ftp/osm-3.0-three/install_osm.sh
chmod +x install_osm.sh
./install_osm.sh

Install from source

To install OSM Release THREE from source, requirements are the following:

  • 8 CPUs, 16 GB RAM, 100 GB disk and a single interface with Internet access
  • Ubuntu 16.04 as base image (http://releases.ubuntu.com/16.04/), configured to run LXD containers. If you don't have LXD configured, you can follow the instructions here (LXD configuration).

Note: If you wish to install OSM Release THREE from inside an LXD container, you will need to enable nested containers following the instructions here (Nested containers).

Once you have prepared the host with the previous requirements, all you need to do is:

wget https://osm-download.etsi.org/ftp/osm-3.0-three/install_osm.sh
chmod +x install_osm.sh
./install_osm.sh --source

If you need to install from the latest master branch (recommended for advanced users only), please use:

./install_osm.sh -b master --source

Checking your installation

Please note that in OSM Release THREE, authentication is performed using OpenID Connect and OAuth 2.0.
An identity provider has been added to the platform; it runs as a service in the SO container and listens on port 8009.
This means that both the browser and the UI server components (which run on the SO container) need to be able to reach the SO container using identical URIs.
In short, the scheme://location:port tuple must be reachable both from the browser accessing the system and from the UI server running on the SO container (e.g. https://10.66.202.206:8009 for a sample deployment).
Consequently, if your SO container is behind a NAT and cannot reach the public address of the host, authentication and authorization will not be possible and you will not be able to proceed using the UI.
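A quick way to test this constraint is to check that port 8009 answers from both sides; the sketch below uses a placeholder host address and assumes curl is available inside the container.

# From your workstation (browser side); 10.66.202.206 is a placeholder
curl -k https://10.66.202.206:8009/

# From inside the SO container (UI server side); expect the same kind of response, not a timeout
lxc exec SO-ub -- curl -k https://10.66.202.206:8009/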

After some time, you will get a fresh OSM Release THREE installation. You can then access the UI (user: admin, password: admin) as described below.

You can connect to the service via a web browser (Google Chrome version 50 or later is recommended). Open a browser and go to https://1.2.3.4:8443, replacing 1.2.3.4 with the IP address of your host. Note that the UI uses https, not http. If you are using Firefox and plan to use the self-signed certificate provided by the installation, please follow the instructions at Using untrusted, self-signed certificates. Alternatively, you can run Launchpad with trusted CA-signed SSL certificates as per Using a certificate signed by a trusted CA, or run Launchpad with SSL disabled as per Run Launchpad with SSL Disabled.

Figure: OSM login window

Make sure that port 8443 is accessible, as well as the following required ports: 8000, 4567, 8008, 80 and 9090.
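If in doubt, you can probe the required ports from another machine; the sketch below uses netcat and a placeholder host address.

# Probe the required OSM ports from a remote machine (1.2.3.4 is a placeholder)
for port in 8443 8000 4567 8008 80 9090; do
    nc -zv -w 3 1.2.3.4 $port
done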

As a result of the installation, three LXD containers are created on the host: RO, VCA, and SO-ub (running the SO and the UI), as shown in the figure below.

Figure: OSM Release THREE installation result
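You can confirm that the three containers are up with lxc (an optional check):

lxc list    # should show RO, VCA and SO-ub in RUNNING state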

Adding VIM accounts

Before proceeding, make sure that you have a site with a VIM configured to run with OSM. The following kinds of VIMs are currently supported by OSM, each covered in a subsection below:

  • OpenVIM
  • OpenStack
  • VMware vCloud Director
  • VMware Integrated OpenStack (VIO)
  • Amazon Web Services (AWS)

OSM can manage external SDN controllers to provide dataplane underlay network connectivity on behalf of the VIM. See Configure VIM SDN.

OpenVIM site

Using the OSM CLI

osm vim-create --name openvim-site --auth_url http://10.10.10.10:9080/openvim --account_type openvim --description "Openvim site" --user dummy --password dummy --tenant dummy
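If the osm client is installed on your host, you can verify the result with the commands below (a sketch; the exact output varies between client versions):

osm vim-list                 # the new "openvim-site" entry should appear
osm vim-show openvim-site    # detailed view of the VIM account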

Adding it directly to the RO

  • Go into the RO container:
lxc exec RO -- bash
  • Execute the following commands, using the appropriate parameters (e.g. site name: "openvim-site", IP address: 10.10.10.10, VIM tenant: "osm")
export OPENMANO_TENANT=osm
openmano datacenter-create openvim-site http://10.10.10.10:9080/openvim --type openvim --description "Openvim site" 
openmano datacenter-attach openvim-site --vim-tenant-name=osm
openmano datacenter-list
exit     #or Ctrl+D to get out of the RO container
  • Go to the GUI:
ACCOUNTS > OSMOPENMANO > Click 'REFRESH STATUS' button

OpenStack site

Using the OSM CLI

osm vim-create --name openstack-site --user admin --password userpwd --auth_url http://10.10.10.11:5000/v2.0 --tenant admin --account_type openstack
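Before adding the account, it can be useful to confirm the credentials and auth URL directly against Keystone; the check below uses the standard OpenStack client with the placeholder values from the command above.

# Request a token to confirm the credentials and auth URL are valid
openstack --os-auth-url http://10.10.10.11:5000/v2.0 --os-username admin \
          --os-password userpwd --os-project-name admin token issue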

Adding it directly to the RO

  • Go into the RO container:
lxc exec RO -- bash
  • Execute the following commands, using the appropriate parameters (e.g. site name: "openstack-site", IP address: 10.10.10.11, VIM tenant: "admin", user: "admin", password: "userpwd")
export OPENMANO_TENANT=osm
openmano datacenter-create openstack-site http://10.10.10.11:5000/v2.0 --type openstack --description "OpenStack site"
openmano datacenter-attach openstack-site --user=admin --password=userpwd --vim-tenant-name=admin
openmano datacenter-list
exit     #or Ctrl+D to get out of the RO container
  • Go to the GUI:
ACCOUNTS > OSMOPENMANO > Click 'REFRESH STATUS' button

VMware vCloud Director site

  • Go into the RO container:
lxc exec RO -- bash
  • Execute the following commands, using the appropriate parameters (e.g. site name: "vmware-site", IP address: 10.10.10.12, VIM tenant: "vmware-tenant", user: "osm", password: "osm4u", admin user: "admin", admin password: "adminpwd", organization: "orgVDC")
openmano datacenter-create vmware-site https://10.10.10.12 --type vmware --description "VMware site" --config '{admin_password: adminpwd, admin_username: admin, orgname: orgVDC}'
openmano datacenter-attach vmware-site --user=osm --password=osm4u --vim-tenant-name=vmware-tenant
openmano datacenter-list
exit     #or Ctrl+D to get out of the RO container
  • Go to the GUI:
ACCOUNTS > OSMOPENMANO > Click 'REFRESH STATUS' button


VMware Integrated OpenStack (VIO) site

  • Go into the RO container:
lxc exec RO -- bash
  • Execute the following commands, using the appropriate parameters (e.g. site name: "openstack-site-vio4", IP address: 10.10.10.12, VIM tenant: "admin", user: "admin", password: "passwd")
openmano datacenter-create openstack-site-vio4 http://10.10.10.12:5000/v3 --type openstack --description "VMware integrated openstack site vio4" --config '{insecure: true, vim_type: VIO}'
openmano datacenter-attach openstack-site-vio4 --user=admin --password=passwd --vim-tenant-name=admin --config='{APIversion: v3.3, dataplane_physical_net: dvs-46, "use_internal_endpoint":true,"dataplane_net_vlan_range":["1-5","7-10"]}'
openmano datacenter-list
exit     #or Ctrl+D to get out of the RO container

Additional configuration for VIO:

  • vim_type: Set to "VIO" to use VMware Integrated OpenStack as the VIM.
  • use_internal_endpoint: When true, allows the use of private API endpoints.
  • dataplane_physical_net: The physical network label configured in Neutron's network_vlan_ranges for the SR-IOV (binding: direct) and passthrough (binding: direct-physical) networks, e.g. 'physnet_sriov'. For VMware Integrated OpenStack (VIO), provide the moref ID of the distributed virtual switch instead, e.g. 'dvs-46' in the configuration above.
  • dataplane_net_vlan_range: For VMware Integrated OpenStack (VIO), provide the VLAN ranges for the SR-IOV (binding: direct) networks in the format ['start_ID - end_ID'].
  • Go to the GUI:
ACCOUNTS > OSMOPENMANO > Click 'REFRESH STATUS' button

Amazon Web Services (AWS) site

  • Go into the RO container:
lxc exec RO -- bash
  • Execute the following commands, using the appropriate parameters (e.g. site name: "aws-site", region: "us-west-2", and your AWS access key ID and secret access key)
export OPENMANO_TENANT=osm
openmano datacenter-create aws-site https://aws.amazon.com --type aws --description "AWS Site" --config '{region: us-west-2}' 
openmano datacenter-attach aws-site --user=<AWS_ACCESS_KEY_ID> --password=<AWS_SECRET_ACCESS_KEY> --vim-tenant-name=admin
openmano datacenter-list
exit     #or Ctrl+D to get out of the RO container
  • Go to the GUI:
ACCOUNTS > OSMOPENMANO > Click 'REFRESH STATUS' button

Deploying your first Network Service

In this example we will deploy the following Network Service, consisting of two simple VNFs based on CirrOS, connected by a simple VLD.

Figure: NS with 2 CirrOS VNFs

Before going on, download the required VNF and NS packages from this URL: https://osm-download.etsi.org/ftp/osm-3.0-three/examples/cirros_2vnf_ns/

Uploading VNF image to the VIM

Get the cirros 0.3.4 image from the following link: http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
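For example:

wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img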

Then, onboard the image into the VIM. The instructions differ from one VIM to another:

  • In Openstack:
openstack image create --file="./cirros-0.3.4-x86_64-disk.img" --container-format=bare --disk-format=qcow2 cirros034
  • In openvim:
#copy your image to the NFS shared folder (e.g. /mnt/openvim-nfs)
cp ./cirros-0.3.4-x86_64-disk.img /mnt/openvim-nfs/
openvim image-create --name cirros034 --path /mnt/openvim-nfs/cirros-0.3.4-x86_64-disk.img
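To confirm the image was registered, optional checks for each VIM type (assuming the respective CLI clients are available):

# OpenStack
openstack image list | grep cirros034
# openvim
openvim image-list | grep cirros034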

Onboarding a VNF

  • From the UI:
    • Go to Catalog
    • Click on the import button, then VNFD
    • Drag and drop the VNF package file cirros_vnf.tar.gz in the importing area.

Figure: Onboarding a VNF

  • From the SO CLI:
    • From the SO-ub container ("lxc exec SO-ub bash"), execute the following:
/root/SO/rwlaunchpad/plugins/rwlaunchpadtasklet/scripts/onboard_pkg -s 127.0.0.1 -u cirros_vnf.tar.gz

Onboarding an NS

  • From the UI:
    • Go to Catalog
    • Click on the import button, then NSD
    • Drag and drop the NS package file cirros_2vnf_ns.tar.gz in the importing area.
  • From the SO CLI:
    • From the SO-ub container ("lxc exec SO-ub bash"), execute the following command:
/root/SO/rwlaunchpad/plugins/rwlaunchpadtasklet/scripts/onboard_pkg -s 127.0.0.1 -u cirros_2vnf_ns.tar.gz

Instantiating the NS

  • From the UI:
    • Go to Launchpad > Instantiate
    • Select the NS descriptor to be instantiated, and click on Next
    • Add a name to the NS instance, and click on Launch.

Figures: Instantiating a NS (step 1), Instantiating a NS (step 2)

  • From the SO CLI:
    • From the SO-ub container ("lxc exec SO-ub bash"), execute the following command:
/root/SO/rwlaunchpad/plugins/rwlaunchpadtasklet/scripts/onboard_pkg -i <ns-instance-name> -d <nsd-id> -D <data-center-id>

Note: The nsd-id and data-center-id need to be replaced with the values from your setup. Issue the following commands from the SO CLI (see the section "Accessing CLI for viewing instantiated NS details" below for how to access the SO CLI) to determine the nsd-id and data-center-id:

  • show nsd-catalog nsd - Displays the NSDs in the catalog. Find the id of the cirros_2vnf_nsd NSD.
  • show datacenters - Displays the list of data centers configured in the RO. Choose the data center where the network service needs to be instantiated.
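An illustrative lookup session is sketched below (output elided; the ids are UUIDs specific to your deployment):

 rift# show nsd-catalog nsd
 ...                    # locate the "id" field of the cirros_2vnf_nsd entry
 rift# show datacenters
 ...                    # note the id of the data center you attached earlier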

Wait for the message that the NS has been successfully deployed, and that's all!

Accessing CLI for viewing instantiated NS details

From the SO-ub container ("lxc exec SO-ub bash"), execute the following command to bring up the SO CLI (username: admin, password: admin):

/usr/rift/rift-shell -r -i /usr/rift -a /usr/rift/.artifacts -- rwcli 

The CLI can be used to both configure the system and show operational-data from the system. For instance:

 rift# show nsd-catalog nsd          # show the NSD catalog
 rift# show vnfd-catalog             # show the VNFD catalog
 rift# show ns-instance-config nsr   # list instantiated network services (configuration)
 rift# show ns-instance-opdata nsr   # list operational data of instantiated network services

Additional information

Your feedback is most welcome!
You can send your comments and questions to OSM_TECH@list.etsi.org,
or join the OpenSourceMANO Slack workspace.
See hereafter some best practices for reporting issues on OSM.