OSM Release ONE
Open Source MANO (OSM) is the open source community that aims to deliver a production-quality MANO stack for NFV, capable of consuming openly published information models, available to everyone, suitable for all VNFs, operationally significant and VIM-independent. OSM is aligned to NFV ISG information models while providing first-hand feedback based on its implementation experience.
Interaction with VIMs and VNFs
The following figure shows OSM interaction with VIM and VNFs.
In simpler setups, OSM only requires a single interface, as long as both the VIM and the VNF IP addresses are reachable from it.
Install OSM
Install from source
All you need to run OSM Release One is a single server or VM with the following requirements:
- 8 CPUs, 16 GB RAM, 100 GB disk and a single interface with Internet access
- Ubuntu 16.04 as base image (http://releases.ubuntu.com/16.04/), configured to run LXD containers. If you don't have LXD configured, you can follow the instructions here (LXD configuration); a minimal setup sketch is also shown after the note below.
Note: If you wish to install OSM Release One from inside a LXD container, you will need to enable nested containers following instructions here (Nested containers).
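If LXD is not yet configured on the host, a minimal setup sketch for Ubuntu 16.04 is shown below; the linked LXD configuration page remains the authoritative reference, and the interactive lxd init prompts (storage backend, network bridge) depend on your environment.

sudo apt-get update
sudo apt-get install -y lxd        # LXD is packaged in Ubuntu 16.04
sudo usermod -a -G lxd $(whoami)   # allow the current user to manage LXD
newgrp lxd                         # pick up the new group membership in this shell
sudo lxd init                      # answer the prompts (storage pool, network bridge)
lxc list                           # an empty list confirms LXD is working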
Once you have prepared the host with the previous requirements, all you need to do is:
wget https://osm-download.etsi.org/ftp/osm-1.0-one/install_from_source.sh
chmod +x install_from_source.sh
./install_from_source.sh
If you need to install from the latest master branch (recommended for advanced users only), use:
./install_from_source.sh --develop
After some time, you will get a fresh OSM Release One installation. You can access the UI as described below (user: admin, password: admin).
You can connect to the service via a web browser (Google Chrome version 50 or later is recommended). Open a browser and connect to https://1.2.3.4:8443, replacing 1.2.3.4 with the IP address of your host. Note that it uses https, not http. If you are using Firefox and plan to use the self-signed certificate provided in the installation, please follow the instructions at Using untrusted, self-signed certificates. Alternatively, you can run Launchpad with trusted CA-signed SSL certificates as per Using a certificate signed by a trusted CA, or run Launchpad with SSL disabled as per Run Launchpad with SSL Disabled.
Make sure that port 8443 is accessible, as well as the following required ports: 8000, 4567, 8008, 80, 9090.
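As an optional check, not part of the original procedure, you can test from a remote machine whether the required ports are reachable; the sketch below assumes the netcat client (nc) is installed and that 1.2.3.4 is the IP address used to reach the UI.

for p in 8443 8000 4567 8008 80 9090; do nc -vz -w 3 1.2.3.4 $p; done   # reports open/refused per port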
As a result of the installation, three LXD containers are created on the host: RO, VCA, and SO-ub (running the SO and the UI), as shown in the figure below.
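To confirm that the three containers are up, you can list them with the LXD client from the host:

lxc list   # RO, VCA and SO-ub should appear in RUNNING state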
Adding a VIM account
Before proceeding, make sure that you have a site with a VIM configured to run with OSM. Three different kinds of VIMs are currently supported by OSM:
- OpenVIM. Check the following link to know how to install and use openvim for OSM: OpenVIM installation (Release One)
- OpenStack. Check the following link to learn how to configure OpenStack to be used by OSM: Openstack configuration (Release ONE)
- VMware vCloud Director. Check the following link to learn how to configure VMware VCD to be used by OSM: Configuring VMware vCloud Director for OSM Release One
OpenVIM site
- Go into the RO container:
lxc exec RO -- bash
- Execute the following commands, using the appropriate parameters (e.g. site name: "openvim-site", IP address: 10.10.10.10, VIM tenant: "osm")
export OPENMANO_TENANT=osm
openmano datacenter-create openvim-site http://10.10.10.10:9080/openvim --type openvim --description "Openvim site"
openmano datacenter-attach openvim-site --vim-tenant-name=osm
openmano datacenter-list
exit   #or Ctrl+D to get out of the RO container
Openstack site
- Go into the RO container:
lxc exec RO -- bash
- Execute the following commands, using the appropriate parameters (e.g. site name: "openstack-site", IP address: 10.10.10.11, VIM tenant: "admin", user: "admin", password: "userpwd")
export OPENMANO_TENANT=osm
openmano datacenter-create openstack-site http://10.10.10.11:5000/v2.0 --type openstack --description "OpenStack site"
openmano datacenter-attach openstack-site --user=admin --password=userpwd --vim-tenant-name=admin
openmano datacenter-list
exit   #or Ctrl+D to get out of the RO container
VMware site
- Go into the RO container:
lxc exec RO -- bash
- Execute the following commands, using the appropriate parameters (e.g. site name: "vmware-site", IP address: 10.10.10.12, VIM tenant: "vmware-tenant", user: "osm", password: "osm4u", admin user: "admin", admin password: "adminpwd", organization: "orgVDC")
openmano datacenter-create vmware-site https://10.10.10.12 --type vmware --description "VMware site" --config '{admin_password: adminpwd, admin_username: admin, orgname: orgVDC}'
openmano datacenter-attach vmware-site --user=osm --password=osm4u --vim-tenant-name=vmware-tenant
openmano datacenter-list
exit   #or Ctrl+D to get out of the RO container
Deploying your first Network Service
In this example we will deploy the following Network Service, consisting of two simple VNFs based on CirrOS connected by a simple VLD.
Before going on, download the required VNF and NS packages from this URL: https://osm-download.etsi.org/ftp/examples/cirros_2vnf_ns/
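For example, assuming the two package files referenced later in this guide (cirros_vnf.tar.gz and cirros_2vnf_ns.tar.gz) are published directly under that directory, they can be fetched from the command line:

wget https://osm-download.etsi.org/ftp/examples/cirros_2vnf_ns/cirros_vnf.tar.gz
wget https://osm-download.etsi.org/ftp/examples/cirros_2vnf_ns/cirros_2vnf_ns.tar.gz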
Uploading VNF image to the VIM
Get the cirros 0.3.4 image from the following link: http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
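For example, from the command line:

wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img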
Then, onboard the image into the VIM. The procedure differs from one VIM to another:
- In Openstack:
openstack image create --file="./cirros-0.3.4-x86_64-disk.img" --container-format=bare --disk-format=qcow2 cirros034
- In openvim:
#copy your image to the NFS shared folder (e.g. /mnt/openvim-nfs)
cp ./cirros-0.3.4-x86_64-disk.img /mnt/openvim-nfs/
openvim image-create --name cirros034 --path /mnt/openvim-nfs/cirros-0.3.4-x86_64-disk.img
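As an optional sanity check, not part of the original procedure, you can list the images known to the VIM and confirm that cirros034 is present; the exact listing command depends on the VIM and client version.

openstack image list | grep cirros034   # OpenStack
openvim image-list                      # openvim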
Onboarding a VNF
- From the UI:
- Go to Catalog
- Click on the import button, then VNFD
- Drag and drop the VNF package file cirros_vnf.tar.gz in the importing area.
- From the SO CLI:
- From the SO-ub container ("lxc exec SO-ub bash"), execute the following:
/root/SO/rwlaunchpad/plugins/rwlaunchpadtasklet/scripts/onboard_pkg -s 127.0.0.1 -u cirros_vnf.tar.gz
Onboarding a NS
- From the UI:
- Go to Catalog
- Click on the import button, then NSD
- Drag and drop the NS package file cirros_2vnf_ns.tar.gz in the importing area.
- From the SO CLI:
- From the SO-ub container ("lxc exec SO-ub bash"), execute the following command:
/root/SO/rwlaunchpad/plugins/rwlaunchpadtasklet/scripts/onboard_pkg -s 127.0.0.1 -u cirros_2vnf_ns.tar.gz
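Optionally, you can confirm that both packages were onboarded by checking the catalogs from the SO CLI (see "Accessing CLI for viewing instantiated NS details" below on how to open it):

show vnfd-catalog    # the cirros VNF descriptor should be listed
show nsd-catalog nsd # the cirros_2vnf_nsd NS descriptor should be listed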
Instantiating the NS
- From the UI:
- Go to Launchpad > Instantiate
- Select the NS descriptor to be instantiated, and click on Next
- Add a name to the NS instance, and click on Launch.
- From the SO CLI:
- From the SO-ub container ("lxc exec SO-ub bash"), execute the following command:
/root/SO/rwlaunchpad/plugins/rwlaunchpadtasklet/scripts/onboard_pkg -i <ns-instance-name> -d <nsd-id> -D <data-center-id>
Note: The nsd-id and data-center-id need to be replaced with the values from your setup. Issue the following commands from the SO CLI (see the next section, "Accessing CLI for viewing instantiated NS details", on how to access the SO CLI) to determine the nsd-id and data-center-id (a worked example follows the list below):
- show nsd-catalog nsd - Displays the NSDs in the catalog. Find the id of the cirros_2vnf_nsd NSD.
- show datacenters - Displays the list of data centers configured in the RO. Choose the data center where the network service needs to be instantiated.
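For illustration only, with hypothetical identifiers in place of the values returned by the two show commands above (the UUIDs below are placeholders, not values from a real setup), the instantiation command would look like this:

# placeholders: replace with the ids shown by "show nsd-catalog nsd" and "show datacenters"
/root/SO/rwlaunchpad/plugins/rwlaunchpadtasklet/scripts/onboard_pkg -i cirros_2vnf_ns_instance -d 11111111-2222-3333-4444-555555555555 -D aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee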
Wait for the message that the NS has been successfully deployed, and that's all!
Accessing CLI for viewing instantiated NS details
From the SO-ub container ("lxc exec SO-ub bash"), execute the following command to bring up the SO CLI (username: admin, password: admin):
/usr/rift/rift-shell -r -i /usr/rift -a /usr/rift/.artifacts -- rwcli --rift_var_root /usr/rift/var/rift
The CLI can be used to both configure the system and show operational-data from the system. For instance:
rift# show nsd-catalog             # show the NSD catalog
rift# show vnfd-catalog            # show the VNFD catalog
rift# show ns-instance-config nsr  # list the instantiated network services
rift# show ns-instance-opdata nsr  # list operational data of the instantiated network services
Additional information
- Check other VNF packages and NS packages in the links below:
- Deploy advanced Network Services
- Create your own VNF package
- Reference VNF and NS Descriptors
- Creating your own VNF charm
- Have you detected any bug? Check this guide to see how to report issues
- Logs and troubleshooting
- Life Cycle Management of VNFs from the RO
- Data Model Details
- OSM White Paper - Release ONE Technical Overview
- Technical FAQ
Your feedback is most welcome! You can send us your comments and questions to OSM_TECH@list.etsi.org, or join the OpenSourceMANO Slack workspace. See also the guide on how to report issues, linked above, for best practices when reporting issues on OSM.