diff --git a/.gitignore b/.gitignore new file mode 100644 index 0000000000000000000000000000000000000000..e35d8850c9688b1ce82711694692cc574a799396 --- /dev/null +++ b/.gitignore @@ -0,0 +1 @@ +_build diff --git a/04-vim-setup.md b/04-vim-setup.md index 0ebe5c9b1fe650e8536e7cd48527ad75aeb8ee09..66e3d4c28ec8c7a18a43191ac144d5e1bc661df9 100644 --- a/04-vim-setup.md +++ b/04-vim-setup.md @@ -458,7 +458,7 @@ vnfd:vnfd-catalog: cp: eth0 ``` -**`alpinens.yaml`** +**`alpinens.yaml`**: ```yaml nsd:nsd-catalog: @@ -650,7 +650,7 @@ $ osm ns-list ####### Step 6: Interact with deployed VNFs -```bash +```text # connect to ping VNF container (in another terminal window): $ sudo docker exec -it mn.dc1_test-1-ubuntu-1 /bin/bash # show network config @@ -996,7 +996,7 @@ dfcd6ca2-4768-11e7-8f07-00163e1229e4 mydc 2017-06-02T07:55:41 #### Adding a port mapping -A sample of sdn port mapping can be found in RO/sdn/sdn_port_mapping.yaml +A sample of sdn port mapping can be found in RO/sdn/sdn\_port\_mapping.yaml ```bash root@RO:~# tail -n 24 RO/sdn/sdn_port_mapping.yaml @@ -1102,14 +1102,14 @@ Besides the instructions above for any Openstack, you should do extra configurat - The compute nodes need to have a whitelist for the interfaces with SRIOV and passthrough enabled, and those interface need to be associated to a physical network label e.g. `physnet`. This can be done in the file `/etc/nova/nova.conf`: -```json +```text pci_passthrough_whitelist=[{"devname": "p3p1", "physical_network": "physnet"}, {"devname": "p3p2", "physical_network": "physnet"}] ``` - The neutron controller needs to be updated to add `sriovnicswitch` to the `mechanism_drivers`. This can be done in the file `/etc/neutron/plugins/ml2/ml2_conf.ini` ```text -mechanism_drivers =openvswitch,sriovnicswitch +mechanism_drivers=openvswitch,sriovnicswitch ``` - The neutron controller needs to be updated to set the vlans to be used for the defined physical network label. 
This can be done in the file `/etc/neutron/plugins/ml2/ml2_conf.ini`. For instance, to set the vlans from 2000 to 3000: diff --git a/05-osm-usage.md b/05-osm-usage.md index 70ca0b48e1a0ab4537f8a591b2b80f1ff208a9ea..e0968a31791109b067ee5aba487191956388ff15 100644 --- a/05-osm-usage.md +++ b/05-osm-usage.md @@ -235,7 +235,7 @@ osm ns-create --ns_name hf12 --nsd_name hackfest2-ns --vim_account openstack1 -- In a generic way, the mapping can be specified in the following way, where `VM1` is the name of the VDU, `Storage1` is the volume name in VNF descriptor and `05301095-d7ee-41dd-b520-e8ca08d18a55` is the volume id: -```yaml +```bash --config '{vnf: [ {member-vnf-index: "1", vdu: [ {id: VM1, volume: [ {name: Storage1, vim-volume-id: 05301095-d7ee-41dd-b520-e8ca08d18a55} ] } ] } ] }' ``` @@ -247,8 +247,13 @@ With the previous hackfest example, according [VNF data model](http://osm-downlo ```yaml volumes: - - name: Storage1 - size: 'Size of the volume' + - name: Storage1 + size: 'Size of the volume' +``` + +Then: + +```bash osm ns-create --ns_name h1 --nsd_name hackfest1-ns --vim_account openstack1 --config '{vnf: [ {member-vnf-index: "1", vdu: [ {id: hackfest1VM, volume: [ {name: Storage1, vim-volume-id: 8ab156fd-0f8e-4e01-b434-a0fce63ce1cf} ] } ] } ] }' ``` @@ -306,7 +311,7 @@ There are two types of charms: - Native charms: the set of scripts run inside the VNF components. This kind of charms are new in Release 7. - Proxy charms: the set of scripts run in LXC containers in an OSM-managed machine (which could be where OSM resides), which use ssh or other methods to get into the VNF instances and configure them. 
-![OSM Proxy Charms](assets/800px-Osm_proxycharms.png) +![OSM Proxy Charms](assets/800px-OSM_proxycharms.png) These charms can run with three scopes: @@ -373,7 +378,7 @@ monitoring-param: As you can see, a list of "NFVI metrics" is defined first at the VDU level, which contains an ID and the corresponding normalized metric name (in this case, `cpu_utilization` and `average_memory_utilization`) Then, at the VNF level, a list of `monitoring-params` is referred, with an ID, name, aggregation-type and their source (`vdu-monitoring-param` in this case) -###### Additional notes +**Additional notes:** - Available attributes and values can be directly explored at the [OSM Information Model](11-osm-im.md) - A complete VNFD example can be downloaded from [here](https://osm-download.etsi.org/ftp/osm-4.0-four/4th-hackfest/packages/webserver_vimmetric_autoscale_vnfd.tar.gz). @@ -693,7 +698,7 @@ vdu: vnf-monitoring-param-ref: vnf_cpu_util ``` -Regarding how to configure alarms through VNFDs for the auto-scaling use case, follow the [auto-scaling documentation](06-03-03-autoscaling.md) +Regarding how to configure alarms through VNFDs for the auto-scaling use case, follow the [auto-scaling documentation](#autoscaling). #### Experimental functionality @@ -757,7 +762,7 @@ docker run --rm --name curator --net host --entrypoint curator_cli bobrik/curato ### Autoscaling -### Reference diagram +#### Reference diagram The following diagram summarizes the feature: @@ -765,9 +770,9 @@ The following diagram summarizes the feature: - Scaling descriptors can be included and be tied to automatic reaction to VIM/VNF metric thresholds. - Supported metrics are both VIM and VNF metrics. More information about metrics collection can be found at the [Performance Management documentation](#performance-management) -- An internal alarm manager has been added to MON through the 'mon-evaluator' module, so that both VIM and VNF metrics can also trigger threshold-violation alarms and scaling actions. 
More information about this module can be found at the [Fault Management documentation](fault-management) +- An internal alarm manager has been added to MON through the 'mon-evaluator' module, so that both VIM and VNF metrics can also trigger threshold-violation alarms and scaling actions. More information about this module can be found at the [Fault Management documentation](#fault-management) -### Scaling Descriptor +#### Scaling Descriptor The scaling descriptor is part of a VNFD. Like the example below shows, it mainly specifies: @@ -798,12 +803,12 @@ scaling-group-descriptor: vdu-id-ref: vdu01 ``` -### Example +#### Example This will launch a Network Service formed by an HAProxy load balancer and an (autoscalable) Apache web server. Please check: 1. Your VIM has an accesible 'public' network and a management network (in this case called "PUBLIC" and "vnf-mgmt") -2. Your VIM has the 'haproxy_ubuntu' and 'apache_ubuntu' images, which can be found [here](https://osm-download.etsi.org/ftp/osm-4.0-four/4th-hackfest/images/) +2. Your VIM has the 'haproxy\_ubuntu' and 'apache\_ubuntu' images, which can be found [here](https://osm-download.etsi.org/ftp/osm-4.0-four/4th-hackfest/images/) 3. You run the following command to match your VIM metrics telemetry system's granularity, if different than 300s (recommended for this example is 60s or Gnocchi's `medium archive-policy`): ```bash @@ -835,7 +840,7 @@ osm ns-show web01 Testing: 1. To ensure the NS is working, visit the Load balancer's IP at the public network using a browser, the page should show an OSM logo and active VDUs. -2. To check metrics at Prometheus, visit `http://[OSM_IP]:9091` and look for osm_cpu_utilization and `osm_average_memory_utilization` (initial values could take some some minutes depending on your telemetry system's granularity). +2. 
To check metrics at Prometheus, visit `http://[OSM_IP]:9091` and look for `osm_cpu_utilization` and `osm_average_memory_utilization` (initial values could take some minutes depending on your telemetry system's granularity). 3. To check metrics at Grafana, just install the OSM preconfigured version (`./install_osm.sh -o pm_stack`) and visit `http://[OSM_IP]:3000` (`admin`/`admin`), you will find a sample dashboard (the two top charts correspond to this example). 4. To increase CPU in this example to auto-scale the web server, install Apache Bench in a client within reach (could be the OSM host) and run it towards `test.php`. diff --git a/06-osm-platform-configuration.md b/06-osm-platform-configuration.md index 2cf85912c99a94416156a4cbbd7587691ef6792b..cf368f402a2255756adb98f822840ffa602766d1 100644 --- a/06-osm-platform-configuration.md +++ b/06-osm-platform-configuration.md @@ -228,7 +228,7 @@ DynPaC needs to first be installed as an application in the ONOS controller mana When adding OpenStack VIMs to be used with the DynPaC OSM WIM, the following needs to be passed to the `--config` parameter of the `osm vim-create` command: -```yaml +```bash --config '{"user_domain_name": "", "project_domain_name": "", "dataplane_physical_net": "", "external_connections": [{"condition": {"provider:physical_network": "", "provider:network_type": "vlan"}, "vim_external_port": {"switch": "", "port": ""}}]}' ``` diff --git a/12-osm-nbi.md b/12-osm-nbi.md index 5ae315a7ae470162f50b5a1343eae524b697515d..f19c48598c7929947ccc804986cb549460b7967a 100644 --- a/12-osm-nbi.md +++ b/12-osm-nbi.md @@ -456,7 +456,16 @@ primitive_params: dict # Optional. Maps [NSD.ns-configuration or VNFD.vnf-conf - Example of content: ```json -'{scaleType: SCALE_VNF, scaleVnfData: {scaleVnfType: SCALE_OUT, scaleByStepData: {member-vnf-index: , scaling-group-descriptor: } } }' # Use SCALE_IN instead of SCALE OUT depending of desired type. 
+{ + scaleType: SCALE_VNF, + scaleVnfData: { + scaleVnfType: SCALE_OUT|SCALE_IN, + scaleByStepData: { + member-vnf-index: , + scaling-group-descriptor: + } + } +} ``` `/nslcm/v1/ns_instances/nsi_lcm_op_occs`. (rbac: `ns_instances:opps`) @@ -524,20 +533,18 @@ NetSlice Instance Lifecycle Management - POST: (rbac: `slice_instances:content:post`) (Asynchronous). Creates and Instantiate a Network Slice Instance. It returns the `netsliceInstanceId` in the response header `'Location'`. Example of request content: ```yaml - nstId: name of the Network Slice Template #mandatory - nsiName: name of the Network Slice Instance # mandatory - vimAccountId: internal-id # mandatory - ssh_keys: comma separated list of keys to inject to vnfs - nsiDescription: description of the Network Slice Instance - additionalParamsForNsi: {param: value, ...} - netslice-subnet: [ Same content as section #NSLCM_Details /nslcm/v1/ns_instances_content - ], - netslice-vld: [ - name: TEXT, - vim-network-name: TEXT or DICT with the name for each vim account: {vimAccountId: network-name, ...}, - vim-network-id: TEXT or DICT with the id for each vim account {vimAccountId: network-id}, - ip-profile: Profile of the vld - ] +nstId: name of the Network Slice Template #mandatory +nsiName: name of the Network Slice Instance # mandatory +vimAccountId: internal-id # mandatory +ssh_keys: comma separated list of keys to inject to vnfs +nsiDescription: description of the Network Slice Instance +additionalParamsForNsi: {param: value, ...} +netslice-subnet: [ Same content as section #NSLCM_Details /nslcm/v1/ns_instances_content ], +netslice-vld: +- name: TEXT, + vim-network-name: TEXT or DICT with the name for each vim account: {vimAccountId: network-name, ...}, + vim-network-id: TEXT or DICT with the id for each vim account {vimAccountId: network-id}, + ip-profile: Profile of the vld ``` `/nsilcm/v1/netslice_instances_content/`. 
(rbac: `slice_instances:id`) diff --git a/13-openvim-installation.md b/13-openvim-installation.md index 54aad45ca6e46b9785c59af6cda40d9a0ab87178..4fd872047b8efdf011a5f344f3492d7ce678a019 100644 --- a/13-openvim-installation.md +++ b/13-openvim-installation.md @@ -780,7 +780,7 @@ openvim host-add /opt/openvim/test/hosts/host-example3.yaml openvim host-list #-v,-vv,-vvv for verbosity levels ``` -In `normal` or `host only` mode, the process is a bit more complex. First, you need to configure appropriately the host following these [guidelines](10-01-openvim-compute-install.md). The current process is manual, although we are working on an automated process. For the moment, follow these instructions: +In `normal` or `host only` mode, the process is a bit more complex. First, you need to configure the host appropriately, following these [guidelines](#setting-up-compute-nodes-for-openvim). The current process is manual, although we are working on an automated process. For the moment, follow these instructions: ```bash #copy /opt/openvim/scripts/host-add.sh and run at compute host for gather all the information diff --git a/Dockerfile b/Dockerfile new file mode 100644 index 0000000000000000000000000000000000000000..fad35e92809a4693270b997fd67abb7af5af87da --- /dev/null +++ b/Dockerfile @@ -0,0 +1,5 @@ +FROM python:alpine3.7 +COPY . /osm-doc +WORKDIR /osm-doc +RUN pip install -r requirements.txt +CMD [ "python", "./my_script.py" ] diff --git a/Makefile b/Makefile new file mode 100644 index 0000000000000000000000000000000000000000..d4bb2cbb9eddb1bb1b4f366623044af8e4830919 --- /dev/null +++ b/Makefile @@ -0,0 +1,20 @@ +# Minimal makefile for Sphinx documentation +# + +# You can set these variables from the command line, and also +# from the environment for the first two. +SPHINXOPTS ?= +SPHINXBUILD ?= sphinx-build +SOURCEDIR = . +BUILDDIR = _build + +# Put it first so that "make" without argument is like "make help". 
+help: + @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) + +.PHONY: help Makefile + +# Catch-all target: route all unknown targets to Sphinx using the new +# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS). +%: Makefile + @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) diff --git a/conf.py b/conf.py new file mode 100644 index 0000000000000000000000000000000000000000..53879d276b523554276ea0ae19729c2a05fb3c00 --- /dev/null +++ b/conf.py @@ -0,0 +1,73 @@ +# Configuration file for the Sphinx documentation builder. +# +# This file only contains a selection of the most common options. For a full +# list see the documentation: +# https://www.sphinx-doc.org/en/master/usage/configuration.html + +# -- Path setup -------------------------------------------------------------- + +# If extensions (or modules to document with autodoc) are in another directory, +# add these directories to sys.path here. If the directory is relative to the +# documentation root, use os.path.abspath to make it absolute, like shown here. +# +# import os +# import sys +# sys.path.insert(0, os.path.abspath('.')) + + +# -- Project information ----------------------------------------------------- + +project = 'Open Source MANO' +copyright = '2019, ETSI OSM' +author = 'ETSI OSM' + +# The full version, including alpha/beta/rc tags +release = '6.0' + + +# -- General configuration --------------------------------------------------- + +# Add any Sphinx extension module names here, as strings. They can be +# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom +# ones. +extensions = ['sphinx.ext.autodoc', + 'sphinx.ext.doctest', + 'sphinx.ext.todo', + 'sphinx.ext.viewcode', + 'sphinx.ext.githubpages', + 'recommonmark', +] + +source_suffix = { + '.rst': 'restructuredtext', + '.txt': 'markdown', + '.md': 'markdown', +} + +# The master toctree document. 
+master_doc = 'index' + +# Add any paths that contain templates here, relative to this directory. +templates_path = ['_templates'] + +# List of patterns, relative to source directory, that match files and +# directories to ignore when looking for source files. +# This pattern also affects html_static_path and html_extra_path. +exclude_patterns = ['_build', 'TO-BE-MOVED-TO-OTHER-REPOS', 'Thumbs.db', '.DS_Store', + 'navigation.md', 'index.md', 'requirements.txt'] + + +# -- Options for HTML output ------------------------------------------------- + +# The theme to use for HTML and HTML Help pages. See the documentation for +# a list of builtin themes. +# +#html_theme = 'alabaster' +#html_theme = 'pyramid' +#html_theme = 'bizstyle' +html_theme = 'sphinx_rtd_theme' + +# Add any paths that contain custom static files (such as style sheets) here, +# relative to this directory. They are copied after the builtin static files, +# so a file named "default.css" will overwrite the builtin "default.css". +html_static_path = ['_static'] diff --git a/index.rst b/index.rst new file mode 100644 index 0000000000000000000000000000000000000000..545517abee7db999290332438504aed650a52374 --- /dev/null +++ b/index.rst @@ -0,0 +1,30 @@ +.. Open Source MANO documentation master file, created by + sphinx-quickstart on Wed Nov 20 23:11:37 2019. + You can adapt this file completely to your liking, but it should at least + contain the root `toctree` directive. + +Welcome to Open Source MANO's documentation! +============================================ + +.. 
toctree:: + :numbered: + :maxdepth: 2 + :caption: Table of Contents + :name: mastertoc + :titlesonly: + + 01-quickstart.md + 02-osm-architecture-and-functions.md + 03-installing-osm.md + 04-vim-setup.md + 05-osm-usage.md + 06-osm-platform-configuration.md + 07-what-to-read-next.md + 08-how-to-contribute-to-docs.md + 09-troubleshooting.md + 10-osm-client-commands-reference.md + 11-osm-im.md + 12-osm-nbi.md + 13-openvim-installation.md + 14-tests-for-vim-validation.md + diff --git a/make.bat b/make.bat new file mode 100644 index 0000000000000000000000000000000000000000..2119f51099bf37e4fdb6071dce9f451ea44c62dd --- /dev/null +++ b/make.bat @@ -0,0 +1,35 @@ +@ECHO OFF + +pushd %~dp0 + +REM Command file for Sphinx documentation + +if "%SPHINXBUILD%" == "" ( + set SPHINXBUILD=sphinx-build +) +set SOURCEDIR=. +set BUILDDIR=_build + +if "%1" == "" goto help + +%SPHINXBUILD% >NUL 2>NUL +if errorlevel 9009 ( + echo. + echo.The 'sphinx-build' command was not found. Make sure you have Sphinx + echo.installed, then set the SPHINXBUILD environment variable to point + echo.to the full path of the 'sphinx-build' executable. Alternatively you + echo.may add the Sphinx directory to PATH. + echo. + echo.If you don't have Sphinx installed, grab it from + echo.http://sphinx-doc.org/ + exit /b 1 +) + +%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O% +goto end + +:help +%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O% + +:end +popd diff --git a/requirements.txt b/requirements.txt new file mode 100644 index 0000000000000000000000000000000000000000..315fad413cf67be1c373fb8fcddfe585fc965ae1 --- /dev/null +++ b/requirements.txt @@ -0,0 +1,4 @@ +sphinx +sphinx_rtd_theme +sphinxcontrib-versioning +recommonmark