5. OSM Usage

5.1. Deploying your first Network Service

In this example we will deploy the following Network Service, consisting of two simple VNFs based on CirrOS connected by a simple VLD.

NS with 2 CirrOS VNF

Before going on, download the required VNF and NS packages from this URL: https://osm-download.etsi.org/ftp/osm-3.0-three/examples/cirros_2vnf_ns/

5.1.1. Onboarding a VNF

The onboarding of a VNF in OSM involves preparing and adding the corresponding VNF package to the system. This process also assumes, as a pre-condition, that the corresponding VM images are available in the VIM(s) where the VNF will be instantiated.

5.1.1.1. Uploading VM image(s) to the VIM(s)

In this example, only a vanilla CirrOS 0.3.4 image is needed. It can be obtained from the following link: http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img

If the image is not available, you will need to upload it to the VIM. Instructions differ from one VIM to another (please check the reference documentation for your VIM type).

For instance, this is the OpenStack command for uploading images:

openstack image create --file="./cirros-0.3.4-x86_64-disk.img" --container-format=bare --disk-format=qcow2 cirros034

And this is the equivalent command for OpenVIM:

#copy your image to the NFS shared folder (e.g. /mnt/openvim-nfs)
cp ./cirros-0.3.4-x86_64-disk.img /mnt/openvim-nfs/
openvim image-create --name cirros034 --path /mnt/openvim-nfs/cirros-0.3.4-x86_64-disk.img

5.1.1.2. Onboarding a VNF Package

  • From the UI:

    • Go to ‘VNF Packages’ on the ‘Packages’ menu to the left

    • Drag and drop the VNF package file cirros_vnf.tar.gz in the importing area.

Onboarding a VNF

  • From OSM client:

osm vnfd-create cirros_vnf.tar.gz
osm vnfd-list

5.1.2. Onboarding a NS Package

  • From the UI:

    • Go to ‘NS Packages’ on the ‘Packages’ menu to the left

    • Drag and drop the NS package file cirros_2vnf_ns.tar.gz in the importing area.

Onboarding a NS

  • From OSM client:

osm nsd-create cirros_2vnf_ns.tar.gz
osm nsd-list

5.1.3. Instantiating the NS

5.1.3.1. Instantiating a NS from the UI

  • Go to ‘NS Packages’ on the ‘Packages’ menu to the left

  • Next to the NS descriptor to be instantiated, click on the ‘Instantiate NS’ button.

Instantiating a NS (assets/600px-Nsd_list.png)

  • Fill in the form, adding at least a name, description and selecting the VIM:

Instantiating a NS (assets/600px-New_ns.png)

5.1.3.2. Instantiating a NS from the OSM client

osm ns-create --nsd_name cirros_2vnf_ns --ns_name <ns-instance-name> --vim_account <vim-target-name>
osm ns-list

5.2. Advanced instantiation: using instantiation parameters

OSM allows the parametrization of NSs or NSIs upon instantiation (Day-0 and Day-1), so that the user can easily decide on the key parameters of the service without needing to change the original set of validated packages.

Thus, when creating a NS instance, it is possible to pass instantiation parameters to OSM using the --config option of the client or the config parameter of the UI. In this section we will illustrate, through some of the existing examples, how to specify those parameters using the OSM client. Since this is one of the most powerful features of OSM, this section is intended to provide a thorough overview of the functionality with practical use cases.
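
If the set of parameters grows large, the same content can be kept in a YAML file and passed with the client's --config_file option instead of an inline string (if your client version supports it; the nsi-create help shown later in this section documents the analogous option). A minimal sketch, where params.yaml is an illustrative file name:

# params.yaml holds the same structure normally passed inline with --config
cat > params.yaml <<EOF
vld:
-   name: mgmtnet
    vim-network-name: mgmt
EOF
osm ns-create --nsd_name cirros_2vnf_ns --ns_name my-ns --vim_account <vim-target-name> --config_file params.yaml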

5.2.1. Specify a VIM network name for a NS VLD

In a generic way, the mapping can be specified in the following way, where vldnet is the name of the network in the NS descriptor and netVIM1 is the existing VIM network that you want to use:

--config '{vld: [ {name: vldnet, vim-network-name: netVIM1} ] }'

You can try it using one of the examples of the hackfest (descriptors: hackfest-basic_vnfd, hackfest-basic_nsd; images: ubuntu1604; presentation: creating a basic VNF and NS) in the following way:

osm ns-create --ns_name hf-basic --nsd_name hackfest-basic_nsd --vim_account openstack1 --config '{vld: [ {name: mgmtnet, vim-network-name: mgmt} ] }'

5.2.2. Specify a VIM network name for an internal VLD of a VNF

In this scenario, the mapping can be specified in the following way, where "1" is the member vnf index of the constituent vnf in the NS descriptor, internal is the name of internal-vld in the VNF descriptor and netVIM1 is the VIM network that you want to use:

--config '{vnf: [ {member-vnf-index: "1", internal-vld: [ {name: internal, vim-network-name: netVIM1} ] } ] }'

TODO: update example with latest Hackfest

You can try it using one of the examples of the hackfest (descriptors: hackfest2-vnf, hackfest2-ns; images: ubuntu1604; presentation: modeling multi-VDU VNF) in the following way:

osm ns-create --ns_name hf2 --nsd_name hackfest2-ns --vim_account openstack1  --config '{vnf: [ {member-vnf-index: "1", internal-vld: [ {name: internal, vim-network-name: mgmt} ] } ] }'

5.2.3. Specify IP profile information and IP for a NS VLD

In a generic way, the mapping can be specified in the following way, where datanet is the name of the network in the NS descriptor, ip-profile is where you have to fill the associated parameters from the data model (NS data model), and vnfd-connection-point-ref is the reference to the connection point:

--config '{vld: [ {name: datanet, ip-profile: {...}, vnfd-connection-point-ref: {...} } ] }'

TODO: update example with latest Hackfest

You can try it using one of the examples of the hackfest (descriptors: hackfest2-vnf, hackfest2-ns; images: ubuntu1604; presentation: modeling multi-VDU VNF) in the following way:

osm ns-create --ns_name hf2 --nsd_name hackfest2-ns --vim_account openstack1 --config '{vld: [ {name: datanet, ip-profile: {ip-version: ipv4 ,subnet-address: "192.168.100.0/24", gateway-address: "0.0.0.0", dns-server: [{address: "8.8.8.8"}],dhcp-params: {count: 100, start-address: "192.168.100.20", enabled: true}}, vnfd-connection-point-ref: [ {member-vnf-index-ref: "1", vnfd-connection-point-ref: vnf-data, ip-address: "192.168.100.17"}]}]}'

5.2.4. Specify IP profile information for an internal VLD of a VNF

In this scenario, the mapping can be specified in the following way, where "1" is the member vnf index of the constituent vnf in the NS descriptor, internal is the name of internal-vld in the VNF descriptor and ip-profile is where you have to fill the associated parameters from the data model (VNF data model):

--config '{vnf: [ {member-vnf-index: "1", internal-vld: [ {name: internal, ip-profile: {...} } ] } ] }'

TODO: update example with latest Hackfest

You can try it using one of the examples of the hackfest (descriptors: hackfest2-vnf, hackfest2-ns; images: ubuntu1604; presentation: modeling multi-VDU VNF) in the following way:

osm ns-create --ns_name hf2 --nsd_name hackfest2-ns --vim_account openstack1 --config '{vnf: [ {member-vnf-index: "1", internal-vld: [ {name: internal, ip-profile:  {ip-version: ipv4 ,subnet-address: "192.168.100.0/24", gateway-address: "0.0.0.0", dns-server: [{address: "8.8.8.8"}] ,dhcp-params: {count: 100, start-address: "192.168.100.20", enabled: true}}}]}]} '

5.2.5. Specify IP address and/or MAC address for an interface

5.2.5.1. Specify IP address for an interface

In this scenario, the mapping can be specified in the following way, where "1" is the member VNF index of the constituent VNF in the NS descriptor, ‘internal’ is the name of the internal-vld in the VNF descriptor, ip-profile is where you have to fill the associated parameters from the data model (VNF data model), id1 is the internal-connection-point ID, and a.b.c.d is the IP address to assign in this scenario:

--config '{vnf: [ {member-vnf-index: "1", internal-vld: [ {name: internal, ip-profile: {...}, internal-connection-point: [{id-ref: id1, ip-address: "a.b.c.d"}] } ] } ] }'

TODO: update example with latest Hackfest

You can try it using one of the examples of the hackfest (descriptors: hackfest2-vnf, hackfest2-ns; images: ubuntu1604; presentation: modeling multi-VDU VNF) in the following way:

 osm ns-create --ns_name hf2 --nsd_name hackfest2-ns --vim_account ost4 --config '{vnf: [ {member-vnf-index: "1", internal-vld: [ {name: internal, ip-profile: {ip-version: ipv4 ,subnet-address: "192.168.100.0/24", gateway-address: "0.0.0.0", dns-server: [{address: "8.8.8.8"}] ,dhcp-params: {count: 100, start-address: "192.168.100.20", enabled: true}}, internal-connection-point: [{id-ref: mgmtVM-internal, ip-address: "192.168.100.3"}]}]}]}'

5.2.5.2. Specify MAC address for an interface

In this scenario, the mapping can be specified in the following way, where "1" is the member VNF index of the constituent VNF in the NS descriptor, id1 is the ID of the VDU in the VNF descriptor and interf1 is the name of the interface to which you want to add the MAC address:

--config '{vnf: [ {member-vnf-index: "1", vdu: [ {id: id1, interface: [{name: interf1, mac-address: "aa:bb:cc:dd:ee:ff" }]} ] } ] } '

TODO: update example with latest Hackfest

You can try it using one of the examples of the hackfest (descriptors: hackfest1-vnf, hackfest1-ns; images: ubuntu1604, presentation: creating a basic VNF and NS) in the following way:

osm ns-create --ns_name hf12 --nsd_name hackfest1-ns --vim_account openstack1 --config '{vnf: [ {member-vnf-index: "1", vdu: [ {id: hackfest1VM, interface: [{name: vdu-eth0, mac-address: "52:33:44:55:66:21"}]} ] } ] } '

5.2.5.3. Specify IP address and MAC address for an interface

In the following scenario, we will bring together the two previous cases.

TODO: update example with latest Hackfest

You can try it using one of the examples of the hackfest (descriptors: hackfest2-vnf, hackfest2-ns; images: ubuntu1604; presentation: modeling multi-VDU VNF) in the following way:

osm ns-create --ns_name hf12 --nsd_name hackfest2-ns --vim_account ost4 --config '{vnf: [ {member-vnf-index: "1", internal-vld: [ {name: internal , ip-profile: {ip-version: ipv4, subnet-address: "192.168.100.0/24", gateway-address: "0.0.0.0", dns-server: [{address: "8.8.8.8"}] , dhcp-params: {count: 100, start-address: "192.168.100.20", enabled: true} }, internal-connection-point: [ {id-ref: mgmtVM-internal, ip-address: "192.168.100.3"} ] }, ], vdu: [ {id: mgmtVM, interface: [{name: mgmtVM-eth0, mac-address: "52:33:44:55:66:21"}]} ] } ] } '

5.2.6. Force floating IP address for an interface

In a generic way, the mapping can be specified in the following way, where id1 is the ID of the VDU in the VNF descriptor and interf1 is the name of the interface:

--config '{vnf: [ {member-vnf-index: "1", vdu: [ {id: id1, interface: [{name: interf1, floating-ip-required: True }]} ] } ] } '

TODO: update example with latest Hackfest

You can try it using one of the examples of the hackfest (descriptors: hackfest2-vnf, hackfest2-ns; images: ubuntu1604; presentation: modeling multi-VDU VNF) in the following way:

osm ns-create --ns_name hf2 --nsd_name hackfest2-ns --vim_account openstack1 --config '{vnf: [ {member-vnf-index: "1", vdu:[ {id: mgmtVM, interface: [{name: mgmtVM-eth0, floating-ip-required: True }]} ] } ] } '

Make sure that the network specified in vim-network-name of the NS package is reachable from outside the VIM; otherwise the floating-ip-required parameter cannot take effect.
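
On OpenStack, you can verify this beforehand by listing the external networks. A quick sketch using the standard OpenStack CLI:

# List external networks in the VIM; the network referenced by
# vim-network-name should appear here
openstack network list --external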

5.2.7. Multi-site deployments (specifying different VIM accounts for different VNFs)

In this scenario, the mapping can be specified in the following way, where "1" and "2" are the member vnf index of the constituent vnfs in the NS descriptor, vim1 and vim2 are the names of vim accounts and netVIM1 and netVIM2 are the VIM networks that you want to use:

--config '{vnf: [ {member-vnf-index: "1", vim_account: vim1}, {member-vnf-index: "2", vim_account: vim2} ], vld: [ {name: datanet, vim-network-name: {vim1: netVIM1, vim2: netVIM2} } ] }'
 # NOTE: From Release SIX (current master), add 'wim_account: False' (inside --config) to avoid WIM network connectivity if you do not have a WIM in your system

TODO: update example with latest Hackfest

You can try it using one of the examples of the hackfest (descriptors: hackfest2-vnf, hackfest2-ns; images: ubuntu1604; presentation: modeling multi-VDU VNF) in the following way:

osm ns-create --ns_name hf12 --nsd_name hackfest2-ns --vim_account openstack1 --config '{vnf: [ {member-vnf-index: "1", vim_account: openstack1}, {member-vnf-index: "2", vim_account: openstack3} ], vld: [ {name: mgmtnet, vim-network-name: {openstack1: mgmt, openstack3: mgmt} } ] }'

5.2.8. Specifying a volume ID for a VNF volume

In a generic way, the mapping can be specified in the following way, where VM1 is the name of the VDU, Storage1 is the volume name in the VNF descriptor and 05301095-d7ee-41dd-b520-e8ca08d18a55 is the volume ID:

--config '{vnf: [ {member-vnf-index: "1", vdu: [ {id: VM1, volume: [ {name: Storage1, vim-volume-id: 05301095-d7ee-41dd-b520-e8ca08d18a55} ] } ] } ] }'

TODO: update example with latest Hackfest

You can try it using one of the examples of the hackfest (descriptors: hackfest1-vnf, hackfest1-ns; images: ubuntu1604, presentation: creating a basic VNF and NS) in the following way:

Using the previous hackfest example, and following the VNF data model, you will add the following to the VNF descriptor:

     volumes:
        - name: Storage1
          size: 'Size of the volume'

Then:

osm ns-create --ns_name h1 --nsd_name hackfest1-ns --vim_account openstack1 --config '{vnf: [ {member-vnf-index: "1", vdu: [ {id: hackfest1VM, volume: [ {name: Storage1, vim-volume-id: 8ab156fd-0f8e-4e01-b434-a0fce63ce1cf} ] } ] } ] }'

5.2.9. Adding additional parameters

Since OSM Release SIX, additional user parameters can be added; they can be consumed by vdu:cloud-init (in Jinja2 format) and/or by vnf-configuration primitives (enclosed by <>). Here is an example of a VNF descriptor that uses two parameters, called touch_filename and touch_filename2.

vnfd:
    ...
    vnf-configuration:
        config-primitive:
        -   name: touch
            parameter:
            -   data-type: STRING
                default-value: <touch_filename2>
                name: filename
        initial-config-primitive:
        -   name: config
            parameter:
            -   name: ssh-hostname
                value: <rw_mgmt_ip>  # this parameter is internal
            -   name: ssh-username
                value: ubuntu
            -   name: ssh-password
                value: osm4u
            seq: '1'
        -   name: touch
            parameter:
            -   name: filename
                value: <touch_filename>
            seq: '2'

And they can be provided with:

--config '{additionalParamsForVnf: [{member-vnf-index: "1", additionalParams: {touch_filename: your-value,  touch_filename2: your-value2}}]}'
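
On the cloud-init side, the user parameters are rendered through Jinja2, as noted above. A minimal sketch of a cloud-init file inside the VNF package that consumes one of the parameters (file name and commands are illustrative):

#cloud-config
# 'touch_filename' is replaced by OSM with the value passed in additionalParams
runcmd:
-   touch /home/ubuntu/{{ touch_filename }}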

5.3. Understanding Day-1 and Day-2 Operations

VNF configuration is done in three “days”:

  • Day-0: The machine gets ready to be managed (e.g. import ssh-keys, create users/pass, network configuration, etc.)

  • Day-1: The machine gets configured for providing services (e.g.: Install packages, edit config files, execute commands, etc.)

  • Day-2: The machine configuration and management is updated (e.g.: Do on-demand actions, like dump logs, backup databases, update users etc.)

In OSM, Day-0 is usually covered by cloud-init, as it just implies basic configurations.
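
As a reference, a typical Day-0 cloud-init file covers exactly this kind of basic setup. A minimal sketch (host name, user name and key are placeholders):

#cloud-config
hostname: myvnf
users:
-   name: ubuntu
    ssh-authorized-keys:
    -   <public-ssh-key>
    sudo: ALL=(ALL) NOPASSWD:ALL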

Day-1 and Day-2 are both managed by the VCA (VNF Configuration & Abstraction) module, which consists of a Juju Controller that interacts with VNFs through “charms”, a generic set of scripts for deploying and operating software which can be adapted to any use case.

There are two types of charms:

  • Native charms: the set of scripts run inside the VNF components. This kind of charm is new in Release 7.

  • Proxy charms: the set of scripts run in LXC containers in an OSM-managed machine (which could be where OSM resides), which use ssh or other methods to get into the VNF instances and configure them.

OSM Proxy Charms

These charms can run with three scopes:

  • VDU: running a per-vdu charm, with individual actions for each.

  • VNF: running globally for the VNF, for the management VDU that represents it.

  • NS: running for the whole NS, after VNFs have been configured, to handle interactions between them.

For detailed instructions on how to add cloud-init or charms to your VNF, check the cloud-init and charms references in the OSM documentation.

Furthermore, you can find a good explanation and examples in this presentation

5.4. Monitoring and autoscaling

5.4.1. Performance Management

5.4.1.1. VNF Metrics Collection

OSM MON features a “mon-collector” module which will collect metrics whenever specified at the descriptor level. For metrics to be collected, they have to exist first at any of these two levels:

  • NFVI - made available by VIM’s Telemetry System

  • VNF - made available by OSM VCA (Juju Metrics)

Reference diagram:

OSM Performance Management Reference Diagram

5.4.1.1.1. VIM Metrics

For VIM metrics to be collected, your VIM should support a Telemetry system. As of Release 7.0, metric collection works with:

  • OpenStack VIM legacy or Gnocchi-based telemetry services.

  • VMware vCloud Director with vRealizeOperations.

The next step is to activate metric collection in your VNFDs. Every metric to be collected from the VIM for each VDU has to be described first at the VDU level, and then at the VNF level. For example:

vdu:
-   id: vdu1
    ...
    monitoring-param:
    -   id: metric_vdu1_cpu
        nfvi-metric: cpu_utilization
    -   id: metric_vdu1_memory
        nfvi-metric: average_memory_utilization
...
monitoring-param:
-   id: metric_vim_vnf1_cpu
    name: metric_vim_vnf1_cpu
    aggregation-type: AVERAGE
    vdu-monitoring-param:
      vdu-ref: vdu1
      vdu-monitoring-param-ref: metric_vdu1_cpu
-   id: metric_vim_vnf1_memory
    name: metric_vim_vnf1_memory
    aggregation-type: AVERAGE
    vdu-monitoring-param:
      vdu-ref: vdu1
      vdu-monitoring-param-ref: metric_vdu1_memory

As you can see, a list of “NFVI metrics” is defined first at the VDU level, each entry containing an ID and the corresponding normalized metric name (in this case, cpu_utilization and average_memory_utilization). Then, at the VNF level, a list of monitoring-params refers to them, each with an ID, name, aggregation-type and its source (vdu-monitoring-param in this case).

5.4.1.1.1.1. Additional notes
  • Available attributes and values can be directly explored at the OSM Information Model

  • A complete VNFD example can be downloaded from here.

  • Normalized metric names are: cpu_utilization, average_memory_utilization, disk_read_ops, disk_write_ops, disk_read_bytes, disk_write_bytes, packets_received, packets_sent, packets_out_dropped, packets_in_dropped

5.4.1.1.1.2. OpenStack-specific notes

From Release SIX onwards, MON collects the last measure of the corresponding metric, so no further configuration (e.g. granularity) is needed.

5.4.1.1.1.3. VMware vCD specific notes

From Release SIX onwards, MON collects all the normalized metrics, with the following exceptions:

  • packets_in_dropped is not available and will always return 0.

  • packets_received cannot be measured. Instead the number of bytes received for all interfaces is returned.

  • packets_sent cannot be measured. Instead the number of bytes sent for all interfaces is returned.

The rolling average for vROPS metrics is always 5 minutes. The collection interval is also 5 minutes; it can be changed, but vROPS will still report the rolling average for the past 5 minutes, just updated according to the collection interval. See https://kb.vmware.com/s/article/67792 for more information.

Although it is not recommended, if a more frequent interval is desired, the following procedure can be used to change the collection interval:

  • Log into vROPS as an admin.

  • Navigate to Administration and expand Configuration.

  • Select Inventory Explorer.

  • Expand the Adapter Instances and select vCenter Server.

  • Edit the vCenter Server instance and expand the Advanced Settings.

  • Edit the Collection Interval (Minutes) value and set to the desired value.

  • Click OK to save the change.

5.4.1.1.2. VNF Metrics/Indicators

Metrics can also be collected directly from VNFs using VCA, through the Juju Metrics framework. A simple charm containing a metrics.yaml file at its root folder specifies the metrics to be collected and the associated command.

For example, the following metrics.yaml file collects three metrics from the VNF, called ‘users’, ‘load’ and ‘load_pct’:

metrics:
   users:
     type: gauge
     description: "# of users"
     command: who|wc -l
   load:
     type: gauge
     description: "5 minute load average"
     command: cat /proc/loadavg |awk '{print $1}'
   load_pct:
     type: gauge
     description: "1 minute load average percent"
     command: cat /proc/loadavg  | awk '{load_pct=$1*100.00} END {print load_pct}'

Please note that the granularity of this metric collection method is fixed to 5 minutes and cannot be changed at this point.

After metrics.yaml is available, there are two options for describing the metric collection in the VNFD:

5.4.1.1.2.1. 1) VNF-level VNF metrics
mgmt-interface:
 cp: vdu_mgmt  # important: set the mgmt VDU or CP for metrics collection
vnf-configuration:
  initial-config-primitive:
  ...
  juju:
    charm: testmetrics
  metrics:
    - name: load
    - name: load_pct
    - name: users  
...
monitoring-param:
-   id: metric_vim_vnf1_load
    name: metric_vim_vnf1_load
    aggregation-type: AVERAGE
    vnf-metric:
      vnf-metric-name-ref: load
-   id: metric_vim_vnf1_loadpct
    name: metric_vim_vnf1_loadpct
    aggregation-type: AVERAGE
    vnf-metric:
      vnf-metric-name-ref: load_pct

Additional notes:

  • Available attributes and values can be directly explored at the OSM Information Model

  • A complete VNFD example with VNF metrics collection (VNF-level) can be downloaded from here.

5.4.1.1.2.2. 2) VDU-level VNF metrics
vdu:
- id: vdu1
  ...
  interface:
  - ...
    mgmt-interface: true  # important: set the mgmt interface for metrics collection
    ...
  vdu-configuration:
    initial-config-primitive:
    ...
    juju:
      charm: testmetrics
    metrics:
      - name: load
      - name: load_pct
      - name: users  
...
monitoring-param:
-   id: metric_vim_vnf1_load
    name: metric_vim_vnf1_load
    aggregation-type: AVERAGE
    vdu-metric:
      vdu-ref: vdu1
      vdu-metric-name-ref: load
-   id: metric_vim_vnf1_loadpct
    name: metric_vim_vnf1_loadpct
    aggregation-type: AVERAGE
    vdu-metric:
      vdu-ref: vdu1
      vdu-metric-name-ref: load_pct

Additional notes:

  • Available attributes and values can be directly explored at the OSM Information Model

  • A complete VNFD example with VNF metrics collection (VDU-level) can be downloaded from here.

As with VIM metrics, a list of “metrics” is defined first, either at the VNF or VDU “configuration” level, each entry containing a name that comes from the metrics.yaml file. Then, at the VNF level, a list of monitoring-params refers to them, each with an ID, name, aggregation-type and its source, which can be a “vdu-metric” or a “vnf-metric” in this case.

5.4.1.2. Infrastructure Status Collection

OSM MON automatically collects “status metrics” for:

  • VIMs: for each VIM that OSM establishes contact with, a metric named osm_vim_status is stored in the TSDB.

  • VMs: for each VM of each VDU that OSM has instantiated, a metric named osm_vm_status is stored in the TSDB.

These metrics take the value “1” or “0” depending on the availability of the element.
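
Since these status metrics land in the Prometheus TSDB like any other OSM metric (see the retrieval section below), they can be checked with a simple instant query. A quick sketch using the standard OSM Prometheus port:

curl 'http://localhost:9091/api/v1/query?query=osm_vim_status'
curl 'http://localhost:9091/api/v1/query?query=osm_vm_status'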

5.4.1.3. System Metrics

OSM collects system-wide metrics directly using Prometheus exporters. The way these metrics are collected is highly dependent on how OSM was installed:

  • OSM on Kubernetes:

    • Components: Prometheus Operator Chart / other charts (MongoDB, MySQL and Kafka exporters)

    • Implements: Multiple Grafana dashboards for a comprehensive health check of the system.

  • OSM on Docker Swarm:

    • Components: Node exporter / cAdvisor exporter

    • Implements: Single Grafana dashboard with the most important system metrics.

The names under which these metrics are stored in Prometheus also depend on the installation; in both cases, Grafana dashboards showing these metrics are available by default. Please note that the Kubernetes installation requires the optional Monitoring stack.

Screenshot of OSM System Metrics at Grafana

5.4.1.4. Retrieving OSM metrics from Prometheus TSDB

Once the metrics are being collected, they are stored in the Prometheus Time-Series DB with an ‘osm_’ prefix, and there are a number of ways in which you can retrieve them.

5.4.1.4.1. 1) Visualizing metrics in Prometheus UI

Prometheus TSDB includes its own UI, which you can visit at http://[OSM_IP]:9091.

From there, you can:

  • Type any metric name (e.g. osm_cpu_utilization) in the ‘expression’ field and see its current value or a histogram.

  • Visit the Status –> Targets menu to monitor the connection status between Prometheus and MON (through mon-exporter).

Screenshot of OSM Prometheus UI

5.4.1.4.2. 2) Visualizing metrics in Grafana

Starting in Release 7, OSM includes its own Grafana installation by default (deprecating the former experimental pm_stack).

Access Grafana with its default credentials (admin / admin) at http://[OSM_IP_address]:3000. By clicking the ‘Manage’ option in the ‘Dashboards’ menu (to the left), you will find a sample dashboard containing two graphs for VIM metrics and two graphs for VNF metrics. You can easily change them or add more, as desired.

Screenshot of OSM Grafana UI

5.4.1.4.2.1. Dashboard Automation

Starting in Release 7, Grafana Dashboards are created by default in OSM. This is done by the “dashboarder” service in MON, which provisions Grafana following changes in the common DB.

Updates in           | Automates these dashboards
OSM installation     | System Metrics, Admin Project-scoped
OSM Projects         | Project-scoped
OSM Network Services | NS-scoped sample dashboard

5.4.1.4.3. 3) Querying metrics through OSM SOL005-based NBI

For collecting metrics through the NBI, the following URL format should be followed:

https://<host-ip>:<nbi-port>/osm/nspm/v1/pm_jobs/<project-id>/reports/<network-service-id>

Where:

  • <host-ip>: Is the machine where OSM is installed.

  • <nbi-port>: The NBI port, e.g. 9999

  • <project-id>: Currently it can be any string.

  • <network-service-id>: The NS ID obtained after instantiating the network service.

Please note that a token should be obtained first in order to query a metric. More information on this can be found in the OSM NBI documentation.
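
A sketch of the full exchange with curl, assuming the default admin credentials (adjust to your deployment):

# Obtain a token from the NBI
curl -k -X POST https://<host-ip>:9999/osm/admin/v1/tokens -H 'Content-Type: application/json' -H 'Accept: application/json' -d '{"username": "admin", "password": "admin"}'
# Use the returned token id to query the PM report
curl -k -H 'Authorization: Bearer <token-id>' -H 'Accept: application/json' https://<host-ip>:9999/osm/nspm/v1/pm_jobs/<project-id>/reports/<network-service-id>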

In response, you would get a list of the available VNF metrics, for example:

   performanceMetric: osm_cpu_utilization
   performanceValue:
       performanceValue:
           performanceValue: '0.9563615332000001'
           vduName: test_fet7912-2-ubuntuvnf2vdu1-1
           vnfMemberIndex: '2'
       timestamp: 1568977549.065

5.4.1.4.4. 4) Interacting with Prometheus directly through its API

The Prometheus HTTP API is always directly available to gather any metrics. A couple of examples are shown below:

Example with Date range query

curl 'http://localhost:9091/api/v1/query_range?query=osm_cpu_utilization&start=2018-12-03T14:10:00.000Z&end=2018-12-03T14:20:00.000Z&step=15s'

Example with Instant query

curl 'http://localhost:9091/api/v1/query?query=osm_cpu_utilization&time=2018-12-03T14:14:00.000Z'

Further examples and API calls can be found at the Prometheus HTTP API documentation.

5.4.1.4.5. 5) Interacting directly with MON Collector

The way Prometheus TSDB stores metrics is by periodically querying Prometheus ‘exporters’, which are set as ‘targets’. Exporters expose current metrics in a specific format that Prometheus can understand; more information can be found in the Prometheus documentation.

OSM MON features a “mon-exporter” module that exports current metrics through port 8000. Please note that, by default, this port is not exposed outside the OSM Docker network.

A tool that understands Prometheus ‘exporters’ (for example, Elastic Metricbeat) can be plugged in to integrate directly with “mon-exporter”. To get an idea of what metrics look like in this format, you can:

5.4.1.4.5.1. 1. Get into MON console
docker exec -ti osm_mon.1.[id] bash
5.4.1.4.5.2. 2. Install curl
apt -y install curl
5.4.1.4.5.3. 3. Use curl to get the current metrics list
curl localhost:8000

Please note that as long as the Prometheus container is up, it will continue retrieving and storing metrics in addition to any other tool/DB you connect to mon-exporter.

5.4.1.4.6. 6) Using your own TSDB

OSM MON integrates Prometheus through a plugin/backend model, so if desired, other backends can be developed. If interested in contributing with such option, you can ask for details at our Slack #service-assurance channel or through the OSM Tech mailing list.

5.4.2. Fault Management

Reference diagram:

Diagram of OSM FM and ELK Experimental add-ons

5.4.2.1. Basic functionality

5.4.2.1.1. Logs & Events

Logs can be monitored on a per-container basis via command line, like this:

docker logs <container id or name>

For example:

docker logs osm_lcm.1.tkb8yr6v762d28ird0edkunlv

Logs can also be found in the corresponding volume of the host filesystem: /var/lib/docker/containers/[container-id]/[container-id]-json.log

Furthermore, there are some important events flowing between components through the Kafka bus, which can be monitored on a per-topic basis by external tools.
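
For example, with kafkacat installed on a host that can reach the Kafka bus, you can consume a topic directly (a sketch; port 9092 and the 'ns' topic name are assumptions that depend on your deployment):

# Consume events from the 'ns' topic of the OSM Kafka bus
kafkacat -b <OSM_IP>:9092 -t ns -C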

5.4.2.1.2. Alarm Manager for Metrics

As of Release FIVE, MON includes a new module called ‘mon-evaluator’. The only use case supported today by this module is the configuration of alarms and evaluation of thresholds related to metrics, for the Policy Manager module (POL) to take actions such as auto-scaling.

Whenever a threshold is crossed and an alarm is triggered, the notification is generated by MON and put on the Kafka bus so that other components, like POL, can consume it. This event is today logged by both MON (which generates the notification) and POL (which consumes it for its auto-scaling or webhook actions).

By default, threshold evaluation occurs every 30 seconds. This value can be changed by setting an environment variable, for example:

docker service update --env-add OSMMON_EVALUATOR_INTERVAL=15 osm_mon
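
On a Kubernetes-based OSM installation, the equivalent would be setting the variable on the MON deployment; a sketch assuming the default 'osm' namespace and deployment name:

kubectl -n osm set env deployment/mon OSMMON_EVALUATOR_INTERVAL=15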

To configure alarms that send webhooks to a web service, add the following to the VNF descriptor:

vdu:
-   alarm:
    -   alarm-id: alarm-1
        operation: LT
        value: 20
        actions:
          alarm:
            - url: https://webhook.site/1111
          ok:
            - url: https://webhook.site/2222
          insufficient-data:
            - url: https://webhook.site/3333
        vnf-monitoring-param-ref: vnf_cpu_util

Regarding how to configure alarms through VNFDs for the auto-scaling use case, follow the auto-scaling documentation

5.4.2.2. Experimental functionality

An optional ‘OSM ELK’ stack is available to allow for events visualization, consisting of the following tools:

  • Elasticsearch - scalable search engine and event database.

  • Filebeat & Metricbeat - part of Elastic ‘beats’, which evolve the former Logstash component to provide generic logs and metrics collection, respectively.

  • Kibana - Graphical tool for exploring all the collected events and generating customized views and dashboards.

5.4.2.2.1. Enabling the OSM ELK Stack

If you want to install OSM along with the ELK stack, run the installer as follows:

./install_osm.sh --elk_stack

If you just want to add the ELK stack to an existing OSM installation, run the installer as follows:

 ./install_osm.sh -o elk_stack

This will install four additional Docker containers (Elasticsearch, Filebeat, Metricbeat and Kibana), as well as download a Docker image for an auxiliary tool named Curator (bobrik/curator).

If you need to remove it at some point in time, just run the following command:

docker stack rm osm_elk

If you need to deploy the stack again after being removed:

docker stack deploy -c /etc/osm/docker/osm_elk/docker-compose.yml osm_elk

IMPORTANT: As time passes and more events are generated in your system, and depending on your configured searches, views and dashboards, the Elasticsearch database can become very big, which may not be desirable in testing environments. To delete your data periodically, you can launch a Curator container that deletes the saved indexes, freeing the associated disk space.

For example, to delete all the data older than the last day:

docker run --rm --name curator --net host --entrypoint curator_cli bobrik/curator:5.5.4 --host localhost delete_indices --filter_list '[{"filtertype":"age","source":"creation_date","direction":"older","unit":"days","unit_count":1}]'

Or to delete the data older than 2 hours:

docker run --rm --name curator --net host --entrypoint curator_cli bobrik/curator:5.5.4 --host localhost delete_indices --filter_list '[{"filtertype":"age","source":"creation_date","direction":"older","unit":"hours","unit_count":2}]'
5.4.2.2.2. Testing the OSM ELK Stack
  1. Download the sample dashboards to your desktop from this link (right click, save link as): https://osm-download.etsi.org/ftp/osm-4.0-four/4th-hackfest/other/osm_kibana_dashboards.json

  2. Visit Kibana at http://[OSM_IP]:5601 and:

    1. Go to “Management” –> Saved Objects –> Import (select the downloaded file)

    2. Go to “Dashboard” and select the “OSM System Dashboard”, which connects to other three sub-dashboards (You may need to redefine “filebeat-*” as the default ‘index-pattern’ by selecting it, marking the star and revisiting the Dashboards)

    3. Metrics (from Metricbeat) and logs (from Filebeat) should appear at the corresponding visualizations.

OSM Kibana Sample Dashboard

5.4.3. Autoscaling

5.4.3.1. Reference diagram

The following diagram summarizes the feature:

Diagram explaining auto-scaling support

  • Scaling descriptors can be included and be tied to automatic reaction to VIM/VNF metric thresholds.

  • Supported metrics are both VIM and VNF metrics. More information about metrics collection can be found at the Performance Management documentation

  • An internal alarm manager has been added to MON through the ‘mon-evaluator’ module, so that both VIM and VNF metrics can also trigger threshold-violation alarms and scaling actions. More information about this module can be found at the Fault Management documentation

5.4.3.2. Scaling Descriptor

The scaling descriptor is part of a VNFD. As the example below shows, it mainly specifies:

  • An existing metric to be monitored, which should be pre-defined in the monitoring-param list (vnf-monitoring-param-ref).

  • The VDU to be scaled (vdu-id-ref) and the number of instances to scale per event (count)

  • The thresholds to monitor (scale-in/out-threshold)

  • The minimum and maximum number of scaled instances to produce.

  • The minimum time that must pass between scaling operations (cooldown-time)

scaling-group-descriptor:
-   name: "cpu_autoscaling_descriptor"
    min-instance-count: 0
    max-instance-count: 10
    scaling-policy:
    -   name: "cpu_scaling_policy"
        scaling-type: "automatic"
        cooldown-time: 120
        scaling-criteria:
        -   name: "cpu_autoscaling_criteria"
            scale-in-threshold: 20
            scale-in-relational-operation: "LT"
            scale-out-threshold: 80
            scale-out-relational-operation: "GT"
            vnf-monitoring-param-ref: "vnf01_cpu_util"
    vdu:
    -   count: 1
        vdu-id-ref: vdu01

5.4.3.3. Example

This will launch a Network Service formed by an HAProxy load balancer and an (autoscalable) Apache web server. Please check that:

  1. Your VIM has an accessible ‘public’ network and a management network (in this case called “PUBLIC” and “vnf-mgmt”)

  2. Your VIM has the ‘haproxy_ubuntu’ and ‘apache_ubuntu’ images, which can be found here

  3. You run the following command to match your VIM metrics telemetry system’s granularity, if different from 300s (recommended for this example: 60s or Gnocchi’s medium archive-policy):

docker service update --env-add OS_DEFAULT_GRANULARITY=60 osm_mon

Get the descriptors:

wget https://osm-download.etsi.org/ftp/osm-4.0-four/4th-hackfest/packages/webserver_vimmetric_autoscale_nsd.tar.gz
wget https://osm-download.etsi.org/ftp/osm-4.0-four/4th-hackfest/packages/webserver_vimmetric_autoscale_vnfd.tar.gz

Onboard them:

osm vnfd-create webserver_vimmetric_autoscale_vnfd.tar.gz
osm nsd-create webserver_vimmetric_autoscale_nsd.tar.gz

Launch the NS:

osm ns-create --ns_name web01 --nsd_name webserver_vimmetric_autoscale_ns --vim_account <VIM_ACCOUNT_NAME>|<VIM_ACCOUNT_ID>
osm ns-list
osm ns-show web01

Testing:

  1. To ensure the NS is working, visit the load balancer’s IP on the public network using a browser; the page should show an OSM logo and the active VDUs.

  2. To check metrics at Prometheus, visit http://[OSM_IP]:9091 and look for osm_cpu_utilization and osm_average_memory_utilization (initial values could take some minutes to appear depending on your telemetry system’s granularity).

  3. To check metrics at Grafana, just install the OSM preconfigured version (./install_osm.sh -o pm_stack) and visit http://[OSM_IP]:3000 (admin/admin), you will find a sample dashboard (the two top charts correspond to this example).

  4. To increase CPU in this example to auto-scale the web server, install Apache Bench in a client within reach (could be the OSM host) and run it towards test.php.

sudo apt install apache2-utils
ab -n 5000000 -c 2 http://[load-balancer-ip]/test.php
# This will stress CPU to 100% and trigger a scale-out operation in POL.
# In this test, scaling will usually go up to 3 web servers before HAProxy spreads the load to reach a normal CPU level (w/ 60s granularity, 180s cooldown)

Any of the VMs can be accessed through SSH for further monitoring (with htop, for example), and there is an HAProxy UI at http://[HAProxy_IP]:32700 (all credentials are osm/osm2018).

5.5. Using Network Slices

To better illustrate how network slicing works in OSM, it will be discussed in the context of a running example.

5.5.1. Resources

This network slicing example requires a set of resources (VNFs, NSs, NSTs), which are available at the following link.

5.5.2. Network Slice Template Diagram

The diagram below shows the Network Slice Template created for the example. As shown in the picture, three network slice subnets are connected by Virtual Link Descriptors (VLDs) through the connection points of the network services. We have a Virtual Link for management (slice_vld_mgmt) and two Virtual Links for data (slice_vld_data1 and slice_vld_data2). In the middle, we have a network slice subnet that interconnects the netslice subnets on both sides.

nst diagram

5.5.2.1. Virtual Network Functions

We use two VNFs for this example. The difference between them is the number of network interfaces available to create connections. While the slice_hackfest_middle_vnfd VNF has three interfaces (mgmt, data1, data2), the slice_hackfest_vnfd has only two (mgmt, data). The specifications vCPU (1), RAM (1GB), disk (10GB) and image name (‘US1604’) are the same in both VNFs.

vnfd

middle vnfd

5.5.2.2. Network Services

We use two network services in this example. They are differentiated by: 1) the number of interfaces they possess; 2) the VNF contained inside the network service; and 3) the number of VLDs: the slice_hackfest_nsd has two VLDs, one for data and the other for management, while the slice_hackfest_middle_nsd has three VLDs, one for management and the other two for data1 and data2.

The slice_hackfest_middle_nsd contains the slice_hackfest_middle_vnfd, and the slice_hackfest_nsd contains the slice_hackfest_vnfd.

The diagram below shows the slice_hackfest_nsd and slice_hackfest_middle_nsd, their connection points, VLDs and VNFs.

nsd

middle nsd

5.5.3. Creating a Network Slice Template (NST)

Based on the OSM information model for network slice templates, it is possible to start writing the YAML descriptor for the NST.

nst:
-   id: slice_hackfest_nst
    name: slice_hackfest_nst
    SNSSAI-identifier:
        slice-service-type: eMBB
    quality-of-service:
        id: 1

The snippet above contains the mandatory fields for the NST; the netslice-subnet and netslice-vld sections are described below. When we create an NST, the id identifies the Network Slice Template and the name is the name given to the NST. The required parameter SNSSAI-identifier indicates which kind of service runs inside this slice; OSM supports three slice-service-type values: enhanced mobile broadband (eMBB), ultra-reliable low-latency communications (URLLC) and massive machine-type communications (mMTC). Moreover, we add a quality-of-service parameter that is related to the 5G QoS Indicator (5QI).

The section netslice-subnet shown below is the place to allocate the network services that compose the slice. Each item of the netslice-subnet list has:

  1. An id to identify the netslice-subnet.

  2. An is-shared-nss option: a boolean flag that determines whether the NSS is shared among the Network Slice Instances that use this netslice subnet.

  3. An optional description.

  4. The nsd-ref is the reference to the Network Service descriptor that forms the netslice subnet.

 netslice-subnet:
    -   id: slice_hackfest_nsd_1
        is-shared-nss: false
        description: NetSlice Subnet (service) composed by 1 vnf with 2 cp
        nsd-ref: slice_hackfest_nsd

    -   id: slice_hackfest_nsd_2
        is-shared-nss: true
        description: NetSlice Subnet (service) composed by 1 vnf with 3 cp
        nsd-ref: slice_hackfest_middle_nsd

    -   id: slice_hackfest_nsd_3
        is-shared-nss: false
        description: NetSlice Subnet (service) composed by 1 vnf with 2 cp
        nsd-ref: slice_hackfest_nsd

Finally, the connections among the netslice-subnets are defined in the netslice-vld section, as shown below:

netslice-vld:
    -   id: slice_vld_mgmt
        name: slice_vld_mgmt
        type: ELAN
        mgmt-network: true
        nss-connection-point-ref:
        -   nss-ref: slice_hackfest_nsd_1
            nsd-connection-point-ref: nsd_cp_mgmt
        -   nss-ref: slice_hackfest_nsd_2
            nsd-connection-point-ref: nsd_cp_mgmt
        -   nss-ref: slice_hackfest_nsd_3
            nsd-connection-point-ref: nsd_cp_mgmt
    -   id: slice_vld_data1
        name: slice_vld_data1
        type: ELAN
        nss-connection-point-ref:
        -   nss-ref: slice_hackfest_nsd_1
            nsd-connection-point-ref: nsd_cp_data
        -   nss-ref: slice_hackfest_nsd_2
            nsd-connection-point-ref: nsd_cp_data1

Once the network slice template is ready, the resources need to be onboarded to OSM before uploading the network slice template itself. The following commands help you onboard packages to OSM:

  • VNF package:

    • List Virtual Network Functions Descriptors

      • osm vnfd-list

    • Upload the slice_hackfest_vnf package

      • osm vnfd-create slice_hackfest_vnf.tar.gz

    • Upload the slice_hackfest_middle_vnf package

      • osm vnfd-create slice_hackfest_middle_vnf.tar.gz

    • Show if slice_hackfest_vnf was uploaded correctly to OSM

      • osm vnfd-show slice_hackfest_vnfd

    • Show if slice_hackfest_middle_vnf was uploaded correctly to OSM

      • osm vnfd-show slice_hackfest_middle_vnfd

  • NS package:

    • List Network Service Descriptors

      • osm nsd-list

    • Upload the slice_hackfest_ns package

      • osm nsd-create slice_hackfest_ns.tar.gz

    • Upload the slice_hackfest_middle_ns package

      • osm nsd-create slice_hackfest_middle_ns.tar.gz

    • Show if slice_hackfest_nsd was uploaded correctly to OSM

      • osm nsd-show slice_hackfest_nsd

    • Show if slice_hackfest_middle_nsd was uploaded correctly to OSM

      • osm nsd-show slice_hackfest_middle_nsd

  • NST:

    • List network slice templates

      • osm nst-list

    • Upload the slice_hackfest_nst.yaml template

      • osm nst-create slice_hackfest_nst.yaml

    • Upload the slice_hackfest2_nst.yaml template

      • osm nst-create slice_hackfest2_nst.yaml

    • Show if slice_hackfest_nst was uploaded correctly to OSM

      • osm nst-show slice_hackfest_nst

    • Show if slice_hackfest2_nst was uploaded correctly to OSM

      • osm nst-show slice_hackfest2_nst

With all resources already available in OSM, it is possible to create the Network Slice Instance (NSI) using the slice_hackfest_nst. You can find below the help of the command to create a network slice instance:

osm nsi-create --help
Usage: osm nsi-create [OPTIONS]

  creates a new Network Slice Instance (NSI)

Options:
  --nsi_name TEXT     name of the Network Slice Instance
  --nst_name TEXT     name of the Network Slice Template
  --vim_account TEXT  default VIM account id or name for the deployment
  --ssh_keys TEXT     comma separated list of keys to inject to vnfs
  --config TEXT       Netslice specific yaml configuration:
                      netslice_subnet: [
                      id: TEXT, vim_account: TEXT,
                      vnf: [member-vnf-index: TEXT, vim_account: TEXT]
                      vld: [name: TEXT,
                            vim-network-name: TEXT or DICT with vim_account,
                            vim_net entries]
                      additionalParamsForNsi: {param: value, ...}
                      additionalParamsForsubnet: [{id: SUBNET_ID,
                      additionalParamsForNs: {},
                      additionalParamsForVnf: {}}]
                      ],
                      netslice-vld: [name: TEXT,
                      vim-network-name: TEXT or DICT with vim_account,
                      vim_net entries]
  --config_file TEXT  nsi specific yaml configuration file
  --wait              do not return the control immediately, but keep it
                      until the operation is completed, or timeout
  -h, --help          Show this message and exit.

To instantiate the network slice template use the following command:

osm nsi-create \
--nsi_name my_first_slice \
--nst_name slice_hackfest_nst \
--vim_account <replace_vim_account_name> \
--config 'netslice-vld: [{ "name": "slice_vld_mgmt", "vim-network-name": <replace_vim_external_network> }]'

Where:

  • --nsi_name is the name of the Network Slice Instance: my_first_slice

  • --nst_name is the name of the Network Slice Template: slice_hackfest_nst

  • --vim_account is the default VIM account id or name to be used by the NSI

  • --config is the configuration parameter used for the slice. For example, it is possible to attach the NS management network to an external network of the VIM to gain access to the VNFs deployed in the slice. In this case, the netslice-vld list contains the name of the VLD (slice_vld_mgmt) to be attached to the external network of the VIM through the vim-network-name key.

The commands to operate the slice are:

  • List Network Slice Instances

    • osm nsi-list

  • Delete Network Slice Instance

    • osm nsi-delete <nsi_name> or <nsi_id>

The result of the deployment in OpenStack looks like this:

Network Slice Instance

Network Slice Instance Openstack

The picture above shows three VNFs deployed in OpenStack, connected to the OpenStack management network osm-ext and also connected among themselves, following the VLDs described in the network slice template.

5.5.4. Sharing a Network Slice Subnet

To test the feature of sharing a network slice subnet, we create a new network slice template that uses the shared netslice subnet from the previous instantiation. The picture below shows the Network Slice Template.

Sharing network slice subnet

The network slice template used for sharing a network slice subnet is slice_hackfest2_nst.yaml and it is available in the resources section.

nst:
-   id: slice_hackfest2_nst
    name: slice_hackfest2_nst
    SNSSAI-identifier:
        slice-service-type: eMBB
    quality-of-service:
        id: 1

    netslice-subnet:
    -   id: slice_hackfest_nsd_2
        is-shared-nss: true
        description: NetSlice Subnet (service) composed by 1 vnf with 3 cp
        nsd-ref: slice_hackfest_middle_nsd
    -   id: slice_hackfest_nsd_3
        is-shared-nss: false
        description: NetSlice Subnet (service) composed by 1 vnf with 2 cp
        nsd-ref: slice_hackfest_nsd

    netslice-vld:
    -   id: slice_vld_mgmt
        name: slice_vld_mgmt
        type: ELAN
        mgmt-network: true
        nss-connection-point-ref:
        -   nss-ref: slice_hackfest_nsd_2
            nsd-connection-point-ref: nsd_cp_mgmt
        -   nss-ref: slice_hackfest_nsd_3
            nsd-connection-point-ref: nsd_cp_mgmt
    -   id: slice_vld_data2
        name: slice_vld_data2
        type: ELAN
        nss-connection-point-ref:
        -   nss-ref: slice_hackfest_nsd_2
            nsd-connection-point-ref: nsd_cp_data2
        -   nss-ref: slice_hackfest_nsd_3
            nsd-connection-point-ref: nsd_cp_data

The YAML above contains two netslice-subnets, one with the is-shared-nss flag set to true and the other with it set to false. The netslice-vlds connect the management interfaces of both NSSs, and connect the data2 connection point of the slice_hackfest_middle_nsd NSS with the slice_hackfest_nsd via nsd_cp_data.

To instantiate this network slice, we will use the same command used previously but changing the nst_name to slice_hackfest2_nst:

osm nsi-create \
--nsi_name my_shared_slice \
--nst_name slice_hackfest2_nst \
--vim_account <replace_vim_account_name> \
--config 'netslice-vld: [{ "name": "slice_vld_mgmt", "vim-network-name": <replace_vim_external_network> }]'

You can see the result of the instantiation in the picture below:

shared nsi

shared nsi openstack

Only one Network Slice Subnet was instantiated since the middle Network Slice Subnet is shared with this second NSI.

5.5.4.1. Result of deleting the Network Slice Instance 1

What happens to the shared Network Slice Subnet and the second Network Slice Instance if we delete the first Network Slice Instance?

With the command osm nsi-delete my_first_slice we can delete the first Network Slice Instance. The result is that the middle (shared) Network Slice Subnet now belongs to NSI2, so it is not deleted when NSI1 is deleted. All networks and services created for the middle NSS are kept. The picture below shows the result in OpenStack and the logical result of deleting NSI1:

nsi1 deletion

nsi1 deletion openstack

To remove the NSI2 run the command: osm nsi-delete my_shared_slice.

5.6. Using Kubernetes-based VNFs (KNFs)

From Release SEVEN, OSM supports Kubernetes-based Network Functions (KNFs). This feature unlocks more than 20,000 packages that can be deployed besides VNFs and PNFs. This section guides you through deploying your first KNF, from the different ways of installing a Kubernetes cluster to the selection and deployment of the package.

5.6.1. Kubernetes installation

The KNF feature requires an operative Kubernetes cluster, and there are several ways to get that cluster running. From the OSM perspective, the Kubernetes cluster is not an isolated element, but a technology that enables the deployment of microservices in a cloud-native way. To handle the networks and facilitate the connection to the infrastructure, the cluster has to be associated to a VIM. There is a special case where the Kubernetes cluster is installed in a baremetal environment without management of the networking part, but in general OSM considers that the Kubernetes cluster is located in a VIM.

For OSM you can use one of these three different ways to install your Kubernetes cluster:

  1. OSM Kubernetes cluster Network Service

  2. Self-managed Kubernetes cluster in a VIM

  3. Kubernetes baremetal installation

5.6.2. OSM Kubernetes requirements

After the Kubernetes installation is completed, you need to check that the following components are available in your cluster (a quick check is sketched after the list).

  1. Kubernetes Loadbalancer: to expose your KNFs to the network

  2. Kubernetes default Storageclass: to support persistent volumes.
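
A quick sanity check with kubectl (a sketch; the metallb-system namespace is an assumption that depends on which load balancer you installed):

# Verify that a default StorageClass exists (look for '(default)')
kubectl get storageclass
# Verify that the load balancer pods are running (e.g. MetalLB)
kubectl get pods -n metallb-system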

5.6.3. Adding kubernetes cluster to OSM

In order to test Kubernetes-based VNFs (KNFs), you require a K8s cluster connected to a network in the VIM (e.g. vim-net). If you have a baremetal installation of Kubernetes, you will need to add a VIM in order to add the Kubernetes cluster.

You will have to add the K8s cluster to OSM. For that purpose, you can use these instructions:

osm k8scluster-add --creds clusters/kubeconfig-cluster.yaml --version '1.15' --vim <VIM_NAME|VIM_ID> --description "My K8s cluster" --k8s-nets '{"net1": "vim-net"}' cluster
osm k8scluster-list
osm k8scluster-show cluster

The options used to add the cluster are the following:

  • --creds: Is the location of the kubeconfig file where you have the cluster credentials

  • --version: Current version of your Kubernetes cluster

  • --vim: The name of the VIM where the Kubernetes cluster is deployed

  • --description: Give a description to your Kubernetes cluster

  • --k8s-nets: A dictionary of cluster networks, where the key is an arbitrary name and the value is the name of the network in the VIM. In case your K8s cluster is not located in a VIM, you can use ‘{net1: null}’, as sketched below.
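
For instance, for a baremetal cluster without an associated VIM network, the command could look like this (a sketch with illustrative names):

osm k8scluster-add --creds kubeconfig.yaml --version '1.15' --vim <VIM_NAME> --description "Baremetal K8s cluster" --k8s-nets '{net1: null}' cluster-baremetal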

5.6.4. Adding repositories to OSM

You might need to add some repos from where to download helm charts required by the KNF:

osm repo-add --type helm-chart --description "Bitnami repo" bitnami https://charts.bitnami.com/bitnami
osm repo-add --type helm-chart --description "Cetic repo" cetic https://cetic.github.io/helm-charts
osm repo-add --type helm-chart --description "Elastic repo" elastic https://helm.elastic.co
osm repo-list
osm repo-show bitnami

5.6.5. KNF Service on-boarding

KNFs can be onboarded using Helm Charts or Juju Bundles. The following sections show an example with a Helm Chart and another with a Juju Bundle.

5.6.5.1. KNF Helm Chart

Once the cluster is attached to your OSM, you can work with KNFs in the same way as you do with any VNF. For instance, you can onboard the example below: a KNF consisting of a single Kubernetes deployment unit based on the OpenLDAP helm chart.

wget http://osm-download.etsi.org/ftp/Packages/hackfests/openldap_knf.tar.gz
wget http://osm-download.etsi.org/ftp/Packages/hackfests/openldap_ns.tar.gz
osm nfpkg-create openldap_knf.tar.gz
osm nspkg-create openldap_ns.tar.gz

You can instantiate two NS instances:

osm ns-create --ns_name ldap --nsd_name openldap_ns --vim_account <VIM_NAME|VIM_ID>
osm ns-create --ns_name ldap2 --nsd_name openldap_ns --vim_account <VIM_NAME|VIM_ID> --config '{additionalParamsForVnf: [{"member-vnf-index": "openldap", additionalParamsForKdu: [{ kdu_name: "ldap", "additionalParams": {"replicaCount": "2"}}]}]}'

Check in the cluster that pods are properly created:

  • The pods associated to ldap should be using version openldap:1.2.1 and have 1 replica

  • The pods associated to ldap2 should be using version openldap:1.2.1 and have 2 replicas

Now you can upgrade both NS instances:

osm ns-action ldap --vnf_name openldap --kdu_name ldap --action_name upgrade --params '{kdu_model: "stable/openldap:1.2.2"}'
osm ns-action ldap2 --vnf_name openldap --kdu_name ldap --action_name upgrade --params '{kdu_model: "stable/openldap:1.2.1", "replicaCount": "3"}'

Check that both operations are marked as completed:

osm ns-op-list ldap
osm ns-op-list ldap2

Check in the cluster that both actions took place:

  • The pods associated to ldap should be using version openldap:1.2.2

  • The pods associated to ldap2 should be using version openldap:1.2.1 and have 3 replicas

Rollback both NS instances:

osm ns-action ldap --vnf_name openldap --kdu_name ldap --action_name rollback
osm ns-action ldap2 --vnf_name openldap --kdu_name ldap --action_name rollback

Check that both operations are marked as completed:

osm ns-op-list ldap
osm ns-op-list ldap2

Check in the cluster that both actions took place:

  • The pods associated to ldap should be using version openldap:1.2.1

  • The pods associated to ldap2 should be using version openldap:1.2.1 and have 2 replicas

Delete both instances:

osm ns-delete ldap
osm ns-delete ldap2

Delete the packages:

osm nspkg-delete openldap_ns
osm nfpkg-delete openldap_knf

Optionally, remove the repos and the cluster

#Delete repos
osm repo-delete cetic
osm repo-delete bitnami
osm repo-delete elastic
#Delete cluster
osm k8scluster-delete cluster

5.6.5.2. KNF Juju Bundle

This is an example of how to onboard a service that uses a Juju Bundle. In this example, the service to be onboarded is a MediaWiki composed of a mariadb-k8s database and a mediawiki-k8s frontend.

wget http://osm-download.etsi.org/ftp/Packages/hackfests/mediawiki_cnf.tar.gz
wget http://osm-download.etsi.org/ftp/Packages/hackfests/mediawiki_cnf_ns.tar.gz
osm nfpkg-create mediawiki_cnf.tar.gz
osm nspkg-create mediawiki_cnf_ns.tar.gz

You can instantiate the Network Service:

osm ns-create --ns_name hf-k8s --nsd_name ubuntu-cnf-ns --vim_account <VIM_NAME|VIM_ID>

To check the status of the deployment you can run the following command:

osm ns-op-list hf-k8s
+--------------------------------------+-------------+-------------+-----------+---------------------+--------+
| id                                   | operation   | action_name | status    | date                | detail |
+--------------------------------------+-------------+-------------+-----------+---------------------+--------+
| 364c1378-ba86-447e-ad00-93fc1bf1bdd5 | instantiate | N/A         | COMPLETED | 2020-02-24T13:49:03 | -      |
+--------------------------------------+-------------+-------------+-----------+---------------------+--------+

To remove the network service you can:

osm ns-delete hf-k8s