Commit 342dd1c4 authored by garciadeblas

05-osm-usage.md: documented support of K8s-based NF

parent 1325dd81
@@ -467,7 +467,7 @@ Once the metrics are being collected, they are stored in the Prometheus Time-Ser

##### 1) Visualizing metrics in Prometheus UI

Prometheus TSDB includes its own UI, which you can visit at http://[OSM_IP]:9091
Prometheus TSDB includes its own UI, which you can visit at http://[OSM\_IP]:9091

From there, you can:

@@ -581,7 +581,7 @@ Furthermore, there are some important events flowing between components through

##### Alarm Manager for Metrics

As of Release FIVE, MON includes a new module called 'mon-evaluator'. The only use case supported today by this module is the configuration of alarms and evaluation of thresholds related to metrics, for the Policy Manager module (POL) to take actions such as [auto-scaling](06-03-03-autoscaling.md).
As of Release FIVE, MON includes a new module called 'mon-evaluator'. The only use case supported today by this module is the configuration of alarms and evaluation of thresholds related to metrics, for the Policy Manager module (POL) to take actions such as [auto-scaling](#autoscaling).

Whenever a threshold is crossed and an alarm is triggered, the notification is generated by MON and put in the Kafka bus so that other components can consume it. This event is today logged by both MON (which generates the notification) and POL (which consumes it for its auto-scaling action).

@@ -591,7 +591,7 @@ By default, threshold evaluation occurs every 30 seconds. This value can be chan
docker service update --env-add OSMMON_EVALUATOR_INTERVAL=15 osm_mon
```

Further information regarding how to configure alarms through VNFDs for the supported use case can be found at the [auto-scaling documentation](06-03-03-autoscaling.md)
Further information regarding how to configure alarms through VNFDs for the supported use case can be found at the [auto-scaling documentation](#autoscaling)

Reference diagram:

@@ -666,8 +666,8 @@ The following diagram summarizes the feature:
![Diagram explaining auto-scaling support](assets/800px-Osm_pol_as.png)

- Scaling descriptors can be included and be tied to automatic reaction to VIM/VNF metric thresholds.
- Supported metrics are both VIM and VNF metrics. More information about metrics collection can be found at the [Performance Management documentation](06-03-01-performance-management.md)
- An internal alarm manager has been added to MON through the 'mon-evaluator' module, so that both VIM and VNF metrics can also trigger threshold-violation alarms and scaling actions. More information about this module can be found at the [Fault Management documentation](06-03-02-fault-management.md)
- Supported metrics are both VIM and VNF metrics. More information about metrics collection can be found at the [Performance Management documentation](#performance-management)
- An internal alarm manager has been added to MON through the 'mon-evaluator' module, so that both VIM and VNF metrics can also trigger threshold-violation alarms and scaling actions. More information about this module can be found at the [Fault Management documentation](05-osm-usage.md#fault-management)

### Scaling Descriptor

@@ -756,4 +756,107 @@ TODO: Page in elaboration. Meanwhile, you can find a good explanation and exampl

## Using Kubernetes-based VNFs (KNFs)

TODO: Page in elaboration.
From Release SEVEN, OSM supports Kubernetes-based VNFs (KNFs). In order to test this feature, you require a K8s cluster connected to a network in the VIM (e.g. "vim-net").

You will have to add the K8s cluster to OSM. For that purpose, you can use the following commands:

```bash
osm k8scluster-add --creds clusters/kubeconfig-cluster.yaml --version '1.15' \
    --vim <VIM_NAME|VIM_ID> --description "My K8s cluster" \
    --k8s-nets '{"net1": "vim-net"}' cluster
osm k8scluster-list
osm k8scluster-show cluster
```
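
The `--creds` argument takes a standard kubeconfig file for the cluster. As an illustration, on a kubeadm-based cluster you could fetch it like this (user, IP address and paths are assumptions; adjust them to your environment):

```bash
# Hypothetical example: copy the admin kubeconfig from a kubeadm-based
# K8s control-plane node (user and IP address are illustrative)
mkdir -p clusters
scp ubuntu@192.168.0.100:/etc/kubernetes/admin.conf clusters/kubeconfig-cluster.yaml
```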

Then, you might need to add some repos from which to download the helm charts required by the KNF:

```bash
osm repo-add --type helm-chart --description "Bitnami repo" bitnami https://charts.bitnami.com/bitnami
osm repo-add --type helm-chart --description "Cetic repo" cetic https://cetic.github.io/helm-charts
osm repo-add --type helm-chart --description "Elastic repo" elastic https://helm.elastic.co
osm repo-list
osm repo-show bitnami
```

Once done, you can work with KNFs in the same way as you do with any VNF: you can onboard, instantiate, and operate them. For instance, you can use the example below of a KNF consisting of a single Kubernetes deployment unit (KDU) based on the OpenLDAP helm chart.

```bash
wget http://osm-download.etsi.org/ftp/Packages/hackfests/openldap_knf.tar.gz
wget http://osm-download.etsi.org/ftp/Packages/hackfests/openldap_ns.tar.gz
osm nfpkg-create openldap_knf.tar.gz
osm nspkg-create openldap_ns.tar.gz
```
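
OSM packages are plain gzipped tarballs (a descriptor YAML plus auxiliary files), so, if you want to see what you are onboarding, you can list their contents first:

```bash
# Inspect the package contents before onboarding
tar -tzf openldap_knf.tar.gz
tar -tzf openldap_ns.tar.gz
```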

You can instantiate two NS instances: the first one with default values, and a second one that overrides the number of replicas via `additionalParamsForVnf`:

```bash
osm ns-create --ns_name ldap --nsd_name openldap_ns --vim_account <VIM_NAME|VIM_ID>
osm ns-create --ns_name ldap2 --nsd_name openldap_ns --vim_account <VIM_NAME|VIM_ID> --config '{additionalParamsForVnf: [{"member-vnf-index": "openldap", "additionalParams": {"replicaCount": "2"}}]}'
```
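
Instantiation is asynchronous. You can track both instances from the OSM client until they report a successful status:

```bash
# Track the NS instances until deployment and configuration finish
osm ns-list
osm ns-show ldap
osm ns-show ldap2
```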

Check in the cluster that the pods are properly created (see the verification sketch below):

- The pods associated with ldap should be running the openldap:1.2.1 image and have 1 replica
- The pods associated with ldap2 should be running the openldap:1.2.1 image and have 2 replicas
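
A minimal verification sketch, assuming you have kubectl access to the cluster (the namespace where OSM places each KDU may differ across deployments):

```bash
# Locate the openldap pods and check replica counts and image tags
kubectl get pods --all-namespaces | grep openldap
kubectl get pods --all-namespaces \
    -o jsonpath='{range .items[*]}{.metadata.namespace}{"/"}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}' \
    | grep openldap
```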

Now you can upgrade both NS instances: the first one to a new chart version, and the second one to a higher number of replicas:

```bash
osm ns-action ldap --vnf_name openldap --kdu_name ldap --action_name upgrade --params '{kdu_model: "stable/openldap:1.2.2"}'
osm ns-action ldap2 --vnf_name openldap --kdu_name ldap --action_name upgrade --params '{kdu_model: "stable/openldap:1.2.1", "replicaCount": "3"}'
```

Check that both operations are marked as completed:

```bash
osm ns-op-list ldap
osm ns-op-list ldap2
```
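
If an operation does not show up as completed, you can inspect it individually; the sketch below assumes the `ns-op-show` subcommand of the OSM client, taking an operation ID from the listing above:

```bash
# Inspect a single operation in detail (<OPERATION_ID> comes from ns-op-list)
osm ns-op-show <OPERATION_ID>
```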

Check in the cluster that both actions took place (the helm-based check below can help):

- The pods associated with ldap should be running the openldap:1.2.2 image
- The pods associated with ldap2 should be running the openldap:1.2.1 image and have 3 replicas
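
Each KDU is deployed as a helm release, so you can also correlate these checks with the release history. A sketch assuming a helm 2 client with access to the cluster (the release name is assigned by OSM, so list the releases first):

```bash
# helm 2 syntax; on helm 3: helm list --all-namespaces / helm history -n <ns>
helm list
helm history <RELEASE_NAME>
```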

Roll back both NS instances to their previous revision:

```bash
osm ns-action ldap --vnf_name openldap --kdu_name ldap --action_name rollback
osm ns-action ldap2 --vnf_name openldap --kdu_name ldap --action_name rollback
```

Check that both operations are marked as completed:

```bash
osm ns-op-list ldap
osm ns-op-list ldap2
```

Check in the cluster that both rollbacks took place:

- The pods associated with ldap should be running the openldap:1.2.1 image
- The pods associated with ldap2 should be running the openldap:1.2.1 image and have 2 replicas

Delete both instances:

```bash
osm ns-delete ldap
osm ns-delete ldap2
```
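
Deletion is asynchronous as well; the instances disappear from the listing once termination finishes:

```bash
# Confirm that both instances are gone
osm ns-list
```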

Delete the packages:

```bash
osm nspkg-delete openldap_ns
osm nfpkg-delete openldap_knf
```

Optionally, remove the repos and the cluster:

```bash
# Delete repos
osm repo-delete cetic
osm repo-delete bitnami
osm repo-delete elastic
# Delete cluster
osm k8scluster-delete cluster
```