OSM Performance Management

THIS PAGE IS DEPRECATED. OSM User Guide has been moved to a new location: https://osm.etsi.org/docs/user-guide/

---

This documentation corresponds to Release SIX (6.0.0); previous documentation related to Performance Management has been deprecated.

Activating VNF Metrics Collection

OSM MON features a "mon-collector" module that collects metrics whenever they are specified at the descriptor level. For metrics to be collected, they must first exist at either of these two levels:

  • NFVI - made available by VIM's Telemetry System
  • VNF - made available by OSM VCA (Juju Metrics)

Reference diagram:

[Figure: OSM Performance Management Reference Diagram]

VIM Metrics

For VIM metrics to be collected, your VIM should support a telemetry system. As of Release 5.0.5, metric collection has been tested with:

  • OpenStack VIM with Keystone v3 authentication and legacy or Gnocchi-based telemetry services.
  • VMware vCloud Director with vRealizeOperations.

Support for other VIM types will be added during the Release FIVE cycle.
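
Before going further, it may be worth confirming that your VIM actually exposes a telemetry endpoint. For OpenStack, a minimal check (assuming the OpenStack CLI is installed and your credentials are loaded) is to look for a metric or metering service in the catalog:

# Look for a Gnocchi (metric) or Ceilometer (metering) service in the service catalog
openstack catalog list | grep -Ei 'metric|meter'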

The next step is to activate metric collection in your VNFDs. Every metric to be collected from the VIM for each VDU has to be described at the VDU level and then referenced at the VNF level. For example:

vdu:
- id: vdu1
  ...  
  monitoring-param:
        - id: metric_vdu1_cpu
          nfvi-metric: cpu_utilization
        - id: metric_vdu1_memory
          nfvi-metric: average_memory_utilization
...
monitoring-param:
-   id: metric_vim_vnf1_cpu
    name: metric_vim_vnf1_cpu
    aggregation-type: AVERAGE
    vdu-monitoring-param:
      vdu-ref: vdu1
      vdu-monitoring-param-ref: metric_vdu1_cpu
-   id: metric_vim_vnf1_memory
    name: metric_vim_vnf1_memory
    aggregation-type: AVERAGE
    vdu-monitoring-param:
      vdu-ref: vdu1
      vdu-monitoring-param-ref: metric_vdu1_memory

As you can see, a list of NFVI metrics is defined first at the VDU level; each entry contains an ID and the corresponding normalized metric name (in this case, "cpu_utilization" and "average_memory_utilization"). Then, at the VNF level, a list of monitoring-params refers to them, each with an ID, name, aggregation-type and source ('vdu-monitoring-param' in this case).


Additional notes

  • Available attributes and values can be directly explored at the OSM Information Model (https://osm.etsi.org/wikipub/index.php/OSM_Information_Model)
  • A complete VNFD example can be downloaded from https://osm-download.etsi.org/ftp/osm-4.0-four/4th-hackfest/packages/webserver_vimmetric_autoscale_vnfd.tar.gz
  • Normalized metric names are: cpu_utilization, average_memory_utilization, disk_read_ops, disk_write_ops, disk_read_bytes, disk_write_bytes, packets_received, packets_sent, packets_out_dropped, packets_in_dropped (see the snippet below)
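
Any of these normalized names can be referenced in the same way as in the example above. For instance, a hypothetical VDU-level entry for disk read operations would look like this:

  monitoring-param:
        - id: metric_vdu1_diskread
          nfvi-metric: disk_read_ops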


OpenStack specific notes

From REL6 onwards, MON collects the latest measure of each corresponding metric, so no further configuration is needed.
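
If metrics do not appear, you can check whether the telemetry service is actually producing measures for the VM that backs the VDU. A minimal sketch using the Gnocchi CLI; the VM UUID is deployment-specific, and the exact metric names exposed vary with your OpenStack release:

# List the metrics attached to the VM backing the VDU (UUID is illustrative)
gnocchi resource show <vm-uuid>
# Show the latest measures for one of its metrics
gnocchi measures show --resource-id <vm-uuid> cpu_util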

VMware vCD specific notes

From REL6 onwards, MON collects all the normalized metrics, with the following exceptions:

  • packets_in_dropped is not available and will always return 0.
  • packets_received cannot be measured. Instead, the number of bytes received across all interfaces is returned.
  • packets_sent cannot be measured. Instead, the number of bytes sent across all interfaces is returned.

The rolling average for vROPS metrics is always 5 minutes. The collection interval is also 5 minutes and can be changed; however, vROPS will still report the rolling average for the past 5 minutes, just updated according to the collection interval. See https://kb.vmware.com/s/article/67792 for more information.

Although it is not recommended, if a more frequent interval is desired, the following procedure can be used to change the collection interval:

  • Log into vROPS as an admin.
  • Navigate to Administration and expand Configuration.
  • Select Inventory Explorer.
  • Expand the Adapter Instances and select vCenter Server.
  • Edit the vCenter Server instance and expand the Advanced Settings.
  • Edit the Collection Interval (Minutes) value and set it to the desired value.
  • Click OK to save the change.

VNF Metrics

Metrics can also be collected directly from VNFs using VCA, through the Juju Metrics framework (https://docs.jujucharms.com/2.4/en/developer-metrics). A simple charm containing a metrics.yaml file at its root folder specifies the metrics to be collected and the associated commands.

For example, the following metrics.yaml file collects three metrics from the VNF, called 'users', 'load' and 'load_pct':

metrics:
   users:
     type: gauge
     description: "# of users"
     command: who|wc -l
   load:
     type: gauge
     description: "5 minute load average"
     command: cat /proc/loadavg |awk '{print $1}'
   load_pct:
     type: gauge
     description: "1 minute load average percent"
     command: cat /proc/loadavg  | awk '{load_pct=$1*100.00} END {print load_pct}'          

Please note that the granularity of this metric collection method is fixed to 5 minutes and cannot be changed at this point.
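
Once the charm is deployed, you can sanity-check that VCA is gathering the metrics with the Juju CLI. A minimal sketch, assuming the charm was deployed as an application named 'testmetrics':

# Show the latest metric values collected for the unit
juju metrics testmetrics/0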

After metrics.yaml is available, there are two options for describing the metric collection in the VNFD:

1) VNF-level VNF metrics

mgmt-interface:
  cp: vdu_mgmt # it is important to set the mgmt VDU or CP for metrics collection
vnf-configuration:
  initial-config-primitive:
  ...
  juju:
    charm: testmetrics
  metrics:
    - name: load 
    - name: load_pct
    - name: users  
...              
monitoring-param:
-   id: metric_vim_vnf1_load
    name: metric_vim_vnf1_load
    aggregation-type: AVERAGE
    vnf-metric:
      vnf-metric-name-ref: load
-   id: metric_vim_vnf1_loadpct
    name: metric_vim_vnf1_loadpct
    aggregation-type: AVERAGE
    vnf-metric:
      vnf-metric-name-ref: load_pct

Additional notes:

  • Available attributes and values can be directly explored at the OSM Information Model (https://osm.etsi.org/wikipub/index.php/OSM_Information_Model)
  • A complete VNFD example with VNF metrics collection (VNF-level) can be downloaded from https://osm-download.etsi.org/ftp/osm-4.0-four/4th-hackfest/packages/ubuntuvm_vnfmetric_autoscale_vnfd.tar.gz

2) VDU-level VNF metrics

vdu:
- id: vdu1
  ...
  interface:
  - ...
    mgmt-interface: true # it is important to set the mgmt interface for metrics collection
    ...
  vdu-configuration:
    initial-config-primitive:
    ...
    juju:
      charm: testmetrics
    metrics:
      - name: load 
      - name: load_pct
      - name: users  
...              
monitoring-param:
-   id: metric_vim_vnf1_load
    name: metric_vim_vnf1_load
    aggregation-type: AVERAGE
    vdu-metric:
      vdu-ref: vdu1
      vdu-metric-name-ref: load
-   id: metric_vim_vnf1_loadpct
    name: metric_vim_vnf1_loadpct
    aggregation-type: AVERAGE
    vdu-metric:
      vdu-ref: vdu1
      vdu-metric-name-ref: load_pct

Additional notes:

  • Available attributes and values can be directly explored at the OSM Information Model (https://osm.etsi.org/wikipub/index.php/OSM_Information_Model)
  • A complete VNFD example with VNF metrics collection (VDU-level) can be downloaded from https://osm-download.etsi.org/ftp/osm-4.0-four/4th-hackfest/packages/ubuntuvm_vnfvdumetric_autoscale_vnfd.tar.gz

As with VIM metrics, a list of metrics is defined first, either at the VNF or VDU configuration level; each entry carries a name that comes from the metrics.yaml file. Then, at the VNF level, a list of monitoring-params refers to them, each with an ID, name, aggregation-type and source, which in this case can be a 'vdu-metric' or a 'vnf-metric'.

Retrieving metrics from Prometheus TSDB

Once the metrics are being collected, they are stored in the Prometheus Time-Series DB with an 'osm_' prefix, and there are a number of ways in which you can retrieve them.

1) Visualizing metrics in Prometheus UI

Prometheus TSDB includes its own UI, which you can visit at http://[OSM_IP]:9091

From there, you can:

  • Type any metric name (e.g. osm_cpu_utilization) in the 'expression' field and see its current value or a histogram; some example expressions are shown below.
  • Visit the Status → Targets menu to monitor the connection status between Prometheus and MON (through 'mon-exporter').
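
For example, you could try expressions like the following (a sketch; metric names follow the 'osm_' prefix convention, but the exact set available depends on what your descriptors collect):

# Current CPU utilization for every monitored VDU
osm_cpu_utilization
# 5-minute moving average of the same metric
avg_over_time(osm_cpu_utilization[5m])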

[Figure: Screenshot of OSM Prometheus UI]

2) Integrating Prometheus with Grafana

There is a well-known integration between these two components that allows Grafana to easily show interactive graphs for any metric. You can use any existing Grafana installation or the one that comes in OSM as the experimental "pm_stack".

To install the 'pm_stack' (which today includes only Grafana) in an existing OSM installation, run the installer as follows:

./install_osm.sh -o pm_stack

Otherwise, if you are installing from scratch, you can simply run the installer with the '--pm_stack' option:

./install.sh --pm_stack

Once installed, access Grafana with its default credentials (admin / admin) at http://[OSM_IP_address]:3000. By clicking the 'Manage' option in the 'Dashboards' menu (on the left), you will find a sample dashboard containing two graphs for VIM metrics and two graphs for VNF metrics. You can easily change them or add more, as desired.
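
If you prefer to use your own Grafana installation, you will need to add the OSM Prometheus TSDB as a data source yourself. A minimal sketch using Grafana's HTTP API; the URLs assume Grafana can reach Prometheus on port 9091 of the OSM host, which depends on your network layout:

curl -u admin:admin -X POST http://[OSM_IP_address]:3000/api/datasources \
  -H 'Content-Type: application/json' \
  -d '{"name":"OSM Prometheus","type":"prometheus","url":"http://[OSM_IP_address]:9091","access":"proxy"}'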

[Figure: Screenshot of OSM Grafana UI]

3) Interacting with Prometheus directly through its API

Even though many analytics applications, like Grafana, include their own integration for Prometheus, other applications do not support it out of the box, or there may be a need to build a custom integration.

In such cases, the Prometheus HTTP API can be used to gather any metrics. A couple of examples are shown below:

Example with a date range query:

curl 'http://localhost:9091/api/v1/query_range?query=osm_cpu_utilization&start=2018-12-03T14:10:00.000Z&end=2018-12-03T14:20:00.000Z&step=15s'

Example with an instant query:

curl 'http://localhost:9091/api/v1/query?query=osm_cpu_utilization&time=2018-12-03T14:14:00.000Z'
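
Both calls return JSON. To extract just the values, you can pipe the response through jq (assuming jq is installed on the host):

curl -s 'http://localhost:9091/api/v1/query?query=osm_cpu_utilization' \
  | jq '.data.result[] | {metric: .metric.__name__, value: .value[1]}'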

Further examples and API calls can be found in the Prometheus HTTP API documentation (https://prometheus.io/docs/prometheus/latest/querying/api/).

4) Interacting directly with MON Collector

Prometheus TSDB stores metrics by periodically querying Prometheus 'exporters', which are set as 'targets'. Exporters expose current metrics in a specific format that Prometheus can understand; more information can be found at https://prometheus.io/docs/instrumenting/exporters/

OSM MON features a "mon-exporter" module that exports current metrics through port 8000. Please note that, by default, this port is not exposed outside the OSM docker network.
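
For reference, a Prometheus instance scrapes such an exporter through a 'scrape job'. A minimal sketch of the relevant prometheus.yml fragment, assuming the collector is reachable as host 'mon' on port 8000 from where Prometheus runs (the actual hostname depends on your docker network):

scrape_configs:
  - job_name: 'osm_mon'
    scrape_interval: 30s
    static_configs:
      - targets: ['mon:8000']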

A tool that understands Prometheus 'exporters' (for example, Elastic Metricbeat) can be plugged in to integrate directly with "mon-exporter". To get an idea of how metrics look in this format, you could:

1. Get into MON console

docker exec -ti osm_mon.1.[id] bash

2. Install curl

apt -y install curl

3. Use curl to get the current metrics list

curl localhost:8000
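
The output follows the standard Prometheus exposition format. The snippet below is illustrative only; the actual label names and values depend on your deployment:

# HELP osm_cpu_utilization osm_cpu_utilization
# TYPE osm_cpu_utilization gauge
osm_cpu_utilization{vdu_name="ubuntuvnf_vnfd-VM"} 0.79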

Please note that, as long as the Prometheus container is up, it will continue retrieving and storing metrics in addition to any other tool/DB you connect to "mon-exporter".

5) Using your own TSDB

OSM MON integrates Prometheus through a plugin/backend model, so other backends can be developed if desired. If you are interested in contributing such an option, you can ask for details in our Slack #mon channel or through the OSM Tech mailing list.

Default Infrastructure Status Collection

OSM MON automatically collects "status metrics" for:

  • VIMs - for each VIM that OSM establishes contact with, a metric named 'osm_vim_status' is stored in the TSDB.
  • VMs - for each VDU that OSM has instantiated, a metric named 'osm_vm_status' is stored in the TSDB.

Metrics will be "1" or "0", depending on the element's availability.
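
These status metrics lend themselves to simple alerting. A minimal sketch of a standard Prometheus alerting rule built on them (wiring the rule file into the OSM Prometheus container is deployment-specific and not covered here):

groups:
  - name: osm_status
    rules:
      - alert: VimUnreachable
        expr: osm_vim_status == 0
        for: 5m
        annotations:
          summary: "A VIM has been unreachable for 5 minutes"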

Your feedback is most welcome!
You can send your comments and questions to OSM_TECH@list.etsi.org, or join the OpenSourceMANO Slack workspace.