OSM Autoscaling
This is a new feature, available since Release FIVE, that allows VNFs to be scaled automatically, with per-VDU granularity, based on any available metric.

=== Reference diagram ===
The following diagram summarizes the feature:

[[File:osm_pol_as.png|800px|Diagram explaining auto-scaling support]]
* Scaling descriptors can be included in the VNFD and tied to automatic reactions to VIM/VNF metric thresholds.
* Both VIM and VNF metrics are supported.
* An internal alarm manager has been added to MON, so that both VIM and VNF metrics can trigger threshold-violation alarms and scaling actions.
=== Scaling Descriptor ===
The scaling descriptor is part of a VNFD. As the example below shows, it mainly specifies:
* An existing metric to be monitored, which should be pre-defined in the monitoring-param list (vnf-monitoring-param-ref)
* The VDU to be scaled (vdu-id-ref) and the number of instances to scale per event (count)
* The thresholds to monitor (scale-in/out-threshold)
* The minimum and maximum number of '''scaled instances''' to produce
* The minimum time that should pass between scaling operations (cooldown-time)
 scaling-group-descriptor:
 - name: "cpu_autoscaling_descriptor"
   min-instance-count: 0
   max-instance-count: 10
   scaling-policy:
   - name: "cpu_scaling_policy"
     scaling-type: "automatic"
     cooldown-time: 120
     scaling-criteria:
     - name: "cpu_autoscaling_criteria"
       scale-in-threshold: 20
       scale-in-relational-operation: "LT"
       scale-out-threshold: 80
       scale-out-relational-operation: "GT"
       vnf-monitoring-param-ref: "vnf01_cpu_util"
   vdu:
   - count: 1
     vdu-id-ref: vdu01
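For context, vnf-monitoring-param-ref must point to an entry in the VNFD's monitoring-param list, which in turn maps to a metric collected for a VDU. Below is a minimal sketch of such an entry for the VIM's CPU metric; the field names follow the Release FIVE information model, but the exact layout can vary between releases, so treat it as illustrative rather than definitive:

 monitoring-param:               # VNF-level metric referenced by the scaling criteria
 - id: vnf01_cpu_util            # must match vnf-monitoring-param-ref above
   name: vnf01_cpu_util
   aggregation-type: AVERAGE
   vdu-monitoring-param:         # binds the VNF metric to a VDU-collected metric
     vdu-ref: vdu01
     vdu-monitoring-param-ref: vdu01_cpu_util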
=== Example ===
This example launches a Network Service formed by an HAProxy load balancer and an (autoscalable) Apache web server.

Make sure that:
# Your VIM has an accessible 'public' network and a management network (in this case called "PUBLIC" and "vnf-mgmt")
# Your VIM has the 'haproxy_ubuntu' and 'apache_ubuntu' images, which can be downloaded from [https://osm-download.etsi.org/ftp/osm-4.0-four/4th-hackfest/images/ here]
# You run the following command to match your VIM telemetry system's granularity, if it differs from 300s (for this example, 60s or Gnocchi's "high" archive policy is recommended):
 docker service update --env-add OS_DEFAULT_GRANULARITY=60 osm_mon
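To confirm the variable took effect, you can inspect the MON service afterwards (a quick check; it simply greps the service definition for the variable set above):

 docker service inspect osm_mon | grep -i granularity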
Get the descriptors:
 wget https://osm-download.etsi.org/ftp/osm-4.0-four/4th-hackfest/packages/webserver_vimmetric_autoscale_nsd.tar.gz
 wget https://osm-download.etsi.org/ftp/osm-4.0-four/4th-hackfest/packages/webserver_vimmetric_autoscale_vnfd.tar.gz
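The downloads are regular OSM packages (gzipped tarballs); if you want to peek at the descriptor before onboarding, you can list the contents with plain tar:

 tar -tzf webserver_vimmetric_autoscale_vnfd.tar.gz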
Onboard them:
 osm vnfd-create webserver_vimmetric_autoscale_vnfd.tar.gz
 osm nsd-create webserver_vimmetric_autoscale_nsd.tar.gz
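To verify the onboarding, list both catalogs with the standard OSM client commands; the new descriptors should appear:

 osm vnfd-list
 osm nsd-list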
Launch the NS:
 osm ns-create --ns_name web01 --nsd_name webserver_vimmetric_autoscale_ns --vim_account <VIM_ACCOUNT_NAME>|<VIM_ACCOUNT_ID>
 osm ns-list
 osm ns-show web01
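Instantiation takes a few minutes. One convenient way to poll until the NS reaches a ready state (a sketch, assuming watch is available on the host):

 watch -n 10 osm ns-list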
Testing:
# To ensure the NS is working, visit the load balancer's IP on the public network with a browser; the page should show an OSM logo and the active VDUs
# To check metrics at Prometheus, visit http://[OSM_IP]:9091 and look for osm_cpu_utilization and osm_average_memory_utilization (initial values can take a few minutes to appear, depending on your telemetry system's granularity); see also the curl sketch after this list
# To check metrics at Grafana, install the OSM preconfigured version (./install_osm.sh -o pm_stack) and visit http://[OSM_IP]:3000 (admin / admin); you will find a sample dashboard (the two top charts correspond to this example)
# To increase CPU load in this example and autoscale the web server, install Apache Bench on a client within reach (the OSM host will do) and run it against test.php:
 sudo apt install apache2-utils
 ab -n 5000000 -c 2 http://[load-balancer-ip]/test.php
# This will stress the CPU to 100% and trigger a scale-out operation in POL.
# In this test, scaling will usually go up to 3 web servers before HAProxy spreads the load enough to reach a normal CPU level (with 60s granularity and a 180s cooldown)
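As mentioned in the Prometheus step above, the same metrics can also be queried from the command line through Prometheus' standard HTTP API (a sketch, using the port given above; replace [OSM_IP] as before):

 curl 'http://[OSM_IP]:9091/api/v1/query?query=osm_cpu_utilization'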
Any of the VMs can be accessed through SSH for further monitoring (with htop, for example), and there is an HAProxy UI at http://[HAProxy_IP]:32700 (all credentials are osm / osm2018).
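For example, to inspect one of the web servers by hand (assuming the osm / osm2018 credentials above apply to SSH and the management IP is reachable from your machine):

 ssh osm@[VM_IP]    # password: osm2018, as noted above
 htop               # per-core CPU view while ab is running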