diff --git a/05-osm-usage.md b/05-osm-usage.md
index acf6e267e4daaec34411745815a64bdbb39458b3..564ee0a1e51905bffd2c96336f17baf171d818e9 100644
--- a/05-osm-usage.md
+++ b/05-osm-usage.md
@@ -785,30 +785,40 @@
 The following diagram summarizes the feature:
 
 The scaling descriptor is part of a VNFD. Like the example below shows, it mainly specifies:
 
 - An existing metric to be monitored, which should be pre-defined in the monitoring-param list (`vnf-monitoring-param-ref`).
-- The VDU to be scaled (vdu-id-ref) and the amount of instances to scale per event (`count`)
+- The VDU to be scaled (`aspect-delta-details:deltas:vdu-delta:id`) and the number of instances to scale per event (`number-of-instances`)
 - The thresholds to monitor (`scale-in/out-threshold`)
-- The minimum and maximum amount of **scaled instances** to produce.
+- The VDU's (`vdu-profile:id`) minimum and maximum number of **scaled instances** to produce
 - The minimum time that should pass between scaling operations (`cooldown-time`)
+- The maximum scale level allowed for the aspect, i.e. how many times the delta can be applied (`max-scale-level`)
 
 ```yaml
-scaling-group-descriptor:
-- name: "cpu_autoscaling_descriptor"
-  min-instance-count: 0
-  max-instance-count: 10
+scaling-aspect:
+- aspect-delta-details:
+    deltas:
+    - id: vdu01_autoscale-delta
+      vdu-delta:
+      - id: vdu01
+        number-of-instances: 1
+  id: vdu01_autoscale
+  max-scale-level: 1
+  name: vdu01_autoscale
   scaling-policy:
-  - name: "cpu_scaling_policy"
-    scaling-type: "automatic"
-    cooldown-time: 120
+  - cooldown-time: 120
+    name: cpu_scaling_policy
     scaling-criteria:
-    - name: "cpu_autoscaling_criteria"
+    - name: cpu_scaling_policy
+      scale-in-relational-operation: LT
       scale-in-threshold: 20
-      scale-in-relational-operation: "LT"
-      scale-out-threshold: 80
-      scale-out-relational-operation: "GT"
-      vnf-monitoring-param-ref: "vnf01_cpu_util"
-    vdu:
-    - count: 1
-      vdu-id-ref: vdu01
+      scale-out-relational-operation: GT
+      scale-out-threshold: 60
+      vnf-monitoring-param-ref: vnf01_cpu_util
+    scaling-type: automatic
+    threshold-time: 10
+
+vdu-profile:
+- id: vdu01
+  min-number-of-instances: 1
+  max-number-of-instances: 11
 ```
 
 #### Example
 
@@ -817,30 +827,25 @@
 This will launch a Network Service formed by an HAProxy load balancer and an (auto-scalable) Apache web server. Before launching it:
 
 1. Your VIM has an accessible 'public' network and a management network (in this case called "PUBLIC" and "vnf-mgmt")
 2. Your VIM has the 'haproxy_ubuntu' and 'apache_ubuntu' images, which can be found [here](https://osm-download.etsi.org/ftp/osm-4.0-four/4th-hackfest/images/)
-3. You run the following command to match your VIM metrics telemetry system's granularity, if different than 300s (recommended for this example is 60s or Gnocchi's `medium archive-policy`):
-
-```bash
-docker service update --env-add OS_DEFAULT_GRANULARITY=60 osm_mon
-```
 
 Get the descriptors:
 
 ```bash
-wget https://osm-download.etsi.org/ftp/osm-4.0-four/4th-hackfest/packages/webserver_vimmetric_autoscale_nsd.tar.gz
-wget https://osm-download.etsi.org/ftp/osm-4.0-four/4th-hackfest/packages/webserver_vimmetric_autoscale_vnfd.tar.gz
+git clone https://osm.etsi.org/gitlab/vnf-onboarding/osm-packages.git
 ```
 
 Onboard them:
 
 ```bash
-osm vnfd-create webserver_vimmetric_autoscale_vnfd.tar.gz
-osm nsd-create webserver_vimmetric_autoscale_nsd.tar.gz
+cd osm-packages
+osm vnfd-create wiki_webserver_autoscale_vnfd
+osm nsd-create wiki_webserver_autoscale_nsd
 ```
 
 Launch the NS:
 
 ```bash
-osm ns-create --ns_name web01 --nsd_name webserver_vimmetric_autoscale_ns --vim_account |
+osm ns-create --ns_name web01 --nsd_name wiki_webserver_autoscale_ns --vim_account <vim-account-name>
 osm ns-list
 osm ns-show web01
 ```
 
@@ -849,17 +854,26 @@
 Testing:
 
 1. To ensure the NS is working, visit the Load balancer's IP at the public network using a browser; the page should show an OSM logo and active VDUs.
 2. To check metrics at Prometheus, visit `http://[OSM_IP]:9091` and look for `osm_cpu_utilization` and `osm_average_memory_utilization` (initial values could take some minutes depending on your telemetry system's granularity).
-3. To check metrics at Grafana, just install the OSM preconfigured version (`./install_osm.sh -o pm_stack`) and visit `http://[OSM_IP]:3000` (`admin`/`admin`), you will find a sample dashboard (the two top charts correspond to this example).
+3. To check metrics at Grafana, just visit `http://[OSM_IP]:3000` (`admin`/`admin`), where you will find a sample dashboard (the two top charts correspond to this example).
 4. To increase CPU in this example to auto-scale the web server, install Apache Bench in a client within reach (could be the OSM host) and run it towards `test.php`.
 
 ```bash
 sudo apt install apache2-utils
-ab -n 5000000 -c 2 http://[load-balancer-ip]/test.php
+ab -n 5000000 -c 2 http://<load-balancer-ip>/test.php
+# Can also be run in the HAProxy machine.
+ab -n 10000000 -c 1000 http://<webserver-ip>:8080/
 # This will stress CPU to 100% and trigger a scale-out operation in POL.
 # In this test, scaling will usually go up to 3 web servers before HAProxy spreads the load to reach a normal CPU level (w/ 60s granularity, 180s cooldown)
 ```
 
-Any of the VMs can be accessed through SSH to further monitor (with `htop`, for example), and there is an HAProxy UI at port `http://[HAProxy_IP]:32700` (all credentials are `osm`/`osm2018`)
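+After launching Apache Bench, the scale-out can also be verified from the command line. The snippet below is just a convenience sketch (it reuses the Prometheus port and metric name from step 2 above, plus the standard `osm ns-op-list` command): it queries the CPU metric through Prometheus' HTTP API and lists the operations of the NS, where a new scaling operation should appear once the threshold is crossed.
+
+```bash
+# Current value of the metric that drives the scaling policy
+curl "http://[OSM_IP]:9091/api/v1/query?query=osm_cpu_utilization"
+
+# History of operations over the NS: a scaling operation should show up after scale-out
+osm ns-op-list web01
+osm ns-show web01
+```
+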
+If HAProxy is not running, check its status and restart it:
+
+```bash
+service haproxy status
+sudo service haproxy restart
+```
+
+Any of the VMs can be accessed through SSH (credentials: `ubuntu`/`osm2021`) for further monitoring (with `htop`, for example), and there is an HAProxy UI at `http://[HAProxy_IP]:32700` (credentials: `osm`/`osm2018`).
 
 ## Using Network Slices