diff --git a/01-requirements.md b/01-requirements.md index 9c62b368b641b6af594ebafe18cbc50716b8b7d9..578c195f3dee54b060c13e5322fc43bed88123ec 100644 --- a/01-requirements.md +++ b/01-requirements.md @@ -12,21 +12,21 @@ During the Day-0 stage, the VNF is instantiated and the management access is est The main function of every VNF component (VDU) should be clearly described in order to ease the understanding of the VNF. For example: -| VDU | Description | -|:------:|:------------------------------------| -| vLB | External frontend and load balancer | -| uMgmt | Universal VNF Manager (EM) | -| sBE | Service Backend of the platform | +| VDU | Description | +| :---: | :---------------------------------- | +| vLB | External frontend and load balancer | +| uMgmt | Universal VNF Manager (EM) | +| sBE | Service Backend of the platform | ### Defining NFVI requirements These requirements refer to properties like the number of vCPUs, RAM GBs and disk GBs per component, as well as any other resource that the VNF components need from the physical infrastructure. For example: | VDU | vCPU | RAM (GB) | Storage (GB) | External volume? | -|:-----:|:----:|:--------:|:------------:|:----------------:| -| vLB | 2 | 4 | 10 | N | -| uMgmt | 1 | 1 | 2 | N | -| sBE | 2 | 8 | 10 | Y | +| :---: | :--: | :------: | :----------: | :--------------: | +| vLB | 2 | 4 | 10 | N | +| uMgmt | 1 | 1 | 2 | N | +| sBE | 2 | 8 | 10 | Y | For some VNFs, the Enhanced Platform Awareness (EPA) characteristics need to be defined when the VNF requires performance capabilities which are "higher than default" or any particular hardware architecture from the NFVI. Popular EPA attributes include: @@ -50,7 +50,7 @@ Ideally, a diagram should be used to quickly identify components and internal/ex ![](assets/vnftopology1.png) -Additional topology examples, along with sample descriptor files, can be found [here](https://osm.etsi.org/wikipub/index.php/Reference_VNF_and_NS_Descriptors). +Sample descriptor files can be found [here](https://osm.etsi.org/gitlab/vnf-onboarding/osm-packages), while sample topologies can be found [here](https://osm.etsi.org/docs/vnf-onboarding-guidelines/05-basic-examples.html). ### Images and cloud-init files @@ -99,7 +99,7 @@ This may be required to identify instantiation parameters or special timing requ - Components needing parameters from other components or from the infrastructure to complete the parameters configuration. - Components depending on others for their configuration to be initialized. - + ### Defining the required configuration for service initialization This initial configuration will run automatically after the VNF is instantiated. It should activate the service delivered by the VNF and should be initially prepared in the language that the VNF supports. Once it's defined, it would need to be incorporated by the mechanism that the generic VNF Manager implements. For example: @@ -151,7 +151,7 @@ The VNF Day-1 configuration may require some parameters passed at instantiation ## Day-2 requirements -The main objetive of Day-2 is to be able to **re-configure** the VNF so its behavior can be modified during runtime, as well as being able to monitor its main KPIs and run scaling actions over it. +The main objectives of Day-2 are to be able to **re-configure** the VNF so its behavior can be modified during runtime, to monitor its main KPIs, and to run scaling or other closed-loop operations over it.
To achieve this, the main requirements are: ### Identifying dependencies between components diff --git a/02-day0.md b/02-day0.md index 251bda7e39f1b26de842d41e9293f3a9c566fa55..eaa06a72d23f9f1a971c802dd4467ee8cb4c1187 100644 --- a/02-day0.md +++ b/02-day0.md @@ -12,64 +12,59 @@ The way to achieve this in OSM is to prepare the descriptor so that it accuratel The most straightforward way to build a VNF package from scratch is to use the existing script available at the OSM Devops repository. From a Linux/Unix-based system: -#### Clone the OSM DevOps repository and access the tools folder. +#### Install the OSM client if you don't have it already. -``` -git clone https://osm.etsi.org/gerrit/osm/devops.git -cd devops/descriptor-packages/tools -``` +If you have OSM installed, the client is included automatically; if not, there are different methods to install it as a standalone tool. You can follow the guide [here](https://osm.etsi.org/docs/user-guide/10-osm-client-commands-reference.html#installing-standalone-osm-client). #### Run the package generator with the desired options. -`./generate_descriptor_pkg.sh [options] [name]` - -Most common options are: - -| Parameter | Scope | Description | Values | -|:-----------------:|:-------:|:-------------------------------------------:|:----------:| -| -t | package | descriptor type | vnfd | nsd | -| -a | package | create package for the descriptor | - | -| -N | package | keep folder after tar is built | - | -| -c | package | create folder structure inside package | - | -| -d | package | destination of the folder | path | -| --nsd | package | create folder structure for NSD as well | - | -| --image | vdu | image name | name | -| --vcpu | vdu | vCPU number | # | -| --memory | vdu | RAM size | [mb] | -| --storage | vdu | disk size | [gb] | -| --cloud-init-file | vdu | cloud-init file name | name | -| --interfaces | vdu | interface number (additional to management) | # | -| --vendor | vnf | vendor name | name | +`osm package-create [options] [vnf|ns] [name]` + +Most common options are shown in the command's help: + +``` + --base-directory TEXT (NS/VNF/NST) Set the location for package creation. Default: "." + --image TEXT (VNF) Set the name of the vdu image. Default "image-name" + --vdus INTEGER (VNF) Set the number of vdus in a VNF. Default 1 + --vcpu INTEGER (VNF) Set the number of virtual CPUs in a vdu. Default 1 + --memory INTEGER (VNF) Set the memory size (MB) of the vdu. Default 1024 + --storage INTEGER (VNF) Set the disk size (GB) of the vdu. Default 10 + --interfaces INTEGER (VNF) Set the number of additional interfaces apart from the management interface. Default 0 + --vendor TEXT (NS/VNF) Set the descriptor vendor. Default "OSM" + --override (NS/VNF/NST) Flag for overriding the package if exists. + --detailed (NS/VNF/NST) Flag for generating descriptor .yaml with all possible commented options + --netslice-subnets INTEGER (NST) Number of netslice subnets. Default 1 + --netslice-vlds INTEGER (NST) Number of netslice vlds. Default 1 + -h, --help Show this message and exit.
+``` For example: ``` -./generate_descriptor_pkg.sh -t vnfd -N -c -d /home/ubuntu \ --a --image haproxy_ubuntu --vcpu 2 --memory 4096 --storage 10 \ ---cloud-init-file init_lb --interfaces 2 --vendor ACME --nsd vLB +# For the VNF Package +osm package-create --base-directory /home/ubuntu --image myVNF.qcow2 --vcpu 1 --memory 4096 --storage 50 --interfaces 2 --vendor OSM vnf vLB + +# For the NS Package +osm package-create --base-directory /home/ubuntu --vendor OSM ns vLB ``` -Note that we are adding the 'nsd' keyword to create also a NS Package that refers to this VNF Package, to be able to instantiate it and test it out. So the above example will create, in the `/home/ubuntu` folder: +Note that there are separate options for VNF and NS packages. A Network Service Package that refers to the VNF Packages is always needed in OSM to be able to instantiate the constituent VNFs. So the above example will create, in the `/home/ubuntu` folder: -- `vLB_vnfd` → VNFD Folder -- `vLB_vnfd.tar.gz` → VNFD Package -- `test_vnf01_nsd` → NSD Folder -- `test_vnf01_nsd.tar.gz` → NSD Package +- `vLB_vnf` → VNFD Folder +- `vLB_ns` → NSD Folder **The VNFD Folder will contain the YAML file which models the VNF. This should be further edited to achieve the desired characteristics.** ### Modelling advanced topologies -Most topology types, along with sample descriptor files, can be found [here](https://osm.etsi.org/wikipub/index.php/Reference_VNF_and_NS_Descriptors). - When dealing with multiple VDUs inside a VNF, it is important to understand the differences between external and internal connection points (CPs) and virtual link descriptors (VLDs). -| Component | Definition | Modelled at | -|:------------:|---------------------------------------------------------------------------|:-----------:| -| Internal VLD | Network that interconnects VDUs within a VNF | VNFD | -| External VLD | Network that interconnects different VNFs within a NS | NSD | -| Internal CP | Element internal to a VNF, maps VDU interfaces to internal VLDs | VNFD | -| External CP | Element exposed externally by a VNF, maps VDU interfaces to external VLDs | NSD | +| Component | Definition | Modelled at | +| :----------: | ------------------------------------------------------------------------- | :---------: | +| Internal VLD | Network that interconnects VDUs within a VNF | VNFD | +| External VLD | Network that interconnects different VNFs within a NS | NSD | +| Internal CP | Element internal to a VNF, maps VDU interfaces to internal VLDs | VNFD | +| External CP | Element exposed externally by a VNF, maps VDU interfaces to external VLDs | NSD | As VNF Package builders, we should clearly identify interfaces that i) are internal to the VNF and used to interconnect our own VDUs through internal VLDs, and ii) those we want to expose to other VNFs within a Network Service, using external VLDs. @@ -80,191 +75,216 @@ In this example from the [5th OSM Hackfest](https://osm.etsi.org/wikipub/index.p The VNFD would look like this: ``` -vnfd:vnfd-catalog: - vnfd: - - ...
- # A external CP should be used for VNF management - mgmt-interface: - cp: vnf-mgmt - - # External CPs are exposed externally, to be referred at the NSD - connection-point: - - id: vnf-mgmt - name: vnf-mgmt - short-name: vnf-mgmt - type: VPORT - - id: vnf-data - name: vnf-data - short-name: vnf-data - type: VPORT - - # Internal VLDs are defined globally at the VNFD - internal-vld: - - id: internal - name: internal - short-name: internal - type: ELAN - internal-connection-point: - - id-ref: mgmtVM-internal - - id-ref: dataVM-internal - - # Inside the VDU block, multiple VDUs, their interfaces and CPs are modelled - vdu: - - id: mgmtVM - ... - - # VDU Interfaces map to either a external o internal CP - interface: - - name: mgmtVM-eth0 - position: '1' - type: EXTERNAL - virtual-interface: - type: VIRTIO - external-connection-point-ref: vnf-mgmt - - name: mgmtVM-eth1 - position: '2' - type: INTERNAL - virtual-interface: - type: VIRTIO - internal-connection-point-ref: mgmtVM-internal - - # Internal CPs are modelled inside each VDU - internal-connection-point: - - id: mgmtVM-internal - name: mgmtVM-internal - short-name: mgmtVM-internal - type: VPORT - - - id: dataVM - ... - # VDU Interfaces map to either a external o internal CP - interface: - - name: dataVM-eth0 - position: '1' - type: INTERNAL - virtual-interface: - type: VIRTIO - internal-connection-point-ref: dataVM-internal - - name: dataVM-xe0 - position: '2' - type: EXTERNAL - virtual-interface: - type: VIRTIO - external-connection-point-ref: vnf-data - - # Internal CPs are modelled inside each VDU - internal-connection-point: - - id: dataVM-internal - name: dataVM-internal - short-name: dataVM-internal - type: VPORT +vnfd: + description: A VNF consisting of 2 VDUs connected to an internal VL + + # The Deployment Flavour (DF) "ties together" all the other definitions + df: + - id: default-df + instantiation-level: + - id: default-instantiation-level + vdu-level: + - number-of-instances: 1 + vdu-id: mgmtVM + - number-of-instances: 1 + vdu-id: dataVM + vdu-profile: + - id: mgmtVM + min-number-of-instances: 1 + - id: dataVM + min-number-of-instances: 1 + + # External CPs are exposed externally, to be referenced at the NSD + ext-cpd: + - id: vnf-mgmt-ext + int-cpd: + cpd: mgmtVM-eth0-int + vdu-id: mgmtVM + - id: vnf-data-ext + int-cpd: + cpd: dataVM-xe0-int + vdu-id: dataVM + + id: hackfest_multivdu-vnf + + # Internal VLDs are defined globally at the VNFD + int-virtual-link-desc: + - id: internal + + # An external CP should be used for VNF management + mgmt-cp: vnf-mgmt-ext + + product-name: hackfest_multivdu-vnf + sw-image-desc: + - id: US1604 + image: US1604 + name: US1604 + + # Inside the VDU block, multiple VDUs and their internal CPs are modelled + vdu: + - id: mgmtVM + + # Internal CPs are modelled inside each VDU + int-cpd: + - id: mgmtVM-eth0-int + virtual-network-interface-requirement: + - name: mgmtVM-eth0 + position: 1 + virtual-interface: + type: PARAVIRT + - id: mgmtVM-eth1-int + int-virtual-link-desc: internal + virtual-network-interface-requirement: + - name: mgmtVM-eth1 + position: 2 + virtual-interface: + type: PARAVIRT + + name: mgmtVM + sw-image-desc: US1604 + virtual-compute-desc: mgmtVM-compute + virtual-storage-desc: + - mgmtVM-storage + + - id: dataVM + + # Internal CPs are modelled inside each VDU + int-cpd: + - id: dataVM-eth0-int + int-virtual-link-desc: internal + virtual-network-interface-requirement: + - name: dataVM-eth0 + position: 1 + virtual-interface: + type: PARAVIRT + - id: dataVM-xe0-int + 
virtual-network-interface-requirement: + - name: dataVM-xe0 + position: 2 + virtual-interface: + type: PARAVIRT + + name: dataVM + sw-image-desc: US1604 + virtual-compute-desc: dataVM-compute + virtual-storage-desc: + - dataVM-storage + version: '1.0' + virtual-compute-desc: + - id: mgmtVM-compute + virtual-memory: + size: 1.0 + virtual-cpu: + num-virtual-cpu: 1 + - id: dataVM-compute + virtual-memory: + size: 1.0 + virtual-cpu: + num-virtual-cpu: 1 + virtual-storage-desc: + - id: mgmtVM-storage + size-of-storage: 10 + - id: dataVM-storage + size-of-storage: 10 ``` As an additional reference, let's take a look at this Network Service Descriptor (NSD), where connections between VNFs are modelled using external CPs mapped to external VLDs like this: ``` -nsd:nsd-catalog: - nsd: - - ... - # External VLDs are modelled globally - vld: - - id: mgmtnet - name: mgmtnet - short-name: mgmtnet - type: ELAN - mgmt-network: 'true' - vim-network-name: mgmt - vnfd-connection-point-ref: - - # Mapping between VNF's external CPs and the external VLD occurs here: - - vnfd-id-ref: hackfest_multivdu-vnf - member-vnf-index-ref: '1' - vnfd-connection-point-ref: vnf-mgmt - - vnfd-id-ref: hackfest_multivdu-vnf - member-vnf-index-ref: '2' - vnfd-connection-point-ref: vnf-mgmt +nsd: + nsd: + - description: NS with 2 VNFs connected by datanet and mgmtnet VLs + id: hackfest_multivdu-ns + name: hackfest_multivdu-ns + version: '1.0' + + # External VLDs are defined globally: + virtual-link-desc: + - id: mgmtnet + mgmt-network: true + - id: datanet + vnfd-id: + - hackfest_multivdu-vnf + + df: + - id: default-df + + # External VLD mappings to CPs are defined inside the deployment flavour's vnf-profile: + vnf-profile: + - id: '1' + virtual-link-connectivity: + - constituent-cpd-id: + - constituent-base-element-id: '1' + constituent-cpd-id: vnf-mgmt-ext + virtual-link-profile-id: mgmtnet + - constituent-cpd-id: + - constituent-base-element-id: '1' + constituent-cpd-id: vnf-data-ext + virtual-link-profile-id: datanet + vnfd-id: hackfest_multivdu-vnf + - id: '2' + virtual-link-connectivity: + - constituent-cpd-id: + - constituent-base-element-id: '2' + constituent-cpd-id: vnf-mgmt-ext + virtual-link-profile-id: mgmtnet + - constituent-cpd-id: + - constituent-base-element-id: '2' + constituent-cpd-id: vnf-data-ext + virtual-link-profile-id: datanet + vnfd-id: hackfest_multivdu-vnf ``` ### Modelling specific networking requirements Even though it is not recommended to hard-code networking values (in order to keep the VNF Package as portable as possible), there may be some freedom for doing this at internal VLDs, especially when they are not externally accessible by other VNFs and not directly accessible from the management network. -The **IP Profiles** feature allows us to set some subnet specifics that can become useful. Further IP Profile settings can be found at the [OSM Information Model Documentation](https://osm.etsi.org/wikipub/index.php/OSM_Information_Model). The following VNFD extract can be used as a reference: +The former **IP Profiles** feature, today implemented in SOL006 through the **Virtual-link Profiles** extensions inside the Deployment Flavour (`df`) block, allows us to set some subnet specifics that can become useful. Further settings can be found at the [OSM Information Model Documentation](https://osm.etsi.org/docs/user-guide/11-osm-im.html). The following VNFD extract can be used as a reference: ``` -vnfd:vnfd-catalog: - vnfd: - - ...
- # IP profiles let us set subnet parameters like disabling a default GW - ip-profiles: - - name: ip1 - description: ip1 - ip-profile-params: - ip-version: ipv4 - dns-server: 8.8.8.8 - gateway-address: - subnet-address: 192.168.100.0/24 - dhcp-params: - enabled: true - - # The IP Profile name is then applied at the VLD level - internal-vld: - - id: internal - ip-profile-ref: ip1 - ... -``` - -Specific IP and MAC addresses can also be set, although this practice is not recommended unless we use it in isolated connection points. - -``` -vnfd:vnfd-catalog: - vnfd: - - ... - # A specific IP address can be set at the VLD, it requires the subnet to be predefined by using an IP Profile - internal-vld: - - id: internal - ip-profile-ref: p1 - ... - internal-connection-point: - - id-ref: mgmtVM-internal - ip-address: 192.168.100.100 - ... - vdu: - - id: mgmtVM - ... +vnfd: + description: A VNF consisting of 2 VDUs connected to an internal VL + df: + ... + virtual-link-profile: + - id: internal # internal VLD ID goes here + virtual-link-protocol-data: + l3-protocol-data: + cidr: 192.168.100.0/24 + dhcp-enabled: true +``` - # A specific MAC address can also be set at the interface level - interface: - - ... - mac-address: '01:02:03:01:02:03' +Specific IP and MAC addresses can also be set inside the internal CP block, although this practice is not recommended unless we use it in isolated connection points. + +``` +TODO: Example of setting IP & MAC Addresses with new SOL006 model ``` + +Until then, specific IP addresses can be requested at instantiation time through instantiation parameters, as shown in the Day-1 chapter. ### Building and adding cloud-init scripts #### Cloud-init basics -Cloud-init is normally used for Day-0 operations like: +Cloud-init is normally used for Day-0 operations like: -* Setting a default locale -* Setting an instance hostname -* Generating instance SSH private keys or defining passwords -* Adding SSH keys to a user’s .ssh/authorized_keys so they can log in -* Setting up ephemeral mount points -* Configuring network devices -* Adding users and groups -* Adding files +- Setting a default locale +- Setting an instance hostname +- Generating instance SSH private keys or defining passwords +- Adding SSH keys to a user’s .ssh/authorized_keys so they can log in +- Setting up ephemeral mount points +- Configuring network devices +- Adding users and groups +- Adding files Cloud-init scripts are referenced at the VDU level. These can be defined inline or can be included in the **cloud_init** folder of the VNF package, then referenced in the descriptor. For inline cloud-init definition, follow this: ``` -vnfd:vnfd-catalog: - vnfd: +vnfd: + ... + vdu: - ... - vdu: - - ... - cloud-init: | + cloud-init: | #cloud-config ... ``` @@ -272,13 +292,13 @@ vnfd:vnfd-catalog: For external cloud-init definition, proceed like this: ``` -vnfd:vnfd-catalog: - vnfd: +vnfd: + ... + vdu: - ... - vdu: - - ... - cloud-init-file: cloud_init_filename + cloud-init-file: cloud_init_filename ``` + Its content can have a number of formats, including #cloud-config and bash. For example, any of the following scripts sets a password in Linux. @@ -303,7 +323,7 @@ Additional information about cloud-init can be found in [this documentation](htt #### Parametrizing Cloud-init files -Beginning in OSM version 5.0.3, cloud-init files can be parametrized by using double curly brackets. For example: +Beginning in OSM version 5.0.3, cloud-init files can be parametrized by using double curly brackets. 
For example: ``` #cloud-config @@ -334,89 +354,89 @@ Besides cloud-init being provided as userdata through a metadata service, some V The support for this is available at the VNFD model, as follows: ``` -supplemental-boot-data: - boot-data-drive: 'true' +vnfd: + ... + vdu: + - ... + supplemental-boot-data: + boot-data-drive: 'true' ``` ### Guidelines for EPA requirements -Most EPA features can be specified at the VDU descriptor level as requirements in the `guest-epa` section, which will be then translated to the appropriate request through the VIM connector. Please note that the NFVI should be pre-configured to support these EPA capabilities. +Most EPA features can be specified at the VDU descriptor level as requirements in the `virtual-compute`, `virtual-cpu` and `virtual-memory` descriptors, which will then be translated to the appropriate request through the VIM connector. Please note that the NFVI should be pre-configured to support these EPA capabilities. #### Huge Pages Huge pages are requested as follows: ``` -vnfd:vnfd-catalog: - vnfd: +vnfd: + ... + vdu: - ... - vdu: - - ... - guest-epa: - mempage-size: LARGE - ... + virtual-memory: + mempage-size: LARGE + ... ``` The `mempage-size` attribute can take any of these values: -* LARGE: Require hugepages (either 2MB or 1GB) -* SMALL: Doesn't require hugepages -* SIZE_2MB: Requires 2MB hugepages -* SIZE_1GB: Requires 1GB hugepages -* PREFER_LARGE: Application prefers hugepages +- LARGE: Require hugepages (either 2MB or 1GB) +- SMALL: Doesn't require hugepages +- SIZE_2MB: Requires 2MB hugepages +- SIZE_1GB: Requires 1GB hugepages +- PREFER_LARGE: Application prefers hugepages #### CPU Pinning CPU pinning allows for different settings related to vCPU assignment and hyper threading: ``` -vnfd:vnfd-catalog: - vnfd: +vnfd: + ... + vdu: - ... - vdu: - - ... - guest-epa: - cpu-pinning-policy: DEDICATED - cpu-thread-pinning-policy: AVOID - ... + virtual-cpu: + policy: DEDICATED + thread-policy: AVOID + ... ``` -CPU pinning policy describes association between virtual CPUs in guest and the -physical CPUs in the host. Valid values are: +The CPU pinning policy describes the association between virtual CPUs in the guest and physical CPUs in the host. Valid values are: -* DEDICATED: Virtual CPUs are pinned to physical CPUs -* SHARED: Multiple VMs may share the same physical CPUs. -* ANY: Any policy is acceptable for the VM +- DEDICATED: Virtual CPUs are pinned to physical CPUs +- SHARED: Multiple VMs may share the same physical CPUs. +- ANY: Any policy is acceptable for the VM CPU thread pinning policy describes how to place the guest CPUs when the host supports hyper threads. Valid values are: -* AVOID: Avoids placing a guest on a host with threads. -* SEPARATE: Places vCPUs on separate cores, and avoids placing two vCPUs on two threads of same core. -* ISOLATE: Places each vCPU on a different core, and places no vCPUs from a different guest on the same core. -* PREFER: Attempts to place vCPUs on threads of the same core. +- AVOID: Avoids placing a guest on a host with threads. +- SEPARATE: Places vCPUs on separate cores, and avoids placing two vCPUs on two threads of the same core. +- ISOLATE: Places each vCPU on a different core, and places no vCPUs from a different guest on the same core. +- PREFER: Attempts to place vCPUs on threads of the same core. #### NUMA Topology Awareness This policy defines if the guest should be run on a host with one NUMA node or multiple NUMA nodes. ``` -vnfd:vnfd-catalog: - vnfd: +vnfd: + ... + vdu: - ... - vdu: - - ... 
- guest-epa: - numa-node-policy: - node-cnt: 2 - mem-policy: STRICT - node: - - id: 0 - memory-mb: 2048 - num-cores: 1 - - id: 1 - memory-mb: 2048 - num-cores: 1 - ... + virtual-memory: + numa-node-policy: + node-cnt: 2 + mem-policy: STRICT + node: + - id: 0 + memory-mb: 2048 + num-cores: 1 + - id: 1 + memory-mb: 2048 + num-cores: 1 + ... ``` `node-cnt` defines the number of NUMA nodes to expose to the VM, while `mem-policy` defines if the memory should be allocated strictly from the 'local' NUMA node (STRICT) or not necessarily from that node (PREFERRED). @@ -427,64 +447,70 @@ The rest of the settings request a specific mapping between the NUMA nodes and t Dedicated interface resources can be requested at the VDU interface level. ``` -vnfd:vnfd-catalog: - vnfd: +vnfd: + ... + vdu: - ... - vdu: + int-cpd: - ... - interface: - - name: eth0 - position: '1' - type: EXTERNAL - virtual-interface: - type: SR-IOV + virtual-network-interface-requirement: + - name: eth0 + virtual-interface: + type: SR-IOV ``` Valid values for `type`, which specifies the type of virtual interface between VM and host, are: -* PARAVIRT : Use the default paravirtualized interface for the VIM (virtio, vmxnet3, etc.). -* PCI-PASSTHROUGH : Use PCI-PASSTHROUGH interface. -* SR-IOV : Use SR-IOV interface. -* E1000 : Emulate E1000 interface. -* RTL8139 : Emulate RTL8139 interface. -* PCNET : Emulate PCNET interface. +- PARAVIRT : Use the default paravirtualized interface for the VIM (virtio, vmxnet3, etc.). +- PCI-PASSTHROUGH : Use PCI-PASSTHROUGH interface. +- SR-IOV : Use SR-IOV interface. +- E1000 : Emulate E1000 interface. +- RTL8139 : Emulate RTL8139 interface. +- PCNET : Emulate PCNET interface. ### Managing alternative images for specific VIM types The `image` name specified at the VDU level is expected to be either located at the `images` folder within the VNF package, or at the VIM catalogue. Alternative images can be specified and mapped to different VIM types, so that they are used whenever the VNF package is instantiated over the given VIM type. -In the following example, the `ubuntu1604` image is used by default (any VIM), but a different image is used if the VIM type is AWS. +In the following example, the `ubuntu20` image is used by default (any VIM), but a different image is used if the VIM type is AWS. ``` -vnfd:vnfd-catalog: - vnfd: +vnfd: + ... + vdu: - ... - vdu: - - ... - image: ubuntu1604 - alternative-images: - - vim-type: aws - image: ubuntu/images/hvm-ssd/ubuntu-artful-17.10-amd64-server-20180509 + sw-image-desc: + - id: ubuntu20 + image: ubuntu20 + name: ubuntu20 + - id: ubuntuAWS + image: ubuntu/images/hvm-ssd/ubuntu-artful-17.10-amd64-server-20180509 + name: ubuntuAWS + vim-type: aws ``` ### Updating and Testing Instantiation of the VNF Package -Once the VNF Descriptor has been updated with all the Day-0 requirements, its folder needs to be repackaged. For example, in Linux/UNIX, it would be something like: `tar -cvfz vLB_vnfd.tar.gz vLB_vnfd/` +Once the VNF Descriptor has been updated with all the Day-0 requirements, its folder needs to be repackaged. This can be done with the OSM CLI, using a command that packages, validates, and uploads the package to the catalogue: + +`osm vnfpkg-create [VNF Folder]` -A Network Service package containing at least this single VNF needs to be used to instantiate the VNF. This could be generated with the _devops tool_ described earlier. +A Network Service package containing at least this single VNF needs to be used to instantiate the VNF. This could be generated with the OSM CLI command described earlier.
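For instance, both packages could be validated and uploaded in one go. The following is a sketch that assumes the folder layout produced by the earlier `package-create` example; adjust the paths to your own setup:

```
# Validate, package and upload the VNF package to the OSM catalogue
osm vnfpkg-create /home/ubuntu/vLB_vnf

# Do the same for the NS package that references it
osm nspkg-create /home/ubuntu/vLB_ns
```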
Remember the objectives of this phase: + 1. Instantiating the VNF with all the required VDUs, images, initial (unconfigured) state and NFVI requirements. 2. Making the VNF manageable from OSM (OSM should have SSH access to the management interfaces, for example) To test this out, the NS can be launched using the OSM client, like this: + ``` osm ns-create --ns_name [ns name] --nsd_name [nsd name] --vim_account [vim name] \ --ssh_keys [comma separated list of public key files to inject to vnfs] ``` At launch time, extra **_instantiation parameters_** can be passed so that the VNF can be adapted to the particular instantiation environment or to achieve a proper inter-operation with other VNFs within the specific NS. -More information about these parameters will be revised during the next chapter as part of Day-1 objectives, or can be reviewed [here](https://osm.etsi.org/wikipub/index.php/OSM_instantiation_parameters). +More information about these parameters will be provided in the next chapter as part of the Day-1 objectives, or can be reviewed [here](https://osm.etsi.org/docs/user-guide/05-osm-usage.html?highlight=instantiation%20parameters#advanced-instantiation-using-instantiation-parameters). The following sections will provide details on how to further populate the VNF Package to automate Day 1/2 operations. diff --git a/03-day1.md b/03-day1.md index 272066539ac6ae6da9a82be8fae1edcd96059cdb..1c487e119d6109f74b688375d808929fd89a7ac8 100644 --- a/03-day1.md +++ b/03-day1.md @@ -8,7 +8,7 @@ The main mechanism to achieve this in OSM is to build a Charm and include it in In the VNFD you will find metadata, which is declarative data specified in the YAML file, and code that takes care of the operations related to a VNF. The operations code is called a "Charm", and it can handle the lifecycle, configuration, integration, and actions/primitives in your workloads. -There are two kinds of Charms, and at this point you have to decide which one you need, and that depends on the nature of your workload. These are the two types of Charms: +There are two kinds of Charms, and at this point you have to decide which one you need, which depends on the nature of your workload. These are the two types of Charms: - Proxy Charms - Native Charms @@ -23,17 +23,19 @@ However, if the workload CAN be modified, then the code can live in the same This type of initial actions will run automatically after instantiation and should be specified in the VNF descriptor. These can be defined at two different levels: -* VDU-level: for a specific VDU, used when a VDU needs configuration, which is different than the VDU used for managing the VNF. -* VNF-level: for the "management VDU", used when the configuration applies to the VDU exposing a interface for managing the whole VNF. +- VDU-level: for a specific VDU, used when a VDU needs configuration and it is different from the VDU used for managing the VNF. +- VNF-level: for the "management VDU", used when the configuration applies to the VDU exposing an interface for managing the whole VNF. -**Initial primitives** must include a primitive named `config` that passes information for OSM VCA to be able to authenticate and run further primitives into the VNF. The *config primitive* should provide, at least, the following parameters: +**Initial primitives** must include a primitive named `config` that passes information for OSM VCA to be able to authenticate and run further primitives into the VNF. 
The _config primitive_ should provide, at least, the following parameters: -* `ssh-hostname`: Typically used with the `<rw_mgmt_ip>` variable, which is automatically replaced by the VNF or VDU management IP address specified in the correspondent section. -* `ssh-username`: The username used for authentication with the VDU. +- `ssh-hostname`: Typically used with the `<rw_mgmt_ip>` variable, which is automatically replaced by the VNF or VDU management IP address specified in the corresponding section. +- `ssh-username`: The username used for authentication with the VDU. Additionally, OSM VCA needs the credentials for the authentication to succeed. For that, there are two options: -* Add `ssh-password` in the config initial-config-primitive: A static password -* Add `config-access in the vnf/vdu-configuration: With this method, OSM will inject the public keys generated by the Proxy Charm to the workload. + +- Add `ssh-password` in the config initial-config-primitive: A static password +- Add `config-access` in the vnf/vdu-configuration: With this method, OSM will inject the public keys generated by the Proxy Charm to the workload. + ``` vnf-configuration: config-access: @@ -44,27 +46,36 @@ Additionally, OSM VCA needs the credentials to succeed the authentication. For t > NOTE: Any Charm can provide a set of configuration parameters in a config.yaml file. The value for those parameters should be specified in the `config` initial primitive. -Additional to the *config primitive*, more initial primitives can be run in the desired order so that the VNF initializes its services. Note that each of these additional actions will be later detailed in the proxy charm that implements them. +In addition to the _config primitive_, more initial primitives can be run in the desired order so that the VNF initializes its services. Note that each of these additional actions will be later detailed in the proxy charm that implements them. -The following example shows VNF-level initial primitives: both the expected *config* primitive in the beginning, but also the *configure-remote* and *start-service* to be run in addition right after initialization. +The following example shows VNF-level initial primitives: the expected _config_ primitive in the beginning, plus the _configure-remote_ and _start-service_ primitives to be run right after initialization. ```yaml -vnfd:vnfd-catalog: - vnfd: - - ... - mgmt-interface: - cp: vnf-cp0 - ... +vnfd: +... + df: + - ... + # VNF/VDU Configuration needs to be globally "activated" at the DF + vnf-configuration-id: default-vnf-configuration + + # VNF/VDU Configuration is then described globally vnf-configuration: + - id: default-vnf-configuration + execution-environment-list: + - id: configure-vnf + connection-point-ref: vnf-mgmt + juju: + charm: samplecharm initial-config-primitive: - - name: config + - execution-environment-ref: configure-vnf + name: config parameter: - name: ssh-hostname value: <rw_mgmt_ip> - name: ssh-username value: admin - # - name: ssh-password - # value: secretpassword + - name: ssh-password + value: secretpassword seq: '1' - name: configure-remote parameter: @@ -72,44 +83,10 @@ vnfd:vnfd-catalog: value: 10.1.1.1 seq: '2' - name: start-service - seq: '3' - juju: - charm: samplecharm + seq: '3' ``` -**Instantiation parameters** can be used to define the values of these parameters in a later time, during the NS instantiation. The following example shows a VDU-level parameter with variables. 
Note that when using VDU-level primitives, an interface must be specified as the "management interface" for that specific VDU. - -``` -vnfd:vnfd-catalog: - vnfd: - - ... - vdu: - - ... - interface: - - external-connection-point-ref: vdu1_mgmt - mgmt-interface: true - ... - vdu-configuration: - initial-config-primitive: - - seq: '1' - name: config - parameter: - - name: ssh-hostname - value: - - name: ssh-username - value: admin - - name: ssh-password - value: - - seq: '2' - name: configure-remote - parameter: - - name: dest-ip - value: - - seq: '3' - name: start-service - juju: - charm: samplecharm -``` +**Instantiation parameters** can be used to define the values of these parameters at a later time, during the NS instantiation. Notice that the `connection-point-ref` can be used to map the primitive to any given VDU CP, enabling the possibility of having multiple primitives mapped to different management interfaces of different VDUs. The values for the variables used at the primitive level are defined at instantiation time, just like in the `cloud-init` case: @@ -161,9 +138,9 @@ tags: - nfv subordinate: false series: - - bionic - - xenial -peers: # This will give HA capabilities to your Proxy Charm + - bionic + - xenial +peers: # This will give HA capabilities to your Proxy Charm proxypeer: interface: proxypeer ``` @@ -233,6 +210,7 @@ get-ssh-public-key: ``` Add the following code to `src/charm.py`, which will implement the Day-1 primitives: + > Note: Actions in the Charm can be used in the VNFD for either Day-1 or Day-2 primitives. There's no difference in the Charm. ```python @@ -337,7 +315,7 @@ In the charms.osm library, you can find an SSHProxyCharm library that handles sc self.framework.observe(self.on.start_service_action, self.start_service) ``` -In the initialization of the Charm, we need to observe to start (self.on.start), install(self.on.install), and config_changed (self.on.config_changed) events. Additionally, we need to observe the events for the implemented actions, which have the following format: self.on._action. +In the initialization of the Charm, we need to observe the start (self.on.start), install (self.on.install), and config_changed (self.on.config_changed) events. Additionally, we need to observe the events for the implemented actions, which have the following format: self.on.<action_name>_action (as in self.on.start_service_action above). ```python def on_config_changed(self, event): @@ -376,7 +354,7 @@ export LAYER_PATH=$JUJU_REPOSITORY/layers cd $LAYER_PATH ``` -b) A proxy charm includes, by default, the "VNF" and "basic" layers, which take care of the initial SSH connection to the VNF. Create the new personalized *layer* for your proxy charm: +b) A proxy charm includes, by default, the "VNF" and "basic" layers, which take care of the initial SSH connection to the VNF. Create the new personalized _layer_ for your proxy charm: ``` charm create samplecharm ``` @@ -450,8 +428,8 @@ EOF f) Open the respective file at the 'reactive/' folder. This will be used to code, in Python, the actual actions that will run through SSH when each primitive is triggered. Note that any variable can be recovered in two ways: -* Using the `config()` function if the variable belongs to that specific primitive. -* Using the `action_get('name-of-parameter')` function to get any other parameter. +- Using the `config()` function if the variable belongs to that specific primitive. +- Using the `action_get('name-of-parameter')` function to get any other parameter. The following example provides an idea of the contents of a reactive file. 
@@ -497,9 +475,9 @@ def configure_remote(): @when('actions.start-service') def start_service(): err = '' - # Variables should be retrieved, if needed + # Variables should be retrieved, if needed try: - # Commands to be run through SSH should go here + # Commands to be run through SSH should go here cmd = "sudo service vnfoper start" result, err = charms.sshproxy._run(cmd) except: @@ -507,7 +485,7 @@ def start_service(): else: action_set({'output': result}) finally: - remove_flag('actions.start-service') + remove_flag('actions.start-service') ``` @@ -527,7 +505,7 @@ options: h) Finally, build the charm with `charm build` and copy the resulting folder (in this case the `~/charms/builds/simplecharm` directory) inside the `charms` folder of your VNF Package. -Futher information about building charms can be found [here](https://osm.etsi.org/wikipub/index.php/Creating_your_own_VNF_charm_(Release_THREE)). +Further information about building charms can be found [here](). #### DEPRECATED: Reactive (Method 2): Using Proxy Charm Generators @@ -555,7 +533,7 @@ Once the Ansible playbook has been tested against your VNF, the procedure to inc a) Create your environment and charm in the traditional way (that is, steps (a) and (b) from the previous method) -b) Clone the devops repository elsewhere and copy the generator files to your charm root folder. For example: `cp -r ~/devops/descriptor-packages/tools/charm-generator/* ./` `[TODO: migrate to binary]` +b) Clone the devops repository elsewhere and copy the generator files to your charm root folder. For example: `cp -r ~/devops/descriptor-packages/tools/charm-generator/* ./` `[TODO: migrate to binary]` c) Install the dependencies of the generator with `sudo pip3 install -r requirements.txt` @@ -597,10 +575,9 @@ def playbook(): f) Finally, build the charm with `charm build` and copy the resulting folder (in this case the "~/charms/builds/simplecharm" directory) inside the "charms" folder of your VNF Package. -Once the VNF is launched, the results from running the generator will be found inside the proxy charm *lxc* container, at the "/var/log/ansible.log" file. If not successful, it could indicate the need for other possible modifications which are applicable for certain VNFs. - -**Note**: some VNFs will not pass some SSH pre-checks that Ansible performs in some operations (SFTP, SCP, etc.) In those cases, it has been noted that `ansible_connection=ssh`, which is a default set of the generator, needs to be disabled. This preset would need to be deleted from the `lib/charms/libansible.py` file, `create_hosts` function. `[TODO: explore an enhancement to the Ansible Generator, to be as generic as possible]` +Once the VNF is launched, the results from running the generator will be found inside the proxy charm _lxc_ container, at the "/var/log/ansible.log" file. If not successful, it could indicate the need for other possible modifications which are applicable for certain VNFs. +**Note**: some VNFs will not pass some SSH pre-checks that Ansible performs in some operations (SFTP, SCP, etc.). In those cases, it has been noted that `ansible_connection=ssh`, which is a default setting of the generator, needs to be disabled. This preset would need to be deleted from the `lib/charms/libansible.py` file, `create_hosts` function. `[TODO: explore an enhancement to the Ansible Generator, to be as generic as possible]`
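For reference, a minimal playbook of the kind this generator wraps could look like the following. This is a generic sketch, not tied to any particular VNF; the `vnfoper` service name is simply reused from the reactive example above:

```yaml
# playbook.yaml: tasks executed over SSH against the VNF by the generated charm
- hosts: all
  become: true
  tasks:
    - name: Ensure the VNF service is running
      service:
        name: vnfoper
        state: started
```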
### Testing Instantiation of the VNF Package @@ -615,6 +592,7 @@ osm ns-create --ns_name [ns name] --nsd_name [nsd name] --vim_account [vim name] Furthermore, and as mentioned earlier, extra **_instantiation parameters_** can be passed so that the VNF can be adapted to the particular instantiation environment or to achieve a proper inter-operation with other VNFs into the specific NS. For example, if using IP Profiles to predefine subnet values, a specific IP address could be passed to an interface like this: + ``` osm ns-create ... --config '{vnf: [ {member-vnf-index: "1", internal-vld: [ {name: internal, ip-profile: {...}, internal-connection-point: [{id-ref: id1, ip-address: "a.b.c.d"}] ] } ], additionalParamsForVnf...}' @@ -646,10 +624,10 @@ additionalParamsForVnf: ## Native Charms (In progress) - diff --git a/04-day2.md b/04-day2.md index 27dd064f72417c585039b948f621f44f1b5bccfa..ac52c9155f9e36e4f7cbae113af007e381522100 100644 --- a/04-day2.md +++ b/04-day2.md @@ -4,7 +4,7 @@ The objective of this section is to provide the guidelines for including all the necessary elements in the VNF Package so that it can be operated at runtime and therefore, reconfigured on demand at any point by the end-user. Typical operations include reconfiguration of services, KPI monitoring and the enablement of automatic, closed-loop operations triggered by monitored status. -The main mechanisms to achieve reconfiguration in OSM is to build a Proxy Charm and include it in the descriptor. On the other hand, monitoring and VNF-specific policy management can be achieved by specifying the requirements at the descriptor (modifying monitored indicators and policies at runtime is not supported in OSM as of version 5.0.5) +The main mechanism to achieve reconfiguration in OSM is to build a Proxy Charm and include it in the descriptor. On the other hand, monitoring and VNF-specific policy management can be achieved by specifying the requirements at the descriptor (modifying monitored indicators and policies at runtime is not supported in OSM as of version 9). ## Day-2 Onboarding Guidelines @@ -14,17 +14,23 @@ Day-2 primitives are actions invoked on demand, so the `config-primitive` block For example, a VNF-level set of Day-2 primitives would look like this: -``` -vnfd:vnfd-catalog: - vnfd: - - ... - mgmt-interface: - cp: vnf-cp0 - ... - vnf-configuration: - ... +```yaml +vnfd: +... + df: + - ... + vnf-configuration-id: default-vnf-configuration + + vnf-configuration: + - id: default-vnf-configuration + execution-environment-list: + - id: operate-vnf + connection-point-ref: vnf-mgmt + juju: + charm: samplecharm config-primitive: - - name: restart-service + - execution-environment-ref: operate-vnf + name: restart-service parameter: - name: offset default-value: 10 @@ -33,9 +39,7 @@ vnfd:vnfd-catalog: parameter: - name: force default-value: true - data-type: BOOLEAN - juju: - charm: samplecharm + data-type: BOOLEAN ``` ### Building a Proxy Charm @@ -46,200 +50,156 @@ Proxy charms for implementing Day-2 primitives are built exactly in the same way #### Collecting NFVI metrics -In order to collect NFVI-level metrics associated to any given VDU and store them in the OSM TSDB (using Prometheus software), a set of `monitoring-params` should be declared both globally and at the VDU level. 
+In order to collect NFVI-level metrics associated with any given VDU and store them in the OSM TSDB (using Prometheus software), a set of `monitoring-parameter` entries should be declared at the VDU level. -Only CPU and Memory are supported as of OSM version 5.0.5. For example: +Only CPU, Memory and Network metrics are supported as of OSM version 9. For example: -``` -vnfd:vnfd-catalog: - vnfd: +```yaml +vnfd: + vdu: - ... - vdu: - - id: "apache_vdu" - ... - monitoring-param: - - id: "apache_cpu_util" - # nfvi-metric name should match the supported set of metrics collectable by OSM through the VIM connectors - nfvi-metric: "cpu_utilization" - - id: "apache_memory_util" - nfvi-metric: "average_memory_utilization" - ... - monitoring-param: - - id: "apache_vnf_cpu_util" - name: "apache_vnf_cpu_util" - # only 'AVERAGE' aggregation is supported at this time - aggregation-type: AVERAGE - vdu-monitoring-param: - # vdu-ref should match the id of the VDU - vdu-ref: "apache_vdu" - # vdu-monitoring-param-ref should match the nfvi-metric id - vdu-monitoring-param-ref: "apache_cpu_util" - - id: "apache_vnf_memory_util" - name: "apache_vnf_memory_util" - aggregation-type: AVERAGE - vdu-monitoring-param: - vdu-ref: "apache_vdu" - vdu-monitoring-param-ref: "apache_memory_util" + monitoring-parameter: + - id: vnf_cpu_util + name: vnf_cpu_util + performance-metric: cpu_utilization + - id: vnf_memory_util + name: vnf_memory_util + performance-metric: average_memory_utilization + - id: vnf_packets_sent + name: vnf_packets_sent + performance-metric: packets_sent + - id: vnf_packets_received + name: vnf_packets_received + performance-metric: packets_received ``` #### Collecting VNF indicators -As of OSM version 5.0.5, collection of VNF indicators is done by using Proxy Charms with the *metrics layer*. This is a simple method that has a couple of limitations: - -* Metrics are collected every five minutes and this can't be changed. -* Only positive decimal values of *gauge* or *absolute* types can be collected. - -At the charm level, the only file that needs to be created before building it is the "metrics.yaml" file at the root folder of the charm. - -For example, the following file collects *active users* and *loads* values from a Linux machine. - -``` -# metrics.yaml file -metrics: - users: - type: gauge - description: "# of users" - command: who|wc -l - load: - type: gauge - description: "5 minute load average" - command: cat /proc/loadavg |awk '{print $1}' -``` - -More information on how to populate this file can be found in the Juju [developer metrics](https://docs.jujucharms.com/2.5/en/developer-metrics) documentation. - -Once the charm has been created and included in the VNF Package, the descriptor needs to define the metrics to be actually collected by OSM. As with any charm, this can be done at a VNF or VDU level. - -For example, at the VNF level (a VDU that represents the VNF): - -``` -vnfd:vnfd-catalog: - vnfd: - - ... - mgmt-interface: - cp: vnf-cp0 - ... - vnf-configuration: - ... - juju: - # this is the name of the proxy charm - charm: metricscharm - metrics: - # metric names should match the ones specified at the metrics.yaml file - - name: users - - name: load - ... 
- monitoring-param: - - id: "ubuntuvdu_users" - name: "ubuntuvdu_users" - aggregation-type: AVERAGE - vnf-metric: - # vnf-metric-name-ref should match the metric name specified at VNF/VDU level - vnf-metric-name-ref: "users" - - id: "ubuntuvdu_load" - name: "ubuntuvdu_load" - aggregation-type: AVERAGE - vnf-metric: - vnf-metric-name-ref: "load" +As of OSM version 9, collection of VNF indicators is done by using Prometheus Exporters running as "execution environments", which translate into PODs instantiated in the same K8s cluster where OSM runs. These PODs follow the VNF lifecycle (as charms do) and are dedicated to the collection of metrics. +A first implementation supports SNMP Exporters, to grab scalar values provided by any SNMP MIB/OID. + +At the VNF package level: + +- The only file that needs to be created before building it is the "generator.yaml" file at the `helm-charts/chart_name/snmp/` folder, just as in [this sample VNF Package](https://osm.etsi.org/gitlab/vnf-onboarding/osm-packages/-/tree/master/snmp_ee_vnf), where the chart is called `eechart`. +- Required MIBs should be included in the `helm-charts/chart_name/snmp/mibs` folder. +- The rest of the structure inside the `helm-chart` folder shown in the example above needs to be included. + +The `generator.yaml` file follows the same format as the open-source Prometheus SNMP Exporter project used by OSM, documented [here](https://github.com/prometheus/snmp_exporter/tree/master/generator). +In this example, the interface metrics from IF-MIB are collected, using the "public" SNMP community. + +```yaml +# generator.yaml file +modules: + osm-snmp: + walk: [interfaces] + lookups: + - source_indexes: [ifIndex] + lookup: ifAlias + - source_indexes: [ifIndex] + lookup: ifDescr + - source_indexes: [ifIndex] + # Use OID to avoid conflict with Netscaler NS-ROOT-MIB. + lookup: 1.3.6.1.2.1.31.1.1.1.1 # ifName + auth: + # Community string is used with SNMP v1 and v2. Defaults to "public". + community: public ``` -This other example does the same, but at a specific VDU level: - -``` -vnfd:vnfd-catalog: - vnfd: - - ... - vdu: - - ... - interface: - # remember that for any VDU that has runs a charm, a management interface needs to be specified - - external-connection-point-ref: vdu1_mgmt - mgmt-interface: true - ... - vdu-configuration: - ... - juju: - charm: metricscharm - metrics: - - name: users - - name: load - ... 
- monitoring-param: - - id: "ubuntuvdu_users" - name: "ubuntuvdu_users" - aggregation-type: AVERAGE - vdu-metric: - vdu-ref: "ubuntuvdu1" - vdu-metric-name-ref: "users" - - id: "ubuntuvdu_load" - name: "ubuntuvdu_load" - aggregation-type: AVERAGE - vdu-metric: - vdu-ref: "ubuntuvdu1" - vdu-metric-name-ref: "load" +Once the `generator.yaml` has been created and included in the VNF Package, the descriptor needs to define the helm-based execution environment that will be launched, and run the `generate_snmp` primitive, which compiles the MIBs and builds the SNMP Exporter POD configuration. +```yaml +vnfd: + ... + df: + - ... + vnf-configuration-id: default-vnf-configuration + + vnf-configuration: + - id: default-vnf-configuration + execution-environment-list: + - connection-point-ref: vnf-mgmt + helm-chart: eechart + helm-version: v2 + id: monitor + metric-service: snmpexporter + initial-config-primitive: + - execution-environment-ref: monitor + name: generate_snmp + seq: 2 + config-primitive: + - execution-environment-ref: monitor + name: generate_snmp ``` ### Adding scaling operations -Scaling operations happen at a VDU level and can be added with automatic triggers (*closed-loop* mode triggered by *monitoring-param* thresholds), or with a manual trigger. +Scaling operations happen at the VDU level and can be added with automatic triggers (_closed-loop_ mode triggered by _monitoring-parameter_ thresholds), or with a manual trigger. -In both cases, a `scaling-group-descriptor` section must be added to the VNF descriptor. The following example enables VDU scaling based on a manual trigger (OSM API or CLI). +In both cases, a `scaling-aspect` section must be added to the VNF Deployment Flavour. The following example enables VDU scaling based on a manual trigger (OSM API or CLI). -``` -vnfd:vnfd-catalog: - vnfd: - - ... - scaling-group-descriptor: - - name: "apache_vdu_manualscale" - # the following counts refer to "scaled instances" only - min-instance-count: 0 - max-instance-count: 10 - scaling-policy: - - name: "manual_policy" - scaling-type: "manual" - vdu: - - vdu-id-ref: apache_vdu - count: 1 +```yaml +vnfd: + df: + - ... + scaling-aspect: + - aspect-delta-details: + deltas: + - id: vdu_autoscale-delta + vdu-delta: + - id: hackfest_basic_metrics-VM + number-of-instances: "1" + id: vdu_autoscale + max-scale-level: 1 + name: vdu_autoscale + scaling-policy: + - cooldown-time: 120 + name: cpu_util_above_threshold + scaling-type: manual ``` The following example defines a closed-loop scaling operation based on a specific monitoring parameter threshold. +In this case, the `vdu-profile` should specify both `min-number-of-instances` and `max-number-of-instances` to limit the sum of the original and the scaled instances. -``` -vnfd:vnfd-catalog: - vnfd: - - ... - scaling-group-descriptor: - - name: "apache_vdu_autoscale" - min-instance-count: 0 - max-instance-count: 10 - scaling-policy: - - name: "apache_cpu_util_above_threshold" - scaling-type: "automatic" - threshold-time: 10 - cooldown-time: 120 - scaling-criteria: - - name: "apache_cpu_util_above_threshold" - # this is the name of the monitoring-param to monitor - vnf-monitoring-param-ref: "apache_vnf_cpu_util" - # scale-in threshold - scale-in-threshold: 20 - scale-in-relational-operation: "LT" - # scale-out threshold - scale-out-threshold: 80 - scale-out-relational-operation: "GT" - vdu: - - vdu-id-ref: apache_vdu - count: 1 +```yaml +vnfd: + df: + - ... + vdu-profile: + - ... 
+ max-number-of-instances: "2" + min-number-of-instances: "1" + scaling-aspect: + - aspect-delta-details: + deltas: + - id: vdu_autoscale-delta + vdu-delta: + - id: hackfest_basic_metrics-VM + number-of-instances: "1" # how many instances will be added / removed + id: vdu_autoscale + max-scale-level: 1 + name: vdu_autoscale + scaling-policy: + - cooldown-time: 120 + name: cpu_util_above_threshold + scaling-criteria: + - name: cpu_util_above_threshold + scale-in-relational-operation: LT + scale-in-threshold: 10 + scale-out-relational-operation: GT + scale-out-threshold: 60 + vnf-monitoring-param-ref: vnf_cpu_util + scaling-type: automatic + threshold-time: 10 ``` -More information about scaling can be found in the [OSM Autoscaling documentation](https://osm.etsi.org/wikipub/index.php/OSM_Autoscaling) +More information about scaling can be found in the [OSM Autoscaling documentation](https://osm.etsi.org/docs/user-guide/05-osm-usage.html?highlight=instantiation%20parameters#autoscaling). ### Testing Instantiation of the VNF Package Each of the objectives of this phase can be tested as follows (a combined sketch follows this list): -* **Enabling a way of re-configuring the VNF on demand**: primitives can be called through the OSM API, dashboard, or directly by running the following OSM client command: `osm ns-action [ns-name] --vnf_name [vnf-index] --action_name [primitive-name] --params '{param-name-1: "param-value-1", param-name-2: "param-value-2", ...}` +- **Enabling a way of re-configuring the VNF on demand**: primitives can be called through the OSM API, dashboard, or directly by running the following OSM client command: `osm ns-action [ns-name] --vnf_name [vnf-index] --action_name [primitive-name] --params '{param-name-1: "param-value-1", param-name-2: "param-value-2", ...}'` -* **Monitor the main KPIs of the VNF**: if correctly enabled, metrics will automatically start appearing in the OSM Prometheus database. More information on how to access, visualize and troubleshoot metrics can be found in the [OSM Performance Management documentation](https://osm.etsi.org/wikipub/index.php/OSM_Performance_Management) +- **Monitor the main KPIs of the VNF**: if correctly enabled, metrics will automatically start appearing in the OSM Prometheus database. More information on how to access, visualize and troubleshoot metrics can be found in the [OSM Performance Management documentation](https://osm.etsi.org/docs/user-guide/05-osm-usage.html?highlight=instantiation%20parameters#performance-management). -* **Enabling scaling operations**: automatic scaling should be tested by making the metric reach the corresponding threshold, while manual scaling can be tested by using the following command (which also works when the "scaling-type" has been set to "automatic"): `osm vnf-scale [ns-name] [vnf-name] --scaling-group [scaling-group name] [--scale-in|--scale-out]` +- **Enabling scaling operations**: automatic scaling should be tested by making the metric reach the corresponding threshold, while manual scaling can be tested by using the following command (which also works when the "scaling-type" has been set to "automatic"): `osm vnf-scale [ns-name] [vnf-name] --scaling-group [scaling-group name] [--scale-in|--scale-out]`
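As a combined sketch of both tests, reusing names from the examples above (the NS name `my-ns` is hypothetical; `restart-service` and `vdu_autoscale` come from the Day-2 primitive and scaling-aspect examples in this chapter):

```
# Invoke a Day-2 primitive on member VNF index 1 of a running NS
osm ns-action my-ns --vnf_name 1 --action_name restart-service --params '{offset: "10"}'

# Manually trigger a scale-out of the scaling aspect defined earlier
osm vnf-scale my-ns 1 --scaling-group vdu_autoscale --scale-out
```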
diff --git a/05-basic-examples.md b/05-basic-examples.md index fb1da2b79d6c2a2a184f1e05960ac8eb4597fe5f..f985144f2229eb8cc7b7c8d5184a6c05b5dcb8de 100644 --- a/05-basic-examples.md +++ b/05-basic-examples.md @@ -1,5 +1,7 @@ # Reference NSD/VNFD & Charms +**NOTE: this section uses pre-SOL006 descriptors and will be updated.** + ## Reference NS#1: Testing an endpoint VNF The following network service captures a simple test setup where a VNF is tested with a traffic generator VNF (or a simple VNF/VM with a basic client application). For simplicity, this network service assumes that the VNF under test is the endpoint of a given service (e.g. DNS, AAA, etc.) and does not require special conditions or resource allocation besides the usual in standard cloud environments. @@ -148,4 +150,4 @@ Under the scope of a H2020 project, [5GinFIRE](https://5ginfire.eu/) has develop ## Resources -The template used to create these NS/VNF diagrams is available at: [Reference_NS-VNF_diagrams.pptx](https://drive.google.com/open?id=0B0IUJnTZzp2iUnJUb1JFSGpBRGs) \ No newline at end of file +The template used to create these NS/VNF diagrams is available at: [Reference_NS-VNF_diagrams.pptx](https://drive.google.com/open?id=0B0IUJnTZzp2iUnJUb1JFSGpBRGs) diff --git a/06-walkthrough.md b/06-walkthrough.md index 7df1d08dbc70d4180515202d10bbec6091716ef2..4c6e6c7196ae198bdc5c580833d718dc2a7b7d87 100644 --- a/06-walkthrough.md +++ b/06-walkthrough.md @@ -1,10 +1,12 @@ # VNF Onboarding Walkthrough +**NOTE: this section uses pre-SOL006 descriptors and will be updated.** + ## Introduction This section uses NextEPC (an open-source implementation of a 4G/5G packet core) to go through most of the steps described in the onboarding guidelines, in order to provide a concrete example on how to build a complete VNF Package from scratch. -The example is meant to be used for educational purposes and not for a real-life implementation of an EPC. It may change over time to cover more use cases. A Linux machine is required to follow the complete procedure. +The example is meant to be used for educational purposes and not for a real-life implementation of an EPC. It may change over time to cover more use cases. A Linux machine is required to follow the complete procedure. In addition to the procedure, here you can find some resources related to it: * [Resulting packages](https://osm-download.etsi.org/ftp/Packages/vnf-onboarding-tf/) @@ -25,10 +27,10 @@ The following table describes the components description and associated images. 
-* Restarting the MME daemon
+
+- Restarting the MME daemon
+
 ```
 sudo systemctl restart nextepc-mmed
 ```

 HSS Day-1 operations include:

-* Enabling its additional interface:
+- Enabling its additional interface:
+
 ```
 sudo ip link set ens4 up && sudo dhclient ens4
 ```
-* Replacing the HSS and SPGW IP addresses in the configuration file of the HSS component. We will need to pass both IP addresses as instantiation parameters as well.
+
+- Replacing the HSS and SPGW IP addresses in the configuration file of the HSS component. We will need to pass both IP addresses as instantiation parameters as well.
+
 ```
 sudo sed -i 's/$hss_ip/HSS_IP/g' /etc/nextepc/freeDiameter/hss.conf
 sudo sed -i 's/$spgw_ip/SPGW_IP/g' /etc/nextepc/freeDiameter/hss.conf
 ```
-* Restarting the HSS daemon
+
+- Restarting the HSS daemon
+
 ```
 sudo systemctl restart nextepc-hssd
 ```
@@ -96,11 +108,12 @@ sudo systemctl restart nextepc-hssd

 For Day-2 operations, we will enable the possibility of reconfiguring and monitoring the SPGW.

 - SPGW reconfiguration includes adding static routes. We will need to pass variables for prefix and next-hop.
+
 ```
 sudo route add -net PREFIX gw NEXTHOP
 ```

-- SPGW KPIs will include CPU and Memory metrics collection from the NFVI, through the VIM. 
+- SPGW KPIs will include CPU and memory metrics collection from the NFVI, through the VIM.

 ## Building the VNF Package for Day-0

@@ -169,7 +182,7 @@ osm package-create --base-directory ~/vEPC --image nextepc-spgwmme-base --vcpu 2
        virtual-interface:
          type: PARAVIRT
      external-connection-point-ref: hss-s6a
-...
+...
 ```

 - Modify the s6a connection point to be an 'internal' one, mapped to internal CPs and a VLD to interconnect both VDUs.

@@ -178,9 +191,9 @@ osm package-create --base-directory ~/vEPC --image nextepc-spgwmme-base --vcpu 2
 ...
 vdu:
 -   id: spgwmme
-...
-    interface:
-...
+...
+    interface:
+...
     - name: eth3
       type: INTERNAL
       virtual-interface:
@@ -189,13 +202,13 @@ osm package-create --base-directory ~/vEPC --image nextepc-spgwmme-base --vcpu 2
     internal-connection-point:
     -   id: spgwmme-s6a
         name: spgwmme-s6a
-        type: VPORT
+        type: VPORT
 ...
 vdu:
 -   id: hss
-...
+...
     interface:
-...
+...
     - name: eth1
       type: INTERNAL
       virtual-interface:
@@ -212,7 +225,7 @@ osm package-create --base-directory ~/vEPC --image nextepc-spgwmme-base --vcpu 2
     -   id-ref: spgwmme-s6a
     -   id-ref: hss-s6a
     name: s6a
-...
+...
 ```

 - Modify the external connection points that will be exposed to the Network Service level, and set the management one.

@@ -223,10 +236,10 @@ osm package-create --base-directory ~/vEPC --image nextepc-spgwmme-base --vcpu 2
     - name: spgwmme-mgmt
     - name: spgwmme-s1
     - name: spgwmme-sgi
-    - name: hss-mgmt
+    - name: hss-mgmt
     mgmt-interface:
-        cp: spgwmme-mgmt
-...
+        cp: spgwmme-mgmt
+...
 ```

 - We will set a particular subnet prefix for our internal VLD, to be able to set our own IP addresses at instantiation time.

@@ -241,11 +254,11 @@ osm package-create --base-directory ~/vEPC --image nextepc-spgwmme-base --vcpu 2
         subnet-address: 10.0.6.0/24
         dhcp-params:
           enabled: true
-...
+...
 internal-vld:
 -   id: s6a
     ip-profile-ref: s6a
-...
+...
 ```

 - Now, let's add the EPA requirements to the SPGW VDU.

@@ -257,9 +270,9 @@ osm package-create --base-directory ~/vEPC --image nextepc-spgwmme-base --vcpu 2
     guest-epa:
       cpu-pinning-policy: DEDICATED
       mempage-size: LARGE
-...
-    interface:
-...
+...
+    interface:
+...
     - name: eth1
       type: EXTERNAL
       virtual-interface:
@@ -270,7 +283,7 @@ osm package-create --base-directory ~/vEPC --image nextepc-spgwmme-base --vcpu 2
       virtual-interface:
         type: SRIOV
     external-connection-point-ref: spgwmme-sgi
-...
+...
 ```

 - Finally, let's specify the cloud-init file names in the descriptor, and copy them to the corresponding folder (~/vEPC/vEPC_vnf/cloud_init/); an illustrative file sketch follows the snippet below.

@@ -284,7 +297,7 @@ osm package-create --base-directory ~/vEPC --image nextepc-spgwmme-base --vcpu 2
 vdu:
 -   id: hss
     cloud-init-file: hss-init
-...
+...
 ```
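+
+As a reference, a minimal cloud-init file for the HSS could be created as sketched here (the content is an assumption for illustration; the files shipped with the resulting packages may differ):
+
+```
+# Write an illustrative cloud_init/hss-init file (hostname and credentials are hypothetical)
+cat <<'EOF' > ~/vEPC/vEPC_vnf/cloud_init/hss-init
+#cloud-config
+hostname: hss
+ssh_pwauth: true
+chpasswd:
+  list: |
+    ubuntu:osm4u
+  expire: false
+EOF
+```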
 - At this point, it is ideal to validate if your package has the correct format:

@@ -297,8 +310,8 @@ osm package-validate vEPC/vEPC_vnf/
 ## Building the VNF Package for Day-1

 - Let's start with the SPGW Day-1 operations by populating the descriptor with 'initial config primitives'. The management interface is already set at the "spgwmme-mgmt" CP, so OSM will connect to this machine when defining a configuration at the VNF level. A 'config' primitive is required to pass the parameters for the SSH connection.
-The only parameter that is auto-populated is 'rw_mgmt_ip' (the management IP address), the other ones will need to be provided at instantiation time.
-Note that we are setting names for primitives (operations), order of execution, parameters, and the juju charm that will implement all of them.
+  The only parameter that is auto-populated is 'rw_mgmt_ip' (the management IP address); the other ones will need to be provided at instantiation time (a hypothetical parameters file is sketched after the snippet below).
+  Note that we are setting names for primitives (operations), order of execution, parameters, and the juju charm that will implement all of them.

 ```
 ...
@@ -325,7 +338,7 @@ Note that we are setting names for primitives (operations), order of execution,
         - seq: '3'
           name: restart-spgw
           juju:
-            charm: spgwcharm
+            charm: spgwcharm
 ...
 ```
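+
+One possible way to provide those values, also used later in the KNF walkthrough of this guide, is a YAML file passed at instantiation time with `--config_file`. A hypothetical sketch for this VNF follows (parameter names, index and addresses are assumptions, and the exact mapping to primitive parameters depends on the OSM release):
+
+```
+# Hypothetical instantiation-parameters file for the vEPC VNF
+cat <<'EOF' > ~/vEPC/params.yaml
+additionalParamsForVnf:
+-   member-vnf-index: '1'
+    additionalParams:
+        hss-ip: '10.0.6.20'
+        spgw-ip: '10.0.6.10'
+EOF
+```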
@@ -361,12 +374,12 @@ Note that we are setting names for primitives (operations), order of execution,
             value:
             - name: hss-ip
               data-type: STRING
-              value:
+              value:
         - seq: '3'
           name: restart-hss
           juju:
-            charm: hsscharm
-...
+            charm: hsscharm
+...
 ```

 - Validate your package again, just in case:

@@ -512,7 +525,7 @@ def configure_spgw():
     cmd3='sudo sed -i "\'s/$hss_ip/{}/g\'" /etc/nextepc/freeDiameter/mme.conf'.format(hss_ip)
     charms.sshproxy._run(cmd3)
     cmd4='sudo sed -i "\'s/$spgw_ip/{}/g\'" /etc/nextepc/freeDiameter/mme.conf'.format(spgw_ip)
-    charms.sshproxy._run(cmd4)
+    charms.sshproxy._run(cmd4)
     remove_flag('actions.configure-spgw')

 @when('actions.restart-spgw')
@@ -564,7 +577,7 @@ configure-hss:
   spgw-ip:
     description: "SPGW IP"
     type: string
-    default: "0.0.0.0"
+    default: "0.0.0.0"
 restart-hss:
   description: "Restarts the service of the VNF"
 ```
@@ -594,6 +607,7 @@ except Exception as e:
 EOF
 chmod +x actions/configure-hss
 ```
+
 ```
 # Same procedure for the 'restart-hss' action
 cat <<'EOF' >> actions/restart-hss
@@ -655,11 +669,10 @@ def configure_hss():
 def restart_hss():
     cmd = "sudo systemctl restart nextepc-hssd"
     charms.sshproxy._run(cmd)
-    remove_flag('actions.restart-hss')
+    remove_flag('actions.restart-hss')
 ```

 - Charms need to be built and copied into the package, but we can do this in a later stage, after the Day-2 operations have been defined.
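+
+When that later stage arrives, the build step with the charm-tools workflow assumed by this walkthrough would look roughly like the sketch below (the layer paths are hypothetical; the built charms land under ~/charms/builds/, which is where the copy commands used later pick them up):
+
+```
+# Build both proxy charms from their layer directories (assumes charm-tools is installed)
+cd ~/charms/layers/spgwcharm && charm build
+cd ~/charms/layers/hsscharm && charm build
+```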
-
 ## Building the VNF Package for Day-2

@@ -677,7 +690,7 @@ def restart_hss():
             default-value: '8.8.8.8/32'
         - name: next-hop
           data-type: STRING
-          default-value: '192.168.2.1'
+          default-value: '192.168.2.1'
 ```

 - Back to the SPGW charm folder, modify the actions.yaml file to include this new primitive, adding the respective executable file at the 'actions' folder, and finally adding the primitive at the 'reactive' file.

@@ -691,12 +704,13 @@ add-route:
   external-prefix:
     description: "Destination prefix IP"
     type: string
-    default: "8.8.8.8/32"
+    default: "8.8.8.8/32"
   next-hop:
     description: "SGI next hop"
     type: string
     default: "192.168.2.1"
 ```
+
 ```
 # Populate the executable file for the new action, in the 'actions' folder:
 cat <<'EOF' >> actions/add-route
@@ -721,6 +735,7 @@ except Exception as e:
 EOF
 chmod +x actions/add-route
 ```
+
 ```
 # Fill in the contents of the 'reactive' file (reactive/spgwcharm.py)
 ...
@@ -756,7 +771,7 @@ cp -r ~/charms/builds/hsscharm ~/vEPC/vEPC_vnf/charms/
     -   id: "spgw_cpu_util"
         nfvi-metric: "cpu_utilization"
     -   id: "spgw_memory_util"
-        nfvi-metric: "average_memory_utilization"
+        nfvi-metric: "average_memory_utilization"
 ...
 monitoring-param:
 -   id: "spgw_cpu_util"
@@ -781,17 +796,16 @@ cp -r ~/charms/builds/hsscharm ~/vEPC/vEPC_vnf/charms/
 osm package-validate vEPC/vEPC_vnf/
 ```

-
 ## Testing the VNF Package

 - To test the VNF package, you need to first include it in a NS package, so let's create one.

 ```
-osm package-create --base-directory ~/vEPC --vendor OSM_VNFONB_TF ns vEPC
+osm package-create --base-directory ~/vEPC --vendor OSM_VNFONB_TF ns vEPC
 ```

 - Its content should be similar to the following one, where VLDs are mapped to the external connection points of the VNF. The management external network is expected to be already present at the VIM ('vim-network-name' attribute).
-Note that, for this example, we are setting some subnets and IP address values, to be requested to the VIM's IPAM to match some pre-existing configurations inside the VNF.
+  Note that, for this example, we are setting some subnets and IP address values, to be requested from the VIM's IPAM to match some pre-existing configurations inside the VNF.

 ```
 nsd:nsd-catalog:
@@ -804,7 +818,7 @@ nsd:nsd-catalog:
         version: '1.0'
         constituent-vnfd:
         -   member-vnf-index: 1
-            vnfd-id-ref: vEPC_vnfd
+            vnfd-id-ref: vEPC_vnfd
         ip-profiles:
         -   name: s1
             description: s1 network
@@ -833,7 +847,7 @@ nsd:nsd-catalog:
                 vnfd-connection-point-ref: spgwmme-mgmt
             -   member-vnf-index-ref: 1
                 vnfd-id-ref: vEPC_vnfd
-                vnfd-connection-point-ref: hss-mgmt
+                vnfd-connection-point-ref: hss-mgmt
         -   id: s1
             name: s1
             short-name: s1
@@ -848,7 +862,7 @@ nsd:nsd-catalog:
             name: sgi
             short-name: sgi
             type: ELAN
-            ip-profile-ref: sgi
+            ip-profile-ref: sgi
             vnfd-connection-point-ref:
             -   member-vnf-index-ref: 1
                 vnfd-id-ref: vEPC_vnfd
@@ -925,4 +939,4 @@ osm ns-action --vnf_name 1 --action_name add-route --params '{external-prefix: "

 - Finally, visit the Prometheus GUI at the OSM IP (port 9091), or the Grafana dashboard at port 3000, and look for the 'osm_cpu_utilization' and 'osm_average_memory_utilization' metrics.

-   ![](assets/vnfonbtf_samplevnf_1_prometheus.png)
+   ![](assets/vnfonbtf_samplevnf_1_prometheus.png)
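+
+The same metrics can also be queried from the command line through the standard Prometheus HTTP API (a sketch; replace <osm-ip> with the address of your OSM host):
+
+```
+curl 'http://<osm-ip>:9091/api/v1/query?query=osm_cpu_utilization'
+curl 'http://<osm-ip>:9091/api/v1/query?query=osm_average_memory_utilization'
+```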
diff --git a/07-knfwalkthrough.md b/07-knfwalkthrough.md
index 728355b79c0a87fad60c8b0ab3d17c9064d2fa01..c0d19d412098b55887350bb556c84d8157b1eab0 100644
--- a/07-knfwalkthrough.md
+++ b/07-knfwalkthrough.md
@@ -1,12 +1,14 @@
 # KNF Onboarding Walkthrough (Work in Progress)

+**NOTE: this section uses pre-SOL006 descriptors and will be updated**
+
 ## Introduction

 This section uses Facebook's Magma, an open-source software platform that gives network operators an open, flexible and extendable mobile core network solution.

 ![](assets/magma_overview.png)

-This example focuses on deploying the Magma Orchestrator component as a KNF, and then integrating it with a Magma AGW deployed as a VNF. It has been documented in a concise way while content keeps being added as K8s support is enhanced in OSM. 
+This example focuses on deploying the Magma Orchestrator component as a KNF, and then integrating it with a Magma AGW deployed as a VNF. It has been documented in a concise way, and content keeps being added as K8s support is enhanced in OSM.

 Final packages used throughout this example can be found [here](https://osm-download.etsi.org/ftp/Packages/vnf-onboarding-tf/)

@@ -144,7 +146,7 @@ orc8r-controller ClusterIP 10.233.22.92 8080/
 ...
 ```

-An extra step when your KNF is ready, is to run this command that creates a user for the Magma Orc8r NMS to connect to the Magma Orc8r Controller. 
+An extra step, once your KNF is ready, is to run the command below, which creates a user for the Magma Orc8r NMS to connect to the Magma Orc8r Controller. It has to be executed manually while Day-1/2 primitives are not yet supported in KNFs:

 ```
@@ -159,7 +161,7 @@ Visit the dashboard with HTTPS and access it with user `admin@magma.test` (passw

 ## Testing functionality

-We have prepared a modified Magma AGW, which is the distributed Packet Core component which runs in a single VM, in order to test it together with its Orchestrator (Orc8r KNF) 
+We have prepared a modified Magma AGW (the distributed packet core component, which runs in a single VM) in order to test it together with its Orchestrator (Orc8r KNF).

 You can download the image from [here](https://osm-download.etsi.org/ftp/images/vnf-onboarding-tf/magma101.qcow2.gz). Please "gunzip" it before uploading it to your VIM.

@@ -190,7 +192,7 @@ additionalParamsForVnf:
     orch_net: 'osmnet'
 ```

-c) Launch the AGW. 
+c) Launch the AGW.

 ```
 osm ns-create --ns_name agw01 --nsd_name magma-agw_nsd --vim_account <vim-account> --config_file params.yaml
 ```

@@ -208,4 +210,3 @@ As mentioned before, the charms run some scripts included in this image, which i
 e) When finished, the Magma Orchestrator dashboard will show the registered AGW01. You are ready to integrate some eNodeBs! (emulators to be provided soon!)

 ![](assets/magma_orc8r_dashboard_agw.png)
-
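+
+As a final cross-check from the OSM side, the deployed instance and its status can be inspected with the standard client commands (a usage sketch; `agw01` is the NS name used above):
+
+```
+# List all NS instances and their current status
+osm ns-list
+# Show the details of the AGW instance
+osm ns-show agw01
+```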