# Onboarding Requirements

Each lifecycle stage targets specific configurations in the VNF. These are: management setup during instantiation (Day-0), service initialization right after instantiation (Day-1), and re-configuration during runtime (Day-2). In order to provide a VNF with as many capabilities as possible for each lifecycle stage, the following specific requirements should be addressed.

## Day-0 requirements

During the Day-0 stage, the VNF is instantiated and management access is established so that the VNF can be configured at a later stage. The main requirements to achieve this are:

### Description of each VNF component

The main function of every VNF component (VDU) should be clearly described in order to ease the understanding of the VNF. For example:

| VDU | Description |
| :---: | :---------------------------------- |
| vLB | External frontend and load balancer |
| uMgmt | Universal VNF Manager (EM) |
| sBE | Service Backend of the platform |

### Defining NFVI requirements

These requirements refer to properties such as the number of vCPUs, the GB of RAM and the GB of storage per component, as well as any other resource that the VNF components need from the physical infrastructure. For example:

| VDU | vCPU | RAM (GB) | Storage (GB) | External volume? |
| :---: | :--: | :------: | :----------: | :--------------: |
| vLB | 2 | 4 | 10 | N |
| uMgmt | 1 | 1 | 2 | N |
| sBE | 2 | 8 | 10 | Y |

For some VNFs, the Enhanced Platform Awareness (EPA) characteristics need to be defined when the VNF requires performance capabilities which are "higher than default" or a particular hardware architecture from the NFVI. Popular EPA attributes include:

- Compute performance attributes:
  - CPU Pinning
  - NUMA Topology Awareness
  - Memory Page Size
- Data plane performance attributes:
  - PCI-Passthrough
  - SR-IOV

For example, the vLB and sBE VDUs could require:

- 2 dedicated vCPUs
- Large-size memory pools
- SR-IOV for eth1 and eth2

### Topology and management definition

Ideally, a diagram should be used to quickly identify components and internal/external connections.

![](assets/vnftopology1.png)

Sample descriptor files can be found [here](https://osm.etsi.org/gitlab/vnf-onboarding/osm-packages), while sample topologies can be found [here](https://osm.etsi.org/docs/vnf-onboarding-guidelines/05-basic-examples.html#).

### Images and cloud-init files

The image for each component should be available in the format that corresponds to the main supported hypervisor. Each image should contain the minimal configuration that makes it generic (not scenario-specific), with **no hardcoded parameters** that are relevant to the service. Furthermore, cloud-init files can be used to inject this minimal configuration into the VNF. Some examples:

```
# Cloud-init using cloud-config format
#cloud-config
hostname: vnfc01
chpasswd:
  list: |
    ubuntu:ubuntu
  expire: False
ssh_pwauth: True
```

```
# Cloud-init using bash format for CentOS
#!/bin/bash
hostnamectl set-hostname vnfc01
# Write a DHCP-based configuration for eth1
cat << EOF > /tmp/ipcfg
DEVICE=eth1
BOOTPROTO=dhcp
HWADDR=00:19:D1:2A:BA:A8
ONBOOT=yes
EOF
cp -f /tmp/ipcfg /etc/sysconfig/network-scripts/ifcfg-eth1
systemctl restart network
```

### Identifying the instantiation parameters

The VNF Day-0 configuration may require some parameters to be passed at instantiation time in order to fulfill the needs of the particular environment or of other VNFs in the Network Service. These parameters should be identified as early in the process as possible.
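As an illustration of how instantiation parameters keep the image generic, the following minimal Python sketch renders a Day-0 cloud-init file from a small set of instantiation-time values. The parameter names (`hostname`, `mgmt_gateway`) and the file layout are hypothetical and only serve to show the idea, they are not mandated by any descriptor format:

```
# A Python sketch: rendering a generic cloud-init template with
# instantiation-time parameters (parameter names are illustrative only)
from string import Template

CLOUD_INIT_TEMPLATE = Template("""\
#cloud-config
hostname: $hostname
write_files:
  - path: /etc/vnf/mgmt.conf
    content: |
      MGMT_GATEWAY=$mgmt_gateway
""")

def render_cloud_init(instantiation_params: dict) -> str:
    """Substitute instantiation-time values into the generic template."""
    return CLOUD_INIT_TEMPLATE.substitute(instantiation_params)

if __name__ == "__main__":
    # Example values supplied at instantiation time
    print(render_cloud_init({"hostname": "vnfc01",
                             "mgmt_gateway": "192.168.0.254"}))
```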
## Day-1 requirements

The main objective of the Day-1 stage is to configure the VNF so that it starts providing the expected service. To achieve this, the main requirements are:

### Identifying dependencies between components

This may be required to identify instantiation parameters or special timing requirements. Examples of dependencies between components include:

- Components that need parameters from other components, or from the infrastructure, to complete their configuration.
- Components that depend on others before their configuration can be initialized.

### Defining the required configuration for service initialization

This initial configuration runs automatically after the VNF is instantiated. It should activate the service delivered by the VNF and should initially be prepared in the language that the VNF supports. Once it is defined, it needs to be incorporated into the mechanism that the generic VNF Manager implements. For example:

```
# A Python script (NETCONF/YANG in the example)
from ncclient import manager

config = """ ... """

host = {'name': 'VNF1', 'ip': '192.168.0.1'}
interface_list = ['eth1', 'eth2']

m = manager.connect(host=host['ip'], username='ws', password='ws')

# Push the per-interface configuration to the candidate datastore
for interface in interface_list:
    response = m.edit_config(target='candidate',
                             config=config.format(interface=interface))

# Commit the candidate configuration and close the session
commit = m.commit()
print(commit)
m.close_session()
```

```
# An Ansible playbook (VyOS module in the example)
- hosts: all
  tasks:
    - name: Configure the VNF initial NAT Rules
      vyos_config:
        lines:
          - set nat destination rule 1 inbound-interface eth0
          - set nat destination rule 1 destination port 80
          - set nat destination rule 1 protocol tcp
          - set nat destination rule 1 translation address {{ destination_ip }}
```

### Identifying the need for instantiation parameters

The VNF Day-1 configuration may require some parameters to be passed at instantiation time in order to fulfill the needs of the particular environment or of other VNFs in the Network Service. These parameters should be identified as early in the process as possible.

## Day-2 requirements

The main objectives of the Day-2 stage are to be able to **re-configure** the VNF so that its behavior can be modified during runtime, to monitor its main KPIs, and to run scaling or other closed-loop operations over it. To achieve this, the main requirements are:

### Identifying dependencies between components

This process may be required to identify whether a VNF component needs a parameter coming from another component in order to fulfill runtime operations successfully.

### Defining all possible configurations for runtime operations

The set of configurations should be available to be triggered from the orchestrator during the VNF runtime, either manually by the operator or automatically, based on some state. Once that set of configurations has been defined, it needs to be incorporated into the mechanism that the generic VNF Manager implements. Just as in Day-1, the configurations can be provided as Python scripts, Ansible playbooks, VNF-specific commands that run over SSH, REST API calls, or whatever the VNF makes available to expose its main operations.
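As a minimal sketch of what one of those runtime operations could look like when the VNF exposes a REST API, the following Python function re-configures a NAT rule on a running instance. The endpoint path, payload fields and authentication scheme are assumptions made for illustration; a real VNF defines its own management interface:

```
# A Python sketch of a Day-2 re-configuration primitive exposed over REST.
# The URL, payload and token handling are illustrative assumptions only.
import requests

def update_nat_rule(vnf_mgmt_ip: str, rule_id: int,
                    destination_ip: str, token: str) -> None:
    """Re-configure a NAT rule on a running VNF instance."""
    url = f"https://{vnf_mgmt_ip}/api/v1/nat/rules/{rule_id}"
    payload = {"translation_address": destination_ip}
    # verify=False is only acceptable in lab environments without proper CA certificates
    response = requests.put(url, json=payload,
                            headers={"Authorization": f"Bearer {token}"},
                            timeout=10, verify=False)
    response.raise_for_status()

# Example invocation, e.g. triggered by the orchestrator as a Day-2 action:
# update_nat_rule("192.168.0.1", 1, "10.0.0.5", token="...")
```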
### Defining key performance indicators

The metrics that are relevant to the VNF should be specified, whether they are to be collected from the infrastructure (through the VIM) or directly from the VNF (or its Element Manager, through any API, MIB or command that the VNF exposes). Some examples include:

- Metrics typically collected from the VIM/NFVI:
  - CPU Usage
  - Memory Usage
  - Network activity (bandwidth, drops, etc.)
  - Storage consumption
- Metrics collected from the VNF/EM (examples):
  - Active transactions/sessions/connections
  - Active users
  - Size of the database or a particular table
  - Application status

### Defining closed-loop operations

Closed-loop operations are actions triggered by the status of a particular metric. The main use cases include:

- Auto-scaling: a VNF component scales horizontally (out/in) to match the current demand. Some typical definitions that must be clear are:
  - How the VNF will load-balance the traffic once it scales.
  - Which components should scale, in what quantity, and based on which metric threshold or status.
  - How much time the system should wait between scaling requests (illustrated in the sketch after this list).
- Auto-healing: a VNF component is re-instantiated, reloaded or reconfigured based on a service status. Some typical definitions that must be clear are:
  - Under which conditions the system should trigger an auto-healing action.
  - Which elements should be affected and at what level (re-instantiation, hard reload, soft reload, process restart, etc.).
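To make the auto-scaling definitions above concrete, the sketch below shows simple threshold-plus-cooldown decision logic in Python. The metric, threshold, cooldown value and `scale_out` callback are illustrative assumptions; in practice this policy is declared to the orchestrator (e.g. in the VNF descriptor) rather than coded by hand:

```
# A Python sketch of threshold-based scale-out with a cooldown period.
# Metric names, thresholds and the scale_out() callback are assumptions.
import time

SCALE_OUT_THRESHOLD = 80.0   # e.g. % CPU usage on the sBE component
COOLDOWN_SECONDS = 300       # minimum time between scaling requests

_last_scale_time = 0.0

def maybe_scale_out(cpu_usage: float, scale_out) -> bool:
    """Trigger scale-out when the metric exceeds the threshold and the
    cooldown has expired. Returns True if a scaling request was issued."""
    global _last_scale_time
    now = time.monotonic()
    if cpu_usage > SCALE_OUT_THRESHOLD and (now - _last_scale_time) > COOLDOWN_SECONDS:
        scale_out()              # e.g. ask the orchestrator for one more sBE instance
        _last_scale_time = now
        return True
    return False
```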