Commit 98bc7ca8 authored by aticig's avatar aticig Committed by calvinosanc1

Adding a reference section for EPA parameters

parent c126383a
@@ -463,7 +463,7 @@ Detailed documentation is available at <https://snapcraft.io/microstack> and <ht

### Overview

OSM supports EPA (Enhanced Platform Awareness) since Rel ZERO (May 2016). EPA features like use of hugepages memory, CPU pinning, NUMA pinning, and the use of passthrough and SR-IOV interfaces, can be used in OSM's VNF descriptors since then.
OSM has supported EPA (Enhanced Platform Awareness) since Rel ZERO (May 2016), and the use of multiple NUMA nodes since Rel THIRTEEN (Dec 2022); both are already defined in OSM's VNF descriptors. OSM allows you to use EPA features such as memory policy, hugepages memory, NUMA pinning, CPU thread policy, CPU pinning, and resource quotas (CPU, memory, storage and interface). In addition, it supports the use of passthrough and SR-IOV interfaces. Please see the [EPA Parameters](https://osm.etsi.org/docs/user-guide/latest/21-reference.html#EPA-Parameters) section for details of the supported parameters.

If your VIM supports EPA, then you don't need to do anything extra to use it from OSM. VIM connectors in OSM take advantage of EPA capabilities if the VIM supports it. All you need to do is build your descriptors and deploy.

@@ -481,9 +481,9 @@ SDN Assist works as follows to overcome the limitations of the VIM with respect
1. OSM deploys the VMs of a NS in the requested VIM target with Passthrough and/or SR-IOV interfaces.
2. Then it retrieves from the VIM the information about the compute node where the VM was deployed and the physical interfaces assigned to the VM (identified by their PCI addresses).
3. Then, OSM maps those interfaces to the appropriate ports in the switch making use of the mapping that you should have introduced in the system.
4. Finally OSM creates the dataplane networks by instructing the SDN controller and connecting the appropriate ports to the same network.
4. Finally, OSM creates the dataplane networks by instructing the SDN controller and connecting the appropriate ports to the same network.

The module in charge of this worflow OSM's RO (Resource Orchestrator), which is provided transparently to the user. It uses an internal library to manage the underlay connectivity via SDN. The current library includes plugins for FloodLight, ONOS and OpenDayLight.
The module in charge of this workflow is the RO (Resource Orchestrator), which is provided transparently to the user. It uses an internal library to manage the underlay connectivity via SDN. The current library includes plugins for FloodLight, ONOS and OpenDayLight.

#### General requirements

21-reference.md

0 → 100644
# OSM Reference

## EPA Parameters

### Virtual CPU

Pinning Policy

Instance vCPU processes are not assigned to any particular host CPU by default; instead, they float across host CPUs like any other process. This allows features such as CPU overcommitting. In heavily contended systems it maximizes overall utilization, at the expense of the performance and latency of individual instances. Some workloads require real-time or near-real-time behavior, which is not possible with the latency introduced by the default CPU policy. For such workloads, it is beneficial to control which host CPUs are bound to an instance's vCPUs. This process is known as pinning. No instance with pinned CPUs can use the CPUs of another pinned instance, which prevents resource contention between instances. To configure a flavor to use pinned vCPUs, the CPU policy is set to dedicated.

Supported CPU pinning policies are as follows:

- shared: (default) The guest vCPUs will be allowed to freely float across host pCPUs, although they are potentially constrained by NUMA policy.
- dedicated: The guest vCPUs will be strictly pinned to a set of host pCPUs. In the absence of an explicit vCPU topology request, the drivers typically expose all vCPUs as sockets with one core and one thread. When strict CPU pinning is in effect, the guest CPU topology is set up to match the topology of the CPUs to which it is pinned. 
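As an illustration, a dedicated pinning policy could be requested in a VNFD fragment like the one below. This is a sketch only: the `virtual-compute-desc` placement and exact field names are assumptions and may vary between OSM releases, so check the descriptor schema of your release.

```yaml
# Hypothetical SOL006-style VNFD fragment: request dedicated (pinned) vCPUs.
virtual-compute-desc:
  - id: pinned-compute
    virtual-cpu:
      num-virtual-cpu: 4
      pinning:
        policy: dedicated   # "shared" is the default floating behaviour
```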

Thread Policy

The CPU thread pinning policy describes how to place guest vCPUs when the host supports hyper-threading (SMT). Supported CPU thread policies are:

- prefer: (default) Attempts to place vCPUs on threads of the same core. The host may or may not have an SMT architecture. Where an SMT architecture is present, thread siblings are preferred.
- isolate: Places each vCPU on a different core, and places no vCPUs from a different guest on the same core. The host must not have an SMT architecture or must emulate a non-SMT architecture. If the host does not have an SMT architecture, each vCPU is placed on a different core as expected. If the host does have an SMT architecture - that is, one or more cores have thread siblings - then each vCPU is placed on a different physical core. No vCPUs from other guests are placed on the same core. All but one thread sibling on each utilized core is therefore guaranteed to be unusable.
- require: Each vCPU is allocated on thread siblings of the same core. The host must have an SMT architecture. Each vCPU is allocated on thread siblings. If the host does not have an SMT architecture, then that host is not used. If the host has an SMT architecture, but not enough cores with free thread siblings are available, then scheduling fails.
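The thread policy is combined with CPU pinning; a hypothetical descriptor fragment (field names assumed, placement may vary between OSM releases) could look like:

```yaml
# Hypothetical fragment: pinned vCPUs with an explicit thread policy.
virtual-cpu:
  num-virtual-cpu: 4
  pinning:
    policy: dedicated
    thread-policy: ISOLATE   # one vCPU per physical core, siblings left unused
```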

EPA CPU Quota

CPU quota describes the CPU resource allocation policy. Limit and Reserve values are defined in MHz. Please see the [Quota Parameters](https://osm.etsi.org/docs/user-guide/latest/21-reference.html#Quota-Parameters) section for quota details.
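For illustration only, a CPU quota with the limit and reserve values in MHz might be expressed as below; the exact placement of `cpu-quota` in the descriptor is an assumption here.

```yaml
# Hypothetical fragment: CPU quota, values in MHz as described above.
virtual-cpu:
  num-virtual-cpu: 2
  cpu-quota:
    limit: 2000     # never use more than 2000 MHz
    reserve: 1000   # guarantee at least 1000 MHz
```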

### Virtual Interface

EPA VIF Quota

Virtual interfaces quota describes the virtual interface bandwidth resource allocation policy. Limit and Reserve values are defined in Mbps. Please see the [Quota Parameters](https://osm.etsi.org/docs/user-guide/latest/21-reference.html#Quota-Parameters) section for quota details.

### Virtual Memory

NUMA Enabled

Non-Uniform Memory Access or Non-Uniform Memory Architecture (NUMA) is a physical memory design used in SMP (multiprocessor) architectures. Under NUMA, a processor can access its own local memory faster than non-local memory, that is, memory local to another processor or memory shared between processors. The numa-enabled parameter specifies that memory allocation should be cognisant of the relevant process/core allocation. The cardinality can be 0 during the allocation request if no particular value is requested. OSM supports NUMA usage by setting the numa-enabled parameter to True.


NUMA Node Policy

The policy defines the NUMA topology of the guest. Specifically, it identifies whether the guest should run on a host with one NUMA node or multiple NUMA nodes. The numa-node-policy parameters are described below:

```yaml
numa-node-policy:
  - numa-cnt: The number of NUMA nodes to expose to the VM.
  - mem-policy: This policy specifies how the memory should be allocated in a multi-node scenario.
  - node (NUMA node identification):
      - node-id: Id of node. Typically, it's an integer such as 0 or 1 which identifies the nodes.
      - vcpu-id: List of VCPUs to allocate on this NUMA node.
      - memory-mb: Memory size expressed in MB for this NUMA node.
      - om-numa-type: OpenMANO Numa type selection.
          - cores:
            - num-cores: number of cores
          - paired-threads:
            - paired-thread-ids (List of thread pairs to use in case of paired-thread NUMA):
              - thread-a
              - thread-b
          - threads:
            - num-threads: Number of threads
```
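A concrete, purely illustrative instance of the schema above, spreading four vCPUs and 4 GB of memory across two NUMA nodes (all values hypothetical):

```yaml
numa-node-policy:
  numa-cnt: 2
  mem-policy: STRICT
  node:
    - node-id: 0
      vcpu-id: [0, 1]
      memory-mb: 2048
    - node-id: 1
      vcpu-id: [2, 3]
      memory-mb: 2048
```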

Mem-Policy

The memory policy tells the kernel how strictly memory must be allocated from the specified nodes in a NUMA system.

- STRICT: The memory must be allocated strictly from the memory attached to the NUMA node.
- PREFERRED: The memory should be allocated preferentially from the memory attached to the NUMA node.

EPA Mem-page Size

Memory page allocation size. If a VM requires hugepages, it should choose LARGE, SIZE_2MB or SIZE_1GB. If the VM merely prefers hugepages, it should choose PREFER_LARGE.

- LARGE: Require hugepages (either 2MB or 1GB)
- SMALL: Doesn't require hugepages
- SIZE_2MB: Requires 2MB hugepages
- SIZE_1GB: Requires 1GB hugepages
- PREFER_LARGE: Application prefers hugepages
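A memory request combining NUMA awareness and hugepages could be sketched as follows; the `virtual-memory` placement and the `size` field name are assumptions and may differ between OSM releases.

```yaml
# Hypothetical fragment: NUMA-aware memory with 1 GB hugepages.
virtual-memory:
  size: 4            # total guest memory in GB (field name assumed)
  numa-enabled: true
  mempage-size: SIZE_1GB
```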

Memory Quota

Memory quota describes the memory resource allocation policy. Limit and Reserve values are defined in MB. Please see the [Quota Parameters](https://osm.etsi.org/docs/user-guide/latest/21-reference.html#Quota-Parameters) for quota details.

### Virtual Storage

Disk IO Quota

Disk IO quota describes the disk IO operations resource allocation policy. Limit and Reserve values are defined in IOPS. Please see the [Quota Parameters](https://osm.etsi.org/docs/user-guide/latest/21-reference.html#Quota-Parameters) section for quota details.


### Quota Parameters

- limit: Defines the maximum allocation. The value 0 indicates that usage is not limited. This parameter ensures that the instance never uses more than the defined amount of resource.
- reserve: Defines the guaranteed minimum reservation. If needed, the machine will definitely get allocated the reserved amount of resources.
- shares: Number of shares allocated. Specifies the proportional weighted share for the domain. If this element is omitted, the service falls back to the OS-provided defaults.
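Putting the three parameters together, a memory quota (values in MB, per the Memory Quota section above) might be expressed as in this sketch; the exact placement of `mem-quota` in the descriptor is an assumption.

```yaml
# Hypothetical fragment combining the three quota parameters.
mem-quota:
  limit: 4096     # hard cap in MB; 0 would mean unlimited
  reserve: 2048   # guaranteed minimum in MB
  shares: 100     # relative weight under contention
```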

@@ -26,4 +26,5 @@
17. [ANNEX 8: TACACS Based Authentication Support In OSM](18-tacacs-based-authentication.md)
18. [ANNEX 9: LTS Upgrade](19-lts-upgrade.md)
19. [OSM tutorial](20-tutorial.md)
20. [OSM reference](21-reference.md)