OSM Scope and Functionality

OSM Objectives and Scope

The goal of ETSI OSM (Open Source MANO) is the development of a community-driven production-quality E2E Network Service Orchestrator (E2E NSO) for telco services, capable of modelling and automating real telco-grade services, with all the intrinsic complexity of production environments. OSM provides a way to accelerate maturation of NFV technologies and standards, enable a broad ecosystem of VNF vendors, and test and validate the joint interaction of the orchestrator with the other components it has to interact with: commercial NFV infrastructures (NFVI+VIM) and Network Functions (either VNFs, PNFs or Hybrid ones).

OSM’s approach aims to minimize integration efforts thanks to four key aspects:

  1. A well-known Information Model (IM), aligned with ETSI NFV, that is capable of modelling and automating the full lifecycle of Network Functions (virtual, physical or hybrid), Network Services (NS), and Network Slices (NSI), from their initial deployment (instantiation, Day-0, and Day-1) to their daily operation and monitoring (Day-2).

    • In fact, OSM’s IM is completely infrastructure-agnostic, so the same model can be used to instantiate a given element (e.g. a VNF) across a large variety of VIM types and transport technologies, enabling an ecosystem of VNF models ready for deployment anywhere.
  2. OSM provides a unified northbound interface (NBI), based on NFV SOL005, which enables the full operation of the system and of the Network Services and Network Slices under its control. In fact, OSM’s NBI offers the service of managing the lifecycle of Network Services (NS) and Network Slice Instances (NSI), providing as a service all the necessary abstractions to allow complete control, operation and supervision of the NS/NSI lifecycle by client systems, while avoiding the exposure of unnecessary details of their constituent elements.

    OSM’s IM and NS operation via NBI
  3. The extended concept of “Network Service” in OSM, whereby an NS can span the different domains identified (virtual, physical and transport), and therefore control the full lifecycle of an NS by interacting with VNFs, PNFs and HNFs in an indistinguishable manner, along with on-demand transport connections among different sites.

    OSM interaction with different domains
  4. In addition, OSM can also manage the lifecycle of Network Slices, assuming the role of Slice Manager when required, and extending that role to support integrated operation.

Service Platform view

OSM provides the capability of realising one of the main promises derived from NFV and the dynamic capabilities that it brings: creating networks on demand (“Network as a Service” or NaaS) for either their direct exploitation by the service provider or for their potential commercialization to third parties.

In that sense, OSM works as a Network Service Orchestrator (NSO), the manager function of a Network Service Platform (see Service Platform view and Layered Service Architectures for details). It creates network services on demand and returns a service object ID that can be used later as a handler to control the whole lifecycle and operation of the network service via subsequent calls to OSM’s northbound API, and to monitor its global state in a convenient fashion.

In the case of OSM, there are two types of NaaS service objects that OSM is able to provide on demand to support the NaaS capability: the Network Service (NS) and the Network Slice Instance (NSI), the latter being a composition of several Network Services that can be treated as a single entity (particularities of both types of NaaS service objects are described in the next sections).

OSM, as manager function of a service platform, consumes services from other service platforms and controls a number of managed functions in order to create its own composite higher-level service objects. Thus, OSM consumes services provided by the platform(s) in charge of the Virtual Infrastructure (to obtain VMs, etc.) and the platform(s) in charge of the SW-Defined Network (to obtain all the required kinds of inter-DC connections), and, once assembled, configures and monitors the constituent network functions (VNFs, PNFs, HNFs) in order to control the LCM of the entire NS/NSI to be offered on demand.

This view of OSM as part of a service platform architecture for NFV is summarized in the following picture:

OSM in Service Platform view

Services offered Northbound

OSM as provider of Network Services (NS) on demand

The Network Service (NS) is the minimal building block in OSM to manage networks provided as a service. It bundles into one single service object a set of interconnected network functions (VNFs, PNFs and HNFs) that can span different underlying technologies (virtual or physical), locations (e.g. more or less centralized) and geographical areas (e.g. as part of the service of a large multi-national corporate customer).

To effectively enable a “service on demand”, these newly created Network Services are not provisioned as the result of a handcrafted or ad hoc procedure, but as the outcome of a simple and well-known method based on API invocations (to OSM’s NBI) and descriptors following OSM’s Information Model. These descriptors facilitate the creation of Network Services composed of network appliances (VNFs, PNFs or HNFs) from different vendors, so that those appliances (also called Network Functions, or NFs) can come pre-modelled by their provider, and the service provider can focus on modelling the Network Service itself.

Once a Network Service is entirely modelled (in a Network Service Package), the model works effectively as a template that can be particularized (“parametrized”) at NS creation time to incorporate specific attributes for that NS instance, returning a unique NS instance ID that is used to drive LCM operations at a later stage. OSM also puts in place all the necessary abstractions to allow the complete control, operation and supervision of the NS lifecycle (in a normalized and replicable fashion) by the client system (usually, OSS/BSS platforms). This NS instance “handler” is not required to expose unnecessary details of its constituent elements, in order to minimize the impact on the final service of potential changes in the NFs or the NS topology that are not meant to change the actual service offer.

In order to achieve the desired level of flexibility and abstraction, OSM augments the ETSI NFV concept of NS to incorporate the physical and transport domains, enabling real E2E services that extend beyond virtual domains. Thus, it is possible in OSM:

  • To combine in a single NS virtual network functions (VNFs), physical network functions (PNFs) and even network functions composed of both physical and virtual elements (Hybrid Network Functions, or HNFs), more typical of elements closer to the access network.
  • To deploy such an NS across a distributed network and even create inter-site transport connections on demand, leveraging the APIs of SW-Defined Network Platforms.

Both OSS and BSS platforms are expected to be consumers of the NS created on demand by OSM, and sometimes may even keep control of some constituent network functions of the NS if required (quite useful for reusing legacy network nodes without major changes in the OSS). For that reason, OSM also has the capability to selectively delegate the control of specific constituent NFs of the NS to the OSS/BSS platform if explicitly specified in the NS model, giving full freedom to support legacy or hybrid scenarios as desired.

Lifecycle and operation of a Network Service

In the following sections, the stages related to the lifecycle and operation of the NS in the E2E Network Orchestrator are thoroughly discussed and described, so that the API capabilities (and the companion IM) can be better understood in their operational context:

  1. Preparation phase: Modelling
  2. Onboarding
  3. NS creation (day-0 and day-1)
  4. NS operation (day-2)
  5. NS finalization

It must be noted that, although the initial modelling phase is a mere prerequisite, prior to any actual existence of the NS itself (and with no API interactions involved), it is needed to understand the NS lifecycle and the API calls that are available in later phases.

Phase #0: Modelling

OSM’s IM provides mechanisms to capture the complete blueprint of the NS behaviour, including a full description of the NS topology, the lifecycle operations that are enabled, and the NS primitives that are available, along with their automation code. Since Network Services are composed, by definition, of one or several Network Functions (VNFs, PNFs or HNFs) of heterogeneous types and internal behaviours (and likely to come from different providers), the IM provides a means for each provider to describe the internal topology, required resources, procedures and lifecycle of its Network Functions. This information comes bundled in the so-called NF Packages (a schematic sketch of the resulting structure is shown after the list below).

This two-layered modelling approach has several advantages:

  • It prevents the designer of the NS Package (i.e. a Service Provider such as Telefónica) from being directly exposed to NF internals, letting them focus on the composition of the NS itself, based exclusively on the external properties and procedures of the NFs.
  • It enables the consistent and replicable validation of the NFs and their companion NF Packages across the whole supply chain, so that the NF vendor can guarantee that their elements are always used and operated in the appropriate manner.
  • Obviously, the same NF Package can be used in more than one NS with no additional modelling work at NF level.
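
The two-layered structure can be visualized with a minimal, schematic Python sketch; the field names are illustrative assumptions loosely inspired by OSM’s IM (real packages carry YAML descriptors validated against the IM, and exact names vary across IM versions):

```python
# NF Package (vendor-provided): exposes only the NF's external view.
vnfd = {
    "id": "router-vnf",
    "connection-points": ["mgmt", "data"],     # external interfaces only
    "primitives": ["initial-config"],          # actions the NF exposes
}

# NS Package (operator-provided): composes NF Packages by reference,
# without ever touching NF internals.
nsd = {
    "id": "corporate-vpn-ns",
    "constituent-vnfd": ["router-vnf", "firewall-vnf"],
    "virtual-links": [                         # NS topology over NF CPs
        {"id": "wan", "endpoints": ["router-vnf:data", "firewall-vnf:data"]},
    ],
    "ns-primitives": ["add-subscriber"],       # NS-level actions (Phase #3)
}
```

The same vnfd can be referenced by any number of NSDs, which is exactly the reuse property highlighted in the last point above.
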
Phase #1: Onboarding

Once the models are ready, they can be injected into the system, so that they can be used as templates for NS creation later on, in a process known as onboarding.

OSM’s NBI offers API calls supporting CRUD (Create, Read, Update, Delete) operations over the corresponding NS and NF Packages (and NST, when applicable) as independent but related objects, in line with the two-layered modelling approach previously described (which becomes three-layered in the case of Network Slices). In these operations, and particularly in the onboarding step, the necessary checks to validate in-model and cross-model consistency are performed.
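
As a minimal sketch, onboarding can be driven with plain HTTP calls against the NBI; the paths below follow OSM’s SOL005-based NBI, but the host, credentials and file names are placeholders, and the exact endpoints and payloads should be checked against the published OpenAPI spec:

```python
import requests

OSM = "https://osm-host:9999/osm"        # placeholder NBI endpoint

# 1. Obtain a session token (OSM's NBI uses token-based authentication).
#    verify=False only because self-signed certs are common in lab setups.
tok = requests.post(f"{OSM}/admin/v1/tokens",
                    json={"username": "admin", "password": "admin",
                          "project_id": "admin"},
                    verify=False).json()
headers = {"Authorization": f"Bearer {tok['id']}"}

# 2. Onboard an NF Package first, then the NS Package that references it;
#    in-model and cross-model consistency checks run at this step.
for url, pkg in [(f"{OSM}/vnfpkgm/v1/vnf_packages_content", "router_vnf.tar.gz"),
                 (f"{OSM}/nsd/v1/ns_descriptors_content", "corporate_vpn_ns.tar.gz")]:
    with open(pkg, "rb") as f:
        r = requests.post(url, data=f.read(),
                          headers={**headers, "Content-Type": "application/gzip"},
                          verify=False)
    r.raise_for_status()                 # expect 201 Created on success
```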

Phase #2: NS creation (day-0 and day-1)

Once the corresponding NS and NF Packages are successfully onboarded in OSM, everything needed to use them as templates for actual NS creation is in place. Accordingly, OSM offers API calls to support CRUD (Create, Read, Update, Delete) operations on NS instances.

In the case of the NS creation operation (also known as NS instantiation), OSM takes as input an NS Package and, optionally, a set of additional deployment constraints (e.g. target deployment locations for specific VNFs of the NS) and parameters to particularize the NS instance, as explicitly allowed by the NS Package.

During the NS creation, OSM interacts with different service platforms southbound (VIMs and WIMs) and managed functions (NFs) to create the composite service object of the NS instance.
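
Continuing the onboarding sketch above, NS creation reduces to one NBI call returning the NS instance ID; the field names and values here are placeholders following OSM’s NBI, and the status fields polled at the end may differ across OSM releases:

```python
import requests, time

OSM = "https://osm-host:9999/osm"                   # as in the onboarding sketch
headers = {"Authorization": "Bearer <token-id>"}    # token from /admin/v1/tokens

# Create and instantiate the NS in a single call (day-0/day-1).
payload = {
    "nsName": "customer-a-vpn",
    "nsdId": "<nsd-uuid-from-onboarding>",
    "vimAccountId": "<target-vim-uuid>",                 # deployment constraint
    "additionalParamsForNs": {"wan_bandwidth": "100M"},  # package-allowed params
}
r = requests.post(f"{OSM}/nslcm/v1/ns_instances_content",
                  json=payload, headers=headers, verify=False)
ns_id = r.json()["id"]     # the handler used for all later LCM operations

# Poll the instance until instantiation settles.
while True:
    ns = requests.get(f"{OSM}/nslcm/v1/ns_instances/{ns_id}",
                      headers=headers, verify=False).json()
    if ns.get("operational-status") in ("running", "failed"):
        break
    time.sleep(10)
```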

Phase #3: NS operation (day-2)

Once an NS has been successfully created, the NS instance becomes the only relevant object for further operation, lifecycle and assurance actions. The NS/NSI instance can be subject to different types of API-driven operations, which fall into one of these categories:

  • Common Lifecycle operations. There must be a number of API calls that allow triggering well-known standard actions potentially applicable to any NS, such as scaling actions, pausing/resuming, on-demand monitoring requests, SW upgrades, etc.
  • Actions derived from NS primitives. Besides operations potentially applicable to any NS/NSI, each NS/NSI can have a set of operations that are relevant only for the specific functionality that the NS/NSI offers, such as the addition of new subscribers or changes in internal routing. Those actions, enumerated and codified in the corresponding NS Package (leveraging, in turn, the atomic actions available in NF Packages), are exposed by the API as primary actions available in that given NS/NSI (see the sketch after this list).
NS managed as a single entity via NS primitives
  • Although not directly requested by the client system via API, it must be noted that other actions can be internally triggered in the NS as a result of closed-loop policies defined in the NS or NF Packages. Usually, these actions involve the monitoring of some parameter of the NS or the NF and the triggering of one of the aforementioned actions if a given threshold is reached (e.g. automatic scale-out).
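
A minimal sketch of invoking one such NS primitive through the NBI action endpoint follows; the primitive name and its parameters are hypothetical and would be declared in the NS/NF Packages:

```python
import requests

OSM = "https://osm-host:9999/osm"                   # placeholders as before
headers = {"Authorization": "Bearer <token-id>"}
ns_id = "<ns-instance-uuid>"

# Trigger an NS-level primitive declared in the NS Package.
action = {
    "primitive": "add-subscriber",                     # hypothetical primitive
    "primitive_params": {"imsi": "001010123456789", "profile": "gold"},
}
r = requests.post(f"{OSM}/nslcm/v1/ns_instances/{ns_id}/action",
                  json=action, headers=headers, verify=False)
op_id = r.json()["id"]   # LCM operation ID; poll it to track completion
```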

In fact, it is possible in OSM to work with metrics and alarms with great flexibility:

  • Descriptor-defined alarms and metrics defined at VDU/PDU level, but still explicitly correlated with NS/NF instances.
  • Descriptor-defined alarms and metrics related to application-specific (NF or NS) KPIs.
  • On-demand requests to export alarms, events and metrics via the Kafka bus, with smooth integration with the most popular frameworks, including ELK, Prometheus, and Grafana (see the sketch after this list).
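
As an illustration of the export path, a client can subscribe to the Kafka bus directly; the broker address and topic name below are assumptions, since the actual topics depend on the OSM release and monitoring configuration:

```python
import json
from kafka import KafkaConsumer      # pip install kafka-python

# Subscribe to OSM-exported metrics/alarms on the Kafka bus.
consumer = KafkaConsumer("metric_response",                  # assumed topic name
                         bootstrap_servers="osm-host:9094",  # assumed broker
                         value_deserializer=lambda m: json.loads(m.decode()))

for msg in consumer:
    metric = msg.value
    print(metric)   # e.g. forward into a Prometheus/ELK/Grafana pipeline
```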

It must be noted that OSM also allows the management of brownfield scenarios where some elements have to be managed out-of-band by an external/legacy entity.

Phase #4: NS finalization

As with any other on-demand service, it is possible to finalize an NS and release the resources that had been assigned to it, preserving those components that should not be removed (e.g. persistent volumes).

The “delete” call of the API (one of the aforementioned CRUD operations related to an NS) is in charge of triggering that process and reporting on demand on its completion status.
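
In NBI terms this is a single call against the instance handler; as before, the host, token and IDs are placeholders, and the exact path should be checked against the OpenAPI spec:

```python
import requests

OSM = "https://osm-host:9999/osm"                   # placeholders as before
headers = {"Authorization": "Bearer <token-id>"}
ns_id = "<ns-instance-uuid>"

# Terminate the NS instance and release its assigned resources; the
# resulting LCM operation can be polled to report completion on demand.
r = requests.delete(f"{OSM}/nslcm/v1/ns_instances_content/{ns_id}",
                    headers=headers, verify=False)
r.raise_for_status()
```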

OSM as provider of Network Slices

Network Slices and Network Services

ETSI OSM is also capable of providing Network Slices as a service, assuming the role of Slice Manager as per ETSI NFV EVE012 and 3GPP TR 28.801, and extending that role to support the integrated operation of Network Slice Instances (NSI) along with Network Service instances (NS).

The intended use of a Network Slice can be described as a particularization of the NaaS case, more focused on enabling 5G use cases. The 3GPP specification defines a specific type of underlying construct, the Network Slice, which is intended to provide the illusion of separate, specialized networks for different purposes. Unsurprisingly, Network Slices, in practice, operate as a particular kind of Network Service or, more generally, as a set of several Network Services that are treated as a single entity.

3GPP defines the relation between Network Slices and vanilla NS as per ETSI NFV in a very specific manner, where Network Services become the so-called NS subnets of the Network Slice, while the Network Slice with its constituent NS subnets can be deployed and operated as if they were a single entity. The following picture depicts the intended relation between both concepts:

Relation between Network Slices and vanilla ETSI NFV Network Services

Still consistent with the same modelling of a regular NS, these NS subnets can be either exclusive to an upper-level Network Slice (Dedicated NS subnets) or shared between several Slices (Shared NS subnets), such as in the case of the RAN. Likewise, they would have their own lifecycle and operations as any other NS, so no disruption in the modelling is created.

As can be seen, the Network Slice concept defined in 3GPP overlaps almost completely with the concept of “Nested NS” (an NS composed of several NSs) as defined in ETSI NFV, with the only addition being the explicit inclusion of some PNFs and transport connections, features already present in the extended concept of NS that OSM provides. Therefore, the decision to incorporate the Network Slice as a particular case of NS in OSM was rather natural.

Lifecycle and operations of a Network Slice

As previously described, Network Slices operate as a grouping of a set of Network Services, which would become the so-called NS subnets of the Network Slice. The Network Slice with its constituent NS subnets can be deployed and operated as if they were a single entity.

Consistent with the modelling approach followed for NS, these “NS subnets” can be either exclusive to an upper-level Network Slice (Dedicated NS subnets) or shared between several Network Slices (Shared NS subnets), such as in the case of the RAN. Likewise, they would have their own lifecycle and operations as any other NS, so no disruption in the modelling is created.

As with Network Services, 3GPP TR 28.801 describes the lifecycle of the global Network Slice, which comprises the following four phases:

  1. Preparation. In the Preparation phase, the Network Slice (or Network Slice Instance, NSI) does not exist yet. This phase includes the creation and verification of Network Slice Templates (NST), their onboarding, the preparation of the necessary network environment to support the lifecycle of NSIs, and any other preparations needed in the network.
  2. Instantiation, Configuration and Activation. During Instantiation/Configuration, all resources shared with or dedicated to the NSI are created and configured to a state where the NSI is ready for operation. The Activation step includes any actions that make the NSI active (if dedicated to the network slice; otherwise this takes place in the preparation phase). Network slice instantiation, configuration and activation can include the instantiation, configuration and activation of other shared and/or non-shared network functions.
  3. Run-time. In the Run-time phase, the NSI is capable of handling traffic to support communication services. The Run-time phase includes supervision/reporting (e.g. for KPI monitoring), as well as activities related to modification: upgrade, reconfiguration, NSI scaling, changes of NSI capacity, etc.
  4. Decommissioning. The Decommissioning phase includes deactivation (taking the NSI out of active duty) as well as the reclamation of dedicated resources (e.g. termination or re-use of network functions) and the configuration of shared/dependent resources. After decommissioning, the NSI no longer exists.

Two non-mutually exclusive modes of deployment and management to support this lifecycle are feasible in OSM: Full E2E Management (Integrated Modelling) and Standalone Management (Vanilla NFV/3GPP).

Full E2E Management (Integrated Modelling)

In this mode of operation, the Network Slice can be treated as a kind of meta-Network Service and modelled as per the augmented NS lifecycle model described previously, so that OSM also works as Slice Manager (Slice-M). For convenience, the NSI becomes a first-class object in OSM.

Full E2E Management of Network Slices

In this mode, there is a natural match between the different phases of the lifecycle, where the Network Slice Template (NST) and the Network Slice Instance (NSI) play, respectively, the same roles as the NS Package (the template defining an NS) and the NS instance in the general lifecycle of an NS in OSM (see the sketch after the list below):

  • Preparation is comprised of Phase #0 (Modelling) and Phase #1 (Onboarding).
  • Instantiation, Configuration and Activation is equivalent to Phase #2 (NS Creation).
  • Run-time provides a standardized subset of the operations available at Phase #3 (NS Operation).
  • Decommissioning is equivalent to Phase #4 (NS finalization).
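
Under this mapping, slice LCM calls mirror the NS ones, with the NST playing the NSD’s role. A minimal sketch follows, with placeholders throughout; the path follows OSM’s slice-aware NBI, and the exact payload fields should be checked against the OpenAPI spec:

```python
import requests

OSM = "https://osm-host:9999/osm"                   # placeholders as before
headers = {"Authorization": "Bearer <token-id>"}

# Create and instantiate a Network Slice Instance from an onboarded NST,
# exactly as an NS is created from an onboarded NSD.
payload = {
    "nsiName": "embb-slice-customer-a",
    "nstId": "<nst-uuid-from-onboarding>",
    "vimAccountId": "<target-vim-uuid>",
}
r = requests.post(f"{OSM}/nsilcm/v1/netslice_instances_content",
                  json=payload, headers=headers, verify=False)
nsi_id = r.json()["id"]   # handler for the Run-time and Decommissioning phases
```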

There are some obvious advantages of this approach:

  • The Preparation phase is largely simplified, as there is no split in the information models between the 3GPP Slice Manager and the NFV Orchestrator (a “reduced” NSO specialized in virtual components, as defined by ETSI NFV).
  • Day-2 operations are integrated in a single platform and a single northbound interface.
  • Possibility to add custom primitives to a given slice, just as the general NS constructs allow.
  • Packages are generated, by definition, with a multi-vendor scenario in mind.
  • The slices can include non-3GPP related network functions with no need of special integration.

Standalone Management (Vanilla NFV/3GPP)

As in the case of legacy OSS/BSS described for NS operations, it is also possible to let an external standalone system manage the lifecycle of slices (as a standalone Slice Manager) and leverage OSM simply as if it were a vanilla NFV Orchestrator (NFVO), using the regular (non-augmented) SOL005 interface.

Standalone (Vanilla) Management of Network Slices

While this scenario is far less convenient to operate than integrated management, it may be useful for small or vertical slicing deployments, and it is also supported as a fallback option.

In this mode of operation, the management of the slices happens entirely outside OSM, in a separate management element that only leverages the vanilla NFVO capabilities of OSM. From the perspective of OSM, the standalone slice manager looks just like any other OSS.

In consequence, the lifecycle operations of the Slice may require all the additional preparatory and intermediate steps to guarantee an appropriate slice-NS mapping as defined by 3GPP:

  • In this case, during the preparation phase, the resource requirements of an NST are realized by one or more existing Network Service Descriptors (NSD) that have been previously onboarded on the E2E Orchestrator (working as NFVO). The creation of a new NST may require updating an existing NSD, or generating and onboarding a new NSD, if the slice requirements do not map to an already onboarded NSD (i.e. one available in the NSD catalogue). Indeed, multiple Network Slice Instances may be instantiated from the same NSD, delivering exactly the same optimizations and features but dedicated to different enterprise customers. On the other hand, a network slice intended to support totally new customer-facing services is likely to require a new NS and thus the generation of a new NSD.
  • The network slice instantiation step in the second phase needs to trigger the instantiation of the underlying NSs. Vanilla NFV-MANO functions would only be involved in the network slice configuration phase if the configuration of virtualisation-related parameters is required on one or more of the constituent VNF instances.

OSM’s IM and NBI specifications

OSM provides a well-known, complete and thoroughly tested Information Model to facilitate an accurate and sufficient description of the internal topology, procedures and lifecycle of Network Services and Network Slices. ETSI OSM’s IM is openly (and freely) available to every industry player, continuously evolved by a large community of industrial players, and pre-validated in its intended E2E behaviour by the OSM upstream community itself, so that new cloud and application technologies can be taken into account as they emerge and mature. The latest official version of ETSI OSM’s IM is always available as an up-to-date spec in OSM’s documentation and git repos.

On the other hand, OSM’s NBI provides a superset of the ETSI NFV SOL005 API calls, with the addition of E2E NS operation capabilities and the ability to handle Network Slices. Like the IM, the latest official version of OSM’s NBI is openly available in OpenAPI format, and can be used as the authoritative reference for northbound interoperability, even facilitating the automated generation of code for client applications.

Services consumed Southbound

OSM is oriented to consume the services of two kinds of service platforms commonly available in industry:

  • Virtual Infrastructure Platforms, each managed by a Virtual Infrastructure Manager (VIM), which exposes an Or-Vi reference point northbound.
  • Software-Defined Network Platforms, each managed by a WAN Infrastructure Manager (WIM) (often a kind of SDN Controller), which exposes an Or-Wi reference point northbound.

In order to support the variety of alternative industry APIs that implement these reference points, OSM has plugin models for both VIMs and WIMs, so that the whole variety of commercial southbound APIs is supported via corresponding connectors.
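
The shape of such a connector can be sketched as an abstract interface that each vendor plugin implements; the method names below are illustrative assumptions, loosely inspired by the interface of OSM’s RO VIM plugins, not the authoritative plugin API:

```python
from abc import ABC, abstractmethod

class VimConnector(ABC):
    """Normalizes one vendor's VIM API to the Or-Vi calls OSM needs."""

    def __init__(self, url: str, user: str, password: str, tenant: str):
        self.url, self.user, self.password, self.tenant = url, user, password, tenant

    @abstractmethod
    def new_network(self, name: str, net_type: str) -> str:
        """Create a virtual network and return its VIM-side ID."""

    @abstractmethod
    def new_vminstance(self, name: str, image_id: str, flavor_id: str,
                       net_ids: list) -> str:
        """Create a VM attached to the given networks; return its ID."""

    @abstractmethod
    def delete_vminstance(self, vm_id: str) -> None:
        """Release the VM's resources."""

# A concrete plugin (e.g. for OpenStack or AWS) implements these calls with
# the vendor SDK; OSM's core then treats every VIM uniformly.
```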

VIMs as managers of Virtual Infrastructure Platforms

The functionality of providing chunks of the underlying physical resources dynamically is known in industry as “Infrastructure as a Service” or IaaS, and it is the most basic building block that a cloud can offer on demand. In our service platform view, the pool of resources that offers IaaS becomes a Virtual Infrastructure Platform.

The manager function of this platform is the Virtual Infrastructure Manager (VIM), one of the most popular platform managers in production use in industry. To perform this task, the VIM controls a set of compute, memory, storage and network resources and returns them sliced as VMs. Thus, the VIM is in charge of controlling pools of compute nodes (i.e. servers, including their hypervisor SW), virtual networking (vSwitches and, sometimes, physical switches), and storage backends.

VIM as manager of Virtual Infrastructure Platform

In some advanced cases, the VIM can be deployed in conjunction with an SDN Platform, which provides connectivity on demand adapted to the needs of the VIM:

VIM consuming services from an SDN Platform

It must be noted that in both configurations the service exposed by the VIM is indistinguishable from the perspective of the northbound API, and hence to OSM.

Due to the variety of VIM APIs available and the requirement to stay open to future types of technologies, OSM provides a plugin model for the calls needed for the IaaS services required by OSM to instantiate and manage an NS/NSI. As of today, ETSI OSM already supports out of the box (i.e. with no need for additional integration):

  • OpenStack-based VIMs (e.g. Canonical OpenStack, Red Hat OpenStack Platform, WindRiver Titanium Cloud (4 and above), WhiteCloud, SuSE OpenStack, Mirantis OpenStack, ECEE, FusionSphere, etc.)
  • VMware VIO 4 and above.
  • VMware vCloud Director
  • Amazon Web Services (AWS)
  • OpenVIM

SDN Assist

Most VIMs provide the automatic creation of network connectivity for management and signaling interfaces but not for those that are dataplane intensive (use of PF passthrough or SR-IOV). In those cases, the VIM is able to create virtual resources with Enhanced Platform Awareness (EPA) requirements but cannot take care of providing the required underlay (physical) connectivity between them.

In those cases, where the VIM does not natively support the management of underlay networking, OSM is able to supply the missing functionality, handling the underlay connectivity with the help of an SDN controller that manages the fabric to which the compute nodes of the VIM are connected. This unique functionality of OSM, called SDN Assist, enables OSM to:

  • Provide the dataplane connectivity that the VIM is unable to manage.
  • Treat the VIM+SDN Assist combo as if it were a single, augmented VIM, so that, from the user’s perspective, it behaves like a regular, single manager function of a given Virtual Infrastructure Platform.

For this to work properly, a clear delineation between the knowledge and responsibilities of the VIM and the SDN controller is a prerequisite:

  • The VIM will be in charge of deploying the VMs and the overlay networks, and of providing OSM with the information about the compute nodes and interfaces assigned to the VMs.
  • The SDN controller will be responsible for creating the underlay connectivity, taking as boundary conditions the switches and ports to be connected to the same network. The internal switching topology of the datacentre is known by the SDN controller, fed as part of the provisioning activities (i.e. prior to any instantiation process).

In that scenario, OSM keeps the mapping between compute nodes and interfaces at VIM level and the switch ports at SDN controller level.
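
That mapping can be illustrated with a small Python sketch; the data structures, names and the final controller call are assumptions for illustration only:

```python
# Mapping kept by OSM for SDN Assist: VIM-level placement information
# (compute node, interface) to SDN-controller-level switch ports.
port_map = {
    # (compute_node, interface/PCI address) -> (switch, port)
    ("compute-01", "0000:05:00.1"): ("leaf-sw-1", "eth1/3"),
    ("compute-02", "0000:05:00.2"): ("leaf-sw-2", "eth1/7"),
}

def underlay_ports(vim_placements):
    """Translate the placements reported by the VIM into the switch ports
    the SDN controller must stitch into a single underlay network."""
    return [port_map[p] for p in vim_placements]

# Once the VIM reports where each dataplane interface landed, OSM would ask
# the SDN controller to connect the resulting ports, e.g.:
#   sdn_controller.connect(underlay_ports(placements))   # illustrative call
```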

Due to the variety of SDN controllers and the requirement to stay open to future types of technologies, OSM provides a plugin model for the calls needed for SDN Assist, namely the ones required to provide the intra-VIM underlay part (i.e. to assist effectively). As of today, OSM provides SDN Assist plugins for the following families of SDN Controllers:

  • OpenDayLight (ODL)
  • ONOS
  • FloodLight

The following image depicts all the possible Or-Vi schemes, including SDN Assist:

Or-Vi schemes supported by OSM

OpenVIM

OSM also provides a reference VIM with EPA capabilities and underlay support, called OpenVIM, which has been shipped as part of OSM’s releases since Release ONE.

As a reference VIM, OpenVIM is particularly useful in the context of deployments that require high I/O performance and efficiency (leveraging advanced EPA capabilities), deployments at the Edge, where the footprint requirements for a VIM can be very low, and setups where the software in the compute nodes cannot be upgraded as often as the VIM’s control software.

Metrics and alarms

For the different VIMs, OSM supports interaction with different existing frameworks to gather metrics and alarms across Virtual Domain technologies:

  • AODH/Gnocchi for OpenStack
  • VMware vRealize Operations
  • AWS CloudWatch

Following the same approach as in other cases of API diversity, OSM uses a plugin model for VIM metric frameworks, which normalizes the types of metrics and alarms for OSM, and which is easily extensible to support additional frameworks and technologies.

SDN Controllers/WIMs as managers of Software-Defined Network Platforms

Another type of manager function widely available in industry is the SDN Controller (SDNC) and its specialized version for transport connections, the Controller for Software-Defined Transport Networks (SDTN). Whenever these elements are invoked from an NSO such as OSM (as a superset of an NFVO), they can also be referred to as WIMs (WAN Infrastructure Managers). In our service platform view, the pool of connectivity resources offered here on demand becomes a Software-Defined Network Platform.

Similarly to the role of the manager in other platforms, the key function of these WIMs is to provide connections on demand, offering an API to manage their lifecycle and monitor them consistently. One of the key advantages of this approach is that the WIM’s API aims to be largely independent of the specific underlying elements, the network topology underneath and/or the switching technology itself, so these on-demand connections become highly convenient for the client platform and leave a lot of freedom to design and evolve the physical network underneath.

In order to provide that service, the WIMs, as managers of the platform, are in charge of controlling the switching/routing elements underneath and/or invoking other SDN Controllers in lower levels of a hierarchy. Quite often, these switching elements are designed specifically to support SDN operations with some well-known protocols (e.g. OpenFlow, OVSDB, TAPI…), although some traditional means, such as Netconf/YANG, are commonly used as well.

Thus, via SDN/SDTN Controllers it is possible to set up many different types of connections, involving different technologies:

  • Virtual networks for a VIM
  • MPLS connections
  • VPN connections (overlay or with interaction with physical equipment)
  • Inter-DC connections (various types)
  • MAN connections
  • Etc.

Due to the variety of WIM APIs available and the requirement to stay open to future types of technologies, OSM provides a plugin model for the calls needed for the inter-VIM connections required by OSM to instantiate and manage an NS/NSI. As of today, OSM provides plugins to support:

  • TAPI
  • ONOS
  • Dynpac

Following the same approach as in other cases of API diversity, this plugin model is easily extensible to support additional WIM APIs and technologies.
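
As with the VIM side, the WIM connector can be sketched as a small abstract interface; the method names and parameters below are illustrative assumptions rather than the authoritative plugin API:

```python
from abc import ABC, abstractmethod

class WimConnector(ABC):
    """Normalizes a WIM/SDN controller API to the Or-Wi calls OSM needs."""

    @abstractmethod
    def create_connectivity_service(self, service_type: str,
                                    connection_points: list) -> str:
        """Set up an inter-site connection between the given endpoints and
        return a service ID used for later lifecycle and monitoring calls."""

    @abstractmethod
    def delete_connectivity_service(self, service_id: str) -> None:
        """Tear down a previously created connection."""

# Example of the request a TAPI- or ONOS-backed plugin would translate:
#   wim.create_connectivity_service("ELINE", [
#       {"site": "dc-madrid", "port": "ge-0/0/1", "vlan": 101},
#       {"site": "dc-paris",  "port": "ge-0/0/3", "vlan": 101},
#   ])
```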

Configuration and monitoring of Network Functions

Regarding Network Functions (VNFs, PNFs, HNFs), OSM incorporates them in a manner that provides model-driven interaction with NFs through the use of Juju Charms, which allows NF vendors to encapsulate their configuration mechanisms (NETCONF+YANG, Expect, SSH+scripts, Ansible, etc.). This makes PNF (and HNF) management indistinguishable from VNF management in OSM.

Two different kinds of Juju charms are supported:

  • Native charms, when NFs are able to run charms inside. This is particularly interesting for new VNFs or cloud-like VNFs/Apps that already support charms. Interaction with those charms happens directly from the orchestrator.
  • Proxy charms, when NFs do not support running charms inside, which is always true for PNFs. In that case, the proxy charm uses the appropriate configuration protocol to interact with the NF and runs the desired actions for the primitive (a schematic example follows the list).
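
The proxy pattern can be sketched as follows; this is an illustration of the idea, not tied to a specific charm framework version, and the host, credentials and command are placeholders:

```python
import paramiko   # pip install paramiko

def run_primitive(nf_host: str, user: str, key_file: str, command: str) -> str:
    """Execute a configuration action on the NF on behalf of a proxy charm,
    here over SSH (NETCONF, Expect, Ansible, etc. are equally possible)."""
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(nf_host, username=user, key_filename=key_file)
    try:
        _, stdout, _ = ssh.exec_command(command)   # e.g. a CLI config command
        return stdout.read().decode()
    finally:
        ssh.close()

# A native charm would run the same action inside the NF itself; from the
# orchestrator's point of view the primitive is invoked identically.
```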