+++ /dev/null
-# Affinity and anti-affinity rules for VNF deployment #
-
-## Proposer ##
-- Gerardo Garcia (Telefonica)
-- Alfonso Tierno (Telefonica)
-- Francisco Javier Ramon (Telefonica)
-
-## Type ##
-**Feature**
-
-## Target MDG/TF ##
-SO, RO
-
-## Description ##
-OSM should provide means to specify that different VDUs of the same VNF should
-be deployed in different availability zones of the same datacenter. This
-behaviour is often required by VNFs that implement active-standby resiliency
-strategies.
-
-Current OSM's IM might need to be extended to support this feature.
-
-## Demo or definition of done ##
-- Add a datacenter to OSM specifying more than one availability zone.
-- Add a second datacenter with only one zone.
-- Onboard a VNF with 2 VDUs, with a VNFD that mandates that these VDUs must be
-deployed in different availability zones.
-- Onboard an NS which includes the VNF above.
-- Instantiate the NS in the datacenter with 2 availability zones and check that
-the VDUs are deployed as expected.
-- Instantiate the NS in the datacenter with one zone and check that it fails.
\ No newline at end of file
+++ /dev/null
-# Support of multi-VDU VNFs #
-
-## Proposer ##
-- Gerardo Garcia (Telefonica)
-- Alfonso Tierno (Telefonica)
-- Francisco Javier Ramon (Telefonica)
-
-## Type ##
-**Feature**
-
-## Target MDG/TF ##
-SO, VCA
-
-## Description ##
-Many common use cases involve VNFs that are composed of several types of VDUs
-that need to be scaled and operated independently but still belong to the same
-VNF and, hence, need to be completely described in the VNF Package, not at NS
-level.
-
-Some common examples are composite VNFs (such as IMS or EPC) or simple but
-decomposed VNFs, such as a vRouter whose control and data planes are
-intended to run in separate VMs.
-
-## Demo or definition of done ##
-- A VNF Package of a VNF composed of more than one type of VDU (e.g. 2 types
-of VDUs: control and data plane) is successfully onboarded in the system. The
-set of primitives available is unified at VNF level.
-- The VNF is properly deployed, with the VMs interconnected as specified in the
-descriptor and the VNF as a whole can be operated with VNF primitives.
-
-**This feature might obsolete feature #655 _partially_:
-https://osm.etsi.org/gerrit/#/c/655/**
\ No newline at end of file
+++ /dev/null
-# Juju Asynchronous API #
-
-## Proposer ##
-- Adam Israel
-
-## Type ##
-**Feature**
-
-## Target MDG/TF ##
-SO
-
-## Description ##
-
-As of R2, the SO uses the older synchronous Juju API. This can cause the SO
-to appear "frozen" while it blocks waiting for a Juju operation to complete.
-
-To address this issue, we would like the SO to switch to the
-[libjuju](https://github.com/juju/python-libjuju) Python library. It offers
-the following benefits:
-- Asynchronous, using the asyncio and async/await features of Python 3.5+
-- Websocket-level bindings are programmatically generated (indirectly) from the
-Juju golang code, ensuring full API coverage
-- Provides an OO layer which encapsulates much of the websocket API and
-provides familiar nouns and verbs (e.g. Model.deploy(), Application.add_unit(),
-etc.)
-
-## Demo or definition of done ##
-Demos of using libjuju can be found in the
-[quickstart](https://github.com/juju/python-libjuju#quickstart) and in the
-[examples](https://github.com/juju/python-libjuju/tree/master/examples) folder.
-
-Here is a simple implementation that deploys a charm:
-```python
-#!/usr/bin/python3.5
-
-import logging
-
-from juju import loop
-from juju.model import Model
-
-
-async def deploy():
- # Create a Model instance. We need to connect our Model to a Juju api
- # server before we can use it.
- model = Model()
-
- # Connect to the currently active Juju model
- await model.connect_current()
-
- # Deploy a single unit of the ubuntu charm, using revision 0 from the
- # stable channel of the Charm Store.
- ubuntu_app = await model.deploy(
- 'ubuntu-0',
- application_name='ubuntu',
- series='xenial',
- channel='stable',
- )
-
-    # Disconnect from the API server and clean up. Note that disconnect()
-    # is a coroutine and must be awaited.
-    await model.disconnect()
-
-
-def main():
- # Set logging level to debug so we can see verbose output from the
- # juju library.
- logging.basicConfig(level=logging.DEBUG)
-
- # Quiet logging from the websocket library. If you want to see
- # everything sent over the wire, set this to DEBUG.
- ws_logger = logging.getLogger('websockets.protocol')
- ws_logger.setLevel(logging.INFO)
-
- # Run the deploy coroutine in an asyncio event loop, using a helper
- # that abstracts loop creation and teardown.
- loop.run(deploy())
-
-
-if __name__ == '__main__':
- main()
-```
+++ /dev/null
-# Metrics Collection #
-
-## Proposer ##
-- Adam Israel
-
-## Type ##
-**Feature**
-
-## Target MDG/TF ##
-VCA, DevOps, SO
-
-## Description ##
-Operators prefer to make data-driven decisions. To aid this, I propose that we
-begin collecting operational metrics from VNFs that make them available, via
-push (collectd, statsd) or pull ([Prometheus](https://prometheus.io/), etc.).
-
-This metric data should be collected to a central data store. Once collected,
-reports can be generated for the operator to view, and operational rules can be
-defined with regard to notifications, auto-scaling and auto-healing.
-
-
-## Demo or definition of done ##
-- Selection of supported collection method(s)
-- Creation of an OSM VNF METRICS charm layer to aid metrics collection
-- The option to install metric collection during the installation process.
-- Exposing a metrics dashboard (if supported by the selected collection
-  method(s)), or the definition of one or more reports to view relevant
-  metrics data.
+++ /dev/null
-# OSM Kubernetes Support #
-
-## Proposer ##
-- Prithiv Mohan (Intel)
-
-## Type ##
-**Feature**
-
-## Target MDG/TF ##
-SO, RO, VCA, IM
-
-## Supported VIMs ##
-
-1. OpenStack
-2. VMware
-3. AWS
-
-## Description ##
-
-Enable a container-based VNF deployment model via OSM that allows container and VM deployment
-models to coexist within a network service. Kubernetes should be enabled as the container
-orchestration engine, but it must be done in a pluggable manner that permits other container
-orchestration engines to be added in the future.
-
-The Kubernetes pods can be created either on a baremetal node or inside a VM. In either case, the
-Resource Orchestrator (RO) is responsible only for the resource allocation.
-
-For a VM-based Kubernetes deployment, the VM image could have the kubelet node agent already
-installed. In this case, the RO creates a VM and the VCA takes care of the configuration and the
-LCM of the pods. If the image used by the RO to provide a VM has no kubelet installed, the VCA
-module will handle the installation and configuration of Kubernetes. The Resource Orchestrator
-will not be involved in the installation of any Kubernetes packages.
-
-The options for baremetal provisioning include, but are not limited to, OpenStack Ironic and
-Canonical Metal as a Service (MaaS). An additional or extended plugin is required in the Resource
-Orchestrator for the baremetal provisioning of resources for Kubernetes.
-
-The proposed workflow below assumes Ironic/MaaS as the options for baremetal provisioning:
-
-  i. The user creates a descriptor with VNFD updates that indicate the deployment model for the
-     VNF (container on bare metal, container in a VM, or VM deployment).
- ii. SO passes the request to RO.
- iii. RO receives the request.
- iv a. In case of a Kubernetes deployment inside the VM, RO creates a VM with/without kubelet
- installed in it based on the type of the image.
- b. Alternatively, in the case of a Kubernetes baremetal deployment, RO communicates with
- the MaaS/Ironic plugin. The plugin talks to the MaaS and/or Ironic to create the
- Kubernetes pods.
- v. VCA does the installation and/or configuration of the VM for Kubernetes.
-
-Changes to the descriptors in the IM repo are required. These are expected to show up fairly
-seamlessly in the UI due to the model-driven nature of the platform.
-
-There is a choice in how the baremetal nodes or VMs are prepared to support Kubernetes. They
-could be prepared beforehand, or Kubernetes could be configured on them once they have been
-allocated.
-
-The installation of Kubernetes and the configuration of the Kubernetes Controller will not be
-handled by OSM, as it is out of scope. The configuration could be handled by Juju charms.
-
-## Demo or definition of done ##
-
-* Creation of Kubernetes Pods is successful
-* Usual lifecycle operations succeed on an NS that includes VM-based and container-in-VM VNFs.
-* Supports multiple VIM environments.
-
-Links:
-
-1. https://kubernetes.io/
-2. https://kubernetes.io/docs/getting-started-guides/ubuntu/installation/
-3. https://www.ubuntu.com/kubernetes
--- /dev/null
+# Affinity and anti-affinity rules for VNF deployment #
+
+## Proposer ##
+- Gerardo Garcia (Telefonica)
+- Alfonso Tierno (Telefonica)
+- Francisco Javier Ramon (Telefonica)
+
+## Type ##
+**Feature**
+
+## Target MDG/TF ##
+SO, RO
+
+## Description ##
+OSM should provide means to specify that different VDUs of the same VNF should
+be deployed in different availability zones of the same datacenter. This
+behaviour is often required by VNFs that implement active-standby resiliency
+strategies.
+
+Current OSM's IM might need to be extended to support this feature.
+
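+As an illustration only, one possible shape for such an IM extension is
+sketched below as a Python dict; the `affinity-group` fields are hypothetical
+and not part of the current OSM IM.
+
+```python
+# Hypothetical VNFD fragment: two VDUs tied to an anti-affinity group whose
+# scope is the availability zone. Field names are illustrative only.
+vnfd_fragment = {
+    "id": "active_standby_vnf",
+    "vdu": [
+        {"id": "vdu_active", "affinity-group": "ha-pair"},
+        {"id": "vdu_standby", "affinity-group": "ha-pair"},
+    ],
+    # The RO would translate this group into the VIM's placement mechanism
+    # (e.g. distinct availability zones) when allocating the VDUs.
+    "affinity-group": [
+        {"id": "ha-pair", "policy": "anti-affinity", "scope": "availability-zone"}
+    ],
+}
+```
+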
+## Demo or definition of done ##
+- Add a datacenter to OSM specifying more than one availability zone.
+- Add a second datacenter with only one zone.
+- Onboard a VNF with 2 VDUs, with a VNFD that mandates that these VDUs must be
+deployed in different availability zones.
+- Onboard an NS which includes the VNF above.
+- Instantiate the NS in the datacenter with 2 availability zones and check that
+the VDUs are deployed as expected.
+- Instantiate the NS in the datacenter with one zone and check that it fails.
\ No newline at end of file
--- /dev/null
+# Support of multi-VDU VNFs #
+
+## Proposer ##
+- Gerardo Garcia (Telefonica)
+- Alfonso Tierno (Telefonica)
+- Francisco Javier Ramon (Telefonica)
+
+## Type ##
+**Feature**
+
+## Target MDG/TF ##
+SO, VCA
+
+## Description ##
+Many common use cases involve VNFs that are composed of several types of VDUs
+that need to be scaled and operated independently but still belong to the same
+VNF and, hence, need to be completely described in the VNF Package, not at NS
+level.
+
+Some common examples are composite VNFs (such as IMS or EPC) or simple but
+decomposed VNFs, such as a vRouter whose control and data planes are
+intended to run in separate VMs.
+
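+As an illustration, a two-VDU vRouter described at VNF level might look like
+the following sketch (a Python dict with simplified, hypothetical field names,
+not the exact OSM IM schema):
+
+```python
+# Illustrative only: control and data plane as separate VDU types, with the
+# internal VL and the unified primitives described in the VNF Package itself.
+vrouter_vnfd = {
+    "id": "vrouter_vnf",
+    "vdu": [
+        {"id": "control_plane", "count": 1, "interfaces": ["mgmt", "internal"]},
+        {"id": "data_plane", "count": 2, "interfaces": ["internal", "data"]},
+    ],
+    # Internal VL interconnecting the two VDU types, described at VNF level
+    # rather than at NS level.
+    "internal-vld": [
+        {"id": "internal",
+         "connection-points": ["control_plane:internal", "data_plane:internal"]}
+    ],
+    # Primitives are exposed at VNF level, regardless of which VDU type
+    # actually implements each of them.
+    "vnf-configuration": {"config-primitive": [{"name": "reload-config"}]},
+}
+```
+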
+## Demo or definition of done ##
+- A VNF Package of a VNF composed of more than one type of VDU (e.g. 2 types
+of VDUs: control and data plane) is successfully onboarded in the system. The
+set of primitives available is unified at VNF level.
+- The VNF is properly deployed, with the VMs interconnected as specified in the
+descriptor and the VNF as a whole can be operated with VNF primitives.
+
+**This feature might obsolete feature #655 _partially_:
+https://osm.etsi.org/gerrit/#/c/655/**
\ No newline at end of file
+++ /dev/null
-# Support of multi-segment VIM-managed networks #
-
-## Proposer ##
-- Gerardo Garcia (Telefonica)
-- Alfonso Tierno (Telefonica)
-- Francisco Javier Ramon (Telefonica)
-
-## Type ##
-**Feature**
-
-## Target MDG/TF ##
-RO
-
-## Description ##
-A VIM, with the help of an SDN controller, can allow the creation of
-multi-segment networks that enable seamless connectivity between legacy VLAN
-domains (e.g. external networks, aka provider networks), SR-IOV or passthrough
-interfaces, and VIRTIO interfaces.
-
-This can be done, for instance, by deploying a VXLAN gateway on the physical
-switches, which must support VXLAN encapsulation and HWVTEP functionality. The
-gateway would be created by the SDN controller, which is controlled by the VIM.
-In the case of OpenStack, network configuration is performed using neutron’s
-multi-segment networks and the L2GW service plugin.
-
-This feature will modify the way networks are created in a VIM, assuming that
-the VIM supports multi-segment networks. There are three cases:
-1. Networks where only VIRTIO interfaces are connected
-2. Networks where only dataplane interfaces (SR-IOV or passthrough) are
-connected
-3. Networks where a mix of VIRTIO and dataplane interfaces are connected
-
-In case 1, it is foreseen that the network is created as a single-segment
-network. In cases 2 and 3, it is foreseen that the network will be created as
-a multi-segment network. The reason is that VLAN identifiers are a scarce
-resource, while VXLAN Network Identifiers are not. Allocating a VLAN for case 1
-could imply running out of VLANs very quickly, since networks for case 1 are
-typical for cloud application workloads.
-
-In cases 2 and 3, it is foreseen that it would be possible to connect, a
-posteriori, new elements to the already created multi-segment networks.
-
-## Demo or definition of done ##
-To be added.
+++ /dev/null
-# Detection of VIM events not directly related to VDU infra metrics #
-
-## Proposer ##
-- Gianpietro Lavado (Whitestack)
-
-## Type ##
-**Feature**
-
-## Target MDG/TF ##
-MON
-
-## Description ##
-
-Detection of VIM events not directly related to VDU infra metrics (broken VMs,
-links/networks down, etc.)
-
-OSM should be able to detect the following VIM events:
-1. VM state change (shutdown, pause, not present, etc.)
-2. Network changes (deletion, disconnection, etc.)
-3. Volume changes (detachments, deletion, etc.)
-4. VIM status (API connectivity)
-
-## Demo or definition of done ##
-The following events should make MON log a message related to the NS and publish an event on the bus.
-- A VM is powered down at the VIM interface.
-- A network is disconnected from a VDU.
-- A volume is detached from a VDU.
-- The VIM API is not available.
--- /dev/null
+# Metrics Collection #
+
+## Proposer ##
+- Adam Israel
+
+## Type ##
+**Feature**
+
+## Target MDG/TF ##
+VCA, DevOps, SO
+
+## Description ##
+Operators prefer to make data-driven decisions. To aid this, I propose that we
+begin collecting operational metrics from VNFs that make them available, via
+push (collectd, statsd) or pull ([Prometheus](https://prometheus.io/), etc.).
+
+This metric data should be collected to a central data store. Once collected,
+reports can be generated for the operator to view, and operational rules can be
+defined with regard to notifications, auto-scaling and auto-healing.
+
+
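+As a sketch of the pull-based option, the snippet below uses the
+prometheus_client Python library to expose a single VNF metric over HTTP;
+the metric name and the sampled value are illustrative.
+
+```python
+#!/usr/bin/python3
+import random
+import time
+
+from prometheus_client import Gauge, start_http_server
+
+# A gauge that a real charm layer would set from actual VNF counters.
+active_sessions = Gauge('vnf_active_sessions', 'Number of active VNF sessions')
+
+if __name__ == '__main__':
+    # Serve the /metrics endpoint on port 8000 for Prometheus to scrape.
+    start_http_server(8000)
+    while True:
+        # Stand-in for reading a real counter from the VNF.
+        active_sessions.set(random.randint(0, 100))
+        time.sleep(15)
+```
+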
+## Demo or definition of done ##
+- Selection of supported collection method(s)
+- Creation of an OSM VNF METRICS charm layer to aid metrics collection
+- The option to install metric collection during the installation process.
+- Exposing a metrics dashboard (if supported by the selected collection
+  method(s)), or the definition of one or more reports to view relevant
+  metrics data.
+++ /dev/null
-# Ability to provide real-time feedback in CLI and GUI upon request
-
-## Proposer
-
-- Gerardo Garcia (Telefonica)
-- Alfonso Tierno (Telefonica)
-- Francisco Javier Ramon (Telefonica)
-
-## Type
-
-**Feature**
-
-## Target MDG/TF
-
-IM-NBI, LCM, CLI, LW-GUI (optional)
-
-## Description
-
-Currently, OSM's CLI (and OSM's GUI to some extent) follows a similar behaviour
-to OSM's NBI, returning control to the user immediately after a request has
-been made, although the request is still being processed by OSM behind the
-scenes, so the user needs to poll the system (i.e. by running commands) to
-learn when the request has been completely processed. While this behaviour is
-reasonable as regular practice, it still presents some limitations in terms of
-usability:
-
-- It makes it more complicated to follow the progress of a given operation or
-to tell whether it is stuck at any step, particularly when the requested
-operation is complex (e.g. instantiating a large NS).
-- It makes it even more complicated to relate the progress of the operation
-to other asynchronous events (e.g. error messages) that may happen during the
-operation and better explain its progress and/or results.
-
-The current proposal is to add a `--wait` option to selected CLI commands so
-that they do not return control immediately, but keep it until the
-operation is completed, meanwhile reporting the progress of the operation and,
-if appropriate, relevant events that may facilitate diagnosis/troubleshooting.
-
-As a minimum, the commands for NS instantiation and NST instantiation should
-support this option. Likewise, LW-GUI might support the same mode for
-equivalent operations.
-
-## Demo or definition of done
-
-Possibility to use the `--wait` option for an NS/NSI creation that keeps
-control and reports the progress of the operation in terms of:
-
-- VMs/VDUs created, VLs created and bar of progress (e.g. "VM 7/24: 'CTRL_1'
-VDU of 'Router_7' VNF created").
-- Stage of instantiation where the process is (e.g. "starting NS day-1",
-"Starting 'Router_2' day-1", etc.).
-- Timely reporting of any other relevant event that might occur in the middle
-of the operation.
-
--- /dev/null
+# Support of multi-segment VIM-managed networks #
+
+## Proposer ##
+- Gerardo Garcia (Telefonica)
+- Alfonso Tierno (Telefonica)
+- Francisco Javier Ramon (Telefonica)
+
+## Type ##
+**Feature**
+
+## Target MDG/TF ##
+RO
+
+## Description ##
+A VIM, with the help of an SDN controller, can allow the creation of
+multi-segment networks that enable seamless connectivity between legacy VLAN
+domains (e.g. external networks, aka provider networks), SR-IOV or passthrough
+interfaces, and VIRTIO interfaces.
+
+This can be done, for instance, by deploying a VXLAN gateway on the physical
+switches, which must support VXLAN encapsulation and HWVTEP functionality. The
+gateway would be created by the SDN controller, which is controlled by the VIM.
+In the case of OpenStack, network configuration is performed using neutron’s
+multi-segment networks and the L2GW service plugin.
+
+This feature will modify the way networks are created in a VIM, assuming that
+the VIM supports multi-segment networks. There are three cases:
+1. Networks where only VIRTIO interfaces are connected
+2. Networks where only dataplane interfaces (SR-IOV or passthrough) are
+connected
+3. Networks where a mix of VIRTIO and dataplane interfaces are connected
+
+In case 1, it is foreseen that the network is created as a single-segment
+network. In cases 2 and 3, it is foreseen that the network will be created as
+a multi-segment network. The reason is that VLAN identifiers are a scarce
+resource, while VXLAN Network Identifiers are not. Allocating a VLAN for case 1
+could imply running out of VLANs very quickly, since networks for case 1 are
+typical for cloud application workloads.
+
+In cases 2 and 3, it is foreseen that it would be possible to connect, a
+posteriori, new elements to the already created multi-segment networks.
+
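+As a reference for cases 2 and 3 on OpenStack, the sketch below shows the kind
+of request body the RO could send to neutron (POST /v2.0/networks) to create a
+multi-segment network combining a VXLAN segment for VIRTIO traffic with a VLAN
+segment for SR-IOV; the physical network name and segmentation ID are
+illustrative.
+
+```python
+# Sketch of a neutron multi-segment network request body.
+multi_segment_network = {
+    "network": {
+        "name": "dataplane-net",
+        "segments": [
+            # Segment reachable by VIRTIO interfaces via the overlay.
+            {"provider:network_type": "vxlan"},
+            # Segment reachable by SR-IOV/passthrough interfaces; the L2GW
+            # service plugin stitches it to the VXLAN segment on the switches.
+            {
+                "provider:network_type": "vlan",
+                "provider:physical_network": "physnet1",
+                "provider:segmentation_id": 1234,
+            },
+        ],
+    }
+}
+```
+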
+## Demo or definition of done ##
+To be added.
--- /dev/null
+# Detection of VIM events not directly related to VDU infra metrics #
+
+## Proposer ##
+- Gianpietro Lavado (Whitestack)
+
+## Type ##
+**Feature**
+
+## Target MDG/TF ##
+MON
+
+## Description ##
+
+Detection of VIM events not directly related to VDU infra metrics (broken VMs,
+links/networks down, etc.)
+
+OSM should be able to detect the following VIM events:
+1. VM state change (shutdown, pause, not present, etc.)
+2. Network changes (deletion, disconnection, etc.)
+3. Volume changes (detachments, deletion, etc.)
+4. VIM status (API connectivity)
+
+## Demo or definition of done ##
+The following events should make MON log a message related to the NS and publish an event on the bus (a sketch of such a bus message follows the list below).
+- A VM is powered down at the VIM interface.
+- A network is disconnected from a VDU.
+- A volume is detached from a VDU.
+- The VIM API is not available.
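+
+A minimal sketch, assuming the Kafka-based OSM bus and a hypothetical topic
+and message schema, of how MON could publish such an event:
+
+```python
+import json
+
+from kafka import KafkaProducer  # kafka-python
+
+# Topic name and message fields are illustrative, not an agreed schema.
+producer = KafkaProducer(
+    bootstrap_servers='kafka:9092',
+    value_serializer=lambda v: json.dumps(v).encode('utf-8'),
+)
+producer.send('vim_events', {
+    'event_type': 'vm_state_change',   # e.g. shutdown, pause, not present
+    'vim_account': 'openstack-site-1',
+    'ns_id': 'affected-ns-uuid',
+    'vdu_id': 'affected-vdu-uuid',
+    'detail': 'VM powered down at the VIM',
+})
+producer.flush()
+```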
+++ /dev/null
-# Interoperability with Azure public clouds #
-
-## Proposer ##
-- Alfonso Tierno (Telefonica)
-- Gerardo Garcia (Telefonica)
-- Francisco Javier Ramon (Telefonica)
-
-## Type ##
-
-**Feature**
-
-## Target MDG/TF ##
-
-RO
-
-## Description ##
-
-Currently OSM can deploy VNFs in VIMs of the following types: VMware,
-OpenStack, OpenVIM and AWS.
-With this feature the public cloud portfolio will be extended to Azure.
-This will ease the adoption and testing of OSM. Azure public clouds
-could be part of multi-site deployments in production.
-
-In practice, this feature would require interacting with Azure, in a
-similar way as it is done with AWS.
-
-## Demo or definition of done ##
-
-Deploy an example NSD with at least one VM and one private network in the
-Azure public cloud.
--- /dev/null
+# Juju Asynchronous API #
+
+## Proposer ##
+- Adam Israel
+
+## Type ##
+**Feature**
+
+## Target MDG/TF ##
+SO
+
+## Description ##
+
+As of R2, the SO uses the older synchronous Juju API. This can cause the SO
+to appear "frozen" while it blocks waiting for a Juju operation to complete.
+
+To address this issue, we would like the SO to switch to the
+[libjuju](https://github.com/juju/python-libjuju) Python library. It offers
+the following benefits:
+- Asynchronous, using the asyncio and async/await features of Python 3.5+
+- Websocket-level bindings are programmatically generated (indirectly) from the
+Juju golang code, ensuring full API coverage
+- Provides an OO layer which encapsulates much of the websocket API and
+provides familiar nouns and verbs (e.g. Model.deploy(), Application.add_unit(),
+etc.)
+
+## Demo or definition of done ##
+Demos of using libjuju can be found in the
+[quickstart](https://github.com/juju/python-libjuju#quickstart) and in the
+[examples](https://github.com/juju/python-libjuju/tree/master/examples) folder.
+
+Here is a simple implementation that deploys a charm:
+```python
+#!/usr/bin/python3.5
+
+import logging
+
+from juju import loop
+from juju.model import Model
+
+
+async def deploy():
+ # Create a Model instance. We need to connect our Model to a Juju api
+ # server before we can use it.
+ model = Model()
+
+ # Connect to the currently active Juju model
+ await model.connect_current()
+
+ # Deploy a single unit of the ubuntu charm, using revision 0 from the
+ # stable channel of the Charm Store.
+ ubuntu_app = await model.deploy(
+ 'ubuntu-0',
+ application_name='ubuntu',
+ series='xenial',
+ channel='stable',
+ )
+
+    # Disconnect from the API server and clean up. Note that disconnect()
+    # is a coroutine and must be awaited.
+    await model.disconnect()
+
+
+def main():
+ # Set logging level to debug so we can see verbose output from the
+ # juju library.
+ logging.basicConfig(level=logging.DEBUG)
+
+ # Quiet logging from the websocket library. If you want to see
+ # everything sent over the wire, set this to DEBUG.
+ ws_logger = logging.getLogger('websockets.protocol')
+ ws_logger.setLevel(logging.INFO)
+
+ # Run the deploy coroutine in an asyncio event loop, using a helper
+ # that abstracts loop creation and teardown.
+ loop.run(deploy())
+
+
+if __name__ == '__main__':
+ main()
+```
--- /dev/null
+# Ability to provide real-time feedback in CLI and GUI upon request
+
+## Proposer
+
+- Gerardo Garcia (Telefonica)
+- Alfonso Tierno (Telefonica)
+- Francisco Javier Ramon (Telefonica)
+
+## Type
+
+**Feature**
+
+## Target MDG/TF
+
+IM-NBI, LCM, CLI, LW-GUI (optional)
+
+## Description
+
+Currently, OSM's CLI (and OSM's GUI to some extent) follows a similar behaviour
+to OSM's NBI, returning control to the user immediately after a request has
+been made, although the request is still being processed by OSM behind the
+scenes, so the user needs to poll the system (i.e. by running commands) to
+learn when the request has been completely processed. While this behaviour is
+reasonable as regular practice, it still presents some limitations in terms of
+usability:
+
+- It makes it more complicated to follow the progress of a given operation or
+to tell whether it is stuck at any step, particularly when the requested
+operation is complex (e.g. instantiating a large NS).
+- It makes it even more complicated to relate the progress of the operation
+to other asynchronous events (e.g. error messages) that may happen during the
+operation and better explain its progress and/or results.
+
+The current proposal is to add a `--wait` option to selected CLI commands so
+that they do not return control immediately, but keep it until the
+operation is completed, meanwhile reporting the progress of the operation and,
+if appropriate, relevant events that may facilitate diagnosis/troubleshooting.
+
+As a minimum, the commands for NS instantiation and NST instantiation should
+support this option. Likewise, LW-GUI might support the same mode for
+equivalent operations.
+
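+A minimal sketch of the client-side behaviour behind `--wait`, assuming a
+hypothetical `get_operation` helper that queries the NBI for the operation
+record:
+
+```python
+import sys
+import time
+
+
+def wait_for_operation(get_operation, op_id, poll_interval=5, timeout=3600):
+    """Block until the operation completes, printing progress as it changes.
+
+    get_operation is a hypothetical helper returning a dict with an
+    'operationState' field and a human-readable 'stage' field.
+    """
+    deadline = time.time() + timeout
+    last_stage = None
+    while time.time() < deadline:
+        op = get_operation(op_id)
+        stage = op.get('stage')
+        if stage and stage != last_stage:
+            print(stage)  # e.g. "VM 7/24: 'CTRL_1' VDU of 'Router_7' VNF created"
+            last_stage = stage
+        if op['operationState'] in ('COMPLETED', 'FAILED'):
+            return op['operationState']
+        time.sleep(poll_interval)
+    sys.exit("Timeout waiting for operation {}".format(op_id))
+```
+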
+## Demo or definition of done
+
+Possibility to use the `--wait` option for an NS/NSI creation that keeps
+control and reports the progress of the operation in terms of:
+
+- VMs/VDUs created, VLs created and bar of progress (e.g. "VM 7/24: 'CTRL_1'
+VDU of 'Router_7' VNF created").
+- Stage of instantiation where the process is (e.g. "starting NS day-1",
+"Starting 'Router_2' day-1", etc.).
+- Timely reporting of any other relevant event that might occur in the middle
+of the operation.
+
--- /dev/null
+# OSM Kubernetes Support #
+
+## Proposer ##
+- Prithiv Mohan (Intel)
+
+## Type ##
+**Feature**
+
+## Target MDG/TF ##
+SO, RO, VCA, IM
+
+## Supported VIMs ##
+
+1. OpenStack
+2. VMware
+3. AWS
+
+## Description ##
+
+Enable a container-based VNF deployment model via OSM that allows container and VM deployment
+models to coexist within a network service. Kubernetes should be enabled as the container
+orchestration engine, but it must be done in a pluggable manner that permits other container
+orchestration engines to be added in the future.
+
+The Kubernetes pods can be created either on a baremetal node or inside a VM. In either case, the
+Resource Orchestrator (RO) is responsible only for the resource allocation.
+
+For a VM-based Kubernetes deployment, the VM image could have the kubelet node agent already
+installed. In this case, the RO creates a VM and the VCA takes care of the configuration and the
+LCM of the pods. If the image used by the RO to provide a VM has no kubelet installed, the VCA
+module will handle the installation and configuration of Kubernetes. The Resource Orchestrator
+will not be involved in the installation of any Kubernetes packages.
+
+The options for baremetal provisioning include, but are not limited to, OpenStack Ironic and
+Canonical Metal as a Service (MaaS). An additional or extended plugin is required in the Resource
+Orchestrator for the baremetal provisioning of resources for Kubernetes.
+
+The proposed workflow below assumes Ironic/MaaS as the options for baremetal provisioning:
+
+  i. The user creates a descriptor with VNFD updates that indicate the deployment model for the
+     VNF (container on bare metal, container in a VM, or VM deployment).
+ ii. SO passes the request to RO.
+ iii. RO receives the request.
+ iv a. In case of a Kubernetes deployment inside the VM, RO creates a VM with/without kubelet
+ installed in it based on the type of the image.
+ b. Alternatively, in the case of a Kubernetes baremetal deployment, RO communicates with
+ the MaaS/Ironic plugin. The plugin talks to the MaaS and/or Ironic to create the
+ Kubernetes pods.
+ v. VCA does the installation and/or configuration of the VM for Kubernetes.
+
+Changes to the descriptors in the IM repo are required. These are expected to show up fairly
+seamlessly in the UI due to the model-driven nature of the platform.
+
+There is a choice in how the baremetal nodes or VMs are prepared to support Kubernetes. They
+could be prepared beforehand, or Kubernetes could be configured on them once they have been
+allocated.
+
+The installation of Kubernetes and the configuration of the Kubernetes Controller will not be
+handled by OSM, as it is out of scope. The configuration could be handled by Juju charms.
+
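+For illustration, once a cluster has been provisioned and configured, creating
+a pod is a plain Kubernetes API call; the sketch below uses the official
+kubernetes Python client, with an illustrative kubeconfig path and image name.
+
+```python
+from kubernetes import client, config
+
+# Load credentials for the cluster provisioned on the VM/baremetal node
+# (kubeconfig path is illustrative).
+config.load_kube_config(config_file='/etc/osm/kube/config')
+
+v1 = client.CoreV1Api()
+pod = client.V1Pod(
+    metadata=client.V1ObjectMeta(name='vnf-pod'),
+    spec=client.V1PodSpec(containers=[
+        client.V1Container(name='vnf', image='example/vnf:latest'),
+    ]),
+)
+v1.create_namespaced_pod(namespace='default', body=pod)
+```
+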
+## Demo or definition of done ##
+
+* Creation of Kubernetes Pods is successful
+* Usual lifecycle operations succeed on an NS that includes VM-based and container-in-VM VNFs.
+* Supports multiple VIM environments.
+
+Links:
+
+1. https://kubernetes.io/
+2. https://kubernetes.io/docs/getting-started-guides/ubuntu/installation/
+3. https://www.ubuntu.com/kubernetes
--- /dev/null
+# Support of multiple VCA instances and a platform-independent API in N2VC
+
+## Proposers
+
+- Gerardo Garcia (Telefonica)
+- Alfonso Tierno (Telefonica)
+- Francisco Javier Ramon (Telefonica)
+- Adam Israel (Canonical)
+- José Antonio Quiles (Indra)
+
+## Type
+
+Feature
+
+## Target MDG/TF
+
+N2VC, LCM
+
+## Description
+
+Currently, OSM assumes that the management networks of all the VNF instances are remotely accessible
+by OSM from outside the datacenter, either directly or via floating IPs, so that the VCA can
+successfully drive its operations with the VNFs. While this way of working has many advantages in
+telco clouds, it might be somewhat rigid in hybrid deployments where e.g. public clouds are involved.
+In fact, this imposes some networking conditions on OSM so that the VCA can access all the
+management networks of each target VIM.
+
+In addition, the current N2VC library is platform-dependent. Most of the API calls are juju-specific
+instead of providing a higher level interface.
+
+While this shouldn't be an issue from the point of view of the actual behaviour of the OSM platform
+as a whole, it has three main drawbacks:
+
+- Architecturally speaking, it breaks the general principles of abstraction and layering, letting
+  VCA terminology and ways of thinking percolate northbound.
+- It complicates the implementation of features in LCM, since LCM needs to understand all the internals
+ of each call and figure out the actual state of N2VC and VCA internals.
+- It locks the workflow in LCM, which cannot be modified without changing N2VC. As a result, almost
+  every new feature impacting N2VC (support of native charms, secure key management, support of
+  relations, NS primitives) forces a change in the LCM workflow, as current workflows are entirely
+  VCA-oriented (rather than LCM-oriented) due to the aforementioned problem in layering.
+
+This feature is intended to redefine the N2VC API to restore the layering, make it truly
+platform-independent (agnostic of VCA terminology), and make it able to support multiple VCA
+instances. It will require the implementation of a generic class in N2VC, defining those API calls
+and the expected behaviour (a possible shape is sketched below, after the premises).
+
+The main premises of the new N2VC API would be the following:
+
+- **High-level abstraction API.** The current platform-dependent objects managed by N2VC (machines,
+ charms, models, applications) will be replaced by a new set of generic platform-independent objects
+ (execution environment, configuration SW, namespaces).
+- Keep a decoupling among the elements to be configured, the configuration SW and the execution
+ environment, so that the workflows in current VCA can be reproduced.
+- **API calls should be defined in such a way that things can be done in parallel.** For instance, it
+  is not necessary to wait for the RO to finish the instantiation before making the VCA create the
+  execution environments, or vice versa.
+- **Layering and isolation.** LCM workflow (sequence of N2VC calls) could change in the future and it
+ should not impact N2VC or vice-versa.
+- **Transparency in the progress of operations and states.** N2VC should be able to directly update
+  the Mongo records associated with the operation in progress.
+
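+As an illustration of the intended level of abstraction, a possible shape of
+the generic class is sketched below; the method names are hypothetical and
+only meant to show platform-independent nouns and verbs.
+
+```python
+from abc import ABC, abstractmethod
+
+
+class N2VCConnector(ABC):
+    """Hypothetical platform-independent N2VC API; a Juju-backed subclass
+    would map these calls onto models/applications/charms internally."""
+
+    @abstractmethod
+    async def create_execution_environment(self, namespace, db_dict):
+        """Create an execution environment in the given namespace, updating
+        the Mongo record referenced by db_dict as the operation progresses."""
+
+    @abstractmethod
+    async def install_configuration_sw(self, ee_id, artifact_path):
+        """Install the configuration SW (e.g. a charm) in an existing
+        execution environment."""
+
+    @abstractmethod
+    async def exec_primitive(self, ee_id, primitive_name, params):
+        """Run a named primitive in the execution environment and return
+        its result."""
+
+    @abstractmethod
+    async def delete_execution_environment(self, ee_id):
+        """Tear down the execution environment and release its resources."""
+```
+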
+## Demo or definition of done
+
+All DEVOPS tests using VNF/NS packages with charms should work in the same way as they are working today.
+
--- /dev/null
+# Interoperability with Azure public clouds #
+
+## Proposer ##
+- Alfonso Tierno (Telefonica)
+- Gerardo Garcia (Telefonica)
+- Francisco Javier Ramon (Telefonica)
+
+## Type ##
+
+**Feature**
+
+## Target MDG/TF ##
+
+RO
+
+## Description ##
+
+Currently OSM can deploy VNFs in VIMs of the following types: VMware,
+OpenStack, OpenVIM and AWS.
+With this feature the public cloud portfolio will be extended to Azure.
+This will ease the adoption and testing of OSM. Azure public clouds
+could be part of multi-site deployments in production.
+
+In practice, this feature would require interacting with Azure, in a
+similar way as it is done with AWS.
+
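+A minimal connectivity sketch, assuming the azure-identity and
+azure-mgmt-compute Python SDKs; the subscription ID and resource group are
+illustrative.
+
+```python
+from azure.identity import DefaultAzureCredential
+from azure.mgmt.compute import ComputeManagementClient
+
+# Credentials are picked up from the environment (e.g. a service principal).
+credential = DefaultAzureCredential()
+compute = ComputeManagementClient(credential, 'my-subscription-id')
+
+# The RO connector would wrap calls like this one to list and create VMs,
+# much as the AWS connector wraps the AWS SDK.
+for vm in compute.virtual_machines.list(resource_group_name='osm-rg'):
+    print(vm.name, vm.location)
+```
+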
+## Demo or definition of done ##
+
+Deploy an example NSD with at least one VM and one private network in the
+Azure public cloud.
--- /dev/null
+# Osmclient migration to Python3
+
+## Proposers
+
+- Gerardo Garcia (Telefonica)
+- Alfonso Tierno (Telefonica)
+- Francisco Javier Ramon (Telefonica)
+
+## Type
+
+Feature
+
+## Target MDG/TF
+
+Devops
+
+## Description
+
+Python 2 End of Life is expected on January 1st, 2020. We need to address
+the migration to Python 3 before that date.
+
+## Demo or definition of done
+
+- A new debian package will be produced: python3-osmclient
+- The new debian package will be used by the osmclient Dockerfile in Devops stage3.
+
--- /dev/null
+# osmclient package creation and validation tool
+
+## Proposers
+
+- Felipe Vicens (ATOS)
+- Gerardo Garcia (Telefonica)
+- Francisco Javier Ramon (Telefonica)
+
+## Type
+
+Feature
+
+## Target MDG/TF
+
+Devops/osmclient
+
+## Description
+
+Packages in OSM are currently created via a package tool located in the DevOps repository.
+This feature aims to migrate that bash script to osmclient in order to have it integrated with
+the osm command-line tool. Additionally, this feature proposes syntax validation of the
+descriptors as well as generation of the .tar.gz package.
+
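+A minimal sketch of the validation and packaging steps the new commands could
+implement, using PyYAML and the standard library; the function names are
+illustrative.
+
+```python
+import os
+import tarfile
+
+import yaml
+
+
+def validate_descriptor(path):
+    """Basic syntax validation: the descriptor must be well-formed YAML.
+    Schema validation against the OSM IM would be layered on top."""
+    with open(path) as f:
+        yaml.safe_load(f)
+
+
+def build_package(package_dir):
+    """Create <package_dir>.tar.gz containing the whole package directory."""
+    name = os.path.basename(os.path.normpath(package_dir))
+    archive = name + '.tar.gz'
+    with tarfile.open(archive, 'w:gz') as tar:
+        tar.add(package_dir, arcname=name)
+    return archive
+```
+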
+## Demo or definition of done
+
+The execution of the following commands using the OSM client to create, validate or build descriptor packages:
+
+- `osm package-create`: creates an OSM package for an NSD, VNFD or NST
+- `osm package-validate`: validates the syntax of OSM descriptors
+- `osm package-build`: generates the .tar.gz file for an NSD or VNFD package
+++ /dev/null
-# Support of multiple VCA instances and a platform-independent API in N2VC
-
-## Proposers
-
-- Gerardo Garcia (Telefonica)
-- Alfonso Tierno (Telefonica)
-- Francisco Javier Ramon (Telefonica)
-- Adam Israel (Canonical)
-- José Antonio Quiles (Indra)
-
-## Type
-
-Feature
-
-## Target MDG/TF
-
-N2VC, LCM
-
-## Description
-
-Currently, OSM assumes that the management networks of all the VNF instances are remotely accessible
-by OSM from outside the datacenter, either directly or via floating IPs, so that the VCA can
-successfully drive its operations with the VNFs. While this way of working has many advantages in
-telco clouds, it might be somewhat rigid in hybrid deployments where e.g. public clouds are involved.
-In fact, this imposes some networking conditions on OSM so that the VCA can access all the
-management networks of each target VIM.
-
-In addition, the current N2VC library is platform-dependent. Most of the API calls are juju-specific
-instead of providing a higher level interface.
-
-While this shouldn't be an issue from the point of view of the actual behaviour of the OSM platform
-as a whole, it has three main drawbacks:
-
-- Architecturally speaking, it breaks the general principles of abstraction and layering, letting
-  VCA terminology and ways of thinking percolate northbound.
-- It complicates the implementation of features in LCM, since LCM needs to understand all the internals
- of each call and figure out the actual state of N2VC and VCA internals.
-- It locks the workflow in LCM, which cannot be modified without changing N2VC. As a result, almost
-  every new feature impacting N2VC (support of native charms, secure key management, support of
-  relations, NS primitives) forces a change in the LCM workflow, as current workflows are entirely
-  VCA-oriented (rather than LCM-oriented) due to the aforementioned problem in layering.
-
-This feature is intended to redefine the N2VC API to restore the layering, make it truly
-platform-independent (agnostic of VCA terminology), and make it able to support multiple VCA
-instances. It will require the implementation of a generic class in N2VC, defining those API calls
-and the expected behaviour.
-
-The main premises of the new N2VC API would be the following:
-
-- **High-level abstraction API.** The current platform-dependent objects managed by N2VC (machines,
- charms, models, applications) will be replaced by a new set of generic platform-independent objects
- (execution environment, configuration SW, namespaces).
-- Keep a decoupling among the elements to be configured, the configuration SW and the execution
- environment, so that the workflows in current VCA can be reproduced.
-- **API calls should be defined in such a way that things can be done in parallel.** For instance, it
-  is not necessary to wait for the RO to finish the instantiation before making the VCA create the
-  execution environments, or vice versa.
-- **Layering and isolation.** LCM workflow (sequence of N2VC calls) could change in the future and it
- should not impact N2VC or vice-versa.
-- **Transparency in the progress of operations and states.** N2VC should be able to directly update
-  the Mongo records associated with the operation in progress.
-
-## Demo or definition of done
-
-All DEVOPS tests using VNF/NS packages with charms should work in the same way as they are working today.
-
+++ /dev/null
-# Osmclient migration to Python3
-
-## Proposers
-
-- Gerardo Garcia (Telefonica)
-- Alfonso Tierno (Telefonica)
-- Francisco Javier Ramon (Telefonica)
-
-## Type
-
-Feature
-
-## Target MDG/TF
-
-Devops
-
-## Description
-
-Python 2 End of Life is expected on January 1st, 2020. We need to address
-the migration to Python 3 before that date.
-
-## Demo or definition of done
-
-- A new debian package will be produced: python3-osmclient
-- The new debian package will be used by the osmclient Dockerfile in Devops stage3.
-
+++ /dev/null
-# osmclient package creation and validation tool
-
-## Proposers
-
-- Felipe Vicens (ATOS)
-- Gerardo Garcia (Telefonica)
-- Francisco Javier Ramon (Telefonica)
-
-## Type
-
-Feature
-
-## Target MDG/TF
-
-Devops/osmclient
-
-## Description
-
-Packages in OSM are currently created via a package tool located in the DevOps repository.
-This feature aims to migrate that bash script to osmclient in order to have it integrated with
-the osm command-line tool. Additionally, this feature proposes syntax validation of the
-descriptors as well as generation of the .tar.gz package.
-
-## Demo or definition of done
-
-The execution of the following commands using the OSM client to create, validate or build descriptor packages:
-
-- `osm package-create`: creates an OSM package for an NSD, VNFD or NST
-- `osm package-validate`: validates the syntax of OSM descriptors
-- `osm package-build`: generates the .tar.gz file for an NSD or VNFD package