From: garciadeblas Date: Fri, 13 Nov 2020 11:57:25 +0000 (+0000) Subject: Clean-up of gerrit: move completed features to right release folder X-Git-Url: https://osm.etsi.org/gitweb/?a=commitdiff_plain;h=refs%2Fchanges%2F87%2F9987%2F1;p=osm%2FFeatures.git Clean-up of gerrit: move completed features to right release folder Change-Id: I3090df4fbfaa6df18c91b49f80aa4b7e4479fb56 Signed-off-by: garciadeblas --- diff --git a/Features/Release3/Explicit_Port_Ordering.md b/Features/Release3/Explicit_Port_Ordering.md deleted file mode 100644 index 6a61583..0000000 --- a/Features/Release3/Explicit_Port_Ordering.md +++ /dev/null @@ -1,20 +0,0 @@ -# Explicit Port Ordering Support for VDUs # - -## Proposer ## -- Rajesh Velandy (RIFT.io) - - -## Type ## -**Feature** - -## Target MDG/TF ## -SO - -## Description ## - -Implement the ability to specify ordering of interfaces (both external and internal) so that the VM -is orchestrated with those ordering preserved. - -## Demo or definition of done ## -Create a descriptor with the ordering specified for a demo VNF and observe that the VNF interfaces -are coming up in the order specified. \ No newline at end of file diff --git a/NBI_API_for_VNF_metric_collection.md b/NBI_API_for_VNF_metric_collection.md deleted file mode 100644 index dd63736..0000000 --- a/NBI_API_for_VNF_metric_collection.md +++ /dev/null @@ -1,37 +0,0 @@ -# Enhancement of NBI API to collect VNF metrics - -## Proposers -- Vijay R S (Tata Elxsi) - -## Type - -**Feature** - -## Target MDG/TF - -NBI - -## Description - -Present API implementation for NS metric collection in NBI(Feature 7270) only fetch NFVI metrics, -not the VNF metrics. - -NFVI metrics listed as, - cpu_utilization - average_memory_utilization - disk_read_ops - disk_write_ops - disk_read_bytes - disk_write_bytes - packets_dropped_ - packets_received - packets_sent - -Improvising the NBI to fetch both VNF and NFVI metrics from Prometheus as part of the same API. 
- -This can be done by making NBI fetch VNFRs collection from mongodb and get the VNF metric names -with that NBI can query Prometheus to get the VNF metrics. - -## Demo or definition of done - -Working NBI API which will fetch metrics of both NFVI and VNFs. \ No newline at end of file diff --git a/RO_migration_to_Python3.md b/RO_migration_to_Python3.md deleted file mode 100644 index f4cd032..0000000 --- a/RO_migration_to_Python3.md +++ /dev/null @@ -1,28 +0,0 @@ -# RO migration to Python3 - -## Proposers - -- Alfonso Tierno (Telefonica) -- Gerardo Garcia (Telefonica) -- Francisco Javier Ramon (Telefonica) - -## Type - -Feature - -## Target MDG/TF - -RO, Devops - -## Description - -Python 2 End of Life is expected for January 1st 2020. We need to address -the migration to Python3 before that date. - -## Demo or definition of done - -- RO will run on python3. RO client will also run on python3. -- A new debian package will be produced "python3-osm-ro", totally based on Python3, - with no dependences on Python2 or Python2 libraries. -- The new debian package will be used by the RO Dockerfile in Devops stage3. 
- diff --git a/Release2/support_IP_ADDRESS_param_VMware_conn b/Release2/support_IP_ADDRESS_param_VMware_conn deleted file mode 100644 index d9e1342..0000000 --- a/Release2/support_IP_ADDRESS_param_VMware_conn +++ /dev/null @@ -1,19 +0,0 @@ -# Add support for parameter "ip_address” in newviminstace specification # - -## Proposer ## -Vanessa Little (TDC,VMware) - -## Type ## -**Feature** - -## Target MDG/TF ## -RO - -## Description ## -Add support for parameter "ip_address” in newviminstace specification to assign -IP address to VM if explictly stated - - -## Demo or definition of done ## -Definition of Done: Create a VM with IP address specified on the interface -identified in the VNFD \ No newline at end of file diff --git a/Release3/Explicit_Port_Ordering.md b/Release3/Explicit_Port_Ordering.md new file mode 100644 index 0000000..6a61583 --- /dev/null +++ b/Release3/Explicit_Port_Ordering.md @@ -0,0 +1,20 @@ +# Explicit Port Ordering Support for VDUs # + +## Proposer ## +- Rajesh Velandy (RIFT.io) + + +## Type ## +**Feature** + +## Target MDG/TF ## +SO + +## Description ## + +Implement the ability to specify ordering of interfaces (both external and internal) so that the VM +is orchestrated with that ordering preserved. + +## Demo or definition of done ## +Create a descriptor with the ordering specified for a demo VNF and observe that the VNF interfaces +come up in the order specified.
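The intended behaviour can be sketched as follows. This is a minimal illustration only: the real descriptor syntax is defined by the OSM information model, and the `position` field used here is a hypothetical attribute for the sketch. Interfaces declared with an explicit position are sorted before the VM is requested from the VIM, so the VIM sees them in the declared order:

```python
def ordered_interfaces(vdu):
    """Return the VDU's interfaces sorted by their declared 'position'
    (hypothetical field); interfaces without one sort last, keeping
    their declaration order (sorted() is stable)."""
    ifaces = vdu["interfaces"]
    return sorted(ifaces, key=lambda i: i.get("position", len(ifaces)))

# Hypothetical VDU fragment: 'data' is declared second but positioned first.
vdu = {"interfaces": [
    {"name": "mgmt", "position": 2},
    {"name": "data", "position": 1},
]}
print([i["name"] for i in ordered_interfaces(vdu)])  # → ['data', 'mgmt']
```

The orchestrator would then create the VM ports in this sorted order, which is what "orchestrated with that ordering preserved" asks for.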
\ No newline at end of file diff --git a/Release3/RBAC_for_the_platform.md b/Release3/RBAC_for_the_platform.md deleted file mode 100644 index fd120fb..0000000 --- a/Release3/RBAC_for_the_platform.md +++ /dev/null @@ -1,51 +0,0 @@ -# RBAC for the platform # - -## Proposer ## -- Gerardo Garcia (Telefonica) -- Alfonso Tierno (Telefonica) -- Francisco Javier Ramon (Telefonica) - -## Type ## -**Feature** - -## Target MDG/TF ## -SO - -## Description ## -The NFV Orchestrator requires a significant set of capabilities and privileges -to perform all its required tasks: VNF onboarding, NS design & onboarding, NS -deployment, day-2 operation, NS shutdown, or addition of new datacenters/VIMs, -among others. However, not all of those tasks are expected to be performed by -the same user in the organization, since each of those stages may have -different implications in terms of service continuity, validation, license -consumption, access to credentials, etc. - -Thus, for real operation, the system should allow the definition of different -roles, defined by admin user, with different sets of privileges. All users -should be mapped, at least, to one of these roles. - -As a minimum, it is expected that the system should be able to enforce these -privileges: -1. Allowed to onboard a VNF -2. Allowed to onboard a NS -3. Allowed to deploy a NS -4. Allowed to operate an existing NS (call to primitives, receive monitoring -data, etc.), except NS scaling. -5. Allowed to scale a NS. -6. Allowed to terminate a NS. -7. Allowed to customize the system and configure the roles. - -By default, the admin/root role should have been assigned all the privileges -above. - -## Demo or definition of done ## -- Successful creation by an admin user of the role TECHNOLOGY with privileges -#1, #2, #3, with an user (tech) on it. -- Successful creation by an admin user of the role OPERATIONS with privileges -#3, #4, #5, #6, with an user (op) on it. 
-- Check that tech and op are allowed to run operations of the kind authorized -in their role. -- Check that tech and op are not allowed to run operations not authorized in -their role. -- Check that users with the admin role support all the types of operations -above (from #1 to #7). \ No newline at end of file diff --git a/Release3/secureKeyManagement.md b/Release3/secureKeyManagement.md deleted file mode 100644 index 819a24e..0000000 --- a/Release3/secureKeyManagement.md +++ /dev/null @@ -1,40 +0,0 @@ -# Secure Key Management - -## Proposer ## -Michael Marchetti - -## Type ## -**Feature** - -## Target MDG/TF ## -SO,RO,VCA - -## Description ## - -Please refer to this presentation: -https://docbox.etsi.org/OSG/OSM/05-CONTRIBUTIONS/2017 -//OSM(17)000023_ssh_key_management_in_osm_release_1.pptx - -This feature relates to the usage of proxy charms and allowing a proxy charm (charm container) -ssh access to a VNF virtual machine. As outlined in the presentation, the proxy charm container -needs a private key that matches the injected public key in the VNF virtual machine. - -This feature highlights some questions: -- should the private key be injected into the container or be self generated? -- how does the associated public key get injected into the VNF VM? -- all aspects of the key methodology should be transparent to the user - (user should not have to manually add keys to the system). - -It is expected that interactions between SO, RO and VCA will be necessary in order order to -orchestratrate the keys between the proxy charm and VNF. - -Also of concern is accessibility to the ssh private key. Some consideration should be made for who -has access to the private key. When a user runs "juju config 'application'" inside the VCA or via -REST, should the private keys be visible? - -## Demo or definition of done ## - -A test case involves a VNF VM with a proxy charm, utilzing the sshproxy charm. 
The proxy charm -is invoked on configuration changes and performs an ssh remote session to the VNF VM. The -private/public key between the proxy charm container and the VNF VM would have be automatically -deployed allowing this ssh remote session (without passwords) to commence. diff --git a/Release3/support_VMware_integrated_openstack.md b/Release3/support_VMware_integrated_openstack.md deleted file mode 100644 index eae4e85..0000000 --- a/Release3/support_VMware_integrated_openstack.md +++ /dev/null @@ -1,18 +0,0 @@ -# Support VMware Integrated Openstack as a VIM # - -## Proposer ## -Vanessa Little (TDC,VMware) - -## Type ## -**Feature** - -## Target MDG/TF ## -RO - -## Description ## -Regression test the current generic openstack library against VMware Integrated Openstack version 2.0 (kilo) (which is included in the current release of vCloud NFV 1.5) - -If tests are successful a separate openstack connector should not be required to connect to VMware Integrated Openstack as a VIM. However, the results of the regression tests will determine the detailed requirements and feature roadmpa for Relase 4 - -## Demo or definition of done ## -Definition of Done: Network services can be successfully deployed in VMware Integrated Openstack with the same connector, Network Service and VNFD descriptors as generic openstack builds that are based on Openstack Kilo. \ No newline at end of file diff --git a/Release4/OSM_on_K8 b/Release4/OSM_on_K8 deleted file mode 100644 index 37070b4..0000000 --- a/Release4/OSM_on_K8 +++ /dev/null @@ -1,18 +0,0 @@ -# OSM on kubernetes - -## Proposer ## -Michael Marchetti (Sandvine) - -## Type ## -**Feature** - -## Target MDG/TF ## -all MDG's - -## Description ## -The OSM installer currently asssumes it will be installed on a host (or VM). The MDG -components/services are deployed in separate lxc containers. A Release4 directive is to simplify -the install process. 
Moving to docker containers will help with this strategy as the MDG images -that are built and tested in the CI pipeline can be installed directly without having to go through -a long term install process. Subsequently, the docker OSM components now need to be orchestrated -with one another (UI<->SO<->RO, etc), and utilizing kubernetes will simplify this. \ No newline at end of file diff --git a/Release5/Adding_relations_support_for_multi_charm_VNFs.md b/Release5/Adding_relations_support_for_multi_charm_VNFs.md deleted file mode 100644 index 45632ce..0000000 --- a/Release5/Adding_relations_support_for_multi_charm_VNFs.md +++ /dev/null @@ -1,13 +0,0 @@ -# Adding relations support for multi-charm VNFs # - -## Proposer ## -Adam Israel - -## Type ## -**Feature** - -## Target MDG/TF ## -IM, LCM, N2VC - -## Description ## -To be added \ No newline at end of file diff --git a/Release5/support_IP_ADDRESS_param_VMware_conn b/Release5/support_IP_ADDRESS_param_VMware_conn new file mode 100644 index 0000000..d9e1342 --- /dev/null +++ b/Release5/support_IP_ADDRESS_param_VMware_conn @@ -0,0 +1,19 @@ +# Add support for parameter "ip_address" in newviminstace specification # + +## Proposer ## +Vanessa Little (TDC,VMware) + +## Type ## +**Feature** + +## Target MDG/TF ## +RO + +## Description ## +Add support for parameter "ip_address" in newviminstace specification to assign +an IP address to the VM if explicitly stated + + +## Demo or definition of done ## +Definition of Done: Create a VM with the IP address specified on the interface +identified in the VNFD \ No newline at end of file diff --git a/Release5/support_VMware_integrated_openstack.md b/Release5/support_VMware_integrated_openstack.md new file mode 100644 index 0000000..eae4e85 --- /dev/null +++ b/Release5/support_VMware_integrated_openstack.md @@ -0,0 +1,18 @@ +# Support VMware Integrated Openstack as a VIM # + +## Proposer ## +Vanessa Little (TDC,VMware) + +## Type ## +**Feature** + +## Target MDG/TF ## +RO + +## Description ##
+Regression test the current generic openstack library against VMware Integrated Openstack version 2.0 (kilo) (which is included in the current release of vCloud NFV 1.5) + +If tests are successful, a separate openstack connector should not be required to connect to VMware Integrated Openstack as a VIM. However, the results of the regression tests will determine the detailed requirements and feature roadmap for Release 4 + +## Demo or definition of done ## +Definition of Done: Network services can be successfully deployed in VMware Integrated Openstack with the same connector, Network Service and VNFD descriptors as generic openstack builds that are based on Openstack Kilo. \ No newline at end of file diff --git a/Release6/Full exposure of internal RO runtime data to common OSM database.md b/Release6/Full exposure of internal RO runtime data to common OSM database.md deleted file mode 100644 index 1dfc4c6..0000000 --- a/Release6/Full exposure of internal RO runtime data to common OSM database.md +++ /dev/null @@ -1,45 +0,0 @@ -# Full exposure of internal RO runtime data to common OSM database - -## Proposer - -- Gerardo Garcia (Telefonica) -- Alfonso Tierno (Telefonica) -- Francisco Javier Ramon (Telefonica) - -## Type - -**Feature** - -## Target MDG/TF - -RO, LCM - -## Description - -Currently all runtime information related to resources is keep in an internal -database in RO, which is partially replicated in the common database, involving -a translation of models and storage mechanisms. This is inefficient for several -reasons: - -- RO-related information stored in the common database is not authoritative, -but a copy of the real authoritative info (information is duplicated). -- Therefore, there is no 100% guarantee that the info in the common databases -is kept in sync with the actual authoritative database. -- There is a translation in place, to adapt the information to the different -database paradigms (relational vs.
noSQL) and map to to system-level IDs and -objects. -- Almost all changes at IM trigger a specific development to change the -relational RO database and to make the appropriate translations. This usually -requieres extensions in RO´s northbound interface capabilities too. -- Maintaining this duplicate mechanism makes RO development and evolution -artificially complex with no obvious advantages. - -Hence this feature requests the direct use of common OSM services by RO -(particularly, the common database) so that information is always up to date -and RO's maintenance is largely simplified, being current legacy RO's NBI and -translation mechanisms no longer needed. - -## Demo or definition of done - -Check that all information of state in RO is always available in the common -NoSQL databes, and that internal RO's MySQL database can be safely removed. diff --git a/Release6/Instantiation in VIMs with more than one physnet.md b/Release6/Instantiation in VIMs with more than one physnet.md deleted file mode 100644 index 7cf8264..0000000 --- a/Release6/Instantiation in VIMs with more than one physnet.md +++ /dev/null @@ -1,45 +0,0 @@ -# Instantiation in VIMs with more than one physnet - -## Proposer - -- Gerardo Garcia (Telefonica) -- Alfonso Tierno (Telefonica) -- Francisco Javier Ramon (Telefonica) - -## Type - -**Feature** - -## Target MDG/TF - -RO, CLI (optional) - -## Description - -In environments where some physical redundancy is required in terms of -networking, it is common the use of schemas with more than one switch upstream, -dividing the physical medium in groups of physical interfaces depending of the -upstream switch they are attached. In order to facilitate a sensible management -of these physical interfaces belonging to different "redundancy groups" by the -VIM, there is the possibility to classify them into the so-called 'physnets', -so that the VIM can leverage on that physical redundacy if needed. 
- -While this feature should not create a fundamental change in OSM operation or -the way its modelling works, and it is a fact that OSM can work with these -environments, it is also true that it works today with some limitations when -SDN Assist is in place. Thus, in a VIM with multiple physnets, only one can be -registered today when configuring a VIM target in OSM, leading to potential -underuse of resources (usually, by one half). Furthermore, when those physnets -obbey to some kind of physical active-active scheme, OSM cannot leverage on -this information to make NS/NSI deployments more reliable. - -This feature intends to solve the limitations described above. - -## Demo or definition of done - -With a VIM with multiple physnets, check that it is possible: - -- Register the VIM with all its physnets when defining a VIM target in OSM with -SDN Assist. -- Deploy a large NS requiring SDN Assist in such a VIM so that no interfaces -are excluded because of the physnet they belong. diff --git a/Release6/RBAC_for_the_platform.md b/Release6/RBAC_for_the_platform.md new file mode 100644 index 0000000..fd120fb --- /dev/null +++ b/Release6/RBAC_for_the_platform.md @@ -0,0 +1,51 @@ +# RBAC for the platform # + +## Proposer ## +- Gerardo Garcia (Telefonica) +- Alfonso Tierno (Telefonica) +- Francisco Javier Ramon (Telefonica) + +## Type ## +**Feature** + +## Target MDG/TF ## +SO + +## Description ## +The NFV Orchestrator requires a significant set of capabilities and privileges +to perform all its required tasks: VNF onboarding, NS design & onboarding, NS +deployment, day-2 operation, NS shutdown, or addition of new datacenters/VIMs, +among others. However, not all of those tasks are expected to be performed by +the same user in the organization, since each of those stages may have +different implications in terms of service continuity, validation, license +consumption, access to credentials, etc. 
+ +Thus, for real operation, the system should allow the definition of different +roles, defined by an admin user, with different sets of privileges. All users +should be mapped, at least, to one of these roles. + +As a minimum, it is expected that the system should be able to enforce these +privileges: +1. Allowed to onboard a VNF +2. Allowed to onboard a NS +3. Allowed to deploy a NS +4. Allowed to operate an existing NS (call to primitives, receive monitoring +data, etc.), except NS scaling. +5. Allowed to scale a NS. +6. Allowed to terminate a NS. +7. Allowed to customize the system and configure the roles. + +By default, the admin/root role should have been assigned all the privileges +above. + +## Demo or definition of done ## +- Successful creation by an admin user of the role TECHNOLOGY with privileges +#1, #2, #3, with a user (tech) on it. +- Successful creation by an admin user of the role OPERATIONS with privileges +#3, #4, #5, #6, with a user (op) on it. +- Check that tech and op are allowed to run operations of the kind authorized +in their role. +- Check that tech and op are not allowed to run operations not authorized in +their role. +- Check that users with the admin role support all the types of operations +above (from #1 to #7). \ No newline at end of file diff --git a/Release6/secureKeyManagement.md b/Release6/secureKeyManagement.md new file mode 100644 index 0000000..819a24e --- /dev/null +++ b/Release6/secureKeyManagement.md @@ -0,0 +1,40 @@ +# Secure Key Management + +## Proposer ## +Michael Marchetti + +## Type ## +**Feature** + +## Target MDG/TF ## +SO,RO,VCA + +## Description ## + +Please refer to this presentation: +https://docbox.etsi.org/OSG/OSM/05-CONTRIBUTIONS/2017 +//OSM(17)000023_ssh_key_management_in_osm_release_1.pptx + +This feature relates to the usage of proxy charms and allowing a proxy charm (charm container) +ssh access to a VNF virtual machine.
As outlined in the presentation, the proxy charm container +needs a private key that matches the injected public key in the VNF virtual machine. + +This feature highlights some questions: +- should the private key be injected into the container or be self generated? +- how does the associated public key get injected into the VNF VM? +- all aspects of the key methodology should be transparent to the user + (user should not have to manually add keys to the system). + +It is expected that interactions between SO, RO and VCA will be necessary in order to +orchestrate the keys between the proxy charm and VNF. + +Also of concern is accessibility to the ssh private key. Some consideration should be made for who +has access to the private key. When a user runs "juju config 'application'" inside the VCA or via +REST, should the private keys be visible? + +## Demo or definition of done ## + +A test case involves a VNF VM with a proxy charm, utilizing the sshproxy charm. The proxy charm +is invoked on configuration changes and performs an ssh remote session to the VNF VM. The +private/public key between the proxy charm container and the VNF VM would have to be automatically +deployed allowing this ssh remote session (without passwords) to commence. diff --git a/Release7/Ansible_OSM_installer.md b/Release7/Ansible_OSM_installer.md deleted file mode 100644 index 20de63e..0000000 --- a/Release7/Ansible_OSM_installer.md +++ /dev/null @@ -1,46 +0,0 @@ -# Install OSM to OpenStack using Ansible # - -## Proposer(s) ## - -Antonio Marsico (BT) - -## Type ## - -Feature - -## Target MDG/TF ## - -Devops, Other - -## Description ## - -Installing OSM on top of an OpenStack infrastructure may be cumbersome. Networks, Security groups, Flavors and Images are a few example of what you need to setup before being ready to install OSM on OpenStack. - -This feature proposes the automatic installation of OSM on top of an OpenStack infrastructure.
It is based on Ansible, a well-known automation tool. The Ansible playbook performs the following operations: - -* Create an external network and its subnet -* Download the Ubuntu 18.04 image and upload it -* Create a security group to allow SSH and HTTP access to an instance -* Create a VM flavour compatible to OSM -* Deploy an Ubuntu 18.04 instance -* Install OSM when the instance becomes available - -All these tasks are performed only if required. This is a feature of Ansible. For instance, if the Ubuntu 18.04 image is already present, the task is skipped. - -## Demo or definition of done ## - -It is an open question to the community if it worth adding this as an option to the `install_osm.sh` installer or leave it as a standalone feature. - -If the standard invocation of Ansible is used, the following procedure is required. - -### Playbook execution - -In order to execute the playbook, it is required an OpenStack openrc file. It can be downloaded from the OpenStack web interface Horizon. - -After that, it can be loaded with the following command: - -`$ . openrc` - -Then, all the credentials are loaded in the bash environment. Now it is possible to execute the playbook to configure OpenStack and install OSM: - -`$ ansible-playbook site.yml` \ No newline at end of file diff --git a/Release7/Juniper_Contrail_SDN_Plugin.md b/Release7/Juniper_Contrail_SDN_Plugin.md deleted file mode 100644 index 08678df..0000000 --- a/Release7/Juniper_Contrail_SDN_Plugin.md +++ /dev/null @@ -1,27 +0,0 @@ -# Juniper Contrail SDN Controller support # - -## Proposer ## -Adam Israel (Canonical) -Arno van Huyssteen (Canonical) -David Garcia (Canonical) -Eduardo Sousa (Canonical) - -## Type ## -**Feature** - -## Target MDG/TF ## -RO - -## Description ## -OSM currently supports FloodLight, ONOS and ODL; these are the most used in -the open source world. 
While there is a benefit in having this compatibility -in OSM, there is a recurring need to support commercial SDN controllers, since -those will be by far more prevalent in real carrier deployments. - -The objective of this proposal is to bring to OSM one of the more widely used -SDN Controllers in the market. While Juniper Contrail provides a wide array of -features, the scope of this proposal will be to cover SDN Assist use cases. - -## Demo or definition of done ## -* Being able to add Juniper Contrail as a SDN Controller. -* Being able to use Juniper Contrail for SDN Assist use cases. diff --git a/Release7/NBI_API_for_VNF_metric_collection.md b/Release7/NBI_API_for_VNF_metric_collection.md new file mode 100644 index 0000000..dd63736 --- /dev/null +++ b/Release7/NBI_API_for_VNF_metric_collection.md @@ -0,0 +1,37 @@ +# Enhancement of NBI API to collect VNF metrics + +## Proposers +- Vijay R S (Tata Elxsi) + +## Type + +**Feature** + +## Target MDG/TF + +NBI + +## Description + +The present API implementation for NS metric collection in NBI (Feature 7270) only fetches NFVI metrics, +not the VNF metrics. + +The NFVI metrics are: + cpu_utilization + average_memory_utilization + disk_read_ops + disk_write_ops + disk_read_bytes + disk_write_bytes + packets_dropped_ + packets_received + packets_sent + +This feature improves the NBI to fetch both VNF and NFVI metrics from Prometheus as part of the same API. + +This can be done by making NBI fetch the VNFR collection from MongoDB and get the VNF metric names; +with those, NBI can query Prometheus to get the VNF metrics. + +## Demo or definition of done + +A working NBI API which fetches metrics of both NFVI and VNFs.
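The VNFR-to-Prometheus flow described above can be sketched as follows. This is a minimal illustration rather than the actual NBI code: the VNFR document shape, the `monitoring-param` field names and the `vnf_member_index` label are assumptions made for the sketch; only the Prometheus HTTP API path (`/api/v1/query`) is standard.

```python
from urllib.parse import urlencode

def vnf_metric_queries(vnfr, prom_base="http://prometheus:9090"):
    """Build Prometheus HTTP API query URLs for every VNF metric
    declared in a VNFR document (hypothetical document shape)."""
    urls = []
    for mp in vnfr.get("monitoring-param", []):
        metric = mp["metric-name"]
        # Filter by the VNF member index so only this VNF's series match
        promql = '%s{vnf_member_index="%s"}' % (metric, vnfr["member-vnf-index-ref"])
        urls.append(prom_base + "/api/v1/query?" + urlencode({"query": promql}))
    return urls

# Hypothetical VNFR fragment as it might be read from the MongoDB collection
vnfr = {
    "member-vnf-index-ref": "1",
    "monitoring-param": [{"metric-name": "users_connected"}],
}
print(vnf_metric_queries(vnfr))
```

In the real implementation the VNFR documents would come from the common MongoDB database and the returned URLs would be fetched over HTTP, with the results merged into the same NBI response that already carries the NFVI metrics.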
\ No newline at end of file diff --git a/Release7/OSM_on_K8 b/Release7/OSM_on_K8 new file mode 100644 index 0000000..37070b4 --- /dev/null +++ b/Release7/OSM_on_K8 @@ -0,0 +1,18 @@ +# OSM on kubernetes + +## Proposer ## +Michael Marchetti (Sandvine) + +## Type ## +**Feature** + +## Target MDG/TF ## +all MDGs + +## Description ## +The OSM installer currently assumes it will be installed on a host (or VM). The MDG +components/services are deployed in separate lxc containers. A Release4 directive is to simplify +the install process. Moving to docker containers will help with this strategy as the MDG images +that are built and tested in the CI pipeline can be installed directly without having to go through +a lengthy install process. Subsequently, the docker OSM components now need to be orchestrated +with one another (UI<->SO<->RO, etc), and utilizing kubernetes will simplify this. \ No newline at end of file diff --git a/Release7/RO_migration_to_Python3.md b/Release7/RO_migration_to_Python3.md new file mode 100644 index 0000000..f4cd032 --- /dev/null +++ b/Release7/RO_migration_to_Python3.md @@ -0,0 +1,28 @@ +# RO migration to Python3 + +## Proposers + +- Alfonso Tierno (Telefonica) +- Gerardo Garcia (Telefonica) +- Francisco Javier Ramon (Telefonica) + +## Type + +Feature + +## Target MDG/TF + +RO, Devops + +## Description + +Python 2 End of Life is expected for January 1st 2020. We need to address +the migration to Python3 before that date. + +## Demo or definition of done + +- RO will run on python3. RO client will also run on python3. +- A new debian package will be produced "python3-osm-ro", totally based on Python3, + with no dependencies on Python2 or Python2 libraries. +- The new debian package will be used by the RO Dockerfile in Devops stage3.
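One concrete class of changes such a migration involves (a generic Python 3 illustration, not taken from the RO code base) is the strict separation of `bytes` and `str`: any code that reads sockets, files opened in binary mode, or subprocess output must now decode explicitly at the I/O boundary.

```python
# Python 2 allowed implicit mixing of byte strings and text; Python 3 does not.
# Ported code must decode explicitly where raw bytes enter the program.

def parse_vim_response(raw: bytes) -> dict:
    """Decode a raw byte payload and split 'key=value' pairs.
    Purely illustrative of a py2->py3 porting concern; the payload
    format here is invented for the example."""
    text = raw.decode("utf-8")  # explicit decode, mandatory in Python 3
    pairs = (item.split("=", 1) for item in text.split(";") if "=" in item)
    return {k.strip(): v.strip() for k, v in pairs}

print(parse_vim_response(b"status=ACTIVE; ip=10.0.0.4"))
```

Under Python 2 the same code would silently accept a `str`/`bytes` mix; under Python 3 a missing `decode()` raises a `TypeError` or `AttributeError` at runtime, which is why the package must be validated end to end on python3 as the definition of done requires.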
+ diff --git a/Release7/ha_proxy_charms.md b/Release7/ha_proxy_charms.md deleted file mode 100644 index 053c116..0000000 --- a/Release7/ha_proxy_charms.md +++ /dev/null @@ -1,38 +0,0 @@ -# HA Proxy Charms # - -## Proposer ## - -- Tytus Kurek (Canonical) -- David Garcia (Canonical) -- Dominik Fleischmann (Canonical) - -## Type ## - -**Feature** - -## Target MDG/TF ## - -IM, N2VC, LCM - -## Description ## - -Some features (8681 and 7657) are assuring that an HA VCA will be available -to OSM. Nontheless to achieve full High Availability in Charm Workloads the Charms -must also be designed to achieve that. - -For this, an additional value will have to be added to the descriptor to state the -number of units of each Charm that should be created. This will require modifications -in IM and LCM. - -Furthermore, N2VC will have to be modified to recognize the leader unit of each charm -when executing actions. - -Finally to assure that future charms can be written with high availability in mind an -example proxy charm will be provided. - -## Demo or definition of done ## - -Once this feature is done it will be possible to deploy several units of proxy charms in -a LXD Cluster to assure redundancy and high availability between them. This way if one of -the cluster node fails it won't affect the operations of the workloads. The charms will -have to be written with high availability support. 
diff --git a/Release8/Adding_relations_support_for_multi_charm_VNFs.md b/Release8/Adding_relations_support_for_multi_charm_VNFs.md new file mode 100644 index 0000000..45632ce --- /dev/null +++ b/Release8/Adding_relations_support_for_multi_charm_VNFs.md @@ -0,0 +1,13 @@ +# Adding relations support for multi-charm VNFs # + +## Proposer ## +Adam Israel + +## Type ## +**Feature** + +## Target MDG/TF ## +IM, LCM, N2VC + +## Description ## +To be added \ No newline at end of file diff --git a/Release8/Ansible_OSM_installer.md b/Release8/Ansible_OSM_installer.md new file mode 100644 index 0000000..20de63e --- /dev/null +++ b/Release8/Ansible_OSM_installer.md @@ -0,0 +1,46 @@ +# Install OSM to OpenStack using Ansible # + +## Proposer(s) ## + +Antonio Marsico (BT) + +## Type ## + +Feature + +## Target MDG/TF ## + +Devops, Other + +## Description ## + +Installing OSM on top of an OpenStack infrastructure may be cumbersome. Networks, Security groups, Flavors and Images are a few examples of what you need to set up before being ready to install OSM on OpenStack. + +This feature proposes the automatic installation of OSM on top of an OpenStack infrastructure. It is based on Ansible, a well-known automation tool. The Ansible playbook performs the following operations: + +* Create an external network and its subnet +* Download the Ubuntu 18.04 image and upload it +* Create a security group to allow SSH and HTTP access to an instance +* Create a VM flavour compatible with OSM +* Deploy an Ubuntu 18.04 instance +* Install OSM when the instance becomes available + +All these tasks are performed only if required. This is a feature of Ansible. For instance, if the Ubuntu 18.04 image is already present, the task is skipped. + +## Demo or definition of done ## + +It is an open question to the community whether it is worth adding this as an option to the `install_osm.sh` installer or leaving it as a standalone feature.
+ +If the standard invocation of Ansible is used, the following procedure is required. + +### Playbook execution + +In order to execute the playbook, an OpenStack openrc file is required. It can be downloaded from the OpenStack web interface Horizon. + +After that, it can be loaded with the following command: + +`$ . openrc` + +Then, all the credentials are loaded in the bash environment. Now it is possible to execute the playbook to configure OpenStack and install OSM: + +`$ ansible-playbook site.yml` \ No newline at end of file diff --git a/Release8/Full exposure of internal RO runtime data to common OSM database.md b/Release8/Full exposure of internal RO runtime data to common OSM database.md new file mode 100644 index 0000000..1dfc4c6 --- /dev/null +++ b/Release8/Full exposure of internal RO runtime data to common OSM database.md @@ -0,0 +1,45 @@ +# Full exposure of internal RO runtime data to common OSM database + +## Proposer + +- Gerardo Garcia (Telefonica) +- Alfonso Tierno (Telefonica) +- Francisco Javier Ramon (Telefonica) + +## Type + +**Feature** + +## Target MDG/TF + +RO, LCM + +## Description + +Currently all runtime information related to resources is kept in an internal +database in RO, which is partially replicated in the common database, involving +a translation of models and storage mechanisms. This is inefficient for several +reasons: + +- RO-related information stored in the common database is not authoritative, +but a copy of the real authoritative info (information is duplicated). +- Therefore, there is no 100% guarantee that the info in the common database +is kept in sync with the actual authoritative database. +- There is a translation in place, to adapt the information to the different +database paradigms (relational vs. noSQL) and map to system-level IDs and +objects. +- Almost all changes at IM trigger a specific development to change the +relational RO database and to make the appropriate translations.
This usually +requires extensions in RO's northbound interface capabilities too. +- Maintaining this duplicate mechanism makes RO development and evolution +artificially complex with no obvious advantages. + +Hence this feature requests the direct use of common OSM services by RO +(particularly, the common database) so that information is always up to date +and RO's maintenance is largely simplified, making RO's current legacy NBI and +translation mechanisms no longer necessary. + +## Demo or definition of done + +Check that all RO state information is always available in the common +NoSQL database, and that RO's internal MySQL database can be safely removed. diff --git a/Release8/Instantiation in VIMs with more than one physnet.md b/Release8/Instantiation in VIMs with more than one physnet.md new file mode 100644 index 0000000..7cf8264 --- /dev/null +++ b/Release8/Instantiation in VIMs with more than one physnet.md @@ -0,0 +1,45 @@ +# Instantiation in VIMs with more than one physnet + +## Proposer + +- Gerardo Garcia (Telefonica) +- Alfonso Tierno (Telefonica) +- Francisco Javier Ramon (Telefonica) + +## Type + +**Feature** + +## Target MDG/TF + +RO, CLI (optional) + +## Description + +In environments where some physical redundancy is required in terms of +networking, it is common to use schemes with more than one upstream switch, +dividing the physical medium into groups of physical interfaces depending on +the upstream switch they are attached to. In order to facilitate a sensible +management of these physical interfaces belonging to different "redundancy +groups" by the VIM, they can be classified into the so-called 'physnets', +so that the VIM can leverage that physical redundancy if needed.
+ +While this feature should not create a fundamental change in OSM operation or +the way its modelling works, and it is a fact that OSM can work with these +environments, it is also true that it works today with some limitations when +SDN Assist is in place. Thus, in a VIM with multiple physnets, only one can be +registered today when configuring a VIM target in OSM, leading to potential +underuse of resources (usually, by one half). Furthermore, when those physnets +obey some kind of physical active-active scheme, OSM cannot leverage +this information to make NS/NSI deployments more reliable. + +This feature intends to solve the limitations described above. + +## Demo or definition of done + +With a VIM with multiple physnets, check that it is possible to: + +- Register the VIM with all its physnets when defining a VIM target in OSM with +SDN Assist. +- Deploy a large NS requiring SDN Assist in such a VIM so that no interfaces +are excluded because of the physnet they belong to. diff --git a/Release8/Juniper_Contrail_SDN_Plugin.md b/Release8/Juniper_Contrail_SDN_Plugin.md new file mode 100644 index 0000000..08678df --- /dev/null +++ b/Release8/Juniper_Contrail_SDN_Plugin.md @@ -0,0 +1,27 @@ +# Juniper Contrail SDN Controller support # + +## Proposer ## +Adam Israel (Canonical) +Arno van Huyssteen (Canonical) +David Garcia (Canonical) +Eduardo Sousa (Canonical) + +## Type ## +**Feature** + +## Target MDG/TF ## +RO + +## Description ## +OSM currently supports FloodLight, ONOS and ODL; these are the most used in +the open source world. While there is a benefit in having this compatibility +in OSM, there is a recurring need to support commercial SDN controllers, since +those will be by far more prevalent in real carrier deployments. + +The objective of this proposal is to bring to OSM one of the most widely used +SDN controllers in the market. While Juniper Contrail provides a wide array of +features, the scope of this proposal will be to cover SDN Assist use cases.
+ +## Demo or definition of done ## +* Being able to add Juniper Contrail as an SDN Controller. +* Being able to use Juniper Contrail for SDN Assist use cases. diff --git a/Release8/charm_based_osm_installation.md b/Release8/charm_based_osm_installation.md new file mode 100644 index 0000000..8abd264 --- /dev/null +++ b/Release8/charm_based_osm_installation.md @@ -0,0 +1,50 @@ +# Installing OSM to Kubernetes using Charms # + +## Proposer(s) ## + +Adam Israel (Canonical) +David Garcia (Canonical) +Dominik Fleischmann (Canonical) + +## Type ## + +Feature + +## Target MDG/TF ## + +Devops, Other + +## Description ## + +We propose to extend the current installer to support installing OSM on top of Kubernetes using Juju and charms. + +This would be implemented as a new, optional switch to the installer. The user would be able to use the current '-c' switch to specify that they want the charmed version of OSM to be installed, rather than the default Docker Swarm. + +OSM charms are an open source collection of charms, under the Apache 2.0 license, that deploy OSM on top of Juju and Kubernetes. + +Each charm uses the Docker image produced by the community, so the user is getting the same OSM code, delivered via an alternative method that is designed to be modeled and operated at scale. + +Additionally, we'd like to move these charms under OSM governance, into a new git repository called osm-charms. This repository will contain the charms, interfaces, and assorted scripts related to building, testing, and publishing said charms. + +This is similar to how charms live upstream in OpenStack: +https://opendev.org/openstack?q=charm&tab=&sort=recentupdate + +Lastly, we'll work with the devops community to integrate charms into Jenkins, allowing it to run through stages 1-4 the same as the Docker Swarm and Kubernetes installer options.
+ +This will allow the charmed installation of OSM to undergo the same rigorous testing as other installer methods do in order to ensure the same quality. + +We will then work with the upstream community to improve and extend the current testing, e.g., adding Robot tests and exercising use cases of interest to commercial users of OSM. + + +## Demo or definition of done ## + +As part of the installation of OSM, the user can pass the -c switch to `install_osm.sh` to override the default behavior. + +A user running the command 'install_osm.sh -c charmed' would install a single-instance version of OSM on top of microk8s via charms. + +Additional optional switches would be supported, including: + +--ha: install in High Availability mode +--k8s_endpoint: Specify the endpoint of an existing Kubernetes to use instead of microk8s +--bundle: Specify a custom bundle.yaml to use to deploy OSM via charms +--microstack: Install and configure microstack alongside OSM diff --git a/Release8/ha_proxy_charms.md b/Release8/ha_proxy_charms.md new file mode 100644 index 0000000..053c116 --- /dev/null +++ b/Release8/ha_proxy_charms.md @@ -0,0 +1,38 @@ +# HA Proxy Charms # + +## Proposer ## + +- Tytus Kurek (Canonical) +- David Garcia (Canonical) +- Dominik Fleischmann (Canonical) + +## Type ## + +**Feature** + +## Target MDG/TF ## + +IM, N2VC, LCM + +## Description ## + +Some features (8681 and 7657) ensure that an HA VCA will be available +to OSM. Nonetheless, to achieve full high availability in charm workloads, the charms +themselves must also be designed for it. + +For this, an additional value will have to be added to the descriptor to state the +number of units of each charm that should be created. This will require modifications +in IM and LCM. + +Furthermore, N2VC will have to be modified to recognize the leader unit of each charm +when executing actions.
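The descriptor change described above could look roughly as follows (the `num-units` field name and the charm name are hypothetical; the actual schema would be defined by the IM modification):

```yaml
# Hypothetical VNFD fragment. 'num-units' is an illustrative name for
# the new value stating how many charm units to create; the actual
# field name is not fixed by this proposal.
vnf-configuration:
  juju:
    charm: my-ha-proxy-charm   # hypothetical charm name
    num-units: 3               # deploy three units for high availability
```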
+ +Finally, to ensure that future charms can be written with high availability in mind, an +example proxy charm will be provided. + +## Demo or definition of done ## + +Once this feature is done, it will be possible to deploy several units of proxy charms in +an LXD cluster to ensure redundancy and high availability between them. This way, if one of +the cluster nodes fails, it won't affect the operation of the workloads. The charms will +have to be written with high availability support. diff --git a/charm_based_osm_installation.md b/charm_based_osm_installation.md deleted file mode 100644 index 8abd264..0000000 --- a/charm_based_osm_installation.md +++ /dev/null @@ -1,50 +0,0 @@ -# Installing OSM to Kubernetes using Charms # - -## Proposer(s) ## - -Adam Israel (Canonical) -David Garcia (Canonical) -Dominik Fleischmann (Canonical) - -## Type ## - -Feature - -## Target MDG/TF ## - -Devops, Other - -## Description ## - -We propose to extend the current installer to support installing OSM on top of Kubernetes using Juju and charms. - -This would be implemented as a new, optional switch to the installer. The user would be able to use the current '-c' switch to specify that they want the charmed version of OSM to be installed, rather than the default Docker Swarm. - -OSM charms are an open source collection of charms, under the Apache 2.0 license, that deploy OSM on top of Juju and Kubernetes. - -Each charm uses the Docker image produced by the community, so the user is getting the same OSM code, delivered via an alternative method that is designed to be modeled and operated at scale. - -Additionally, we'd like to move these charms under OSM governance, into a new git repository called osm-charms. This repository will contain the charms, interfaces, and assorted scripts related to building, testing, and publishing said charms.
- -This is similar to how charms live upstream in OpenStack: -https://opendev.org/openstack?q=charm&tab=&sort=recentupdate - -Lastly, we'll work with the devops community to integrate charms into Jenkins, allowing it to run through stages 1-4 the same as the Docker Swarm and Kubernetes installer options. - -This will allow the charmed installation of OSM to undergo the same rigorous testing as other installer methods do in order to ensure the same quality. - -We will then work with the upstream community to improve and extend the current testing, i.e., adding Robot tests and exercising use-cases of interest to commercial users of OSM. - - -## Demo or definition of done ## - -As part of the installation of OSM, the user can pass the -c switch to `install_osm.sh` to override the default behavior. - -A user running the command 'install_osm.sh -c charmed' would install a single-instance version of OSM on top of microk8s via charms. - -Additional optional switches would be supported, including: - ---ha: install in High Availability mode ---k8s_endpoint: Specify the endpoint of an existing Kubernetes to use instead of microk8s ---bundle: Specify a custom bundle.yaml to use to deploy OSM via charms ---microstack: Install and configure microstack alongside OSM