Quickstarts¶
This section uses some of the OSM community examples to provide quick walkthroughs of onboarding and instantiation.
Single VDU Linux machine with simple action through Proxy Charm¶
This example implements an Ubuntu virtual machine, packaged with a script that applies a sample “touch” action through SSH automatically after instantiation (Day-1 operation), using a Juju-based Proxy Charm execution environment, documented here.
Onboarding¶
Onboarding requirements¶
OSM client installed in Linux
Internet access to clone packages
Step 1: Clone the OSM packages to your local machine¶
If you don’t have the OSM Packages folder cloned yet, proceed with cloning it:
git clone --recursive https://osm.etsi.org/gitlab/vnf-onboarding/osm-packages
Step 2: Explore the packages¶
First, explore the folder charm-packages/ha_proxy_charm_vnf, in particular the file ha_proxy_charm_vnfd.yaml. This is the SOL006-compliant VNF descriptor, which models a single VDU with the specified image (named ubuntu18.04 here), its connection points (mgmtVM-eth0-int and dataVM-xe0-int), certain computing resources (which you can adapt), a day-1 primitive named “touch”, specified under the initial-config-primitive section, which runs a “touch” command to create a file in the VDU, and finally a day-2 primitive with the same name, which enables the same action on demand. In more detail:
- “config” primitive: defines the credentials that the Ubuntu VM already has (predefined with cloud-init in the cloud_init/cloud-config.txt file), so it can be configured.
- “touch” initial-config (day-1) primitive: runs the “touch” command automatically, creating the file /home/ubuntu/first-touch in the VDU machine.
- “touch” config (day-2) primitive: runs the “touch” command on demand, creating the file /home/ubuntu/touched in the VDU machine.
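For reference, the day-1 entry follows the same general shape as the descriptor snippets shown later on this page (a simplified sketch with illustrative parameter names and nesting; check ha_proxy_charm_vnfd.yaml for the exact structure, including the companion “config” credentials primitive):

initial-config-primitive:
- seq: 1
  name: touch
  parameter:
  - name: filename
    data-type: STRING
    value: /home/ubuntu/first-touch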
Then, explore the folder charm-packages/ha_proxy_charm_ns, in particular the file ha_proxy_charm_nsd.yaml. This is the SOL006-compliant NS descriptor, which models a Network Service containing the VNF specified above. It basically maps the VNF connection points to VIM-level networks.
Step 3: Upload the packages to the catalogue¶
Using the folders above, you can directly validate, compress and upload the contents to the OSM catalogues as packages, in this way:
# Upload the VNF package first
osm nfpkg-create ha_proxy_charm_vnf
# Then, upload the NS package that refers to the VNF package
osm nspkg-create ha_proxy_charm_ns
With this last step, the onboarding process has finished.
Instantiation¶
Instantiation requirements¶
Full OSM installation (Release 9+)
Access to a VIM with an Ubuntu image available under the specified name (ubuntu18.04).
Step 1: Ensure your infrastructure is ready¶
Ensure you have a VIM created with a default management network, for example, for OpenStack we would use the following command:
osm vim-create --name MY_VIM --tenant MY_VIM_TENANT --user MY_TENANT_USER --password MY_TENANT_PASSWORD --auth_url 'http://MY_KEYSTONE_URL' --account_type openstack --config '{management_network_name: MY_MANAGEMENT_NETWORK}'
Your management network should exist already in your VIM and should be reachable from the OSM machine.
Step 2: Instantiate the Network Service¶
Launch the Network Service with the following command:
osm ns-create --ns_name NS_NAME --nsd_name ha_proxy_charm-ns --vim_account MY_VIM
Note that the management network, defined in the NSD as “mgmtnet” and set with “mgmt-network: true”, is replaced at instantiation time by the management network you defined when creating your VIM.
Step 3: Visualize the results¶
Once instantiated, you can see the NS status with the osm ns-list command or by visiting the GUI.
Furthermore, you can check:
- The Day-1 primitive creation & execution result, which for Juju-based execution environments can be seen by running the juju status --model NS_ID command.
- The Day-1 primitive results in the destination machine: the created files will show up in the /home/ubuntu folder.
- The Day-2 primitive execution results, on demand, after running it with:

osm ns-action NS_NAME --vnf_name 1 --action_name touch
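To double-check the day-1 and day-2 results on the VDU itself, you can log in and list the created files (a sketch; VDU_IP stands for the management IP reported by osm vnf-list):

ssh ubuntu@VDU_IP ls -l /home/ubuntu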
Possible quick customizations¶
Some common customizations that make this package easily reusable are:
- Modify the VDU’s image, computing resources and/or interfaces at the VNFD.
- Add more VDUs, with different characteristics, at the VNFD.
- Change the command run by the “touch” primitive, by modifying the contents of charms/simple/src, specifically the on_touch_action function.
Single VDU router with SNMP Metrics and Ansible Playbook¶
This example implements a VyOS-based virtual machine with routing/firewall functions, packaged with an Ansible Playbook that is applied automatically after instantiation (Day-1 operation), using Helm-based execution environments, which are documented here.
Onboarding¶
Onboarding requirements¶
OSM client installed in Linux
Internet access to clone packages
Step 1: Clone the OSM packages to your local machine¶
If you don’t have the OSM Packages folder cloned yet, proceed with cloning it:
git clone --recursive https://osm.etsi.org/gitlab/vnf-onboarding/osm-packages
Step 2: Explore the packages¶
First, explore the folder snmp_setcomm_ee_vnf, in particular the file snmp_setcomm_ee_vnfd.yaml. This is the SOL006-compliant VNF descriptor, which models a single VDU with the specified image (named vyos-1.1.7 here), three connection points (internal, external, management), certain computing resources (which you can adapt), and the day-1 configuration, specified under the initial-config-primitive section, which runs the following primitives:
- “config” primitive: defines the credentials that the VyOS VM already has (predefined with cloud-init in the cloud_init/vyos-userdata file), so it can be configured.
- “generate_snmp” primitive: activates the SNMP monitoring, as explained here.
- “ansible_playbook” primitive: runs a playbook, in this case named community.yaml, which is included in the helm-charts/eechart/source folder of the package.
Then, explore the folder snmp_setcomm_ee_nsd, in particular the file snmp_setcomm_ee_nsd.yaml. This is the SOL006-compliant NS descriptor, which models a Network Service containing the VNF specified above. It basically maps the VNF connection points to VIM-level networks.
Step 3: Upload the packages to the catalogue¶
Using the folders above, you can directly validate, compress and upload the contents to the OSM catalogues as packages, in this way:
# Upload the VNF package first
osm nfpkg-create snmp_setcomm_ee_vnf
# Then, upload the NS package that refers to the VNF package
osm nspkg-create snmp_setcomm_ee_ns
With this last step, the onboarding process has finished.
Instantiation¶
Instantiation requirements¶
Full OSM installation (Release 9+)
Access to a VIM with this VyOS image (needs to be uncompressed first), named vyos-1.1.7 in your images catalogue.
Step 1: Ensure your infrastructure is ready¶
Ensure you have a VIM created with a default management network, for example, for OpenStack we would use the following command:
osm vim-create --name MY_VIM --tenant MY_VIM_TENANT --user MY_TENANT_USER --password MY_TENANT_PASSWORD --auth_url 'http://MY_KEYSTONE_URL' --account_type openstack --config '{management_network_name: MY_MANAGEMENT_NETWORK}'
Your management network should exist already in your VIM and should be reachable from the OSM machine.
Step 2: Instantiate the Network Service¶
Launch the Network Service with the following command (in this example we are using “osm-ext” as the management network name):
osm ns-create --ns_name NS_NAME --nsd_name snmp_setcomm_ee-ns --vim_account MY_VIM
Note that the management network, defined in the NSD as “mgmtnet”, is replaced at instantiation time with “osm-ext”, our real management network name.
Step 3: Visualize the results¶
Once instantiated, you can see the NS status with the osm ns-list command or by visiting the GUI.
Furthermore, you can check:
- The SNMP metrics at the Prometheus dashboard (which usually runs on port 9091), which can then be integrated into the Grafana dashboard.
- The primitive execution results, which for Helm-based execution environments can be seen by exploring the pod’s logs. For example:

kubectl logs -n osm eechart-0016281061-0 -f
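You can also query the Prometheus HTTP API directly instead of using the dashboard (a sketch; replace OSM_HOST with your OSM host, and up with the SNMP metric you are interested in):

curl 'http://OSM_HOST:9091/api/v1/query?query=up'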
Possible quick customizations¶
Some common customizations that make this package easily reusable are:
- Modify the VDU’s image, computing resources and/or interfaces at the VNFD.
- Add more VDUs, with different characteristics, at the VNFD.
- Modify the SNMP variables or MIBs used for metrics collection, in the snmp_setcomm_ee_vnf/helm-charts/eechart/snmp folder.
- Modify the playbook contents in the snmp_setcomm_ee_vnf/helm-charts/eechart/source/community.yaml file.
- Modify the day-0 (cloud-init) configuration, through the snmp_setcomm_ee_vnf/cloud_init/vyos-userdata file and the “config” primitive contents at the VNFD.
Squid CNF modeled with Juju bundles¶
This example implements a Squid web proxy operator for Kubernetes. On instantiation, an operator charm is run, creating the pod spec with all the information Kubernetes needs to spin up the deployment.
Onboarding¶
Onboarding requirements¶
OSM client installed in Linux
Internet access to clone packages
Step 1: Clone the OSM packages to your local machine¶
If you don’t have the OSM Packages folder cloned yet, proceed with cloning it:
git clone --recursive https://osm.etsi.org/gitlab/vnf-onboarding/osm-packages
Step 2: Explore the packages¶
The file squid_metrics_cnf_ns/squid_cnf_ns.yaml is the SOL006-compliant network service descriptor, which maps the CNF to VIM-level networks. There is only one network that this service will use:
- “mgmtnet”: the management network, which represents the internal, private management network used for controlling the Squid service.
The VNF directory squid_cnf contains:
- the SOL006-compliant virtual network function descriptor squid_cnf.yaml
- a juju-bundles folder where the Juju bundle and all the charms are located
Explore Day 0 Actions¶
For day-0 configuration in OSM, cloud-init is usually used, but that is not available in CNFs. Instead, in the Juju bundle we can define options for the charms, which are taken into account at deployment time. These options can trigger internal actions in the charms, so we can consider them day-0 actions.
In the file squid_metrics_cnf/juju-bundles/bundle.yaml, we can set the options here:
description: Squid Bundle
bundle: kubernetes
applications:
  squid:
    charm: ./charms/squid-operator
    scale: 1
    options: {} # <-- OPTIONS FOR THE CHARM HERE
Explore Day 1 Actions¶
Day 1 actions are specified under the initial-config-primitive section in the VNF descriptor.
initial-config-primitive:
- seq: 0
  name: add-url
  parameter:
  - name: application-name
    data-type: STRING
    value: squid
  - name: url
    data-type: STRING
    value: "osm.etsi.org"
There is only one day 1 action in this package, which is add-url. There are two parameters passed, application-name and url. The first one indicates the name of the application the action should be executed on. The second one indicates the domain that we would like to enable through the use of our proxy. In this action, we are allowing access to osm.etsi.org through the proxy.
Explore Day 2 Actions¶
Day 2 actions are specified under the config-primitive section in the VNF descriptor.
config-primitive:
- name: add-url
  parameter:
  - name: application-name
    data-type: STRING
    default-value: squid
  - name: url
    data-type: STRING
    default-value: ""
- name: delete-url
  parameter:
  - name: application-name
    data-type: STRING
    default-value: squid
  - name: url
    data-type: STRING
    default-value: ""
There are two actions available: add-url and delete-url. The first action is the same as the one used for day 1 configuration. The second one is similar, taking the same set of parameters; the main difference is that it removes a previously added URL from Squid, so that we are no longer able to reach that URL through the proxy.
Step 3: Upload the packages to the catalogue¶
Using the folders above, you can directly validate, compress and upload the contents to the OSM catalogues as packages, in this way:
# Upload the VNF package first
osm nfpkg-create squid_metrics_cnf/
# Then, upload the NS package that refers to the VNF package
osm nspkg-create squid_metrics_cnf_ns/
With this last step, the onboarding process has finished.
Instantiation¶
Instantiation requirements¶
Full OSM installation (Release 9+)
A VIM added with a Kubernetes cluster registered and enabled
Step 1: Ensure your infrastructure is ready¶
Ensure you have a VIM created, and a K8s cluster registered to it:
osm vim-create --name MY_VIM --tenant MY_VIM_TENANT --user MY_TENANT_USER --password MY_TENANT_PASSWORD --auth_url 'http://MY_KEYSTONE_URL' --account_type openstack --config '{management_network_name: MY_MANAGEMENT_NETWORK}'
osm k8scluster-add MY_CLUSTER --creds kubeconfig.yaml --vim MY_VIM --k8s-nets '{net1: MY_MANAGEMENT_NETWORK}' --version "1.20" --description="My Kubernetes Cluster"
If you do not want to associate the Kubernetes cluster with an actual VIM, you can always create a dummy vim-account and register your cluster with that:
osm vim-create --name MY_DUMMY_VIM_NAME --user u --password p --tenant p --account_type dummy --auth_url http://localhost/dummy
osm k8scluster-add MY_CLUSTER --creds kubeconfig.yaml --vim MY_DUMMY_VIM_NAME --k8s-nets '{net1: MY_MANAGEMENT_NETWORK}' --version "1.20" --description="My Kubernetes Cluster"
To ensure the K8s cluster has been added properly, execute the following command and check that the cluster shows up in ENABLED state.
osm k8scluster-list
Step 2: Instantiate the Network Service¶
Launch the Network Service with the following command (in this example we are using “osm-ext” as the network name):
osm ns-create --ns_name squid-cnf-ns \
--nsd_name squid_cnf_ns \
--vim_account MY_VIM \
--config \
'{vld: [ {name: mgmtnet, vim-network-name: osm-ext} ] }'
This shows how to provide an override for the name of the management network in the NS: mgmtnet is mapped to a network in the VIM called osm-ext.
Step 3: Visualize the results¶
Once instantiated, you can see the NS status with the osm ns-list command or by visiting the GUI.
Once it has finished instantiating, you can get the IP address of the squid service from Kubernetes with the following commands:
cnf_id=`osm vnf-list | grep squid | awk '{ print $2 }'`
osm vnf-show --literal $cnf_id | \
yq e '.kdur[0].services[] | select(.name == "squid").external_ip[0]' -
Search for the external_ip of the service named squid.
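Alternatively, you can look the service up directly in Kubernetes (a sketch; OSM creates the namespace, so searching across all namespaces is simplest):

kubectl get services --all-namespaces | grep squid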
Step 4: Check Day 1 action¶
First of all, you need to get the external_ip of the squid service as mentioned before. Once you have that, execute the following command to check that the day 1 action has worked properly:
https_proxy=EXTERNAL_SQUID_IP:3128 curl https://osm.etsi.org
You should be able to get some content from there. Then test with a different domain and check that you get a 403 error.
Note: Once the deployment is finished, it could take up to 1 minute to properly apply the configuration to squid. If you initially get a 403 error accessing osm.etsi.org, wait a bit.
Step 5: Execute Day 2 action¶
First of all, you need to get the external_ip of the squid service as mentioned before. Once you have that, execute the following command:
osm ns-action --action_name add-url --vnf_name squid_cnf --kdu_name squid-metrics-kdu --params '{url: MY_DOMAIN}' squid-cnf-ns
Check that the action has worked with the following command:
https_proxy=EXTERNAL_SQUID_IP:3128 curl https://MY_DOMAIN
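Similarly, the delete-url day-2 action can revoke access to a domain (a sketch mirroring the add-url invocation above):

osm ns-action --action_name delete-url --vnf_name squid_cnf --kdu_name squid-metrics-kdu --params '{url: MY_DOMAIN}' squid-cnf-ns

After running it, the same curl command through the proxy should return a 403 error again.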
Possible quick customizations¶
Uncomment the commented lines in squid_metrics_cnf/juju-bundles/bundle.yaml in order to enable metrics.
description: Squid Bundle
bundle: kubernetes
applications:
  squid:
    charm: ./charms/squid-operator
    scale: 1
    options:
      enable-exporter: true
  prometheus:
    charm: ./charms/prometheus-operator
    scale: 1
  grafana:
    charm: ./charms/grafana-operator
    scale: 1
relations:
- - prometheus:target
  - squid:prometheus-target
- - grafana:grafana-source
  - prometheus:grafana-source
This will add Prometheus and Grafana to the deployment, and it will also enable an exporter in Squid so that you can see its CPU and network metrics.
Single VDU Virtual Desktop with Native Charms¶
This example implements an Ubuntu MATE virtual machine with XRDP, created from a stock Ubuntu cloud image (https://cloud-images.ubuntu.com/focal/current/). On instantiation, a native charm is run, installing all the required packages and configuring the VM to run as a desktop.
Onboarding¶
Onboarding requirements¶
OSM client installed in Linux
Internet access to clone packages
Step 1: Clone the OSM packages to your local machine¶
If you don’t have the OSM Packages folder cloned yet, proceed with cloning it:
git clone --recursive https://osm.etsi.org/gitlab/vnf-onboarding/osm-packages
Step 2: Explore the packages¶
The file hackfest_virtual-pc_ns/hackfest_virtual-pc_nsd.yaml is the SOL006-compliant network service descriptor, which maps the VNF to VIM-level networks. There are two networks that this service will use:
- “mgmtnet”: the management network, which represents the internal, private management network used for controlling the VM.
- “private”: a secondary network that should only expose the RDP port, providing the point of presence for remote workers. This network can be created by OSM on demand.
The VNF directory hackfest_virtual-pc_vnfd contains:
- the SOL006-compliant virtual network function descriptor virtual-pc_vnfd.yaml
- a cloud-init folder where some day 0 operations can take place
- a charms folder, where the remainder of the day 0, day 1 and day 2 operations code is stored
Explore Day 0 Actions¶
Actions that are encoded in files before the VNF starts are known as day 0 actions. In this package, we set a password for the ubuntu user so that it is immediately available for use. This is done via the OpenStack cloud-init mechanism. The VNFD has the following entry, which instructs OSM to use the file virtual-pc_init from the cloud-init directory as the cloud-init file when launching the VM.
vdu:
- cloud-init-file: virtual-pc_init
Further information on cloud-init can be found here: https://cloudinit.readthedocs.io/en/latest/topics/modules.html
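As a reference, a minimal cloud-config of this kind could look as follows (an illustrative sketch, not the literal contents of virtual-pc_init; the password matches the osm2021 default mentioned in the instantiation section below):

#cloud-config
chpasswd:
  list: |
    ubuntu:osm2021
  expire: false
ssh_pwauth: true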
The use of day 0 is not required for native charms to work. It is used as an example in this VNFD.
Explore Day 1 Actions¶
When using native charms, day 1 actions are performed by the charms themselves. First, the VNFD contains information about the execution environment:
execution-environment-list:
- id: virtual-pc-ee
  juju:
    charm: virtual-pc
    proxy: false
Where:
- id is an arbitrary identifier
- juju indicates that this will be a charm running under Juju
- charm is the name of the folder in the package where the compiled charm is located
- proxy is a flag that tells OSM to create an execution environment for the charm (true), or that the charm will execute within the scope of the NF itself (false)
Now that we know this is a Juju charm, we can look at the folder structure:
hackfest_virtual-pc_vnfd
| - charms
| | - virtual-pc
| | - virtual-pc-src
There are two directories present: virtual-pc, which contains the compiled, ready-to-execute charm code, and virtual-pc-src, which is where all the source code to prepare the charms is placed. The source does not need to be present in the package for deployment; it is just a convenient way of keeping them together for demonstration purposes. Also, the name of the source code directory does not need to match the charm name, as the source is never used at deployment time.
As an operator framework, Juju uses an observer pattern: there are lifecycle events that the charm can listen to in order to perform actions.
From https://juju.is/docs/sdk/events, we can see that the install event is emitted once at the beginning of a charm’s lifecycle, so this is the best place to implement our day 1 actions. Looking at the source code of the charm in virtual-pc-src/src/charm.py, there is an _on_install method which is set up to be called on the install lifecycle event. This is where we install the base APT packages that we require for the virtual desktop to function.
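A minimal sketch of this pattern (package list illustrative; see virtual-pc-src/src/charm.py for the real implementation):

#!/usr/bin/env python3
import subprocess

from ops.charm import CharmBase
from ops.main import main
from ops.model import ActiveStatus, MaintenanceStatus


class VirtualPCCharm(CharmBase):
    def __init__(self, *args):
        super().__init__(*args)
        # Run the day-1 installation once, when the install lifecycle event fires
        self.framework.observe(self.on.install, self._on_install)

    def _on_install(self, event):
        self.unit.status = MaintenanceStatus("Installing base packages")
        # Illustrative package list; the real charm installs the full desktop stack
        subprocess.run(["apt-get", "update"], check=True)
        subprocess.run(["apt-get", "install", "-y", "ubuntu-mate-desktop", "xrdp"], check=True)
        self.unit.status = ActiveStatus()


if __name__ == "__main__":
    main(VirtualPCCharm)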
Explore Day 2 Actions¶
While Juju provides specific lifecycle events, charms are not restricted to just those. For day 2 operations, we can define any action we need, along with additional parameters to pass into the action. For example, in the VNFD, we can see:
config-primitive:
- name: add-package
  execution-environment-ref: virtual-pc-ee
  parameter:
  - data-type: STRING
    name: package
Where:
- name is the name of the action that we tell OSM to perform with the osm ns-action command
- execution-environment-ref points to the name of the execution environment, which is where we defined the name of the charm and that it is run using Juju
- parameter provides optional parameters by name for osm ns-action to pass to the charm on execution
The charm code was told to listen for the add-package event:
self.framework.observe(self.on["add-package"].action, self._add_package)
And the _add_package method can fetch the parameters from the event itself:

packages = event.params["package"]
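Putting the two snippets together, a sketch of such a handler could look like this (error handling simplified, names illustrative):

...
import subprocess

class VirtualPCCharm(CharmBase):
    ...
    def _add_package(self, event):
        # "package" is the parameter declared in the VNFD config-primitive
        packages = event.params["package"]
        try:
            subprocess.run(["apt-get", "install", "-y", packages], check=True)
            event.set_results({"output": f"Installed {packages}"})
        except subprocess.CalledProcessError as e:
            event.fail(f"Package installation failed: {e}")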
Step 3: Upload the packages to the catalogue¶
Using the folders above, you can directly validate, compress and upload the contents to the OSM catalogues as packages, in this way:
# Upload the VNF package first
osm nfpkg-create hackfest_virtual-pc_vnfd
# Then, upload the NS package that refers to the VNF package
osm nspkg-create hackfest_virtual-pc_ns
With this last step, the onboarding process has finished.
Instantiation¶
Instantiation requirements¶
Full OSM installation (Release 9+)
Access to a VIM with an Ubuntu 20.04 image loaded, named ubuntu20.04 in the images catalogue.
Step 1: Ensure your infrastructure is ready¶
Ensure you have a VIM created with a default management network, for example, for OpenStack we would use the following command:
osm vim-create --name MY_VIM --tenant MY_VIM_TENANT --user MY_TENANT_USER --password MY_TENANT_PASSWORD --auth_url 'http://MY_KEYSTONE_URL' --account_type openstack --config '{management_network_name: MY_MANAGEMENT_NETWORK}'
Your management network should exist already in your VIM and should be reachable from the OSM machine.
Step 2: Instantiate the Network Service¶
Launch the Network Service with the following command (in this example we are using “osm-ext” as the network name):
osm ns-create --ns_name virtual-desktop \
--nsd_name hackfest_virtual-pc_ns \
--vim_account vim-name \
--config \
'{vld: [ {name: mgmtnet, vim-network-name: osm-ext} ] }'
This shows how to provide an override for the name of the management network in the NS: mgmtnet is mapped to a network in the VIM called osm-ext. The private network can be overridden in the same way. This can be changed to match your VIM as needed.
Step 3: Visualize the results¶
Once instantiated, you can see the NS status with the osm ns-list command or by visiting the GUI. You can also get the IP address of the VNF using OpenStack or OSM commands:
osm ns-show virtual-desktop --literal | \
yq e '.vcaStatus.*.machines.0.network_interfaces.ens3.ip_addresses.0' -
With the IP address, you can SSH or RDP to the server and log in as ubuntu with password osm2021, or as set in the cloud-init file. Actions can be performed on the virtual desktop as follows:
- osm ns-action virtual-desktop --vnf_name 1 --action_name update-system to trigger the update-system function in the charm, which calls apt to update the software on the system
- osm ns-action virtual-desktop --vnf_name 1 --action_name reboot to reboot the system
- osm ns-action virtual-desktop --vnf_name 1 --action_name announce --params '{message: "Hello from OSM!"}' to display a notice on the logged-in user’s desktop
Possible quick customizations¶
Some common customizations that make this package easily reusable are:
Modify the VDU’s image, computing resources and/or interfaces at the VNFD. Right now the memory and CPU requirements are fairly high; the recommended minimum is 2 CPUs and 2 GB RAM
Modify the software installed by default
Modify the use of an Apt cache that is set up during the day 1 action
Add actions to add/remove users, or change passwords
OpenLDAP CNF modeled with Helm Charts¶
This example implements a CNF with an openldap helm chart from the stable helm chart repository.
About the openldap helm chart¶
The LDAP server is slapd from openldap.org. It follows the OpenLDAP Public License.
The LDAP helm chart can be found in Artifact Hub and it is available via the stable helm chart repo.
The helm chart uses this docker image, which follows the MIT license.
Onboarding¶
Onboarding requirements¶
OSM client installed in Linux
Internet access to clone packages
Step 1: Clone the OSM packages to your local machine¶
If you don’t have the OSM Packages folder cloned yet, proceed with cloning it:
git clone --recursive https://osm.etsi.org/gitlab/vnf-onboarding/osm-packages
Step 2: Explore the packages¶
First, explore the folder openldap_knf, in particular the file openldap_vnfd.yaml. This is the CNF descriptor, which models a single KDU (Kubernetes Deployment Unit) with the specified helm chart (stable/openldap here), a single connection point (mgmt-ext) where all Kubernetes services of this helm chart will be exposed, and certain K8s-cluster requirements (in this case, it must have at least one network to expose services). In most cases, adapting your package would be as simple as changing the helm chart.
It must be noted that the descriptor follows a format defined in OSM, augmenting SOL006, because the modeling of CNFs or other Kubernetes applications has not yet been included in ETSI NFV SOL006.
By default, it is assumed that the helm charts use Helm v3. If the helm chart is based on v2, the descriptor should add the line helm-version: v2 in the KDU section.
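A sketch of the relevant KDU section (field names as used by OSM's augmented model; verify against openldap_vnfd.yaml):

kdu:
- name: ldap
  helm-chart: stable/openldap
  # helm-version: v2  # only needed for Helm v2-based charts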
Step 3: Upload the packages to the catalogue¶
Using the folders above, you can directly validate, compress and upload the contents to the OSM catalogues as packages, in this way:
# Upload the VNF package first
osm nfpkg-create openldap_knf
# Then, upload the NS package that refers to the VNF package
osm nspkg-create openldap_ns
With this last step, the onboarding process has finished.
Instantiation¶
Instantiation requirements¶
Full OSM installation (Release 9+)
A Kubernetes cluster on which to run the CNF, with a load balancer and a default storage class. Details about the requirements can be found here.
Step 1: Ensure your infrastructure is ready¶
Ensure you have a VIM created, for example, for OpenStack we would use the following command:
osm vim-create --name MY_VIM --tenant MY_VIM_TENANT --user MY_TENANT_USER --password MY_TENANT_PASSWORD --auth_url 'http://MY_KEYSTONE_URL' --account_type openstack
Make sure that you have your Kubernetes credentials file (kubeconfig.yaml). Then, if your Kubernetes cluster is running inside a VIM as a set of VMs, identify the VIM network to which the VMs are connected. If your Kubernetes cluster is running outside the VIM, identify the VIM network to which the Kubernetes cluster is physically connected. Check this guide for more details.
Once you have identified the VIM network, e.g. MY_K8S_NETWORK, register the Kubernetes cluster and associate it with the VIM as follows:
osm k8scluster-add MY_CLUSTER --creds kubeconfig.yaml --vim MY_VIM --k8s-nets '{net1: MY_K8S_NETWORK}' --version "1.20" --description="My Kubernetes Cluster"
In some cases, you might be interested in using an isolated K8s cluster to deploy your KNF. Although this is discouraged (an isolated K8s cluster does not make sense in the context of an operator network), it is still possible by creating a dummy VIM target and associating the K8s cluster with that VIM target:
osm vim-create --name MY_LOCATION_1 --user u --password p --tenant p --account_type dummy --auth_url http://localhost/dummy
osm k8scluster-add MY_CLUSTER --creds kubeconfig.yaml --vim MY_LOCATION_1 --k8s-nets '{k8s_net1: null}' --version "v1.15.9" --description="Isolated K8s cluster in MY_LOCATION_1"
Step 2: Instantiate the Network Service¶
Launch the Network Service with the following command (in this example we are using “osm-ext” as the network name):
osm ns-create --ns_name ldap --nsd_name openldap_ns --vim_account vim-name --config "{vld: [{name: mgmtnet, vim-network-name: osm-ext}]}"
Particularize your instantiation parameters¶
You can use your own instantiation parameters for the KDU, for instance to specify the IP address of the Kubernetes load balancer, or to initialize the LDAP server with an organization, domain and admin password. KDU params must be placed under additionalParamsForVnf:[VNF_INDEX]:additionalParamsForKdu:[KDU_INDEX]:additionalParams and they follow the structure defined in the helm chart values file values.yaml.
vld:
- name: mgmtnet
  vim-network-name: osm-ext
additionalParamsForVnf:
- member-vnf-index: openldap
  additionalParamsForKdu:
  - kdu_name: ldap
    additionalParams:
      service:
        type: LoadBalancer
        loadBalancerIP: '172.21.248.204' # Load Balancer IP address
      adminPassword: osm4u
      configPassword: osm4u
      env:
        LDAP_ORGANISATION: "Example Inc."
        LDAP_DOMAIN: "example.org"
        LDAP_BACKEND: "hdb"
        LDAP_TLS: "true"
        LDAP_TLS_ENFORCE: "false"
        LDAP_REMOVE_CONFIG_AFTER_SETUP: "true"
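Assuming the parameters above are saved in a file such as ldap_params.yaml (name illustrative), they can be passed at instantiation time with the --config_file option:

osm ns-create --ns_name ldap --nsd_name openldap_ns --vim_account vim-name --config_file ldap_params.yaml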
Step 3: Visualize the results¶
Once instantiated, you can see the NS status with the osm ns-list command or by visiting the GUI.
Furthermore, you can check:
- The status of the KDU directly from OSM, by getting the NF instance ID (osm vnf-list --ns ldap) and then getting the status with the command osm vnf-show VNF-ID --kdu ldap.
- The status of the KDU using kubectl: first get the OSM project ID (osm project-list), then use kubectl to get details from the namespace identified by the OSM project ID, as follows: kubectl -n OSM_PROJECT_ID get all.
- The status of the KDU using helm: first get the OSM project ID (osm project-list), then get the helm release with helm --kubeconfig kubeconfig.yaml -n OSM_PROJECT_ID list.
- Access to the openldap server:

ldapsearch -x -H ldap://<LDAP_SERVER_IP>:389 -b dc=example,dc=org -D "cn=admin,dc=example,dc=org" -w $LDAP_ADMIN_PASSWORD
Possible quick customizations¶
Some common customizations that make this package easily reusable are:
- Modify the KDU’s helm-chart to use:
  - a different helm chart repo: REPO_NAME/HELM_CHART
  - a specific version of a helm chart: REPO_NAME/HELM_CHART:VERSION
  - a helm chart file mychart.tgz, which has to be placed in VNF_PACKAGE_FOLDER/charts/mychart.tgz
- Use different instantiation parameters, derived from the helm values file values.yaml
Using a helm chart from a different repo¶
If your helm chart is on a repo different from the stable repo, you can add it to OSM as follows:
osm repo-add --type helm-chart --description "Bitnami repo" bitnami https://charts.bitnami.com/bitnami
osm repo-add --type helm-chart --description "Cetic repo" cetic https://cetic.github.io/helm-charts
osm repo-add --type helm-chart --description "Elastic repo" elastic https://helm.elastic.co
osm repo-list
osm repo-show bitnami
Descriptors can include that reference as follows:
helm-chart: REPO_NAME/HELM_CHART
Using a specific version of a helm chart¶
Descriptors can point to a specific version of a helm chart as follows:
helm-chart: REPO_NAME/HELM_CHART:VERSION
Using a helm chart file¶
Sometimes it can be useful to use the chart's tar.gz file directly. You could even fetch a helm chart from the repo and use it as-is, or after modifying it.
You need to install the helm client. Then, you can use it to search for and download charts.
helm repo add [NAME] [URL] [flags]
helm search repo [KEYWORD]
helm fetch [REPO_NAME/HELM_CHART]
helm repo add stable https://charts.helm.sh/stable
helm search repo openldap
helm fetch stable/openldap
helm fetch stable/openldap --version 1.2.6
You could even modify a downloaded helm chart, for instance to add new parameters (called values). This guide will help you.
Using different instantiation parameters, derived from the helm values file¶
The allowed instantiation parameters for a KDU come from the helm values file values.yaml of the helm chart. You can get the default values, as well as other chart information, as follows:
helm show chart stable/openldap
helm show readme stable/openldap
helm show values stable/openldap
When instantiating with OSM, all you need to do is place those params under additionalParamsForVnf:[VNF_INDEX]:additionalParamsForKdu:[KDU_INDEX]:additionalParams, with the right indentation.
Starting with Juju Bundles¶
This section covers the basics to start with Juju bundles.
First of all, let’s install the charmcraft snap, which will help us create and build the charm.
sudo snap install charmcraft
Folder structure (VNF package)¶
This is the folder structure that we will use to include Juju bundles in our package.
└── juju-bundles
    ├── bundle.yaml
    ├── charms
    │   └── example-operator
    └── ops
        └── example-operator
Inside the juju-bundles folder:
- bundle.yaml: file with the Juju bundle. We will show it in detail in the following sections.
- charms/: folder with the built operators (charms).
- ops/: folder with the operators’ (charms) source code.
Create charm¶
To create a charm, we will first change the directory to the folder of the operators’ source code.
cd juju-bundles/ops/
Now we will create a folder for our charm operator, and initialize it:
mkdir example-operator
charmcraft init --project-dir example-operator --name example
Good practice:
- The folder name should contain the application name, followed by -operator.
- The charm name should just be the name of the application.
Now we have the charm initialized!
Deployment type and service¶
The deployment type and service of the Kubernetes pod can be defined in the metadata.yaml by adding the following content:
deployment:
  type: stateful | stateless
  service: cluster | loadbalancer
Add storage¶
If the workload needs some persistent storage, it can be defined in the metadata.yaml by adding the following content:
storage:
  storage-name:
    type: filesystem
    location: /path/to/storage
The location of the storage will be mounted as a persistent volume in Kubernetes, so the data available there will persist even if the pod restarts. Charms with storage must be stateful.
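Putting both together, a metadata.yaml fragment for a stateful charm exposed through a load balancer, with one persistent volume, could look like this (a sketch; names and paths illustrative):

deployment:
  type: stateful
  service: loadbalancer
storage:
  data:
    type: filesystem
    location: /var/lib/example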
Operator code¶
This section shows the base code for all the Kubernetes charms. If you copy and paste the following content, you will just need to update the pod_spec dictionary to match what you want. In the code, there are comments explaining what each key is for.
src/charm.py¶
#!/usr/bin/env python3
import logging

from ops.charm import CharmBase
from ops.main import main
from ops.model import ActiveStatus

logger = logging.getLogger(__name__)

file_mount_path = "/tmp/files"
file_name = "my-file"
file_content = """
This is the content of a file
that will be mounted as a configmap
to my container
"""


class ExampleCharm(CharmBase):
    def __init__(self, *args):
        super().__init__(*args)
        self.framework.observe(self.on.config_changed, self.configure_pod)
        self.framework.observe(self.on.leader_elected, self.configure_pod)

    def configure_pod(self, _):
        self.model.pod.set_spec(
            {
                "version": 3,
                "containers": [  # list of containers for the pod
                    {
                        "name": "example",  # container name
                        "image": "httpd:2.4",  # image for the container
                        "ports": [  # ports exposed by the container
                            {
                                "name": "http",
                                "containerPort": 80,
                                "protocol": "TCP",
                            }
                        ],
                        "kubernetes": {  # k8s-specific container attributes
                            "livenessProbe": {"httpGet": {"path": "/", "port": 80}},
                            "readinessProbe": {"httpGet": {"path": "/", "port": 80}},
                            "startupProbe": {"httpGet": {"path": "/", "port": 80}},
                        },
                        "envConfig": {  # environment variables that will be passed to the container
                            "ENVIRONMENT_VAR_1": "value",
                            "ENVIRONMENT_VAR_2": "value",
                        },
                        "volumeConfig": [  # files to mount as a configmap
                            {
                                "name": "example-file",
                                "mountPath": file_mount_path,
                                "files": [{"path": file_name, "content": file_content}],
                            }
                        ],
                    }
                ],
            }
        )
        self.unit.status = ActiveStatus()


if __name__ == "__main__":
    main(ExampleCharm)
actions.yaml¶
Remove the fortune action included when initializing the charm. Replace the file with the following content:
{}
config.yaml¶
Remove the thing option included when initializing the charm. Replace the file with the following content:
options: {}
Config¶
Sometimes the workload we are deploying can have several configuration options. In the charm, we can expose those options as config, and then change the configuration as desired in each deployment.
To add the config, we just need to update the config.yaml file.
options:
  port:
    description: Port for service
    default: 80
    type: int
  debug:
    description: Indicate if debugging mode should be enabled or not.
    default: false
    type: boolean
  username:
    description: Default username for authenticating to the service
    default: admin
    type: string
Getting a config value in the code is pretty simple:
...
class ExampleCharm(CharmBase):
    def __init__(self, *args):
        ...

    def configure_pod(self, _):
        port = self.config["port"]
        username = self.config["username"]
        debug = self.config["debug"]
Actions¶
To add an action to the charm, we need to edit the actions.yaml file with content similar to this:
actions:
  touch:
    params:
      filename:
        description: Filename of the file that will be created (full path)
        type: string
    required:
    - filename
The content above defines the high-level information about the action and the parameters accepted by it.
Here is how we can implement the code for the action above in src/charm.py:
...
import subprocess


class ExampleCharm(CharmBase):
    def __init__(self, *args):
        ...
        self.framework.observe(self.on.touch_action, self.touch)

    def touch(self, event):
        filename = event.params["filename"]
        try:
            subprocess.run(["touch", filename], check=True)
            event.set_results({"output": f"File {filename} created successfully"})
        except Exception as e:
            event.fail(f"Touch action failed with the following exception: {e}")
IMPORTANT: The action is executed in the workload pod, not in the operator one. This means that, since the action code is in Python, the container must have Python installed. This limitation will go away soon with Juju 2.9.
Requirements¶
Add any extra Python dependencies needed by the charm to the requirements.txt file. They will be downloaded when building the charm.
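For instance, if the charm code imported a helper library such as requests (hypothetical for this charm), requirements.txt would simply list it, one dependency per line:

# requirements.txt (illustrative)
requests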
Build charm¶
Build the charm with the following command:
charmcraft build
Now we need to move the build folder to juju-bundles/charms/, and then reference the built charm from the Juju bundle.
mv build/ ../../charms/example-operator
Create bundle¶
Go to the juju-bundles folder and fill bundle.yaml with the following content:
description: Example Bundle
bundle: kubernetes
applications:
  example:
    charm: "./charms/example-operator"
    scale: 1
    options:
      port: 80
      debug: true
      username: osm
More¶
Lifecycle events: you can find documentation about the lifecycle events available to charms here.
Pod spec references:
- https://discourse.charmhub.io/t/k8s-spec-v3-changes/2698
- https://discourse.charmhub.io/t/k8s-spec-reference/3495
Juju docs: https://juju.is/docs/sdk
Operator framework docs: https://ops.readthedocs.io/en/latest/index.html