To remove the NSI2 run the command: `osm nsi-delete my_shared_slice`.
## Using Kubernetes-based VNFs (KNFs)
From Release SEVEN, OSM supports Kubernetes-based Network Functions (KNFs). This feature unlocks more than 20,000 existing packages that can be deployed alongside VNFs and PNFs. This section guides you through deploying your first KNF, from the installation of a Kubernetes cluster (in one of several ways) to the selection and deployment of a package.
### Kubernetes installation
The KNFs feature requires an operational Kubernetes cluster, and there are several ways to get one running. From the OSM perspective, the Kubernetes cluster is not an isolated element, but a technology that enables the deployment of microservices in a cloud-native way. To handle the networks and facilitate the connection to the infrastructure, the cluster has to be associated with a VIM. There is a special case where the Kubernetes cluster is installed in a bare-metal environment without management of the networking part, but in general OSM considers that the Kubernetes cluster is located in a VIM.
OSM proposes different ways to install Kubernetes, among them:

1. OSM Kubernetes cluster Network Service
2. [Self-managed Kubernetes cluster in a VIM](15-k8s-installation.md)
MetalLB in layer 2 mode is configured via a ConfigMap:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.21.248.18-172.21.248.18
```
`TIP1`: In `addresses` you need to set the IP address range of your Kubernetes cluster, i.e. the IP addresses that the cluster will allocate on the external network interface. Make sure this range does not overlap the DHCP address pool of your VIM's external network. If your Kubernetes installation is a `self-managed kubernetes cluster in a VIM`, you also need to make sure that the network port of the VM in your OpenStack has `port_security_enabled = False` (see [Neutron configuration](https://wiki.openstack.org/wiki/Neutron/ML2PortSecurityExtensionDriver)). That will allow you to allocate IP addresses in the same range as your external network. If you have multiple workers, `port_security_enabled = False` has to be configured on the external network interface of all worker nodes.
`TIP2`: The minimal configuration for associating the Kubernetes cluster with OSM is at least one LoadBalancer IP for the Juju controller. If you cannot set `port_security_enabled = False` in your VIM, it is enough to use the IP address of your VM as the MetalLB address range, as in the example above (which assumes the IP of the VM is `172.21.248.18`).
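The overlap check from `TIP1` can be sketched in plain shell. The ranges below are hypothetical examples, not values taken from any particular VIM:

```bash
#!/usr/bin/env bash
# Convert a dotted-quad IPv4 address to an integer for comparison.
ip2int() {
  local IFS=.
  read -r a b c d <<< "$1"
  echo $(( (a<<24) + (b<<16) + (c<<8) + d ))
}

# Hypothetical ranges: the MetalLB pool vs the VIM's DHCP pool.
METALLB_START=$(ip2int 172.21.248.18); METALLB_END=$(ip2int 172.21.248.30)
DHCP_START=$(ip2int 172.21.248.100);   DHCP_END=$(ip2int 172.21.248.200)

# Two ranges overlap unless one ends before the other starts.
if (( METALLB_END < DHCP_START || DHCP_END < METALLB_START )); then
  echo "no overlap"
else
  echo "overlap"
fi
```

If the script prints `overlap`, shrink or move the MetalLB pool before applying the ConfigMap.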
After creating `config.yaml`, apply it to the Kubernetes cluster:
```bash
kubectl apply -f config.yaml
```
For Microk8s it is even simpler:
```bash
$ microk8s.enable metallb
Enabling MetalLB
Enter the IP address range (e.g., 10.64.140.43-10.64.140.49): 192.168.0.10-192.168.0.25
[...]
MetalLB is enabled
```
#### Kubernetes default storage class
A default storage class for Kubernetes persistent volumes can be installed in your cluster by applying a suitable manifest.
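As a sketch, a default StorageClass manifest looks like the one below. The provisioner name is an assumption (`rancher.io/local-path` requires the local-path-provisioner to be installed; Microk8s instead ships its own `microk8s-hostpath` class via the `storage` addon):

```yaml
# Sketch of a default StorageClass; the provisioner is an assumption
# and must match a provisioner actually running in your cluster.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
```

After applying it with `kubectl apply -f`, `kubectl get storageclass` should list the class marked as `(default)`.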
In order to test a Kubernetes-based VNF (KNF), you require a K8s cluster connected to a network in the VIM (e.g. `vim-net`).
`TIP`: If you have a bare-metal installation of Kubernetes, you will need to add a dummy (non-existent) VIM in order to register the Kubernetes cluster.
You will have to add the K8s cluster to OSM. For that purpose, you can use these instructions:
```bash
osm k8scluster-list
osm k8scluster-show cluster
```
The options used to add the cluster are the following:

- `--creds`: the location of the kubeconfig file containing the cluster credentials
- `--version`: the current version of your Kubernetes cluster
- `--vim`: the name of the VIM where the Kubernetes cluster is deployed
- `--description`: a description for your Kubernetes cluster
- `--k8s-nets`: a dictionary of cluster networks, where each `key` is an arbitrary name and each `value` is the name of the network in the VIM
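Putting those options together, a representative invocation might look like the following. The kubeconfig path, version, VIM name and network are placeholders for your environment:

```bash
# All values below are placeholders; replace them with your own.
osm k8scluster-add cluster \
  --creds kubeconfig.yaml \
  --version v1.17 \
  --vim openstack-site \
  --description "My K8s cluster" \
  --k8s-nets '{k8s_net1: vim-net}'
```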
Then, you might need to add some repos from where to download helm charts required by the KNF:
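For example, to add a public helm-chart repository to OSM (the repository name and URL here are just an illustration):

```bash
# Add a helm-chart repository to OSM (name and URL are illustrative).
osm repo-add --type helm-chart --description "Bitnami helm chart repository" bitnami https://charts.bitnami.com/bitnami
# List the repositories known to OSM.
osm repo-list
```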
This section describes the common installation procedures for a Kubernetes cluster that can be linked with OSM: first a Microk8s installation, and then a kubeadm installation.
## Microk8s installation procedure
Microk8s is a single-package fully conformant lightweight Kubernetes that works on 42 flavours of Linux. Perfect for developer workstations, IoT, edge, and CI/CD.
Using Microk8s as a Kubernetes cluster in OSM is straightforward.
First, install Microk8s with the following commands:
```bash
$ sudo snap install microk8s --classic
microk8s v1.17.2 from Canonical✓ installed
$ sudo usermod -a -G microk8s `whoami`
$ newgrp microk8s
$ microk8s.status --wait-ready
microk8s is running
addons:
cilium: disabled
dashboard: disabled
dns: disabled
fluentd: disabled
gpu: disabled
helm3: disabled
helm: disabled
ingress: disabled
istio: disabled
jaeger: disabled
juju: disabled
knative: disabled
kubeflow: disabled
linkerd: disabled
metallb: disabled
metrics-server: disabled
prometheus: disabled
rbac: disabled
registry: disabled
storage: disabled
```
Microk8s uses [addons](https://microk8s.io/docs/addons) to extend its functionality. The required addons for Microk8s to work with OSM are `storage` and `dns`.
```bash
$ microk8s.enable storage dns
Enabling default storage class
[...]
microk8s-hostpath created
Storage will be available soon
Enabling DNS
[...]
DNS is enabled
```
You will want to use the `metallb` addon if your Microk8s is not running on the same machine as OSM. When OSM adds a K8s cluster, it initializes the cluster so that it can deploy Juju and Helm workloads on it. In the Juju initialization process, a controller is bootstrapped on the K8s cluster, which will then be accessible by the Juju client (N2VC). When the K8s cluster is external to the OSM host machine, it must give the Juju controller an external IP address reachable from OSM.
Just execute the following command and specify the IP range that the load balancer can allocate:
```bash
$ microk8s.enable metallb
Enabling MetalLB
Enter the IP address range (e.g., 10.64.140.43-10.64.140.49): 192.168.0.10-192.168.0.25
[...]
MetalLB is enabled
```
Export the Microk8s configuration and add it as a K8s cluster to OSM:
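These two steps can be sketched as follows. `microk8s.config` prints the cluster's kubeconfig; the cluster name, version, VIM name and network below are placeholders:

```bash
# Export the Microk8s credentials to a kubeconfig file.
microk8s.config > kubeconfig.yaml

# Register the cluster in OSM (VIM name, version and network are placeholders).
osm k8scluster-add microk8s-cluster \
  --creds kubeconfig.yaml \
  --version v1.17 \
  --vim openstack-site \
  --description "Microk8s cluster" \
  --k8s-nets '{k8s_net1: vim-net}'
```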