diff --git a/05-osm-usage.md b/05-osm-usage.md
index b9056c9bce725965d6d699e38362d8a096f9e16b..4da7f73ecc8de5d9fec083f2cf85f79cbd7b1065 100644
--- a/05-osm-usage.md
+++ b/05-osm-usage.md
@@ -1151,7 +1151,28 @@ To remove the NSI2 run the command: `osm nsi-delete my_shared_slice`.
 
 ## Using Kubernetes-based VNFs (KNFs)
 
-From Release SEVEN, OSM supports Kubernetes-based VNF (KNF). In order to test it, you require a K8s cluster connected to a network in the VIM (e.g. `vim-net`).
+From Release SEVEN, OSM supports Kubernetes-based Network Functions (KNFs). This feature unlocks more than 20,000 packages that can be deployed alongside VNFs and PNFs. This section guides you through the deployment of your first KNF, from the different ways of installing a Kubernetes cluster to the selection and deployment of the package.
+
+### Kubernetes installation
+
+The KNF feature requires an operational Kubernetes cluster, and there are several ways to get one running. From the OSM perspective, the Kubernetes cluster is not an isolated element, but a technology that enables the deployment of microservices in a cloud-native way. To handle the networks and facilitate the connection to the infrastructure, the cluster has to be associated with a VIM. There is a special case where the Kubernetes cluster is installed on bare metal and OSM does not manage the networking, but in general OSM considers that the Kubernetes cluster is located in a VIM.
+
+For OSM, you can use one of these three ways to install your Kubernetes cluster:
+
+1. [OSM Kubernetes cluster Network Service](15-k8s-installation.md)
+2. [Self-managed Kubernetes cluster in a VIM](15-k8s-installation.md)
+3. [Kubernetes bare-metal installation](15-k8s-installation.md)
+
+### OSM Kubernetes requirements
+
+After the Kubernetes installation is complete, check that the following components are present in your cluster:
+
+1. [Kubernetes LoadBalancer](15-k8s-installation.md): to expose your KNFs to the network
+2. [Kubernetes default StorageClass](15-k8s-installation.md): to support persistent volumes
+
+### Adding a Kubernetes cluster to OSM
+
+In order to test Kubernetes-based Network Functions (KNFs), you require a K8s cluster connected to a network in the VIM (e.g. `vim-net`). If you have a bare-metal installation of Kubernetes, you will first need to add a VIM in order to add the Kubernetes cluster.
 
 You will have to add the K8s cluster to OSM. For that purpose, you can use these instructions:
 
@@ -1161,7 +1182,17 @@ osm k8scluster-list
 osm k8scluster-show cluster
 ```
 
-Then, you might need to add some repos from where to download helm charts required by the KNF:
+The options used to add the cluster are the following:
+
+- `--creds`: the location of the kubeconfig file that holds the cluster credentials
+- `--version`: the current version of your Kubernetes cluster
+- `--vim`: the name of the VIM where the Kubernetes cluster is deployed
+- `--description`: a free-text description of your Kubernetes cluster
+- `--k8s-nets`: a dictionary of the cluster networks, where each `key` is an arbitrary name and each `value` is the name of the corresponding network in the VIM. In case your K8s cluster is not located in a VIM, you could use '{net1: null}'. A complete invocation is sketched right after this list.
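+
+For illustration, a full command could look as follows (a minimal sketch: the cluster name `my-cluster`, the VIM name `openstack-site` and the version value are placeholders to replace with your own values; `vim-net` is the VIM network mentioned above):
+
+```bash
+osm k8scluster-add --creds kubeconfig.yaml \
+      --version '1.15' \
+      --vim openstack-site \
+      --description "My K8s cluster" \
+      --k8s-nets '{"net1": "vim-net"}' \
+      my-cluster
+```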
+
+## Adding repositories to OSM
+
+You might need to add some repositories from which to download the helm charts required by the KNF:
 
 ```bash
 osm repo-add --type helm-chart --description "Bitnami repo" bitnami https://charts.bitnami.com/bitnami
@@ -1171,7 +1202,13 @@ osm repo-list
 osm repo-show bitnami
 ```
 
-Once done, you can work with KNF in the same way as you do with any VNF. You can onboard them. For instance, you can use the example below of a KNF consisting of a single Kubernetes deployment unit based on OpenLDAP helm chart.
+## KNF Service on-boarding
+
+KNFs can be on-boarded using helm charts or juju bundles. The following sections show an example of each.
+
+### KNF helm-chart
+
+Once the cluster is attached to your OSM, you can work with KNFs in the same way as you do with any VNF. You can onboard them. For instance, you can use the example below of a KNF consisting of a single Kubernetes deployment unit based on the OpenLDAP helm chart.
 
 ```bash
 wget http://osm-download.etsi.org/ftp/Packages/hackfests/openldap_knf.tar.gz
@@ -1254,3 +1291,37 @@ osm repo-delete elastic
 #Delete cluster
 osm k8scluster-delete cluster
 ```
+
+## KNF juju-bundle
+
+This is an example of how to onboard a service that uses a juju bundle. The service to be onboarded is a MediaWiki, composed of a mariadb-k8s database and a mediawiki-k8s frontend.
+
+```bash
+wget http://osm-download.etsi.org/ftp/Packages/hackfests/mediawiki_cnf.tar.gz
+wget http://osm-download.etsi.org/ftp/Packages/hackfests/mediawiki_cnf_ns.tar.gz
+osm nfpkg-create mediawiki_cnf.tar.gz
+osm nspkg-create mediawiki_cnf_ns.tar.gz
+```
+
+You can instantiate the Network Service:
+
+```bash
+osm ns-create --ns_name hf-k8s --nsd_name ubuntu-cnf-ns --vim_account
+```
+
+To check the status of the deployment, you can run the following command:
+
+```bash
+osm ns-op-list hf-k8s
++--------------------------------------+-------------+-------------+-----------+---------------------+--------+
+| id                                   | operation   | action_name | status    | date                | detail |
++--------------------------------------+-------------+-------------+-----------+---------------------+--------+
+| 364c1378-ba86-447e-ad00-93fc1bf1bdd5 | instantiate | N/A         | COMPLETED | 2020-02-24T13:49:03 | -      |
++--------------------------------------+-------------+-------------+-----------+---------------------+--------+
+```
+
+To remove the network service, run:
+
+```bash
+osm ns-delete hf-k8s
+```
diff --git a/15-k8s-installation.md b/15-k8s-installation.md
new file mode 100644
index 0000000000000000000000000000000000000000..bd6c0d0aa19260bbfc00400eddb857d8ad3b8612
--- /dev/null
+++ b/15-k8s-installation.md
@@ -0,0 +1,278 @@
+# ANNEX 7 Kubernetes installation and requirements
+
+This section describes the standard installation procedure for a Kubernetes cluster that can be used by OSM. There are two modes of representing a K8s cluster for OSM.
+
+Inside a VIM (single-network and multi-network):
+
+![k8s-in-vim-multinet](assets/800px-k8s-in-vim-multinet.png)
+
+![k8s-in-vim-singlenet](assets/800px-k8s-in-vim-singlenet.png)
+
+Outside a VIM:
+
+![k8s-out-vim](assets/800px-k8s-out-vim.png)
+
+Your Kubernetes cluster needs to meet the following requirements (a quick verification sketch follows the list):
+
+1. Kubernetes LoadBalancer: to expose your KNFs to the network
+2. Kubernetes default StorageClass: to support persistent volumes
+3. Tiller permissions, for K8s clusters > v1.15
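+
+As a quick sanity check, the first two requirements can be verified with plain `kubectl` (an illustrative sketch; the exact names in the output will vary with your installation):
+
+```bash
+# A default StorageClass should appear marked as "(default)"
+kubectl get storageclass
+# LoadBalancer-type services should obtain an address instead of staying "<pending>"
+kubectl get svc --all-namespaces
+```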
+
+There are three methods to create a Kubernetes cluster:
+
+1. OSM Kubernetes cluster Network Service
+2. Local development environment
+3. Manual cluster installation
+
+That is: first, an OSM-managed Kubernetes deployment; second, a Microk8s installation; and third, a kubeadm installation.
+
+## Installation method 1: OSM Kubernetes cluster from Network Service
+
+`TODO`
+
+## Installation method 2: Local development environment
+
+Microk8s is a single-package, fully conformant, lightweight Kubernetes that works on 42 flavours of Linux: perfect for developer workstations, IoT, edge, and CI/CD.
+
+Using Microk8s as a Kubernetes cluster in OSM is straightforward.
+
+First, install Microk8s with the following commands:
+
+```bash
+$ sudo snap install microk8s --classic
+microk8s v1.17.2 from Canonical✓ installed
+$ sudo usermod -a -G microk8s `whoami`
+$ newgrp microk8s
+$ microk8s.status --wait-ready
+microk8s is running
+addons:
+cilium: disabled
+dashboard: disabled
+dns: disabled
+fluentd: disabled
+gpu: disabled
+helm3: disabled
+helm: disabled
+ingress: disabled
+istio: disabled
+jaeger: disabled
+juju: disabled
+knative: disabled
+kubeflow: disabled
+linkerd: disabled
+metallb: disabled
+metrics-server: disabled
+prometheus: disabled
+rbac: disabled
+registry: disabled
+storage: disabled
+```
+
+Microk8s uses [addons](https://microk8s.io/docs/addons) to extend its functionality. The required addons for Microk8s to work with OSM are “storage” and “dns”.
+
+```bash
+$ microk8s.enable storage dns
+Enabling default storage class
+[...]
+microk8s-hostpath created
+Storage will be available soon
+Enabling DNS
+[...]
+DNS is enabled
+```
+
+You may want to use the metallb addon if your Microk8s is not running on the same machine as OSM. When OSM adds a K8s cluster, it initializes the cluster so that it can deploy Juju and Helm workloads on it. In the Juju initialization process, a controller is bootstrapped on the K8s cluster, which will then be accessible by the Juju client (N2VC). When the K8s cluster is external to the OSM host machine, it must give the Juju controller an external IP that is accessible from OSM.
+
+Just execute the following command and specify the IP range that the load balancer may allocate:
+
+```bash
+$ microk8s.enable metallb
+Enabling MetalLB
+Enter the IP address range (e.g., 10.64.140.43-10.64.140.49): 192.168.0.10-192.168.0.25
+[...]
+MetalLB is enabled
+```
+
+Export the Microk8s configuration and add it as a K8s cluster to OSM:
+
+```bash
+microk8s.config > kubeconfig.yaml
+osm k8scluster-add --creds kubeconfig.yaml \
+      --version '1.17' \
+      --vim openstack \
+      --description "My K8s cluster" \
+      --k8s-nets '{"net1": "osm-ext"}' \
+      microk8s-cluster
+```
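+
+As an optional check (using the same OSM commands shown earlier in this guide), you can verify that the cluster was registered correctly:
+
+```bash
+osm k8scluster-list
+osm k8scluster-show microk8s-cluster
+```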
+
+## Installation method 3: Manual cluster installation steps for Ubuntu
+
+For the manual installation of a Kubernetes cluster, we will use the kubeadm procedure.
+
+Get the Docker gpg key to install Docker:
+
+```bash
+curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
+```
+
+Add the Docker Ubuntu repository:
+
+```bash
+sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
+   $(lsb_release -cs) \
+   stable"
+```
+
+Get the Kubernetes gpg key:
+
+```bash
+curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
+```
+
+Add the Kubernetes repository:
+
+```bash
+cat << EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
+deb https://apt.kubernetes.io/ kubernetes-xenial main
+EOF
+```
+
+Update your packages:
+
+```bash
+sudo apt-get update
+```
+
+Install Docker, kubelet, kubeadm, and kubectl:
+
+```bash
+sudo apt-get install -y docker-ce kubelet kubeadm kubectl
+```
+
+Hold them at the current version:
+
+```bash
+sudo apt-mark hold docker-ce kubelet kubeadm kubectl
+```
+
+Initialize the cluster (run only on the master):
+
+```bash
+sudo kubeadm init --pod-network-cidr=10.244.0.0/16
+```
+
+Set up the local kubeconfig:
+
+```bash
+mkdir -p $HOME/.kube
+sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
+sudo chown $(id -u):$(id -g) $HOME/.kube/config
+```
+
+Apply the Flannel CNI network overlay:
+
+```bash
+kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
+```
+
+[OPTIONAL] Join the worker nodes to the cluster:
+
+```bash
+kubeadm join [your unique string from the kubeadm init command]
+```
+
+Verify that the worker nodes have joined the cluster successfully:
+
+```bash
+kubectl get nodes
+```
+
+The result of the `kubectl get nodes` command should look like this:
+
+```bash
+NAME                 STATUS   ROLES    AGE     VERSION
+node1.osm.etsi.org   Ready    master   4m18s   v1.13.5
+```
+
+If you have an all-in-one node, you may want to schedule pods on the master. You need to untaint the master to allow that.
+
+Untaint the master:
+
+```bash
+kubectl taint nodes --all node-role.kubernetes.io/master-
+```
+
+After the creation of your cluster, you need to fulfil the requirements of OSM. We can start with the installation of a load balancer for your cluster.
+
+MetalLB is a very powerful, easy-to-configure load balancer for Kubernetes. To install it in your cluster, apply the following K8s manifest:
+
+```bash
+kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.8.3/manifests/metallb.yaml
+```
+
+MetalLB is configured in layer-2 mode via a ConfigMap:
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  namespace: metallb-system
+  name: config
+data:
+  config: |
+    address-pools:
+    - name: default
+      protocol: layer2
+      addresses:
+      - 172.21.248.10-172.21.248.250
+```
+
+After creating `config.yaml` with this content, apply it to the Kubernetes cluster:
+
+```bash
+kubectl apply -f config.yaml
+```
+
+You should ensure that the range of IP addresses defined in MetalLB is accessible from outside the cluster and does not overlap with other devices in that network. This network should also be reachable from OSM, since OSM needs it to communicate with the cluster.
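+
+To check that MetalLB actually hands out addresses from the pool, you can expose a throwaway deployment and watch for an external IP (an illustrative sketch; the `lb-test` name and the nginx image are arbitrary choices):
+
+```bash
+# Create a test deployment and expose it through a LoadBalancer service
+kubectl create deployment lb-test --image=nginx
+kubectl expose deployment lb-test --type=LoadBalancer --port=80
+# EXTERNAL-IP should show an address from the MetalLB pool, not "<pending>"
+kubectl get svc lb-test
+# Clean up the test resources
+kubectl delete service,deployment lb-test
+```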
+
+The other configuration you need for your Kubernetes cluster is the creation of a default StorageClass.
+
+Kubernetes persistent volume storage can be added to your cluster by applying the following OpenEBS manifest:
+
+```bash
+kubectl apply -f https://openebs.github.io/charts/openebs-operator-1.6.0.yaml
+```
+
+After the installation, check whether there is a default StorageClass in your cluster:
+
+```bash
+kubectl get storageclass
+NAME                        PROVISIONER                                                 RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
+openebs-device              openebs.io/local                                            Delete          WaitForFirstConsumer   false                  5m47s
+openebs-hostpath            openebs.io/local                                            Delete          WaitForFirstConsumer   false                  5m47s
+openebs-jiva-default        openebs.io/provisioner-iscsi                                Delete          Immediate              false                  5m48s
+openebs-snapshot-promoter   volumesnapshot.external-storage.k8s.io/snapshot-promoter    Delete          Immediate              false                  5m47s
+```
+
+So far, no default StorageClass is defined. With the command below, we will set openebs-hostpath as the default StorageClass:
+
+```bash
+kubectl patch storageclass openebs-hostpath -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
+```
+
+To check that the StorageClass definition was applied correctly, use the following command:
+
+```bash
+kubectl get storageclass
+NAME                         PROVISIONER                                                 RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
+openebs-device               openebs.io/local                                            Delete          WaitForFirstConsumer   false                  5m47s
+openebs-hostpath (default)   openebs.io/local                                            Delete          WaitForFirstConsumer   false                  5m47s
+openebs-jiva-default         openebs.io/provisioner-iscsi                                Delete          Immediate              false                  5m48s
+openebs-snapshot-promoter    volumesnapshot.external-storage.k8s.io/snapshot-promoter   Delete          Immediate              false                  5m47s
+```
+
+For Kubernetes clusters > v1.15, Tiller needs special permissions, which can be granted with the following command:
+
+```bash
+kubectl create clusterrolebinding tiller-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
+```
diff --git a/assets/800px-k8s-in-vim-multinet.png b/assets/800px-k8s-in-vim-multinet.png
new file mode 100644
index 0000000000000000000000000000000000000000..17529777296e2e13e37f1ffb65d2b7fca8eac0d0
Binary files /dev/null and b/assets/800px-k8s-in-vim-multinet.png differ
diff --git a/assets/800px-k8s-in-vim-singlenet.png b/assets/800px-k8s-in-vim-singlenet.png
new file mode 100644
index 0000000000000000000000000000000000000000..f07c6d3a67f1886f212ca8a63fb7ea968b8cf2dc
Binary files /dev/null and b/assets/800px-k8s-in-vim-singlenet.png differ
diff --git a/assets/800px-k8s-out-vim.png b/assets/800px-k8s-out-vim.png
new file mode 100644
index 0000000000000000000000000000000000000000..eef60ba09e5580065037526f33a0519c97dc1a34
Binary files /dev/null and b/assets/800px-k8s-out-vim.png differ