Commit cce36245 authored by vicens

Updating content and adding images to OSM installation

parent f64507ad

@@ -1157,105 +1157,22 @@ From Release SEVEN, OSM supports Kubernetes-based Network Functions (KNF). This

The KNF feature requires an operational Kubernetes cluster. There are several ways to get that cluster running. From the OSM perspective, the Kubernetes cluster is not an isolated element, but a technology that enables the deployment of microservices in a cloud-native way. To handle the networks and facilitate the connection to the infrastructure, the cluster has to be associated to a VIM. There is a special case where the Kubernetes cluster is installed in a baremetal environment without management of the networking part, but in general OSM considers that the Kubernetes cluster is located in a VIM.

For OSM, you can use one of these three different ways to install your Kubernetes cluster:

1. [OSM kubernetes cluster Network Service](15-k8s-installation.md)
2. [Self-managed kubernetes cluster in a VIM](15-k8s-installation.md)
3. [Kubernetes baremetal installation](15-k8s-installation.md)

### OSM Kubernetes requirements

After the Kubernetes installation is completed, check that the following components are available in your cluster (a quick check is sketched after the list):

1. [Kubernetes Loadbalancer](15-k8s-installation.md): to expose your KNFs to the network
2. [Kubernetes default Storageclass](15-k8s-installation.md): to support persistent volumes.
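
A quick way to verify both requirements (this assumes MetalLB is used as the LoadBalancer implementation, as described below):

```bash
# Sanity checks on an existing cluster
kubectl get pods -n metallb-system   # LoadBalancer pods should be Running
kubectl get storageclass             # one entry should be marked "(default)"
```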

#### Kubernetes Loadbalancer

MetalLB is a very powerful, easy-to-configure LoadBalancer for Kubernetes. To install it in your cluster, you can apply the following k8s manifest:

```bash
kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.8.3/manifests/metallb.yaml
```

The configuration of MetalLB in layer 2 mode is done via a ConfigMap:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.21.248.18-172.21.248.18
```

`TIP1`: In `addresses` you need to set the IP address range for your Kubernetes cluster, that is, the IP addresses that the cluster will allocate on the external network interface. Make sure this range does not overlap the DHCP address pool of your VIM external network. If your Kubernetes installation is a `self-managed kubernetes cluster in a VIM`, make sure that the network port of the VM in your OpenStack has `port_security_enabled = False` (see [Neutron configuration](https://wiki.openstack.org/wiki/Neutron/ML2PortSecurityExtensionDriver)). That will allow you to allocate IP addresses in the same range as your external network. If you have multiple workers, `port_security_enabled = False` has to be configured on the external network interface of all worker nodes.

`TIP2`: The minimal configuration for associating the Kubernetes cluster to OSM is to have at least one LoadBalancer IP for the Juju controller. If you can't configure `port_security_enabled = False` in your VIM, it is enough to add the IP address of your VM to the MetalLB addresses, as we did in the example. This assumes that the IP of the VM is `172.21.248.18`.
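
As a sketch of the OpenStack side of `TIP1` (the port ID is a placeholder; you can look it up with `openstack port list --server <VM_NAME>`), disabling port security on the VM's external port could be done with the OpenStack CLI:

```bash
# Hypothetical port ID; security groups must be cleared before disabling port security
openstack port set --no-security-group --disable-port-security <PORT_ID>
```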

After creating `config.yaml`, we need to apply it to the Kubernetes cluster:

```bash
kubectl apply -f config.yaml
```
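
To verify that MetalLB is handing out addresses from the configured pool, a quick smoke test can help; the `test-lb` name and the nginx image below are purely illustrative:

```bash
# Expose a throwaway deployment through a LoadBalancer service
kubectl create deployment test-lb --image=nginx
kubectl expose deployment test-lb --type=LoadBalancer --port=80
# EXTERNAL-IP should show an address from the MetalLB pool (e.g. 172.21.248.18)
kubectl get svc test-lb
# Clean up
kubectl delete svc,deployment test-lb
```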

For MicroK8s, it is even simpler:

```bash
$ microk8s.enable metallb
Enabling MetalLB
Enter the IP address range (e.g., 10.64.140.43-10.64.140.49): 192.168.0.10-192.168.0.25
[...]
MetalLB is enabled
```

#### Kubernetes default storageclass

Kubernetes persistent volume storage can be installed in your cluster by applying the following manifest:

```bash
kubectl apply -f https://openebs.github.io/charts/openebs-operator-1.6.0.yaml
```

After the installation, check whether there is a default storageclass in your cluster:

```bash
kubectl get storageclass
NAME                         PROVISIONER                                                RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
openebs-device               openebs.io/local                                           Delete          WaitForFirstConsumer   false                  5m47s
openebs-hostpath             openebs.io/local                                           Delete          WaitForFirstConsumer   false                  5m47s
openebs-jiva-default         openebs.io/provisioner-iscsi                               Delete          Immediate              false                  5m48s
openebs-snapshot-promoter    volumesnapshot.external-storage.k8s.io/snapshot-promoter   Delete          Immediate              false                  5m47s
```

So far, no default storageclass is defined. With the command below we will set `openebs-hostpath` as the default storageclass:

```bash
kubectl patch storageclass openebs-hostpath -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```

To check that the storageclass definition was applied correctly, we can use the following command:

```bash
kubectl get storageclass
NAME                         PROVISIONER                                                RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
openebs-device               openebs.io/local                                           Delete          WaitForFirstConsumer   false                  5m47s
openebs-hostpath (default)   openebs.io/local                                           Delete          WaitForFirstConsumer   false                  5m47s
openebs-jiva-default         openebs.io/provisioner-iscsi                               Delete          Immediate              false                  5m48s
openebs-snapshot-promoter    volumesnapshot.external-storage.k8s.io/snapshot-promoter   Delete          Immediate              false                  5m47s
```
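
With the default storageclass in place, a PVC that omits `storageClassName` should be served by `openebs-hostpath`. A minimal sketch (the `test-pvc` name is illustrative):

```bash
# Create a PVC against the default storageclass
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
EOF
# With WaitForFirstConsumer binding, the PVC stays Pending until a pod uses it
kubectl get pvc test-pvc
```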

#### Adding kubernetes cluster to OSM

In order to test a Kubernetes-based VNF (KNF), you require a K8s cluster connected to a network in the VIM (e.g. `vim-net`). If you have a baremetal installation of Kubernetes, you will need to add a VIM in order to add the Kubernetes cluster.

You will have to add the K8s cluster to OSM. For that purpose, you can use these instructions:

@@ -1271,9 +1188,11 @@ The options used to add the cluster are the following:
- --version: Current version of your kubernetes cluster
- --vim: The name of the VIM where the kubernetes cluster is deployed
- --description: Give a description to your kubernetes cluster
- --k8s-nets: A dictionary of cluster networks, where the `key` is an arbitrary name and the `value` is the name of the network in the VIM. In case your k8s cluster is not located in a VIM, you can use '{net1: null}' (see the example below)
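
A hedged example invocation, using only the flags documented above (the VIM name, network name, cluster name and version are placeholders for your environment):

```bash
osm k8scluster-add --creds kubeconfig.yaml \
                   --version '1.15' \
                   --vim openstack-site \
                   --description "My K8s cluster" \
                   --k8s-nets '{net1: vim-net}' \
                   cluster
```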

## Adding repositories to OSM

You might need to add some repositories from which to download the helm charts required by the KNF:

```bash
osm repo-add --type helm-chart --description "Bitnami repo" bitnami https://charts.bitnami.com/bitnami
@@ -1283,7 +1202,13 @@ osm repo-list
osm repo-show bitnami
```

## KNF Service on-boarding

KNFs can be on-boarded using helm-charts or juju-bundles. The following sections show an example with a helm-chart and one with a juju-bundle.

### KNF helm-chart

Once the cluster is attached to your OSM, you can work with KNFs in the same way as you do with any VNF. You can onboard them. For instance, you can use the example below of a KNF consisting of a single Kubernetes deployment unit based on the OpenLDAP helm chart.

```bash
wget http://osm-download.etsi.org/ftp/Packages/hackfests/openldap_knf.tar.gz
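
# The remaining on-boarding steps fall outside this diff hunk; as an assumed
# sketch mirroring the juju-bundle example later in this section (the NS
# package name here is hypothetical):
osm nfpkg-create openldap_knf.tar.gz
osm nspkg-create openldap_ns.tar.gz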
@@ -1366,3 +1291,37 @@ osm repo-delete elastic
# Delete cluster
osm k8scluster-delete cluster
```

## KNF juju-bundle

This is an example of how to onboard a service that uses a juju-bundle. The service to be onboarded is a MediaWiki, comprised of a mariadb-k8s database and a mediawiki-k8s frontend.

```bash
wget http://osm-download.etsi.org/ftp/Packages/hackfests/mediawiki_cnf.tar.gz
wget http://osm-download.etsi.org/ftp/Packages/hackfests/mediawiki_cnf_ns.tar.gz
osm nfpkg-create mediawiki_cnf.tar.gz
osm nspkg-create mediawiki_cnf_ns.tar.gz
```

You can instantiate the Network Service:

```bash
osm ns-create --ns_name hf-k8s --nsd_name ubuntu-cnf-ns --vim_account <VIM_NAME|VIM_ID>
```

To check the status of the deployment you can run the following command:

```bash
osm ns-op-list hf-k8s
+--------------------------------------+-------------+-------------+-----------+---------------------+--------+
| id                                   | operation   | action_name | status    | date                | detail |
+--------------------------------------+-------------+-------------+-----------+---------------------+--------+
| 364c1378-ba86-447e-ad00-93fc1bf1bdd5 | instantiate | N/A         | COMPLETED | 2020-02-24T13:49:03 | -      |
+--------------------------------------+-------------+-------------+-----------+---------------------+--------+
```

To remove the network service, run:

```bash
osm ns-delete hf-k8s
```

# ANNEX 7 Kubernetes installation and requirements

This section describes the standard installation procedure for a Kubernetes cluster that can be used by OSM. There are two ways to represent a k8s cluster for OSM: inside a VIM or outside a VIM.

Inside a VIM (single network and multiple networks):

![k8s-in-vim-multinet](assets/800px-k8s-in-vim-multinet.png)

![k8s-in-vim-singlenet](assets/800px-k8s-in-vim-singlenet.png)

Outside a VIM:

![k8s-out-vim](assets/800px-k8s-out-vim.png)

Your Kubernetes cluster needs to meet the following requirements:

1. Kubernetes Loadbalancer: to expose your KNFs to the network
2. Kubernetes default Storageclass: to support persistent volumes.
3. Tiller permissions for k8s clusters > v1.15

We have three methods to create a Kubernetes cluster:

1. OSM kubernetes cluster network service
2. Local development environment
3. Manual cluster installation

These correspond, respectively, to an OSM-managed Kubernetes deployment, a MicroK8s installation, and a kubeadm installation.

## Installation method 1: OSM kubernetes cluster from Network Service

`TODO`

## Installation method 2: Local development environment

Microk8s is a single-package fully conformant lightweight Kubernetes that works on 42 flavours of Linux. Perfect for developer workstations, IoT, edge, and CI/CD.
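
The installation commands themselves fall outside the diff hunk below; a minimal sketch of a typical MicroK8s setup on Ubuntu (the addon list is an assumption) would be:

```bash
# Assumed installation steps for MicroK8s
sudo snap install microk8s --classic
sudo usermod -a -G microk8s $USER   # let the current user run microk8s commands
microk8s.status --wait-ready        # block until the cluster is up
microk8s.enable dns storage         # enable the basic addons
```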

@@ -53,7 +81,7 @@ Enabling DNS
DNS is enabled
```

You may want to use the metallb addon if your MicroK8s is not running on the same machine as OSM. When OSM adds a K8s cluster, it initializes the cluster so it can deploy Juju and Helm workloads on it. In the Juju initialization process, a controller will be bootstrapped on the K8s cluster, which will then be accessible by the Juju client (N2VC). When the K8s cluster is external to the OSM host machine, it must give the Juju controller an external IP accessible from OSM.

Just execute the following command and specify the IP range that the load balancer can allocate.

@@ -77,15 +105,17 @@ osm k8scluster-add --creds kubeconfig.yaml \
                   microk8s-cluster
```
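
Note that the `kubeconfig.yaml` used above is presumably obtained from MicroK8s first; a sketch of that step:

```bash
# Export the MicroK8s credentials for OSM to use
microk8s.config > kubeconfig.yaml
```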

## Installation method 3: Manual cluster installation steps for Ubuntu

For the manual installation of the Kubernetes cluster, we will use the kubeadm procedure.

Get the Docker gpg key to install Docker:

```bash
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
```

Add the Docker Ubuntu repository:

```bash
sudo add-apt-repository    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
@@ -125,18 +155,6 @@ Hold them at the current version:
sudo apt-mark hold docker-ce kubelet kubeadm kubectl
```

Add the iptables rule to sysctl.conf:

```bash
echo "net.bridge.bridge-nf-call-iptables=1" | sudo tee -a /etc/sysctl.conf
```

Enable iptables immediately:

```bash
sudo sysctl -p
```

Initialize the cluster (run only on the master):

```bash
@@ -176,8 +194,85 @@ NAME STATUS ROLES AGE VERSION
node1.osm.etsi.org   Ready    master   4m18s v1.13.5
```
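
The initialization command itself falls outside the diff hunk; a typical kubeadm bootstrap leading to the node status above (the pod network CIDR is an assumption tied to the chosen CNI plugin) looks like:

```bash
# Assumed initialization; exact flags depend on the CNI plugin you deploy
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
# Make kubectl work for the regular user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```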

If you have an all-in-one node, then you may want to schedule pods on the master. You need to untaint the master to allow that.

Untaint Master:

```bash
kubectl taint nodes --all node-role.kubernetes.io/master-
```

After the creation of your cluster, you may need to fulfil the OSM requirements. We can start with the installation of a LoadBalancer for your cluster.

MetalLB is a very powerful, easy-to-configure LoadBalancer for Kubernetes. To install it in your cluster, you can apply the following k8s manifest:

```bash
kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.8.3/manifests/metallb.yaml
```

The configuration of MetalLB in layer 2 mode is done via a ConfigMap:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.21.248.10-172.21.248.250
```

After creating `config.yaml`, we need to apply it to the Kubernetes cluster:

```bash
kubectl apply -f config.yaml
```

You should ensure that the range of IP addresses defined in MetalLB is accessible from outside the cluster and does not overlap with other devices in that network. This network should also be reachable from OSM, since OSM needs it to communicate with the cluster.

Another configuration you need for your Kubernetes cluster is a default storageclass.

Kubernetes persistent volume storage can be installed in your cluster by applying the following manifest:

```bash
kubectl apply -f https://openebs.github.io/charts/openebs-operator-1.6.0.yaml
```

After the installation, check whether there is a default storageclass in your cluster:

```bash
kubectl get storageclass
NAME                         PROVISIONER                                                RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
openebs-device               openebs.io/local                                           Delete          WaitForFirstConsumer   false                  5m47s
openebs-hostpath             openebs.io/local                                           Delete          WaitForFirstConsumer   false                  5m47s
openebs-jiva-default         openebs.io/provisioner-iscsi                               Delete          Immediate              false                  5m48s
openebs-snapshot-promoter    volumesnapshot.external-storage.k8s.io/snapshot-promoter   Delete          Immediate              false                  5m47s
```

So far, no default storageclass is defined. With the command below we will set `openebs-hostpath` as the default storageclass:

```bash
kubectl patch storageclass openebs-hostpath -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```

To check that the storageclass definition was applied correctly, we can use the following command:

```bash
kubectl get storageclass
NAME                         PROVISIONER                                                RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
openebs-device               openebs.io/local                                           Delete          WaitForFirstConsumer   false                  5m47s
openebs-hostpath (default)   openebs.io/local                                           Delete          WaitForFirstConsumer   false                  5m47s
openebs-jiva-default         openebs.io/provisioner-iscsi                               Delete          Immediate              false                  5m48s
openebs-snapshot-promoter    volumesnapshot.external-storage.k8s.io/snapshot-promoter   Delete          Immediate              false                  5m47s
```

For Kubernetes clusters > v1.15, Tiller needs special permissions, which can be granted with the following command:

```bash
kubectl create clusterrolebinding tiller-cluster-admin --clusterrole=cluster-admin  --serviceaccount=kube-system:default
```
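
To confirm that the binding was created, you can run:

```bash
kubectl get clusterrolebinding tiller-cluster-admin -o wide
```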