Commit 06bfc5f0 authored by Francisco-Javier Ramon Salguero

Minor format changes to K8s sections and chapter

parent cce36245

### Adding additional parameters

Since OSM Release SIX, additional user parameters can be added, and they land at `vdu:cloud-init` (Jinja2 format) and/or `vnf-configuration` primitives (enclosed by `<>`). Here is an example of a VNF descriptor that uses two parameters called `touch_filename` and `touch_filename2`.

```yaml
vnfd:
  # ... (remainder of the descriptor omitted in this excerpt)
```
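
At instantiation time, these parameters can then be supplied with `--config`. A minimal sketch, reusing the `osm ns-create` invocation shown earlier in this chapter (the member index and file names are illustrative):

```bash
osm ns-create --ns_name h1 --nsd_name hackfest1-ns --vim_account openstack1 \
    --config '{additionalParamsForVnf: [{member-vnf-index: "1", additionalParams: {touch_filename: "/home/ubuntu/first-touch", touch_filename2: "/home/ubuntu/second-touch"}}]}'
```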

Day-1 and Day-2 are both managed by the VCA (VNF Configuration & Abstraction) module.

There are two types of charms:

- **Native charms:** sets of scripts that run inside the VNF components. This kind of charm is new in Release SEVEN.
- **Proxy charms:** sets of scripts that run in LXC containers in an OSM-managed machine (which could be where OSM resides), and which use SSH or other methods to get into the VNF instances and configure them.

![OSM Proxy Charms](assets/800px-OSM_proxycharms.png)

```yaml
nst:
  # ... (NST descriptor fragment)
            nsd-connection-point-ref: nsd_cp_data
```

The YAML above contains two `netslice-subnet` entries, one with the flag `is-shared-nss` set to `true` and the other one set to `false`. The `netslice-vlds` will connect the `slice_hackfest_middle_nsd` NSS with the management interface, and `data2` with the `slice_hackfest_nsd` via `nsd_cp_data`.

To instantiate this network slice, we will use the same command used previously but changing the `nst_name` to `slice_hackfest2_nst`:

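A sketch of that invocation, where the NSI name matches the one deleted below (the VIM account is illustrative):

```bash
osm nsi-create --nsi_name my_shared_slice --nst_name slice_hackfest2_nst --vim_account <VIM_ACCOUNT>
```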
To remove the NSI2, run the command: `osm nsi-delete my_shared_slice`.

## Using Kubernetes-based VNFs (KNFs)

From Release SEVEN, OSM supports Kubernetes-based Network Functions (KNFs). This feature unlocks more than 20,000 packages that can be deployed besides VNFs and PNFs. This section guides you through the deployment of your first KNF, from the installation of a Kubernetes cluster (by several possible methods) to the selection of the package and its deployment.

### Kubernetes installation

The KNF feature requires an operational Kubernetes cluster. There are several ways to get such a cluster running. From the OSM perspective, the Kubernetes cluster is not an isolated element, but a technology that enables the deployment of microservices in a cloud-native way. To handle the networks and facilitate the connection to the infrastructure, the cluster has to be associated to a VIM. There is a special case where the Kubernetes cluster is installed in a bare-metal environment without the management of the networking part, but in general OSM considers that the Kubernetes cluster is located in a VIM.

For OSM you can use one of these three different ways to install your Kubernetes cluster:

1. [OSM Kubernetes cluster Network Service](15-k8s-installation.md#installation-method-1-osm-kubernetes-cluster-from-an-osm-network-service)
2. [Self-managed Kubernetes cluster in a VIM](15-k8s-installation.md#installation-method-2-local-development-environment)
3. [Kubernetes baremetal installation](15-k8s-installation.md#method-3-manual-cluster-installation-steps-for-ubuntu)

### OSM Kubernetes requirements

After the Kubernetes installation is completed, you need to check if you have the following components in your cluster:

1. [Kubernetes Loadbalancer](15-k8s-installation.md): to expose your KNFs to the network.
2. [Kubernetes default Storageclass](15-k8s-installation.md): to support persistent volumes.

### Adding a Kubernetes cluster to OSM

In order to test Kubernetes-based VNFs (KNFs), you require a K8s cluster connected to a network in the VIM (e.g. `vim-net`). If you have a bare-metal installation of Kubernetes, you will need to add a VIM in order to add the Kubernetes cluster.

You will have to add the K8s cluster to OSM. For that purpose, you can use these instructions:

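The following sequence shows the general pattern (all option values are illustrative; adapt them to your setup):

```bash
osm k8scluster-add --creds kubeconfig.yaml \
                   --version v1.15 \
                   --vim openstack1 \
                   --description "My K8s cluster" \
                   --k8s-nets '{k8s_net1: vim-net}' \
                   cluster
osm k8scluster-show cluster
```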

The options used to add the cluster are the following:

- `--creds`: the location of the kubeconfig file with your cluster credentials
- `--version`: the current version of your Kubernetes cluster
- `--vim`: the name of the VIM where the Kubernetes cluster is deployed
- `--description`: a description for your Kubernetes cluster
- `--k8s-nets`: a dictionary of the cluster networks, where the `key` is an arbitrary name and the `value` is the name of the network in the VIM. In case your K8s cluster is not located in a VIM, you could use `'{net1: null}'`

## Adding repositories to OSM

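For example, a Helm Chart repository such as Bitnami can be added and then checked as follows (the description text is illustrative):

```bash
osm repo-add --type helm-chart --description "Bitnami Helm Chart repository" bitnami https://charts.bitnami.com/bitnami
osm repo-show bitnami
```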

## KNF Service on-boarding

KNFs can be on-boarded using Helm Charts or Juju Bundles. The following sections show an example with a Helm Chart and one with a Juju Bundle.

### KNF Helm Chart

Once the cluster is attached to your OSM, you can work with KNFs in the same way as you do with any VNF. For instance, you can use the example below of a KNF consisting of a single Kubernetes deployment unit based on the OpenLDAP Helm Chart.

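A possible on-boarding and instantiation sequence, assuming KNF/NS packages named `openldap_knf.tar.gz` and `openldap_ns.tar.gz` (the package names, the member VNF index and the `replicaCount` override are illustrative):

```bash
# On-board the KNF and NS packages
osm nfpkg-create openldap_knf.tar.gz
osm nspkg-create openldap_ns.tar.gz

# Instantiate a first NS with default values, and a second one overriding the replica count
osm ns-create --ns_name ldap --nsd_name openldap_ns --vim_account <VIM_NAME|VIM_ID>
osm ns-create --ns_name ldap2 --nsd_name openldap_ns --vim_account <VIM_NAME|VIM_ID> \
    --config '{additionalParamsForVnf: [{member-vnf-index: openldap, additionalParams: {replicaCount: "2"}}]}'
```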

Check in the cluster that pods are properly created:

- The pods associated to ldap should be using version `openldap:1.2.1` and have 1 replica
- The pods associated to ldap2 should be using version `openldap:1.2.1` and have 2 replicas
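
One way to check this from the cluster side (OSM deploys each KDU in its own namespace, so a broad query such as this illustrative one helps locate the pods):

```bash
kubectl get pods --all-namespaces | grep ldap
```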

Now you can upgrade both NS instances:

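A sketch of one possible upgrade, assuming the KNF exposes the standard `upgrade` primitive for Helm-based KDUs (the VNF name, KDU name and chart version below are illustrative):

```bash
osm ns-action ldap --vnf_name openldap --kdu_name ldap --action_name upgrade \
    --params '{kdu_model: "stable/openldap:1.2.2"}'
```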
# ANNEX 7 Kubernetes installation and requirements

This section illustrates a safe procedure to set up a Kubernetes cluster that meets the requirements described in [chapter 5](05-osm-usage.md#osm-kubernetes-requirements). Please note that there might be many alternative ways to achieve the same result (i.e. create an equivalent K8s cluster); if you are using different tooling to create your K8s cluster, this annex should be taken as informative only, and you should refer to your tool's guide as the authoritative reference to achieve equivalent results.

There are two modes to represent a K8s cluster in OSM:

1. Inside a VIM (single and multinet):
   ![k8s-in-vim-multinet](assets/800px-k8s-in-vim-multinet.png)

   ![k8s-in-vim-singlenet](assets/800px-k8s-in-vim-singlenet.png)

2. Outside a VIM:
   ![k8s-out-vim](assets/800px-k8s-out-vim.png)

Your Kubernetes cluster needs to meet the following requirements:

1. **Kubernetes `Loadbalancer`:** to expose your KNFs to the network.
2. **Kubernetes default `Storageclass`:** to support persistent volumes.
3. **Tiller permissions:** for K8s clusters > v1.15.
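
A quick way to verify the first two requirements on a running cluster (illustrative checks):

```bash
kubectl get svc --all-namespaces   # LoadBalancer services should get an external IP
kubectl get storageclass           # one class should be marked "(default)"
```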

Here we will analyse three methods to create a Kubernetes cluster:

1. OSM Kubernetes cluster created as an OSM Network Service (i.e. an OSM-managed Kubernetes deployment).
2. Local development environment based on MicroK8s.
3. Manual cluster installation based on `kubeadm`.

## Installation method 1: OSM Kubernetes cluster from an OSM Network Service

TODO: VNF and NS Packages to be made available soon.

## Installation method 2: Local development environment

After installing MicroK8s, the tail of the `microk8s.status` addon listing should look like this:

```bash
registry: disabled
storage: disabled
```

Microk8s uses [addons](https://microk8s.io/docs/addons) to extend its functionality. The required addons for Microk8s to work with OSM are `storage` and `dns`.

```bash
$ microk8s.enable storage dns
... (output omitted)
Enabling DNS
DNS is enabled
```

You may want to use the `metallb` addon if your Microk8s is not running in the same machine as OSM. When OSM adds a K8s cluster, it initializes the cluster so it can deploy Juju and Helm workloads on it. In the Juju initialization process, a controller will be bootstrapped on the K8s cluster, which then will be accessible by the Juju client (N2VC). When the K8s cluster is external to the OSM host machine, it must give the Juju controller an external IP accessible from OSM.

Just execute the following command and specify the IP range allocable by the load balancer.

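For example (the address range is illustrative; choose addresses that are free in your network and reachable from the OSM host):

```bash
microk8s.enable metallb:192.168.0.50-192.168.0.99
```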
Once the load balancer is enabled, the MicroK8s cluster can be added to OSM. The option values below are illustrative (see [chapter 5](05-osm-usage.md) for their meaning); since the cluster is not located in a VIM, `'{net1: null}'` can be used for the networks:

```bash
osm k8scluster-add --creds kubeconfig.yaml \
                   --version v1.19 \
                   --vim <VIM_ACCOUNT> \
                   --description "MicroK8s cluster" \
                   --k8s-nets '{net1: null}' \
                   microk8s-cluster
```

## Method 3: Manual cluster installation steps for Ubuntu

For the manual installation of a Kubernetes cluster we will use a procedure based on `kubeadm`.

Get the Docker GPG key to install Docker:

```bash
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
```

Add the Docker Ubuntu repository:

```bash
sudo add-apt-repository    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"
```

Update your packages:

```bash
sudo apt-get update
```

Install `docker-ce`, `kubelet`, `kubeadm`, and `kubectl`:

```bash
sudo apt-get install -y docker-ce kubelet kubeadm kubectl
```

Initialize the cluster (run only on the master):

```bash
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
```

Set up local `kubeconfig`:

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

Apply the Flannel CNI network overlay:

```bash
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```

Join the worker nodes to the cluster using the `kubeadm join` command printed by `kubeadm init`, then verify they have joined successfully:

```bash
kubectl get nodes
```

Compare your output of the `kubectl get nodes` command with this result:

```bash
NAME                            STATUS   ROLES    AGE   VERSION
...
```

Untaint Master:

```bash
kubectl taint nodes --all node-role.kubernetes.io/master-
```

After the creation of your cluster, you may need to fulfil the requirements of OSM. We can start with the installation of a load balancer for your cluster.

MetalLB is a very powerful, easy-to-configure load balancer for Kubernetes. To install it in your cluster, you can apply the following K8s manifest:

```bash
kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.8.3/manifests/metallb.yaml
```

The configuration of MetalLB in layer 2 mode is done via a `ConfigMap`:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.21.248.10-172.21.248.250
```

After the creation of `config.yaml`, we need to apply it to the Kubernetes cluster:

```bash
kubectl apply -f config.yaml
```

You should ensure that the range of IP addresses defined in the MetalLB configuration is accessible from outside the cluster and does not overlap with other devices in that network. This network should also be reachable from OSM, since OSM will need it to communicate with the cluster.

Another configuration you need for your Kubernetes cluster is the creation of a default `storageclass`.

A Kubernetes persistent volume storage can be installed in your Kubernetes cluster by applying the following manifest (here, the OpenEBS operator):

```bash
kubectl apply -f https://openebs.github.io/charts/openebs-operator-1.6.0.yaml
```

After the installation, you need to check if there is a default `storageclass` in your Kubernetes cluster:

```bash
kubectl get storageclass
NAME                         PROVISIONER                                                 RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
openebs-hostpath             openebs.io/local                                            Delete          WaitForFirstConsumer   false                  5m47s
openebs-jiva-default         openebs.io/provisioner-iscsi                                Delete          Immediate              false                  5m47s
openebs-snapshot-promoter    volumesnapshot.external-storage.k8s.io/snapshot-promoter   Delete          Immediate              false                  5m47s
```

So far, there is no default `storageclass` defined. With the command below we will define `openebs-hostpath` as the default `storageclass`:

```bash
kubectl patch storageclass openebs-hostpath -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```

To check the right application of the `storageclass` definition, we can use the following command:

```bash
kubectl get storageclass
NAME                         PROVISIONER                                                 RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
openebs-hostpath (default)   openebs.io/local                                            Delete          WaitForFirstConsumer   false                  5m47s
openebs-jiva-default         openebs.io/provisioner-iscsi                                Delete          Immediate              false                  5m47s
openebs-snapshot-promoter    volumesnapshot.external-storage.k8s.io/snapshot-promoter   Delete          Immediate              false                  5m47s
```

For Kubernetes clusters > v1.15, Tiller needs special permissions, which can be granted with the following command:

```bash
kubectl create clusterrolebinding tiller-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
```