Commit 47df6c46 authored by calvinosanc1

K8s_01-Robot tests packages

parent de8be78e

k8s_juju_ns/README.md

0 → 100644
+159 −0
# Onboarding and instantiation

```bash
osm nfpkg-create k8s_base_vnf.tar.gz
osm nfpkg-create k8s_juju_vnf.tar.gz
osm nspkg-create k8s_base_ns.tar.gz
osm ns-create --ns_name k8s --nsd_name k8s_base --vim_account ost9-canonical-fortville --config '{vld: [ {name: mgmtnet, vim-network-name: mgmt} ] }'
```

This deploys an NS (k8s\_base\_ns) with one VNF that runs Juju (k8s\_juju\_vnf) and 4 VNFs (k8s\_base\_vnf) that run the K8s cluster (1 master, 3 workers).

VMs will be called: 

- JUJU
- NODE1
- NODE2
- NODE3
- NODE4
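
Once instantiated, the deployment can be checked from the OSM client; a minimal sketch (the NS name `k8s` is the `--ns_name` used above):

```bash
osm ns-list
osm ns-show k8s
osm vnf-list
```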

# Install and bootstrap Juju in the VM where Juju runs (JUJU)

## Install juju

```bash
sudo snap install juju --classic --channel=2.6/stable
ssh-keygen
```

## Add the cloud: manual provider type

```bash
# Copy the local SSH key to the JUJU machine itself (it is the endpoint of the manual cloud)
ssh-copy-id ubuntu@<IP_JUJU>
# Interactive command; answer the prompts with the values below
juju add-cloud
    manual              # cloud type
    k8s                 # cloud name
    ubuntu@<IP_JUJU>    # endpoint
```
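
Alternatively, the cloud can be defined in a file and added non-interactively; a sketch, assuming Juju 2.6 `add-cloud` syntax and a hypothetical file name `k8s-cloud.yaml`:

```bash
# Write a manual-provider cloud definition (k8s-cloud.yaml is a hypothetical name)
cat > k8s-cloud.yaml <<EOF
clouds:
  k8s:
    type: manual
    endpoint: ubuntu@<IP_JUJU>
EOF
# Add the cloud from the file instead of answering the prompts
juju add-cloud k8s k8s-cloud.yaml
```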

## Create the controller

```bash
ssh-copy-id -i /home/ubuntu/.local/share/juju/ssh/juju_id_rsa ubuntu@<IP_JUJU>
#The same as: cat /home/ubuntu/.local/share/juju/ssh/juju_id_rsa.pub >> .ssh/authorized_keys
juju bootstrap k8s jujuk8s
```

This creates a controller locally on the machine.

Juju status can be checked with:

```bash
juju status
```

# Add the VMs to juju (manual provider mode)

First, from JUJU, copy the controller SSH key to all nodes of the cluster (master and workers):

```bash
# Alternative: manually append the public key below to each node's ~/.ssh/authorized_keys
#cat /home/ubuntu/.local/share/juju/ssh/juju_id_rsa.pub
#vi .ssh/authorized_keys
ssh-copy-id -i /home/ubuntu/.local/share/juju/ssh/juju_id_rsa ubuntu@<IP_NODE1>
ssh-copy-id -i /home/ubuntu/.local/share/juju/ssh/juju_id_rsa ubuntu@<IP_NODE2>
ssh-copy-id -i /home/ubuntu/.local/share/juju/ssh/juju_id_rsa ubuntu@<IP_NODE3>
ssh-copy-id -i /home/ubuntu/.local/share/juju/ssh/juju_id_rsa ubuntu@<IP_NODE4>
```

After that, add the machines to the Juju controller manually from JUJU:

```bash
juju add-machine ssh:ubuntu@<IP_NODE1> --debug
juju add-machine ssh:ubuntu@<IP_NODE2> --debug
juju add-machine ssh:ubuntu@<IP_NODE3> --debug
juju add-machine ssh:ubuntu@<IP_NODE4> --debug
```

Next, look up the identifiers of the machines and take note of them.

```bash
juju machines
```

If they are the first machines added, the IDs will be 0, 1, 2 and 3.
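
The same information is available in machine-readable form, which is handy to double-check which node IP ended up with which machine ID; a sketch:

```bash
# The dns-name field of each machine shows the IP used in add-machine
juju machines --format=yaml
```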

# Deploy the K8s cluster with the bundle

The original bundle cannot be used as-is, because it assumes a deployment in a cloud without specifying the machines.

```bash
#juju deploy charmed-kubernetes
#wget https://api.jujucharms.com/charmstore/v5/charmed-kubernetes/archive/bundle.yaml
#vi bundle.yaml
```

The bundle has been edited to specify which application runs on which machine, as the excerpt below shows.
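
For reference, this is what the relevant parts look like in the edited bundle (excerpt from the bundle included in this commit): the `machines:` section declares the four existing machines and each application carries an explicit `to:` placement.

```yaml
machines:
  "0":
  "1":
  "2":
  "3":
services:
  etcd:
    charm: cs:~containers/etcd-449
    num_units: 3
    to:
    - "1"
    - "2"
    - "3"
```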

Copy the bundle from the osm-descriptors repo to JUJU:

```bash
scp osm-descriptors/tid/k8s/k8s_base_ns/bundle/bundle.yaml ubuntu@<IP_JUJU>
```

Deploy the bundle and check the result:

```bash
juju deploy ./bundle.yaml --map-machines=existing,0=0,1=1,2=2,3=3
#juju deploy ./bundle.yaml --map-machines=existing,0=30,1=31,2=32,3=33
juju status
```
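
The bundle takes a while to settle; progress can be followed with something like:

```bash
# Refresh the status every few seconds until all units are active/idle
watch -c juju status --color
```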

# Interacting with the Kubernetes cluster

In JUJU machine:

```bash
mkdir -p ~/.kube
juju scp kubernetes-master/0:config ~/.kube/config
sudo snap install kubectl --classic
kubectl cluster-info
```
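
As a quick sanity check, the worker nodes should appear as Ready and the system pods as Running; a sketch:

```bash
kubectl get nodes -o wide
kubectl get pods --all-namespaces
```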

# To access the K8s dashboard 

In JUJU machine:

```bash
juju config kubernetes-master enable-dashboard-addons=true
```

The dashboard can then be made accessible by running this kubectl command:

```bash
kubectl proxy --address <IP_JUJU>
```

Now an SSH tunnel can be set up from the PC to JUJU (a sketch is given below), and the dashboard can be accessed from the web browser at:

<http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/>
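
The tunnel itself can be created from the PC roughly like this (a sketch; 8001 is kubectl proxy's default port and the proxy above listens on <IP_JUJU>):

```bash
# Forward local port 8001 to the kubectl proxy running on the JUJU VM
ssh -N -L 8001:<IP_JUJU>:8001 ubuntu@<IP_JUJU>
```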

# Install and init Helm

## Install

```bash
sudo snap install helm --classic
```

## Init

```bash
# Optional. Create account for tiller
# kubectl --namespace kube-system create serviceaccount tiller
# kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init
# helm init --service-account tiller --wait
# kubectl --namespace kube-system patch deploy tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
kubectl get all --all-namespaces
```
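
Once Tiller is up, the installation can be verified; a sketch:

```bash
# Both the client and the server (Tiller) versions should be reported
helm version
# Tiller runs as a deployment in the kube-system namespace
kubectl -n kube-system get pods | grep tiller
```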
+133 −0
description: A highly-available, production-grade Kubernetes cluster.
series: bionic
machines:
  "0":
  "1":
  "2":
  "3":
services:
  containerd:
    annotations:
      gui-x: '475'
      gui-y: '800'
    charm: cs:~containers/containerd-20
    resources: {}
  easyrsa:
    annotations:
      gui-x: '90'
      gui-y: '420'
    charm: cs:~containers/easyrsa-270
    constraints: root-disk=8G
    num_units: 1
    resources:
      easyrsa: 5
    to:
    - "0"
  etcd:
    annotations:
      gui-x: '800'
      gui-y: '420'
    charm: cs:~containers/etcd-449
    constraints: root-disk=8G
    num_units: 3
    options:
      channel: 3.2/stable
    resources:
      core: 0
      etcd: 3
      snapshot: 0
    to:
    - "1"
    - "2"
    - "3"
  flannel:
    annotations:
      gui-x: '475'
      gui-y: '605'
    charm: cs:~containers/flannel-438
    resources:
      flannel-amd64: 394
      flannel-arm64: 390
      flannel-s390x: 377
  kubeapi-load-balancer:
    annotations:
      gui-x: '450'
      gui-y: '250'
    charm: cs:~containers/kubeapi-load-balancer-664
    constraints: root-disk=8G
    expose: true
    num_units: 1
    resources: {}
    to:
    - "0"
  kubernetes-master:
    annotations:
      gui-x: '800'
      gui-y: '850'
    charm: cs:~containers/kubernetes-master-724
    constraints: cores=2 mem=4G root-disk=16G
    num_units: 2
    options:
      channel: 1.15/stable
    resources:
      cdk-addons: 0
      core: 0
      kube-apiserver: 0
      kube-controller-manager: 0
      kube-proxy: 0
      kube-scheduler: 0
      kubectl: 0
    to:
    - "1"
    - "2"
  kubernetes-worker:
    annotations:
      gui-x: '90'
      gui-y: '850'
    charm: cs:~containers/kubernetes-worker-571
    constraints: cores=4 mem=4G root-disk=16G
    expose: true
    num_units: 3
    options:
      channel: 1.15/stable
    resources:
      cni-amd64: 392
      cni-arm64: 383
      cni-s390x: 395
      core: 0
      kube-proxy: 0
      kubectl: 0
      kubelet: 0
    to:
    - "1"
    - "2"
    - "3"
relations:
- - kubernetes-master:kube-api-endpoint
  - kubeapi-load-balancer:apiserver
- - kubernetes-master:loadbalancer
  - kubeapi-load-balancer:loadbalancer
- - kubernetes-master:kube-control
  - kubernetes-worker:kube-control
- - kubernetes-master:certificates
  - easyrsa:client
- - etcd:certificates
  - easyrsa:client
- - kubernetes-master:etcd
  - etcd:db
- - kubernetes-worker:certificates
  - easyrsa:client
- - kubernetes-worker:kube-api-endpoint
  - kubeapi-load-balancer:website
- - kubeapi-load-balancer:certificates
  - easyrsa:client
- - flannel:etcd
  - etcd:db
- - flannel:cni
  - kubernetes-master:cni
- - flannel:cni
  - kubernetes-worker:cni
- - containerd:containerd
  - kubernetes-worker:container-runtime
- - containerd:containerd
  - kubernetes-master:container-runtime
+43 −0
nsd-catalog:
    nsd:
    -   id: k8s_juju
        name: k8s_juju
        short-name: k8s_juju
        description: NS consisting of 4 k8s_jujumachine VNFs and 1 k8s_jujucontroller VNF connected to the mgmt network
        vendor: OSM
        version: '1.0'
        logo: osm.png
        constituent-vnfd:
        -   member-vnf-index: k8s_vnf1
            vnfd-id-ref: k8s_jujumachine_vnf
        -   member-vnf-index: k8s_vnf2
            vnfd-id-ref: k8s_jujumachine_vnf
        -   member-vnf-index: k8s_vnf3
            vnfd-id-ref: k8s_jujumachine_vnf
        -   member-vnf-index: k8s_vnf4
            vnfd-id-ref: k8s_jujumachine_vnf
        -   member-vnf-index: k8s_juju
            vnfd-id-ref: k8s_jujucontroller_vnf
        vld:
        -   id: mgmtnet
            name: mgmtnet
            type: ELAN
            mgmt-network: 'true'
            vim-network-name: mgmt
            vnfd-connection-point-ref:
            -   member-vnf-index-ref: k8s_vnf1
                vnfd-id-ref: k8s_jujumachine_vnf
                vnfd-connection-point-ref: mgmt
            -   member-vnf-index-ref: k8s_vnf2
                vnfd-id-ref: k8s_jujumachine_vnf
                vnfd-connection-point-ref: mgmt
            -   member-vnf-index-ref: k8s_vnf3
                vnfd-id-ref: k8s_jujumachine_vnf
                vnfd-connection-point-ref: mgmt
            -   member-vnf-index-ref: k8s_vnf4
                vnfd-id-ref: k8s_jujumachine_vnf
                vnfd-connection-point-ref: mgmt
            -   member-vnf-index-ref: k8s_juju
                vnfd-id-ref: k8s_jujucontroller_vnf
                vnfd-connection-point-ref: mgmt
+3 −0
This is a VNF with a single small VDU intended to run a juju controller
that will deploy a K8s cluster using a juju bundle on a set of machines.