Commit 86560b20 authored by garciadeblas

K8s_01-Robot tests packages



This reverts commit 385f0acc.
Adds k8s_juju_ns, k8s_jujucontroller_vnf and k8s_jujumachine_vnf packages

Signed-off-by: garciadeblas <gerardo.garciadeblas@telefonica.com>
parent 79c70d5e

k8s_juju_ns/README.md

# Introduction

The NS (`k8s_juju`) consists of 1 deployer (`k8s_jujucontroller_vnf`) and 4 nodes (`k8s_jujumachine_vnf`)
connected to a single network or VLD (`mgmtnet`).

The deployer is a Kubernetes installer based on Juju: it configures the other 4 nodes to run a Kubernetes cluster. Behind the scenes, the deployer is a Juju controller; the 4 nodes are manually added to a Juju model, and a Juju bundle is then deployed on that model.
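The flow the deployer automates can be sketched with standard Juju commands (a hypothetical sketch, not the package's actual scripts; the `ubuntu` user, the node addresses, and the bundle path are assumptions):

```bash
# Add each provisioned node to the current Juju model over SSH
# (assumes the controller can reach each node and log in as 'ubuntu')
juju add-machine ssh:ubuntu@192.168.0.161
juju add-machine ssh:ubuntu@192.168.0.162
juju add-machine ssh:ubuntu@192.168.0.163
juju add-machine ssh:ubuntu@192.168.0.164

# Deploy the Kubernetes bundle, mapping the bundle's machine IDs
# onto the machines just added instead of provisioning new ones
juju deploy ./bundle.yaml --map-machines=existing
```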

# Onboarding

Onboard the VNF packages first, then the NS package that references them:

```bash
osm nfpkg-create k8s_jujumachine_vnf.tar.gz
osm nfpkg-create k8s_jujucontroller_vnf.tar.gz
osm nspkg-create k8s_juju_ns.tar.gz
```

# Instantiation

Instantiation parameters are controlled by `config.yaml`. The relevant parameters are the VIM network where the K8s API will be exposed and where all nodes will be connected (`mgmt` in the example below), and the IP addresses from that network to be assigned to each machine (`192.168.0.X` in the example below). Create `config.yaml` as follows:

```yaml
---
additionalParamsForVnf:
  -
    member-vnf-index: k8s_juju
    additionalParams:
        MACHINE1: "192.168.0.161"
        MACHINE2: "192.168.0.162"
        MACHINE3: "192.168.0.163"
        MACHINE4: "192.168.0.164"
        MACHINE5: ""
        MACHINE6: ""
        MACHINE7: ""
        MACHINE8: ""
        MACHINE9: ""
        MACHINE10: ""
        BUNDLE: ""
vld:
  -
    name: mgmtnet
    vim-network-name: mgmt              # The network in the VIM connecting all nodes of the cluster
    vnfd-connection-point-ref:
      -
        ip-address: "192.168.0.161"
        member-vnf-index-ref: k8s_vnf1
        vnfd-connection-point-ref: mgmt
      -
        ip-address: "192.168.0.162"
        member-vnf-index-ref: k8s_vnf2
        vnfd-connection-point-ref: mgmt
      -
        ip-address: "192.168.0.163"
        member-vnf-index-ref: k8s_vnf3
        vnfd-connection-point-ref: mgmt
      -
        ip-address: "192.168.0.164"
        member-vnf-index-ref: k8s_vnf4
        vnfd-connection-point-ref: mgmt
      -
        ip-address: "192.168.0.170"
        member-vnf-index-ref: k8s_juju
        vnfd-connection-point-ref: mgmt
```
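Since each `MACHINE` slot must get a distinct address, it can be worth sanity-checking `config.yaml` before instantiation (a hypothetical one-liner, not part of the package; assumes `config.yaml` is in the current directory):

```bash
# List any address assigned to more than one MACHINE slot in config.yaml;
# empty output means every machine has a distinct address
grep -E 'MACHINE[0-9]+:' config.yaml | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}' | sort | uniq -d
```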

Then, instantiate the NS:

```bash
osm ns-create --ns_name k8s-cluster --nsd_name k8s_juju --vim_account <VIM_ACCOUNT> --config_file config.yaml --ssh_keys ${HOME}/.ssh/id_rsa.pub
```

# Check K8s cluster

Connect to the machine running Juju (`k8s_juju` in the NS), check that the kubeconfig file exists, and test `kubectl`:

```bash
osm vnf-list --ns k8s-cluster --filter vnfd-ref=k8s_jujucontroller_vnf
ssh ubuntu@<JUJU_CONTROLLER_IP_ADDRESS>
cat .kube/config
kubectl get all
```

To enable the K8s dashboard add-on:

```bash
juju config kubernetes-master enable-dashboard-addons=true
```

The dashboard can then be made accessible by running this kubectl command:

```bash
kubectl proxy --address <JUJU_CONTROLLER_IP_ADDRESS>
```

Now an SSH tunnel can be opened from your computer to the Juju controller, and the dashboard can be reached from a web browser at:

<http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/>
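For instance, the tunnel can be opened with SSH local port forwarding (`kubectl proxy` listens on port 8001 by default; the `ubuntu` user is an assumption carried over from the examples above):

```bash
# Forward local port 8001 to the kubectl proxy on the controller,
# then browse the dashboard URL above from the local machine
ssh -N -L 8001:<JUJU_CONTROLLER_IP_ADDRESS>:8001 ubuntu@<JUJU_CONTROLLER_IP_ADDRESS>
```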

Juju bundle deployed by the controller:

description: A highly-available, production-grade Kubernetes cluster.
series: bionic
machines:
  "0":
  "1":
  "2":
  "3":
services:
  containerd:
    annotations:
      gui-x: '475'
      gui-y: '800'
    charm: cs:~containers/containerd-20
    resources: {}
  easyrsa:
    annotations:
      gui-x: '90'
      gui-y: '420'
    charm: cs:~containers/easyrsa-270
    constraints: root-disk=8G
    num_units: 1
    resources:
      easyrsa: 5
    to:
    - "0"
  etcd:
    annotations:
      gui-x: '800'
      gui-y: '420'
    charm: cs:~containers/etcd-449
    constraints: root-disk=8G
    num_units: 3
    options:
      channel: 3.2/stable
    resources:
      core: 0
      etcd: 3
      snapshot: 0
    to:
    - "1"
    - "2"
    - "3"
  flannel:
    annotations:
      gui-x: '475'
      gui-y: '605'
    charm: cs:~containers/flannel-438
    resources:
      flannel-amd64: 394
      flannel-arm64: 390
      flannel-s390x: 377
  kubeapi-load-balancer:
    annotations:
      gui-x: '450'
      gui-y: '250'
    charm: cs:~containers/kubeapi-load-balancer-664
    constraints: root-disk=8G
    expose: true
    num_units: 1
    resources: {}
    to:
    - "0"
  kubernetes-master:
    annotations:
      gui-x: '800'
      gui-y: '850'
    charm: cs:~containers/kubernetes-master-724
    constraints: cores=2 mem=4G root-disk=16G
    num_units: 2
    options:
      channel: 1.15/stable
    resources:
      cdk-addons: 0
      core: 0
      kube-apiserver: 0
      kube-controller-manager: 0
      kube-proxy: 0
      kube-scheduler: 0
      kubectl: 0
    to:
    - "1"
    - "2"
  kubernetes-worker:
    annotations:
      gui-x: '90'
      gui-y: '850'
    charm: cs:~containers/kubernetes-worker-571
    constraints: cores=4 mem=4G root-disk=16G
    expose: true
    num_units: 3
    options:
      channel: 1.15/stable
    resources:
      cni-amd64: 392
      cni-arm64: 383
      cni-s390x: 395
      core: 0
      kube-proxy: 0
      kubectl: 0
      kubelet: 0
    to:
    - "1"
    - "2"
    - "3"
relations:
- - kubernetes-master:kube-api-endpoint
  - kubeapi-load-balancer:apiserver
- - kubernetes-master:loadbalancer
  - kubeapi-load-balancer:loadbalancer
- - kubernetes-master:kube-control
  - kubernetes-worker:kube-control
- - kubernetes-master:certificates
  - easyrsa:client
- - etcd:certificates
  - easyrsa:client
- - kubernetes-master:etcd
  - etcd:db
- - kubernetes-worker:certificates
  - easyrsa:client
- - kubernetes-worker:kube-api-endpoint
  - kubeapi-load-balancer:website
- - kubeapi-load-balancer:certificates
  - easyrsa:client
- - flannel:etcd
  - etcd:db
- - flannel:cni
  - kubernetes-master:cni
- - flannel:cni
  - kubernetes-worker:cni
- - containerd:containerd
  - kubernetes-worker:container-runtime
- - containerd:containerd
  - kubernetes-master:container-runtime
(binary image file added, +54.6 KiB)

NS descriptor (`k8s_juju`):

nsd-catalog:
    nsd:
    -   id: k8s_juju
        name: k8s_juju
        short-name: k8s_juju
        description: NS consisting of a 4 k8s_jujumachine VNFs and 1 k8s_jujucontroller VNF connected to mgmt network
        vendor: OSM
        version: '1.0'
        logo: osm.png
        constituent-vnfd:
        -   member-vnf-index: k8s_vnf1
            vnfd-id-ref: k8s_jujumachine_vnf
        -   member-vnf-index: k8s_vnf2
            vnfd-id-ref: k8s_jujumachine_vnf
        -   member-vnf-index: k8s_vnf3
            vnfd-id-ref: k8s_jujumachine_vnf
        -   member-vnf-index: k8s_vnf4
            vnfd-id-ref: k8s_jujumachine_vnf
        -   member-vnf-index: k8s_juju
            vnfd-id-ref: k8s_jujucontroller_vnf
        vld:
        -   id: mgmtnet
            name: mgmtnet
            type: ELAN
            mgmt-network: 'true'
            vim-network-name: mgmt
            vnfd-connection-point-ref:
            -   member-vnf-index-ref: k8s_vnf1
                vnfd-id-ref: k8s_jujumachine_vnf
                vnfd-connection-point-ref: mgmt
            -   member-vnf-index-ref: k8s_vnf2
                vnfd-id-ref: k8s_jujumachine_vnf
                vnfd-connection-point-ref: mgmt
            -   member-vnf-index-ref: k8s_vnf3
                vnfd-id-ref: k8s_jujumachine_vnf
                vnfd-connection-point-ref: mgmt
            -   member-vnf-index-ref: k8s_vnf4
                vnfd-id-ref: k8s_jujumachine_vnf
                vnfd-connection-point-ref: mgmt
            -   member-vnf-index-ref: k8s_juju
                vnfd-id-ref: k8s_jujucontroller_vnf
                vnfd-connection-point-ref: mgmt

README for the controller VNF:

This is a VNF with a single small VDU intended to run a Juju controller
that will deploy a K8s cluster on a set of machines using a Juju bundle.