Running with gitlab-runner 13.12.0 (7a6612da)
  on etsi-vim 1vnrDf_L
Resolving secrets
Preparing the "custom" executor
Using Custom executor with driver Openstack 2021.10.07.0...
Connecting to Openstack
Provisioning an instance gitlab-builder-6-project-53-concurrent-1-job-18376
Instance gitlab-builder-6-project-53-concurrent-1-job-18376 is running on address 172.21.249.247
Checking SSH connection
SSH connection has been established
Preparing environment
Running on gitlab-builder-6-project-53-concurrent-1-job-18376...
Getting source from Git repository
Fetching changes...
Initialized empty Git repository in /home/ubuntu/builds/gitlab/devops/cicd/.git/
Created fresh repository.
Checking out b7561bab as veleza-master-patch-89606...
Skipping Git submodules setup
Executing "step_script" stage of the job script
WARNING: Starting with version 14.0 the 'build_script' stage will be replaced with 'step_script': https://gitlab.com/gitlab-org/gitlab-runner/-/issues/26426
$ mkdir artifacts
$ mkdir reports
$ echo "GitLab Runner Installation"
GitLab Runner Installation
$ curl -L "https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh" | sudo bash
Detected operating system as Ubuntu/bionic.
Checking for curl...
Detected curl...
Checking for gpg...
Detected gpg...
Running apt-get update... done.
Installing apt-transport-https... done.
Installing /etc/apt/sources.list.d/runner_gitlab-runner.list... done.
Importing packagecloud gpg key... done.
Running apt-get update... done.
The repository is setup! You can now install packages.
$ sudo apt-get install gitlab-runner
Reading package lists...
Building dependency tree...
Reading state information...
The following package was automatically installed and is no longer required:
  grub-pc-bin
Use 'sudo apt autoremove' to remove it.
Suggested packages:
  docker-engine
The following NEW packages will be installed:
  gitlab-runner
0 upgraded, 1 newly installed, 0 to remove and 184 not upgraded.
Need to get 439 MB of archives.
After this operation, 479 MB of additional disk space will be used.
Get:1 https://packages.gitlab.com/runner/gitlab-runner/ubuntu bionic/main amd64 gitlab-runner amd64 14.5.1 [439 MB]
Fetched 439 MB in 11s (41.4 MB/s)
Selecting previously unselected package gitlab-runner.
(Reading database ... 60101 files and directories currently installed.)
Preparing to unpack .../gitlab-runner_14.5.1_amd64.deb ...
Unpacking gitlab-runner (14.5.1) ...
Setting up gitlab-runner (14.5.1) ...
GitLab Runner: creating gitlab-runner...
Home directory skeleton not used
Runtime platform  arch=amd64 os=linux pid=2479 revision=de104fcd version=14.5.1
gitlab-runner: the service is not installed
Runtime platform  arch=amd64 os=linux pid=2489 revision=de104fcd version=14.5.1
gitlab-ci-multi-runner: the service is not installed
Runtime platform  arch=amd64 os=linux pid=2516 revision=de104fcd version=14.5.1
Runtime platform  arch=amd64 os=linux pid=2581 revision=de104fcd version=14.5.1
INFO: Docker installation not found, skipping clear-docker-cache
$ ls -a
.  ..  .git  .gitlab-ci.yml  LICENSE  artifacts  hive  reports  templates
$ sudo apt install docker.io -y
Reading package lists...
Building dependency tree...
Reading state information...
The following package was automatically installed and is no longer required:
  grub-pc-bin
Use 'sudo apt autoremove' to remove it.
The following additional packages will be installed:
  bridge-utils containerd pigz runc ubuntu-fan
Suggested packages:
  ifupdown aufs-tools cgroupfs-mount | cgroup-lite debootstrap docker-doc rinse zfs-fuse | zfsutils
The following NEW packages will be installed:
  bridge-utils containerd docker.io pigz runc ubuntu-fan
0 upgraded, 6 newly installed, 0 to remove and 184 not upgraded.
Need to get 74.2 MB of archives.
After this operation, 360 MB of additional disk space will be used.
Get:1 http://nova.clouds.archive.ubuntu.com/ubuntu bionic/universe amd64 pigz amd64 2.4-1 [57.4 kB]
Get:2 http://nova.clouds.archive.ubuntu.com/ubuntu bionic/main amd64 bridge-utils amd64 1.5-15ubuntu1 [30.1 kB]
Get:3 http://nova.clouds.archive.ubuntu.com/ubuntu bionic-updates/universe amd64 runc amd64 1.0.1-0ubuntu2~18.04.1 [4155 kB]
Get:4 http://nova.clouds.archive.ubuntu.com/ubuntu bionic-updates/universe amd64 containerd amd64 1.5.5-0ubuntu3~18.04.1 [33.0 MB]
Get:5 http://nova.clouds.archive.ubuntu.com/ubuntu bionic-updates/universe amd64 docker.io amd64 20.10.7-0ubuntu5~18.04.3 [36.9 MB]
Get:6 http://nova.clouds.archive.ubuntu.com/ubuntu bionic/main amd64 ubuntu-fan all 0.12.10 [34.7 kB]
Fetched 74.2 MB in 2s (45.0 MB/s)
Selecting previously unselected package pigz.
(Reading database ... 60122 files and directories currently installed.)
Preparing to unpack .../0-pigz_2.4-1_amd64.deb ...
Unpacking pigz (2.4-1) ...
Selecting previously unselected package bridge-utils.
Preparing to unpack .../1-bridge-utils_1.5-15ubuntu1_amd64.deb ...
Unpacking bridge-utils (1.5-15ubuntu1) ...
Selecting previously unselected package runc.
Preparing to unpack .../2-runc_1.0.1-0ubuntu2~18.04.1_amd64.deb ...
Unpacking runc (1.0.1-0ubuntu2~18.04.1) ...
Selecting previously unselected package containerd.
Preparing to unpack .../3-containerd_1.5.5-0ubuntu3~18.04.1_amd64.deb ...
Unpacking containerd (1.5.5-0ubuntu3~18.04.1) ...
Selecting previously unselected package docker.io.
Preparing to unpack .../4-docker.io_20.10.7-0ubuntu5~18.04.3_amd64.deb ...
Unpacking docker.io (20.10.7-0ubuntu5~18.04.3) ...
Selecting previously unselected package ubuntu-fan.
Preparing to unpack .../5-ubuntu-fan_0.12.10_all.deb ...
Unpacking ubuntu-fan (0.12.10) ...
Setting up runc (1.0.1-0ubuntu2~18.04.1) ...
Setting up containerd (1.5.5-0ubuntu3~18.04.1) ...
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /lib/systemd/system/containerd.service.
Setting up bridge-utils (1.5-15ubuntu1) ...
debconf: unable to initialize frontend: Dialog
debconf: (Dialog frontend will not work on a dumb terminal, an emacs shell buffer, or without a controlling terminal.)
debconf: falling back to frontend: Readline
Setting up ubuntu-fan (0.12.10) ...
Created symlink /etc/systemd/system/multi-user.target.wants/ubuntu-fan.service → /lib/systemd/system/ubuntu-fan.service.
Setting up pigz (2.4-1) ...
Setting up docker.io (20.10.7-0ubuntu5~18.04.3) ...
debconf: unable to initialize frontend: Dialog
debconf: (Dialog frontend will not work on a dumb terminal, an emacs shell buffer, or without a controlling terminal.)
debconf: falling back to frontend: Readline
Adding group `docker' (GID 115) ...
Done.
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /lib/systemd/system/docker.service.
Created symlink /etc/systemd/system/sockets.target.wants/docker.socket → /lib/systemd/system/docker.socket.
Processing triggers for systemd (237-3ubuntu10.41) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
Processing triggers for ureadahead (0.100.0-21) ...
$ sudo usermod -aG docker $USER
$ newgrp docker
$ cat << EOF | sudo tee /etc/docker/daemon.json # collapsed multi-line command
{
    "registry-mirrors": ["http://172.21.1.1:5000"]
}
$ sudo chmod 666 /var/run/docker.sock
$ sudo systemctl daemon-reload
$ sudo service docker restart
$ echo "OSM Installation"
OSM Installation
$ export PATH=/snap/bin:$PATH
$ wget https://osm-download.etsi.org/ftp/osm-10.0-ten/install_osm.sh
$ chmod +x ./install_osm.sh
$ sudo snap install microk8s --classic --channel=1.20/stable
2021-12-06T16:08:04Z INFO Waiting for restart...
microk8s (1.20/stable) v1.20.13 from Canonical* installed
$ sudo sed -i "s|https://registry-1.docker.io|http://172.21.1.1:5000|" /var/snap/microk8s/current/args/containerd-template.toml
$ sudo systemctl restart snap.microk8s.daemon-containerd.service
$ sudo snap alias microk8s.kubectl kubectl
Added:
  - microk8s.kubectl as kubectl
$ ./install_osm.sh -y --charmed -t 10.0.3 2>&1 | tee artifacts/install_osm.log
Checking required packages: software-properties-common apt-transport-https
Warning: apt-key output should not be parsed (stdout is not a terminal)
OK
Hit:1 http://nova.clouds.archive.ubuntu.com/ubuntu bionic InRelease
Hit:2 http://nova.clouds.archive.ubuntu.com/ubuntu bionic-updates InRelease
Get:3 https://osm-download.etsi.org/repository/osm/debian/ReleaseTEN stable InRelease [4070 B]
Hit:4 http://nova.clouds.archive.ubuntu.com/ubuntu bionic-backports InRelease
Hit:5 http://security.ubuntu.com/ubuntu bionic-security InRelease
Get:6 https://osm-download.etsi.org/repository/osm/debian/ReleaseTEN stable/devops amd64 Packages [487 B]
Hit:7 https://packages.gitlab.com/runner/gitlab-runner/ubuntu bionic InRelease
Fetched 4557 B in 2s (2955 B/s)
Reading package lists...
W: Conflicting distribution: https://osm-download.etsi.org/repository/osm/debian/ReleaseTEN stable InRelease (expected stable but got )
Hit:1 https://osm-download.etsi.org/repository/osm/debian/ReleaseTEN stable InRelease
Hit:2 http://security.ubuntu.com/ubuntu bionic-security InRelease
Hit:3 http://nova.clouds.archive.ubuntu.com/ubuntu bionic InRelease
Hit:4 http://nova.clouds.archive.ubuntu.com/ubuntu bionic-updates InRelease
Hit:5 http://nova.clouds.archive.ubuntu.com/ubuntu bionic-backports InRelease
Hit:6 https://packages.gitlab.com/runner/gitlab-runner/ubuntu bionic InRelease
Reading package lists...
W: Conflicting distribution: https://osm-download.etsi.org/repository/osm/debian/ReleaseTEN stable InRelease (expected stable but got )
Hit:1 https://osm-download.etsi.org/repository/osm/debian/ReleaseTEN stable InRelease
Hit:2 http://nova.clouds.archive.ubuntu.com/ubuntu bionic InRelease
Hit:3 http://nova.clouds.archive.ubuntu.com/ubuntu bionic-updates InRelease
Hit:4 http://nova.clouds.archive.ubuntu.com/ubuntu bionic-backports InRelease
Hit:5 http://security.ubuntu.com/ubuntu bionic-security InRelease
Hit:6 https://packages.gitlab.com/runner/gitlab-runner/ubuntu bionic InRelease
Reading package lists...
W: Conflicting distribution: https://osm-download.etsi.org/repository/osm/debian/ReleaseTEN stable InRelease (expected stable but got )
Reading package lists...
Building dependency tree...
Reading state information...
The following package was automatically installed and is no longer required:
  grub-pc-bin
Use 'sudo apt autoremove' to remove it.
The following NEW packages will be installed:
  osm-devops
0 upgraded, 1 newly installed, 0 to remove and 184 not upgraded.
Need to get 723 kB of archives.
After this operation, 5534 kB of additional disk space will be used.
Get:1 https://osm-download.etsi.org/repository/osm/debian/ReleaseTEN stable/devops amd64 osm-devops all 10.0.3.post8-1 [723 kB]
Fetched 723 kB in 0s (9983 kB/s)
Selecting previously unselected package osm-devops.
(Reading database ... 60445 files and directories currently installed.)
Preparing to unpack .../osm-devops_10.0.3.post8-1_all.deb ...
Unpacking osm-devops (10.0.3.post8-1) ...
Setting up osm-devops (10.0.3.post8-1) ...
snap "microk8s" is already installed, see 'snap help refresh'
--advertise-address 172.21.249.247
Stopped.
Started.
/snap/microk8s/2702/bin/dqlite: symbol lookup error: /snap/microk8s/2702/bin/dqlite: undefined symbol: sqlite3_system_errno
microk8s is running
high-availability: no
  datastore master nodes: none
  datastore standby nodes: none
addons:
  enabled:
    ha-cluster           # Configure high availability on the current node
  disabled:
    ambassador           # Ambassador API Gateway and Ingress
    cilium               # SDN, fast with full network policy
    dashboard            # The Kubernetes dashboard
    dns                  # CoreDNS
    fluentd              # Elasticsearch-Fluentd-Kibana logging and monitoring
    gpu                  # Automatic enablement of Nvidia CUDA
    helm                 # Helm 2 - the package manager for Kubernetes
    helm3                # Helm 3 - Kubernetes package manager
    host-access          # Allow Pods connecting to Host services smoothly
    ingress              # Ingress controller for external access
    istio                # Core Istio service mesh services
    jaeger               # Kubernetes Jaeger operator with its simple config
    keda                 # Kubernetes-based Event Driven Autoscaling
    knative              # The Knative framework on Kubernetes.
    kubeflow             # Kubeflow for easy ML deployments
    linkerd              # Linkerd is a service mesh for Kubernetes and other frameworks
    metallb              # Loadbalancer for your Kubernetes cluster
    metrics-server       # K8s Metrics Server for API access to service metrics
    multus               # Multus CNI enables attaching multiple network interfaces to pods
    portainer            # Portainer UI for your Kubernetes cluster
    prometheus           # Prometheus operator for monitoring and logging
    rbac                 # Role-Based Access Control for authorisation
    registry             # Private image registry exposed on localhost:32000
    storage              # Storage class; allocates storage from host directory
    traefik              # traefik Ingress controller for external access
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURBVENDQWVtZ0F3SUJBZ0lKQUloVWU4V2tFM3pxTUEwR0NTcUdTSWIzRFFFQkN3VUFNQmN4RlRBVEJnTlYKQkFNTURERXdMakUxTWk0eE9ETXVNVEFlRncweU1URXlNRFl4TmpBNE1UZGFGdzB6TVRFeU1EUXhOakE0TVRkYQpNQmN4RlRBVEJnTlZCQU1NRERFd0xqRTFNaTR4T0RNdU1UQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQCkFEQ0NBUW9DZ2dFQkFNZEZBM1paekg5ekpTUzM3QlBiamJXVGpIcUdvR29aeStJeitaNVdZMWRTbmdacXl1blMKV0E0Q3JudU10YStjczNFTm5idHlmWUltTWNIM3QxRm5QQk01QlBuaTk5ZWtXdFZPQ2JlQzJ4Ri9YOXhrZ3NocQpySENUeDc5UnYzalJaUHR0bzdqQXJYZCtrWFZMU1gzckNibXNxWmgxem5yanlQc3BUSElkRDVia1RhbG1hWU5XClRGYzVFM201SytETWhUcUt3dzBjUXVPMGNvU0FFNUhPRWN3eW1nMjhPSDA0UFRjcDNyaDdZMGh2Yk1SNmRmRGUKcDB5N0RPSXZqNVFuNUJWRUk5K3UvaGZ3Ukc1YThFM1h2bEZQTmFrMXN1YXp2SjBqQzJJUjREb2dEU0h3ZUpQZwp4bHl2YTdrb0poQTh5VEZtY2c1M3BRMTAxeFlDZTIyVmNCc0NBd0VBQWFOUU1FNHdIUVlEVlIwT0JCWUVGQlRYCm1oMEd6L2dOV00rVHVhdHN3eVMrNWJJNE1COEdBMVVkSXdRWU1CYUFGQlRYbWgwR3ovZ05XTStUdWF0c3d5UysKNWJJNE1Bd0dBMVVkRXdRRk1BTUJBZjh3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQUtVU2FENkJzUFhrSmhwbgphbjFwV0VXdmN4Qjduc2hyUHNCTDJlNUZweUhtT0JzTDVyWWJTWFZOMmJ2bTNzZG1XU1N5eHdzSktsUUJUNFZtCklGaklBK2xybXpDS2R4SGJXemNXa015MFE4allaQk9TWGtuYW84V09reEdLWTJ5U3pCcThDdUVYb2xkdTNMQkIKNnd2VEZRbDV3RUZoMWdJQkNWMDNqb29hajFRcXRXMXNSZ1FRVmZiQ2orbjN5UnN1M0FZZG5aVlBiUWwzWTFwVApiSkdRZnZidll2N1k0RHA0cmh5bEJQaVN6UG9UbnRyczd5R2RMbUpKaGxaME9TeU1NTGJYT2ZzdDRwS1E2SWFJCmFVS0syS0ZGU1c1MUowMEhIZTJ6c3FBZDJ1ZFRXZ3plK3dPQ3ZvMS9SbDBEN0lEeDQzSzI3akVHSi9pL25Gd2kKVmE5MTNwYz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://172.21.249.247:16443
  name: microk8s-cluster
contexts:
- context:
    cluster: microk8s-cluster
    user: admin
  name: microk8s
current-context: microk8s
kind: Config
preferences: {}
users:
- name: admin
  user:
    token: YjJTNmE5RnZXUEgzVEQ0eGtUZ3Y3VHUyMStTVTN5eHh3N2JiZWR5NDNYdz0K
juju (2.8/stable) 2.8.13 from Canonical* installed
Enabling MetalLB
Applying Metallb manifest
namespace/metallb-system created
secret/memberlist created
podsecuritypolicy.policy/controller created
podsecuritypolicy.policy/speaker created
serviceaccount/controller created
serviceaccount/speaker created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
role.rbac.authorization.k8s.io/config-watcher created
role.rbac.authorization.k8s.io/pod-lister created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/config-watcher created
rolebinding.rbac.authorization.k8s.io/pod-lister created
daemonset.apps/speaker created
deployment.apps/controller created
configmap/config created
MetalLB is enabled
Enabling Ingress
ingressclass.networking.k8s.io/public created
namespace/ingress created
serviceaccount/nginx-ingress-microk8s-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-microk8s-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-microk8s-role created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-microk8s created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-microk8s created
configmap/nginx-load-balancer-microk8s-conf created
configmap/nginx-ingress-tcp-microk8s-conf created
configmap/nginx-ingress-udp-microk8s-conf created
daemonset.apps/nginx-ingress-microk8s-controller created
Ingress is enabled
Enabling default storage class
deployment.apps/hostpath-provisioner created
storageclass.storage.k8s.io/microk8s-hostpath created
serviceaccount/microk8s-hostpath created
clusterrole.rbac.authorization.k8s.io/microk8s-hostpath created
clusterrolebinding.rbac.authorization.k8s.io/microk8s-hostpath created
Storage will be available soon
Enabling DNS
Applying manifest
serviceaccount/coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
clusterrole.rbac.authorization.k8s.io/coredns created
clusterrolebinding.rbac.authorization.k8s.io/coredns created
Restarting kubelet
DNS is enabled
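The two registry-mirror redirects applied earlier in this job (the collapsed daemon.json heredoc and the sed over microk8s's containerd-template.toml) can be sanity-checked without touching the real system paths. A minimal sketch, assuming a Linux shell; the files under /tmp/mirror-demo are illustrative stand-ins, and the mirror address is the one from the log:

```shell
#!/bin/sh
# Sketch: replay the two mirror redirects against local stand-in files.
# Real targets (both need sudo): /etc/docker/daemon.json and
# /var/snap/microk8s/current/args/containerd-template.toml
set -e
dir=/tmp/mirror-demo
mkdir -p "$dir"

# 1. Docker daemon mirror, as written by the collapsed heredoc in the log:
cat << 'EOF' > "$dir/daemon.json"
{
  "registry-mirrors": ["http://172.21.1.1:5000"]
}
EOF

# 2. containerd template rewrite, same sed expression the job used
#    (the endpoint line is a stand-in for the template's real contents):
printf 'endpoint = ["https://registry-1.docker.io"]\n' > "$dir/containerd-template.toml"
sed -i 's|https://registry-1.docker.io|http://172.21.1.1:5000|' "$dir/containerd-template.toml"

grep mirrors "$dir/daemon.json"
grep endpoint "$dir/containerd-template.toml"
```

On the real node, both changes only take effect after the corresponding restarts shown in the log (`sudo service docker restart` and `sudo systemctl restart snap.microk8s.daemon-containerd.service`).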
Creating Juju controller "osm-vca" on microk8s/localhost
Fetching Juju Dashboard 0.8.1
Creating k8s resources for controller "controller-osm-vca"
Starting controller pod
Bootstrap agent now started
Contacting Juju controller at 172.21.249.247 to verify accessibility...
Bootstrap complete, controller "osm-vca" is now available in namespace "controller-osm-vca"
Now you can run juju add-model to create a new model to deploy k8s workloads.
* Applying /etc/sysctl.d/10-console-messages.conf ...
kernel.printk = 4 4 1 7
* Applying /etc/sysctl.d/10-ipv6-privacy.conf ...
net.ipv6.conf.all.use_tempaddr = 2
net.ipv6.conf.default.use_tempaddr = 2
* Applying /etc/sysctl.d/10-kernel-hardening.conf ...
kernel.kptr_restrict = 1
* Applying /etc/sysctl.d/10-link-restrictions.conf ...
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/10-lxd-inotify.conf ...
fs.inotify.max_user_instances = 1024
* Applying /etc/sysctl.d/10-magic-sysrq.conf ...
kernel.sysrq = 176
* Applying /etc/sysctl.d/10-network-security.conf ...
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.tcp_syncookies = 1
* Applying /etc/sysctl.d/10-ptrace.conf ...
kernel.yama.ptrace_scope = 1
* Applying /etc/sysctl.d/10-zeropage.conf ...
vm.mmap_min_addr = 65536
* Applying /usr/lib/sysctl.d/50-default.conf ...
net.ipv4.conf.all.promote_secondaries = 1
net.core.default_qdisc = fq_codel
* Applying /etc/sysctl.d/60-lxd-production.conf ...
fs.inotify.max_queued_events = 1048576
fs.inotify.max_user_instances = 1048576
fs.inotify.max_user_watches = 1048576
vm.max_map_count = 262144
kernel.dmesg_restrict = 1
net.ipv4.neigh.default.gc_thresh3 = 8192
net.ipv6.neigh.default.gc_thresh3 = 8192
net.core.bpf_jit_limit = 3000000000
kernel.keys.maxkeys = 2000
kernel.keys.maxbytes = 2000000
* Applying /etc/sysctl.d/99-cloudimg-ipv6.conf ...
net.ipv6.conf.all.use_tempaddr = 0
net.ipv6.conf.default.use_tempaddr = 0
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.conf ...
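The sysctl block above is the installer re-applying system defaults before the LXD setup; whether the LXD-oriented inotify limits from 60-lxd-production.conf actually stuck can be read back from /proc at any time. A quick check, not a command the original job ran:

```shell
#!/bin/sh
# Read back two of the limits that 60-lxd-production.conf raises;
# on the build VM above both were set to 1048576.
cat /proc/sys/fs/inotify/max_user_instances
cat /proc/sys/fs/inotify/max_user_watches
```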
Reading package lists...
Building dependency tree...
Reading state information...
The following packages were automatically installed and are no longer required:
  ebtables grub-pc-bin libuv1 uidmap xdelta3
Use 'sudo apt autoremove' to remove them.
The following packages will be REMOVED:
  liblxc-common* liblxc1* lxcfs* lxd* lxd-client*
0 upgraded, 0 newly installed, 5 to remove and 184 not upgraded.
After this operation, 34.1 MB disk space will be freed.
(Reading database ... 61587 files and directories currently installed.)
Removing lxd (3.0.3-0ubuntu1~18.04.1) ...
Removing lxd dnsmasq configuration
Removing lxcfs (3.0.3-0ubuntu1~18.04.2) ...
Removing lxd-client (3.0.3-0ubuntu1~18.04.1) ...
Removing liblxc-common (3.0.3-0ubuntu1~18.04.1) ...
Removing liblxc1 (3.0.3-0ubuntu1~18.04.1) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
Processing triggers for libc-bin (2.27-3ubuntu1) ...
(Reading database ... 61341 files and directories currently installed.)
Purging configuration files for liblxc-common (3.0.3-0ubuntu1~18.04.1) ...
Purging configuration files for lxd (3.0.3-0ubuntu1~18.04.1) ...
Purging configuration files for lxcfs (3.0.3-0ubuntu1~18.04.2) ...
Processing triggers for systemd (237-3ubuntu10.41) ...
Processing triggers for ureadahead (0.100.0-21) ...
lxd (4.0/stable) 4.0.8 from Canonical* installed
To start your first instance, try: lxc launch ubuntu:20.04
Generating a RSA private key
writing new private key to '/home/ubuntu/.osm/client.key'
-----
Cloud "lxd-cloud" added to controller "osm-vca".
WARNING loading credentials: credentials for cloud lxd-cloud not found
To upload a credential to the controller for cloud "lxd-cloud", use
* 'add-model' with --credential option or
* 'add-credential -c lxd-cloud'.
Using cloud "lxd-cloud" from the controller to verify credentials.
Controller credential "lxd-cloud" for user "admin" for cloud "lxd-cloud" on controller "osm-vca" added.
For more information, see 'juju show-credential lxd-cloud lxd-cloud'.
jq 1.5+dfsg-1 from Michael Vogt (mvo) installed
yq 4.16.1 from Mike Farah (mikefarah) installed
Creating OSM model
Added 'osm' model on microk8s/localhost with credential 'microk8s' for user 'admin'
Deploying OSM with charms
Located bundle "cs:bundle/osm-66"
Resolving charm: cs:~charmed-osm/grafana-4
Resolving charm: cs:~charmed-osm/kafka-k8s from channel: stable
Resolving charm: cs:~charmed-osm/keystone-9
Resolving charm: cs:~charmed-osm/lcm-8
Resolving charm: cs:~charmed-osm/mariadb-k8s from channel: stable
Resolving charm: cs:~charmed-osm/mon-5
Resolving charm: cs:~charmed-osm/mongodb-k8s from channel: stable
Resolving charm: cs:~charmed-osm/nbi-12
Resolving charm: cs:~charmed-osm/ng-ui-21
Resolving charm: cs:~charmed-osm/pla-9
Resolving charm: cs:~charmed-osm/pol-4
Resolving charm: cs:~charmed-osm/prometheus-4
Resolving charm: cs:~charmed-osm/ro-4
Resolving charm: cs:~charmed-osm/zookeeper-k8s from channel: stable
Executing changes:
- upload charm cs:~charmed-osm/grafana-4 for series kubernetes
- deploy application grafana with 1 unit on kubernetes using cs:~charmed-osm/grafana-4
  added resource image
- set annotations for grafana
- upload charm cs:~charmed-osm/kafka-k8s-21 for series kubernetes from channel stable
- deploy application kafka-k8s with 1 unit on kubernetes using cs:~charmed-osm/kafka-k8s-21
- set annotations for kafka-k8s
- upload charm cs:~charmed-osm/keystone-9 for series kubernetes
- deploy application keystone with 1 unit on kubernetes using cs:~charmed-osm/keystone-9
  added resource image
- set annotations for keystone
- upload charm cs:~charmed-osm/lcm-8 for series kubernetes
- deploy application lcm with 1 unit on kubernetes using cs:~charmed-osm/lcm-8
  added resource image
- set annotations for lcm
- upload charm cs:~charmed-osm/mariadb-k8s-35 for series kubernetes from channel stable
- deploy application mariadb-k8s with 1 unit on kubernetes using cs:~charmed-osm/mariadb-k8s-35
- set annotations for mariadb-k8s
- upload charm cs:~charmed-osm/mon-5 for series kubernetes
- deploy application mon with 1 unit on kubernetes using cs:~charmed-osm/mon-5
  added resource image
- set annotations for mon
- upload charm cs:~charmed-osm/mongodb-k8s-29 for series kubernetes from channel stable
- deploy application mongodb-k8s with 1 unit on kubernetes using cs:~charmed-osm/mongodb-k8s-29
- set annotations for mongodb-k8s
- upload charm cs:~charmed-osm/nbi-12 for series kubernetes
- deploy application nbi with 1 unit on kubernetes using cs:~charmed-osm/nbi-12
  added resource image
- set annotations for nbi
- upload charm cs:~charmed-osm/ng-ui-21 for series kubernetes
- deploy application ng-ui with 1 unit on kubernetes using cs:~charmed-osm/ng-ui-21
  added resource image
- set annotations for ng-ui
- upload charm cs:~charmed-osm/pla-9 for series kubernetes
- deploy application pla with 1 unit on kubernetes using cs:~charmed-osm/pla-9
  added resource image
- set annotations for pla
- upload charm cs:~charmed-osm/pol-4 for series kubernetes
- deploy application pol with 1 unit on kubernetes using cs:~charmed-osm/pol-4
  added resource image
- set annotations for pol
- upload charm cs:~charmed-osm/prometheus-4 for series kubernetes
- deploy application prometheus with 1 unit on kubernetes using cs:~charmed-osm/prometheus-4
  added resource backup-image
  added resource image
- set annotations for prometheus
- upload charm cs:~charmed-osm/ro-4 for series kubernetes
- deploy application ro with 1 unit on kubernetes using cs:~charmed-osm/ro-4
  added resource image
- set annotations for ro
- upload charm cs:~charmed-osm/zookeeper-k8s-37 for series kubernetes from channel stable
- deploy application zookeeper-k8s with 1 unit on kubernetes using cs:~charmed-osm/zookeeper-k8s-37
- set annotations for zookeeper-k8s
- add relation grafana:prometheus - prometheus:prometheus
- add relation kafka-k8s:zookeeper - zookeeper-k8s:zookeeper
- add relation keystone:db - mariadb-k8s:mysql
- add relation lcm:kafka - kafka-k8s:kafka
- add relation lcm:mongodb - mongodb-k8s:mongo
- add relation ro:ro - lcm:ro
- add relation ro:kafka - kafka-k8s:kafka
- add relation ro:mongodb - mongodb-k8s:mongo
- add relation pol:kafka - kafka-k8s:kafka
- add relation pol:mongodb - mongodb-k8s:mongo
- add relation mon:mongodb - mongodb-k8s:mongo
- add relation mon:kafka - kafka-k8s:kafka
- add relation pla:kafka - kafka-k8s:kafka
- add relation pla:mongodb - mongodb-k8s:mongo
- add relation nbi:mongodb - mongodb-k8s:mongo
- add relation nbi:kafka - kafka-k8s:kafka
- add relation nbi:prometheus - prometheus:prometheus
- add relation nbi:keystone - keystone:keystone
- add relation mon:prometheus - prometheus:prometheus
- add relation ng-ui:nbi - nbi:nbi
- add relation mon:keystone - keystone:keystone
- add relation mariadb-k8s:mysql - pol:mysql
Deploy of bundle completed.
Waiting for deployment to finish...
0 / 14 services active
2 / 14 services active
4 / 14 services active
3 / 14 services active
4 / 14 services active
7 / 14 services active
8 / 14 services active
7 / 14 services active
8 / 14 services active
10 / 14 services active
11 / 14 services active
12 / 14 services active
13 / 14 services active
14 / 14 services active
OSM with charms deployed
Trying to install osmclient from channel 10.0/stable
osmclient (10.0/stable) v10.0.3-0-gc0a69f8 from OSM Support (osmsupport) installed
osmclient snap installed
aac274e4-b612-48a8-adb9-5d7ab30f8c2a
165ab63f-9e0f-4431-93f4-46976d35ed30
Your installation is now complete, follow these steps for configuring the osmclient:
1. Create the OSM_HOSTNAME environment variable with the NBI IP
   export OSM_HOSTNAME=nbi.172.21.249.247.nip.io:443
2. Add the previous command to your .bashrc for other Shell sessions
   echo "export OSM_HOSTNAME=nbi.172.21.249.247.nip.io:443" >> ~/.bashrc
DONE
$ echo "OSM Robot Testing"
OSM Robot Testing
$ source hive/openstack-etsi.rc
$ export OSM_HOSTNAME=$(juju config -m osm nbi site_url | sed "s/http.*\?:\/\///"):443
$ export PROMETHEUS_HOSTNAME=$(juju config -m osm prometheus site_url | sed "s/http.*\?:\/\///")
$ export PROMETHEUS_PORT=80
$ export JUJU_PASSWORD=$(juju gui 2>&1 | grep password | cut -d: -f2 | xargs)
$ echo "Creating envfile"
Creating envfile
$ cat << EOF >> robot-systest.cfg # collapsed multi-line command
$ cat /$(pwd)/hive/openstack-etsi.rc >> robot-systest.cfg
$ cat robot-systest.cfg
VIM_TARGET=osm
VIM_MGMT_NET=osm-ext
ENVIRONMENTS_FOLDER=environments
PACKAGES_FOLDER=/robot-systest/osm-packages
OS_CLOUD=openstack
LC_ALL=C.UTF-8
LANG=C.UTF-8
OS_AUTH_URL=http://172.21.247.1:5000/v3
OS_PROJECT_ID=34a71bb7d82f4ec691d8cc11045ae83e
OS_PROJECT_NAME=osm_jenkins
OS_USER_DOMAIN_NAME=Default
OS_PROJECT_DOMAIN_ID=default
OS_USERNAME=osm_jenkins
OS_PASSWORD=$ETSI_VIM_PASSWORD
OS_REGION_NAME=RegionOne
OS_INTERFACE=public
OS_IDENTITY_API_VERSION=3
$ echo "Creating clouds.yaml file"
Creating clouds.yaml file
$ source hive/openstack-etsi.rc # collapsed multi-line command
$ cat clouds.yaml
clouds:
  openstack:
    auth:
      auth_url: http://172.21.247.1:5000/v3
      project_name: osm_jenkins
      username: osm_jenkins
      password: [MASKED]
      user_domain_name: Default
      project_domain_name: default
$ docker run --env OSM_HOSTNAME=$OSM_HOSTNAME --env PROMETHEUS_HOSTNAME=$PROMETHEUS_HOSTNAME --env PROMETHEUS_PORT=$PROMETHEUS_PORT --env JUJU_PASSWORD=$JUJU_PASSWORD --env-file $(cat robot-systest.cfg | envsubst) -v /clouds.yaml:/etc/openstack/clouds.yaml -v /$(pwd)/hive/kubeconfig.yaml:/root/.kube/config -v /reports:/robot-systest/reports opensourcemano/tests:testing-daily -c -T $TESTS_VERSION -t nothing
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  5945  100  5945    0     0  19685      0 --:--:-- --:--:-- --:--:-- 19620
debconf: unable to initialize frontend: Dialog
debconf: (Dialog frontend will not work on a dumb terminal, an emacs shell buffer, or without a controlling terminal.)
debconf: falling back to frontend: Readline
debconf: unable to initialize frontend: Readline
debconf: (This frontend requires a controlling tty.)
debconf: falling back to frontend: Teletype
dpkg-preconfigure: unable to re-open stdin:
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
debconf: unable to initialize frontend: Dialog
debconf: (Dialog frontend will not work on a dumb terminal, an emacs shell buffer, or without a controlling terminal.)
debconf: falling back to frontend: Readline
debconf: unable to initialize frontend: Readline
debconf: (This frontend requires a controlling tty.)
debconf: falling back to frontend: Teletype
dpkg-preconfigure: unable to re-open stdin:
--2021-12-06 16:07:55--  https://osm-download.etsi.org/ftp/osm-10.0-ten/install_osm.sh
Resolving osm-download.etsi.org (osm-download.etsi.org)... 195.238.226.47
Connecting to osm-download.etsi.org (osm-download.etsi.org)|195.238.226.47|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 9348 (9.1K) [text/x-sh]
Saving to: 'install_osm.sh'
2021-12-06 16:07:55 (42.0 MB/s) - 'install_osm.sh' saved [9348/9348]
docker: open VIM_TARGET=osm: no such file or directory.
See 'docker run --help'.
Running after_script
Running after script...
$ echo "Collecting Logs"
Collecting Logs
$ for deployment in `kubectl -n osm get deployments | grep -v operator | grep -v NAME | awk '{print $1}'`; do # collapsed multi-line command
Uploading artifacts for failed job
Uploading artifacts...
Cleaning up file based variables
ERROR: Job failed: exit status 1