Started by timer
Obtained jenkins/public-clouds-tests/Jenkinsfile from git https://osm.etsi.org/gerrit/osm/devops
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] properties
[Pipeline] node
Running on osm-cicd-4 in /home/jenkins/workspace/azure_robot_tests
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Declarative: Checkout SCM)
[Pipeline] checkout
No credentials specified
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://osm.etsi.org/gerrit/osm/devops # timeout=10
Fetching upstream changes from https://osm.etsi.org/gerrit/osm/devops
 > git --version # timeout=10
 > git fetch --tags --force --progress https://osm.etsi.org/gerrit/osm/devops +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10
Checking out Revision 18582e9176d2d4a07d4628fc0db7c6221613c4f2 (refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 18582e9176d2d4a07d4628fc0db7c6221613c4f2
Commit message: "Feature 11037 Installation of ingress controller in OSM community installer"
 > git rev-list --no-walk abf6770c2ec33d6ae0b1fb93be1093081abb5a9f # timeout=10
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Declarative: Agent Setup)
[Pipeline] sh
[azure_robot_tests] Running shell script
+ docker pull opensourcemano/tests:testing-daily
testing-daily: Pulling from opensourcemano/tests
a8b1c5f80c2d: Already exists
919f4f9eda87: Already exists
0422d4bfb18c: Already exists
4e129ed14a38: Already exists
5a3838d0480e: Pull complete
a6aad56986cb: Pull complete
970aa44678d0: Pull complete
4e37b26a88fd: Pull complete
127fb3918edd: Pull complete
08b06cce8de1: Pull complete
c83205464df6: Pull complete
fd236542250c: Pull complete
700ab53203e2: Pull complete
133e713fef5f: Pull complete
7d81b2dfd1c6: Pull complete
7252cb5b6a31: Pull complete
b47e817e12d4: Pull complete
4cddbba246f2: Pull complete
c14668027702: Pull complete
51847472a95b: Pull complete
Digest: sha256:50ab57b520f7a687aac713d3669c0b2dffcff58040901bc156164d3cd62f4f37
Status: Downloaded newer image for opensourcemano/tests:testing-daily
docker.io/opensourcemano/tests:testing-daily
[Pipeline] }
[Pipeline] // stage
[Pipeline] sh
[azure_robot_tests] Running shell script
+ docker inspect -f . opensourcemano/tests:testing-daily
.
[Pipeline] withDockerContainer
osm-cicd-4 does not seem to be running inside a container
$ docker run -t -d -u 1001:1001 -u root:root --entrypoint= -w /home/jenkins/workspace/azure_robot_tests -v /home/jenkins/workspace/azure_robot_tests:/home/jenkins/workspace/azure_robot_tests:rw,z -v /home/jenkins/workspace/azure_robot_tests@tmp:/home/jenkins/workspace/azure_robot_tests@tmp:rw,z -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** --entrypoint cat opensourcemano/tests:testing-daily
[Pipeline] {
[Pipeline] withEnv
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Set environment)
[Pipeline] script
[Pipeline] {
[Pipeline] sh
[azure_robot_tests] Running shell script
+ mkdir -m 700 /root/.ssh
[Pipeline] sh
[azure_robot_tests] Running shell script
+ ssh-keygen -t rsa -f /root/.ssh/id_rsa -N
Generating public/private rsa key pair.
Your identification has been saved in /root/.ssh/id_rsa
Your public key has been saved in /root/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:tdpa+pn8kRtvaTZbLjLyIXBwCtdmOCBnz9lrb7ZGHtM root@ab0fd75f02f9
The key's randomart image is:
+---[RSA 3072]----+
| . + |
| + + = |
| . O B |
| o O o |
| S = . |
| * .+.E |
| . +oOo ..|
| =.*=BBo |
| o.===*o+.|
+----[SHA256]-----+
[Pipeline] sh
[azure_robot_tests] Running shell script
+ cp /root/.ssh/id_rsa /root/osm_id_rsa
[Pipeline] sh
[azure_robot_tests] Running shell script
+ echo Reading credential azure-credentials
Reading credential azure-credentials
[Pipeline] }
[Pipeline] // script
[Pipeline] withCredentials
[Pipeline] {
[Pipeline] sh
[azure_robot_tests] Running shell script
+ cp **** /root/azure-creds.json
[Pipeline] sh
[azure_robot_tests] Running shell script
+ set +x
[
  {
    "cloudName": "AzureCloud",
    "homeTenantId": "e6746ab5-ebdc-4e9d-821b-a71bdaf63d9b",
    "id": "8fb7e78d-097b-413d-bc65-41d29be6bab1",
    "isDefault": true,
    "managedByTenants": [],
    "name": "Azure in Open",
    "state": "Enabled",
    "tenantId": "e6746ab5-ebdc-4e9d-821b-a71bdaf63d9b",
    "user": {
      "name": "7c5ba2e6-2013-49a0-bf9a-f2592030f7ff",
      "type": "servicePrincipal"
    }
  }
]
[Pipeline] sh
[azure_robot_tests] Running shell script
+ az vm list -o table
Name              ResourceGroup       Location    Zones
----------------  ------------------  ----------  -------
vm-CICD-Host      OSM_CICD_GROUP      westeurope  1
vm-VPN-Host       OSM_GROUP           westeurope
VPN-Gateway       OSM_GROUP           westeurope
vm-Hackfest-Host  OSM_HACKFEST_GROUP  westeurope
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Create k8s cluster)
[Pipeline] sh
[azure_robot_tests] Running shell script
+ /robot-systest/cloud-scripts/create-k8s.sh
Creating a new IaaS k8s cluster in azure
+ az vm create --resource-group OSM_CICD_GROUP --name k8stest202405261109 --image Canonical:0001-com-ubuntu-server-jammy:22_04-lts:latest --size Standard_A2_v2 --vnet-name OSM-CICD-net --subnet OSM-CICD-subnet --public-ip-address '' --admin-username ubuntu --priority Regular
Selecting "uksouth" may reduce your costs. The region you've selected may cost more for the same services. You can disable this message in the future with the command "az config set core.display_region_identified=false".
Learn more at https://go.microsoft.com/fwlink/?linkid=222571
WARNING: Consider upgrading security for your workloads using Azure Trusted Launch VMs. To know more about Trusted Launch, please visit https://aka.ms/TrustedLaunch.
{
  "fqdns": "",
  "id": "/subscriptions/8fb7e78d-097b-413d-bc65-41d29be6bab1/resourceGroups/OSM_CICD_GROUP/providers/Microsoft.Compute/virtualMachines/k8stest202405261109",
  "location": "westeurope",
  "macAddress": "00-0D-3A-AF-14-92",
  "powerState": "VM running",
  "privateIpAddress": "172.21.23.10",
  "publicIpAddress": "",
  "resourceGroup": "OSM_CICD_GROUP",
  "zones": ""
}
++ tr -d '"'
++ az vm show -d -g OSM_CICD_GROUP -n k8stest202405261109 --query privateIps
+ export K8S_IP=172.21.23.10
+ K8S_IP=172.21.23.10
++ az vm show --resource-group OSM_CICD_GROUP --name k8stest202405261109 --query 'networkProfile.networkInterfaces[0].id'
+ INTERFACE_ID='"/subscriptions/8fb7e78d-097b-413d-bc65-41d29be6bab1/resourceGroups/OSM_CICD_GROUP/providers/Microsoft.Network/networkInterfaces/k8stest202405261109VMNic"'
+ INTERFACE_ID=/subscriptions/8fb7e78d-097b-413d-bc65-41d29be6bab1/resourceGroups/OSM_CICD_GROUP/providers/Microsoft.Network/networkInterfaces/k8stest202405261109VMNic
++ az network nic show --id /subscriptions/8fb7e78d-097b-413d-bc65-41d29be6bab1/resourceGroups/OSM_CICD_GROUP/providers/Microsoft.Network/networkInterfaces/k8stest202405261109VMNic --query networkSecurityGroup.id
+ SECURITY_GROUP_ID='"/subscriptions/8fb7e78d-097b-413d-bc65-41d29be6bab1/resourceGroups/OSM_CICD_GROUP/providers/Microsoft.Network/networkSecurityGroups/k8stest202405261109NSG"'
+ SECURITY_GROUP_ID=/subscriptions/8fb7e78d-097b-413d-bc65-41d29be6bab1/resourceGroups/OSM_CICD_GROUP/providers/Microsoft.Network/networkSecurityGroups/k8stest202405261109NSG
++ az resource show --ids /subscriptions/8fb7e78d-097b-413d-bc65-41d29be6bab1/resourceGroups/OSM_CICD_GROUP/providers/Microsoft.Network/networkSecurityGroups/k8stest202405261109NSG --query name
+ SECURITY_GROUP_NAME='"k8stest202405261109NSG"'
+ SECURITY_GROUP_NAME=k8stest202405261109NSG
+ az network nsg rule create -n microk8s --nsg-name k8stest202405261109NSG --priority 2000 -g OSM_CICD_GROUP --description 'Microk8s port' --protocol TCP --destination-port-ranges 16443
{
  "access": "Allow",
  "description": "Microk8s port",
  "destinationAddressPrefix": "*",
  "destinationAddressPrefixes": [],
  "destinationPortRange": "16443",
  "destinationPortRanges": [],
  "direction": "Inbound",
  "etag": "W/\"c78b9f13-ae4a-4781-89cf-7fd698bc001b\"",
  "id": "/subscriptions/8fb7e78d-097b-413d-bc65-41d29be6bab1/resourceGroups/OSM_CICD_GROUP/providers/Microsoft.Network/networkSecurityGroups/k8stest202405261109NSG/securityRules/microk8s",
  "name": "microk8s",
  "priority": 2000,
  "protocol": "Tcp",
  "provisioningState": "Succeeded",
  "resourceGroup": "OSM_CICD_GROUP",
  "sourceAddressPrefix": "*",
  "sourceAddressPrefixes": [],
  "sourcePortRange": "*",
  "sourcePortRanges": [],
  "type": "Microsoft.Network/networkSecurityGroups/securityRules"
}
+ echo 'export K8S_IP="172.21.23.10"'
+ echo 'export K8S_IMAGE_NAME="k8stest202405261109"'
+ install_remote_microk8s
+ set +e
+ ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ubuntu@172.21.23.10 'sudo apt-get update -y && sudo apt-get upgrade -y && sudo reboot'
Warning: Permanently added '172.21.23.10' (ED25519) to the list of known hosts.
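The trace above shows how create-k8s.sh captures resource identifiers: `az ... --query` returns JSON-quoted strings, which the script strips with `tr -d '"'` before reuse (`K8S_IP`, `INTERFACE_ID`, `SECURITY_GROUP_ID`). A minimal sketch of that cleanup, with a canned string standing in for the actual `az` call:

```shell
# Simulate the result of `az vm show -d ... --query privateIps`,
# which arrives wrapped in JSON double quotes.
raw='"172.21.23.10"'                 # stand-in for the real az CLI output
K8S_IP=$(printf '%s' "$raw" | tr -d '"')
echo "$K8S_IP"                       # prints: 172.21.23.10
```

An alternative is `az ... -o tsv`, which emits unquoted values and makes the `tr` step unnecessary.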
Hit:1 http://azure.archive.ubuntu.com/ubuntu jammy InRelease
Get:2 http://azure.archive.ubuntu.com/ubuntu jammy-updates InRelease [119 kB]
Get:3 http://azure.archive.ubuntu.com/ubuntu jammy-backports InRelease [109 kB]
Get:4 http://azure.archive.ubuntu.com/ubuntu jammy-security InRelease [110 kB]
Fetched 31.5 MB in 14s (2319 kB/s)
Reading package lists...
Reading package lists...
Building dependency tree...
Reading state information...
Calculating upgrade...
The following packages will be upgraded:
  python3-idna
1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 52.1 kB of archives.
After this operation, 33.8 kB of additional disk space will be used.
Get:1 http://azure.archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-idna all 3.3-1ubuntu0.1 [52.1 kB]
debconf: unable to initialize frontend: Dialog
debconf: (Dialog frontend will not work on a dumb terminal, an emacs shell buffer, or without a controlling terminal.)
debconf: falling back to frontend: Readline
debconf: unable to initialize frontend: Readline
debconf: (This frontend requires a controlling tty.)
debconf: falling back to frontend: Teletype
dpkg-preconfigure: unable to re-open stdin:
Fetched 52.1 kB in 0s (1576 kB/s)
(Reading database ... 62064 files and directories currently installed.)
Preparing to unpack .../python3-idna_3.3-1ubuntu0.1_all.deb ...
Unpacking python3-idna (3.3-1ubuntu0.1) over (3.3-1) ...
Setting up python3-idna (3.3-1ubuntu0.1) ...
Running kernel seems to be up-to-date.
No services need to be restarted.
No containers need to be restarted.
No user sessions are running outdated binaries.
No VM guests are running outdated hypervisor (qemu) binaries on this host.
Connection to 172.21.23.10 closed by remote host.
+ sleep 90
+ ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ubuntu@172.21.23.10
Warning: Permanently added '172.21.23.10' (ED25519) to the list of known hosts.
Welcome to Ubuntu 22.04.4 LTS (GNU/Linux 6.5.0-1021-azure x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/pro

  System information as of Sun May 26 11:13:18 UTC 2024

  System load:  0.88              Processes:             133
  Usage of /:   5.8% of 28.89GB   Users logged in:       0
  Memory usage: 7%                IPv4 address for eth0: 172.21.23.10
  Swap usage:   0%

Expanded Security Maintenance for Applications is not enabled.

0 updates can be applied immediately.

Enable ESM Apps to receive additional future security updates.
See https://ubuntu.com/esm or run: sudo pro status

+ sudo snap install yq
yq v4.40.5 from Mike Farah (mikefarah) installed
+ sudo snap install microk8s --classic
microk8s (1.29/stable) v1.29.4 from Canonical** installed
+ sudo usermod -a -G microk8s ubuntu
+ newgrp microk8s
microk8s is running
high-availability: no
  datastore master nodes: 127.0.0.1:19001
  datastore standby nodes: none
addons:
  enabled:
    dns                  # (core) CoreDNS
    ha-cluster           # (core) Configure high availability on the current node
    helm                 # (core) Helm - the package manager for Kubernetes
    helm3                # (core) Helm 3 - the package manager for Kubernetes
  disabled:
    cert-manager         # (core) Cloud native certificate management
    cis-hardening        # (core) Apply CIS K8s hardening
    community            # (core) The community addons repository
    dashboard            # (core) The Kubernetes dashboard
    gpu                  # (core) Alias to nvidia add-on
    host-access          # (core) Allow Pods connecting to Host services smoothly
    hostpath-storage     # (core) Storage class; allocates storage from host directory
    ingress              # (core) Ingress controller for external access
    kube-ovn             # (core) An advanced network fabric for Kubernetes
    mayastor             # (core) OpenEBS MayaStor
    metallb              # (core) Loadbalancer for your Kubernetes cluster
    metrics-server       # (core) K8s Metrics Server for API access to service metrics
    minio                # (core) MinIO object storage
    nvidia               # (core) NVIDIA hardware (GPU and network) support
    observability        # (core) A lightweight observability stack for logs, traces and metrics
    prometheus           # (core) Prometheus operator for monitoring and logging
    rbac                 # (core) Role-Based Access Control for authorisation
    registry             # (core) Private image registry exposed on localhost:32000
    rook-ceph            # (core) Distributed Ceph storage using Rook
    storage              # (core) Alias to hostpath-storage add-on, deprecated
WARNING: Do not enable or disable multiple addons in one command.
         This form of chained operations on addons will be DEPRECATED in the future.
         Please, enable one addon at a time: 'microk8s enable '
Infer repository core for addon storage
Infer repository core for addon dns
DEPRECATION WARNING: 'storage' is deprecated and will soon be removed. Please use 'hostpath-storage' instead.
Infer repository core for addon hostpath-storage
Enabling default storage class.
WARNING: Hostpath storage is not suitable for production environments.
         A hostpath volume can grow beyond the size limit set in the volume claim manifest.
deployment.apps/hostpath-provisioner created
storageclass.storage.k8s.io/microk8s-hostpath created
serviceaccount/microk8s-hostpath created
clusterrole.rbac.authorization.k8s.io/microk8s-hostpath created
clusterrolebinding.rbac.authorization.k8s.io/microk8s-hostpath created
Storage will be available soon.
Addon core/dns is already enabled
+ ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ubuntu@172.21.23.10
Warning: Permanently added '172.21.23.10' (ED25519) to the list of known hosts.
172.21.23.10
++ hostname -I
++ awk '{print $1}'
+ PRIVATE_IP=172.21.23.10
+ echo 172.21.23.10
+ sudo microk8s.enable metallb:172.21.23.10-172.21.23.10
Infer repository core for addon metallb
Enabling MetalLB
Applying Metallb manifest
customresourcedefinition.apiextensions.k8s.io/addresspools.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bfdprofiles.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bgpadvertisements.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bgppeers.metallb.io created
customresourcedefinition.apiextensions.k8s.io/communities.metallb.io created
customresourcedefinition.apiextensions.k8s.io/ipaddresspools.metallb.io created
customresourcedefinition.apiextensions.k8s.io/l2advertisements.metallb.io created
namespace/metallb-system created
serviceaccount/controller created
serviceaccount/speaker created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
role.rbac.authorization.k8s.io/controller created
role.rbac.authorization.k8s.io/pod-lister created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/controller created
secret/webhook-server-cert created
service/webhook-service created
rolebinding.rbac.authorization.k8s.io/pod-lister created
daemonset.apps/speaker created
deployment.apps/controller created
validatingwebhookconfiguration.admissionregistration.k8s.io/validating-webhook-configuration created
Waiting for Metallb controller to be ready.
error: timed out waiting for the condition on deployments/controller
MetalLB controller is still not ready
deployment.apps/controller condition met
ipaddresspool.metallb.io/default-addresspool created
l2advertisement.metallb.io/default-advertise-all-pools created
MetalLB is enabled
+ ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ubuntu@172.21.23.10
Warning: Permanently added '172.21.23.10' (ED25519) to the list of known hosts.
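The MetalLB step above derives the node's primary private address with `hostname -I | awk '{print $1}'` and enables the addon with a single-address pool of the form `IP-IP`. A minimal sketch of that derivation, using a canned address list in place of a live `hostname -I` call:

```shell
# First field of `hostname -I` output is the primary private address.
addr_list="172.21.23.10 10.1.128.0"      # stand-in for: hostname -I
PRIVATE_IP=$(printf '%s\n' "$addr_list" | awk '{print $1}')

# MetalLB accepts a range; the job uses a one-address pool (start == end).
METALLB_RANGE="${PRIVATE_IP}-${PRIVATE_IP}"
echo "$METALLB_RANGE"                    # prints: 172.21.23.10-172.21.23.10
# On the VM this feeds: sudo microk8s enable metallb:$METALLB_RANGE
```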
+ sudo sed -i 's/\#MOREIPS/IP.3 = 172.21.23.10/g' /var/snap/microk8s/current/certs/csr.conf.template
+ cat /var/snap/microk8s/current/certs/csr.conf.template
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
req_extensions = req_ext
distinguished_name = dn
[ dn ]
C = GB
ST = Canonical
L = Canonical
O = Canonical
OU = Canonical
CN = 127.0.0.1
[ req_ext ]
subjectAltName = @alt_names
[ alt_names ]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster
DNS.5 = kubernetes.default.svc.cluster.local
IP.1 = 127.0.0.1
IP.2 = 10.152.183.1
IP.3 = 172.21.23.10
[ v3_ext ]
authorityKeyIdentifier=keyid,issuer:always
basicConstraints=CA:FALSE
keyUsage=keyEncipherment,dataEncipherment,digitalSignature
extendedKeyUsage=serverAuth,clientAuth
subjectAltName=@alt_names
+ echo ================================================================
================================================================
+ echo K8s cluster credentials:
K8s cluster credentials:
+ echo ================================================================
================================================================
+ echo
+ ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ubuntu@172.21.23.10 'sudo microk8s.config'
+ sed 's/server: .*/server: https:\/\/172.21.23.10:16443/g'
+ tee /robot-systest/results/kubeconfig.yaml
Warning: Permanently added '172.21.23.10' (ED25519) to the list of known hosts.
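Two text edits drive this part of the stage: the `#MOREIPS` placeholder in microk8s' csr.conf.template is replaced with an extra SAN entry so the API server certificate covers the VM's private IP, and the `server:` line of the exported kubeconfig is rewritten to that reachable address. A sketch of both rewrites on sample input (the real job runs them against the files shown in the trace):

```shell
# SAN injection: swap the #MOREIPS placeholder for an IP.3 alt_names entry.
printf '%s\n' '#MOREIPS' \
  | sed 's/#MOREIPS/IP.3 = 172.21.23.10/g'
# prints: IP.3 = 172.21.23.10

# Kubeconfig rewrite: point `server:` at the routable IP and microk8s port.
printf 'server: https://127.0.0.1:16443\n' \
  | sed 's|server: .*|server: https://172.21.23.10:16443|'
# prints: server: https://172.21.23.10:16443
```

Using `|` as the sed delimiter avoids the `https:\/\/` escaping seen in the logged command; both forms are equivalent.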
apiVersion: v1 clusters: - cluster: certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUREekNDQWZlZ0F3SUJBZ0lVSDQ3Sit0Nnp1L0p2bG5UVVhuTGtCTGN2dmQwd0RRWUpLb1pJaHZjTkFRRUwKQlFBd0Z6RVZNQk1HQTFVRUF3d01NVEF1TVRVeUxqRTRNeTR4TUI0WERUSTBNRFV5TmpFeE1UTTFPRm9YRFRNMApNRFV5TkRFeE1UTTFPRm93RnpFVk1CTUdBMVVFQXd3TU1UQXVNVFV5TGpFNE15NHhNSUlCSWpBTkJna3Foa2lHCjl3MEJBUUVGQUFPQ0FROEFNSUlCQ2dLQ0FRRUF5ZWtKZ0pBcnc2REtxYzI3d20vbTlzNW9SMndYTzRoKzErTUQKMk1udmdIdUlDS09RSWR2SSthWVo4bDZUVVdYVzRRR0ZxRTVXbnI0OENsU05XU29YcE1ldXEyNlEvMEF0K3laUAozWjFvakpuN1NHbTRGQUdGOVB0WjJ6Q04vNUVOV1BkeW0yL3dxUkNmTUEzVFJzbzVZQ0RFeWt4K0o3bFptclQ1CjN0aElRYUNPWG9ieWNBTGVTZmNZbmgzZHZCcFFsbnJwczJ3aEMvTTVpbVo5NzZabkZORlEyNnJPTEU1VHMvMmoKckkwSHZQRnIvam83R2VJYlFaaFo0Ym5ZcjA2Y1lobjZmYzU1Q0V5Y0g2eklUeld2eEgyNkRCU2lLbUc5SXhCRQpRWldJZ3lmNjQ4b2JXR3JhZHJsVkphMi9qby9JcktFeGsxczU5U2QrSzFrVkNPeFhSUUlEQVFBQm8xTXdVVEFkCkJnTlZIUTRFRmdRVWxoUS9DSG03V0RGWG9SRHVOdEZPSDExUytiTXdId1lEVlIwakJCZ3dGb0FVbGhRL0NIbTcKV0RGWG9SRHVOdEZPSDExUytiTXdEd1lEVlIwVEFRSC9CQVV3QXdFQi96QU5CZ2txaGtpRzl3MEJBUXNGQUFPQwpBUUVBV2NveG1UWDZMRGpJSm9tUUhWTzNZbXkyY3lKLy9tdnduTlFrUVlaNGRPSkFHOCtZQlRhZHZaVS9yTmVECitEK0pUMVZqSkdpc1FhTHdsWW5HVDduSyt4ZEx0Vi9JOGxjUDlUcHpGN2p5czFIVUJ5ekdhUTkwelNPc3FHSTkKOFY1QTZXUjBXTEZMdVRtNmEvaXVzby82VGZ4THNUQmNZN3hqWGl3Vmxxa2RuR3l5OFhiQVo0Vk5kSEllbnVXMApETDhFRmJTVktkYTNEc2QxZHpQUW5EemxkMGZ1aTRJY0RNSkpFTk1Fc280ZTYwdzdOL1EwbmhpbGw0WUQzblUvClZxbGNBNUJ4L0lpNVZveERvWFdmRTRpNmljdlZGWkFMWEdsQVNFOG4zOGtyUWFPTC82NTBzMXEzdjlhSDN5UWEKSkEyTVg1NEdIbURBZXhjV3poQXE3U0loRGc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg== server: https://172.21.23.10:16443 name: microk8s-cluster contexts: - context: cluster: microk8s-cluster user: admin name: microk8s current-context: microk8s kind: Config preferences: {} users: - name: admin user: client-certificate-data: 
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN6RENDQWJTZ0F3SUJBZ0lVVjVxeWtYRkoxa0tJcnN4YklkQ2RkdTNZVy9Jd0RRWUpLb1pJaHZjTkFRRUwKQlFBd0Z6RVZNQk1HQTFVRUF3d01NVEF1TVRVeUxqRTRNeTR4TUI0WERUSTBNRFV5TmpFeE1UUXdNRm9YRFRNMApNRFV5TkRFeE1UUXdNRm93S1RFT01Bd0dBMVVFQXd3RllXUnRhVzR4RnpBVkJnTlZCQW9NRG5ONWMzUmxiVHB0CllYTjBaWEp6TUlJQklqQU5CZ2txaGtpRzl3MEJBUUVGQUFPQ0FROEFNSUlCQ2dLQ0FRRUF2WTVzMUhPYURjeTQKSjZCREdDTHdEWUtOTjhEdFoyamxFSm9SM1FVZnBwMThqaXBBcTc3ZDJwU2xMRTRtam52Mi9NaWE1L2VZMHZyaApEc1g2b3FndjlVNnJOV0JlNzNWdTZpb29yZ045WXUvNG1WNTZybWlpd3Y4NGJEQ1VyUTl2azVsaGhRUVRDekJTCmx6SXVna2VaRFNXVW5zWEtGZVRkSDVyVkNrNkRWN0ZRNGJwenhBUHNZanRwK2dPaGkxZnJ3OVhLekpIaEpFTDEKY28zUnBwTmZxOG52bnNIb3ZENWQveWJjTjJvd3lneENEL1BTZkFrdzZ4L2o2aktoaXF4VVRrc0FsMTlIUkcyZwpkZkpGekxBQmExU3czVllKMTJ2NWltZTRFc0F1V3FrZytxN1NKUVBmKzJMWXN4TUxtY29Fb3E3TTZ4bllheHQrCmx1azFyWUVzRFFJREFRQUJNQTBHQ1NxR1NJYjNEUUVCQ3dVQUE0SUJBUUF2U09zZXN1ajE2MGhIc2h6YzkrSlQKTVRBOGZxQjhWZjcyRERCMTJ6M3crak4zMU1QSzVVaE5UM1JGa29lTjhXMWpCZ2tFZlJBbHJnamdLbk9wR21tYwpBTXRYaUl3Nng2UU1OdEloaDBMeHM5SkRzdGZpYVB5RDhvcmtZR2FnRi9OdUFyR3RmWTAwaGlURXZpMWd1ZllOCmtUN1R0RW9zNVd5c2tvSDZrb3d4dS9DLys4MWFHRyt0MVo2Q2tNdlpaa3k5RmZ6NDNOYjZsenI2TE56dVF2RmIKQnRHNjZNUENyOERERTZHYWdIbUhZYzRadGRqN1UxZDlBQjJvTmQrSVppVXc4My9lL0RUT084UFlEQ20rMnBRVQpYamtxTVNFWStTMjQzd0g1UGFTdnU3WTI5Qjc3c0hPQUhHVXFjVEdMdVFqV2lWemoxZG53b3ErQlJYOFM1b21DCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K client-key-data: 
LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb2dJQkFBS0NBUUVBdlk1czFIT2FEY3k0SjZCREdDTHdEWUtOTjhEdFoyamxFSm9SM1FVZnBwMThqaXBBCnE3N2QycFNsTEU0bWpudjIvTWlhNS9lWTB2cmhEc1g2b3FndjlVNnJOV0JlNzNWdTZpb29yZ045WXUvNG1WNTYKcm1paXd2ODRiRENVclE5dms1bGhoUVFUQ3pCU2x6SXVna2VaRFNXVW5zWEtGZVRkSDVyVkNrNkRWN0ZRNGJwegp4QVBzWWp0cCtnT2hpMWZydzlYS3pKSGhKRUwxY28zUnBwTmZxOG52bnNIb3ZENWQveWJjTjJvd3lneENEL1BTCmZBa3c2eC9qNmpLaGlxeFVUa3NBbDE5SFJHMmdkZkpGekxBQmExU3czVllKMTJ2NWltZTRFc0F1V3FrZytxN1MKSlFQZisyTFlzeE1MbWNvRW9xN002eG5ZYXh0K2x1azFyWUVzRFFJREFRQUJBb0lCQUJ0UzkzOGNkdDE4WUNOZQpKNjNJQTRCL0RDbzRSa0I4ejJBNFJWRHQxeVVtV0hrSndDN0JvYXRMZUEvTjZDTHIzYXVNb3ovQzRpV3ZnbGVsCjFENDBMazJYSEhqaVBtMFlLWGZad2Vscm1WeDBxUW82bzBhVzBMZDVJTUgvc3I3TGxkTFo3a1BGVlpWc1RzdDYKc1ZlWVNJaXJuU1BSOFJKODNoOGJLNUNEeHMwU0E3OTR5cGhFS0ttdWV2dkFkSmJrMTZpRHpnVDQ0KzVQeXk0NApRSnU0dklhMndyN0RjbXdHUHFjcDZzYzh1VStHVzRwRU5lTFNzZGNvTlhaWExWUlVLWFIzNXB5Y052Z044U2NJClNrMllZZDNGZ3lxYk0zU29kUnArelRHZXp1WUVybk1JUnJjdXc1WTBoOEJaajlwRXRtTll0dGJMUGJZMXR1bmIKS1lSVUJLRUNnWUVBNU91Nm5QK3pwTmNEYWxvT0c0T3hRMmNkcFIxQk5td3VTdVQ4Wk5XWGdXZWphOFc0U3RxMQpQLzZWUHJwcnNpOVk1WnNFVTNMVVpHVlZaQmprUTM5RG1zTGxDSUk3d1dHNVovL2tFZjQxc0kwMW5hNWpRQldRCkR0SDd2M0tNYkRjNWhwYnJ6Mmp0K0U5VERHdWlsMU0zU0I0MmYwNjVsb04yaCtTUE5USHh4YlVDZ1lFQTAvcWwKVFFYNjVOUUFac0Z3NU4ySE5WaWtXYzBuTG41c0ozTXBqNXlRNEY0aFU2Y0NLTnRQM2VCQWU5UGl0Q1R3ZDNVRwpDY0NsenhuYkxOcWpScXdDUldJUU9pS09wNnVQaytSeFpxeHpWc3hPc0ZNZGNJYUs3UDk2Q3pRMjQwSHU1ajhNCitDNVI3enZESERrWWZLQnFxcjA5Rm5xSEdtSUplQVhKTkVnancva0NnWUEyZ1RucGI2aGlNeCtKRHZBVTlRSWIKdmsxOHByNkVLclhLOTBKdzc3b25BWG9UaXZ3YU5vQzVQL0JoQXhucFR3U25ob1U5S1RZUXdWL1hlV1cvL0drbwpUQWNrUTMzZXlWblB2VW1jVHg2UmJzMjVRWEE0TGVvaTNUUkhuUXA2S1p2MHc3SlpxKzRkRlNYODZ4UEhXL1RwCm91ZnUzOXVvVHB6R05sRXlwVFdma1FLQmdBVytsUnd4UFV0V00yeVZjV25DVnhlcS8wa3Z0aEFjZlBIVXZSdVgKTXBYaDl5VTlNV0hLRDdBRGs3dkhVaTF2a3RTcDV5LzhlSUhVUVl4Rm9JY1p3algvSWxGdy9reXM2WWNvZWFvKwpvaUdJQjluZlpyelY2STNqbm5zUFB5MkphOS94ZFhpRVNOUWkybmE1VldDTE9GaklLQStqNG1Wa29yME42eFh2Cm5vaUJBb0dBSElXMk1YN3g4eTNuc1FNTjFnYVJ2TTI5N25COUhj
VGNYR3E2cUhEd3Z1OHptY3VrNjdJMFNJRmYKbkNTNmM4SklLVEZ5cnRKcXBzWmJVN3U5UCtTazl3N2o3S0xXQU96bStmZDJ3eXlIVWZlQTdlQ1RZN2JlQVlqUgpiWlJzTkJhUWI1L3lXeUhKczlYdjZMWHQrYU9iTTRWbWc2MjJXWXV3RzVlUWhETlRVNzg9Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
+ echo 'export K8S_CREDENTIALS=/robot-systest/results/kubeconfig.yaml'
+ echo File with new environment was created at /robot-systest/results/k8s_environment.rc
File with new environment was created at /robot-systest/results/k8s_environment.rc
[Pipeline] sh
[azure_robot_tests] Running shell script
+ cat /robot-systest/results/k8s_environment.rc
export CLOUD_TYPE="azure"
export USE_PAAS_K8S="FALSE"
export K8S_IP="172.21.23.10"
export K8S_IMAGE_NAME="k8stest202405261109"
export K8S_CREDENTIALS=/robot-systest/results/kubeconfig.yaml
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Install OSM)
[Pipeline] sh
[azure_robot_tests] Running shell script
+ /robot-systest/cloud-scripts/create-osm-vm.sh
+ az vm create --resource-group OSM_CICD_GROUP --name osmtest202405261116 --image Canonical:0001-com-ubuntu-server-jammy:22_04-lts:latest --size Standard_D4as_v4 --vnet-name OSM-CICD-net --subnet OSM-CICD-subnet --public-ip-address '' --admin-username ubuntu --priority Regular --os-disk-size-gb 64
Selecting "uksouth" may reduce your costs. The region you've selected may cost more for the same services. You can disable this message in the future with the command "az config set core.display_region_identified=false".
Learn more at https://go.microsoft.com/fwlink/?linkid=222571
WARNING: Consider upgrading security for your workloads using Azure Trusted Launch VMs. To know more about Trusted Launch, please visit https://aka.ms/TrustedLaunch.
ERROR: Subnet(OSM-CICD-subnet) does not exist, but failed to create a new subnet with address prefix 10.0.0.0/24. It may be caused by name or address prefix conflict. Please specify an appropriate subnet name with --subnet or a valid address prefix value with --subnet-address-prefix.
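The run dies here: `az vm create` cannot resolve the OSM-CICD-subnet subnet and refuses to auto-create one. A hypothetical pre-flight guard could fail fast with a clearer message before attempting VM creation. In this sketch, `find_subnet` stands in for a lookup such as `az network vnet subnet show -g <rg> --vnet-name <vnet> -n <subnet> --query id -o tsv` (the stub below is for illustration only, not the job's actual code):

```shell
# Fail fast when the target subnet is missing, instead of letting
# `az vm create` error out partway through the stage.
check_subnet() {
  finder=$1; vnet=$2; subnet=$3
  if subnet_id=$("$finder" "$vnet" "$subnet"); then
    echo "subnet found: $subnet_id"
  else
    echo "ERROR: subnet $subnet does not exist in $vnet" >&2
    return 1
  fi
}

# Stubbed lookup for the sketch; a real run would invoke the az CLI here.
find_subnet() {
  if [ "$2" = "OSM-CICD-subnet" ]; then
    echo "id-of-OSM-CICD-subnet"
  else
    return 1
  fi
}

check_subnet find_subnet OSM-CICD-net OSM-CICD-subnet
# prints: subnet found: id-of-OSM-CICD-subnet
```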
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Add VIM and K8s cluster to OSM)
Stage 'Add VIM and K8s cluster to OSM' skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Run Robot tests)
Stage 'Run Robot tests' skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Declarative: Post Actions)
[Pipeline] echo
Retrieve container logs
[Pipeline] sh
[azure_robot_tests] Running shell script
+ . /robot-systest/results/osm_environment.rc
/home/jenkins/workspace/azure_robot_tests@tmp/durable-e51d28a9/script.sh: 3: .: cannot open /robot-systest/results/osm_environment.rc: No such file
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
$ docker stop --time=1 ab0fd75f02f93dc990e1b5ac2fcc8ab774509e4d2a68be25b026ecd3658ddff4
$ docker rm -f ab0fd75f02f93dc990e1b5ac2fcc8ab774509e4d2a68be25b026ecd3658ddff4
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 1
Finished: FAILURE
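The post-action fails a second time because it sources osm_environment.rc, which was never written (the "Install OSM" stage aborted before creating it). A defensive sketch of that step — guard the `.` with an existence check so log retrieval is skipped cleanly rather than erroring (the path here is synthesized with `mktemp -u` purely to demonstrate the missing-file branch):

```shell
# Source the environment file only if the earlier stage actually wrote it.
rcfile=$(mktemp -u)          # guaranteed-nonexistent stand-in path
if [ -f "$rcfile" ]; then
  . "$rcfile"
  status=sourced
else
  echo "WARN: $rcfile not found; skipping container-log retrieval" >&2
  status=skipped
fi
```

With this guard the post-actions block still runs to completion and the build reports only the original subnet failure, not the follow-on sourcing error.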