bash: ./vm-install-osm.sh: No such file or directory
bash: vm-install-osm.sh: command not found
bash: vm-install-osm.sh: command not found
./vm-install-osm.sh started at Wed Jun 7 09:09:42 UTC 2023
microk8s (1.26/stable) v1.26.4 from Canonical✓ installed
snap "jq" is already installed, see 'snap help refresh'
server = "http://172.21.1.1:5000"

[host."http://172.21.1.1:5000"]
capabilities = ["pull", "resolve"]
skip_verify = true
plain-http = true
Infer repository core for addon storage
DEPRECATION WARNING: 'storage' is deprecated and will soon be removed. Please use 'hostpath-storage' instead.
Infer repository core for addon hostpath-storage
Enabling default storage class.
WARNING: Hostpath storage is not suitable for production environments.
deployment.apps/hostpath-provisioner created
storageclass.storage.k8s.io/microk8s-hostpath created
serviceaccount/microk8s-hostpath created
clusterrole.rbac.authorization.k8s.io/microk8s-hostpath created
clusterrolebinding.rbac.authorization.k8s.io/microk8s-hostpath created
Storage will be available soon.
Added:
  - microk8s.kubectl as kubectl
--2023-06-07 09:10:25--  https://osm-download.etsi.org/ftp/osm-13.0-thirteen/install_osm.sh
Resolving osm-download.etsi.org (osm-download.etsi.org)... 195.238.226.47
Connecting to osm-download.etsi.org (osm-download.etsi.org)|195.238.226.47|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 10436 (10K) [text/x-sh]
Saving to: 'install_osm.sh'
Checking required packages to add ETSI OSM debian repo: software-properties-common apt-transport-https
     0K ..........
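The `server =` / `[host."…"]` lines above are the containerd registry-mirror snippet the script writes so MicroK8s can pull from the insecure local registry at 172.21.1.1:5000. As a sketch, they correspond to a `hosts.toml` along these lines (the file path is an assumption; the values are taken verbatim from the log):

```toml
# Sketch of a containerd hosts.toml for the local registry, e.g. under
# /var/snap/microk8s/current/args/certs.d/172.21.1.1:5000/hosts.toml
# (path is an assumption, not shown in the log).
server = "http://172.21.1.1:5000"

[host."http://172.21.1.1:5000"]
capabilities = ["pull", "resolve"]
skip_verify = true
plain-http = true
```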
100% 18.7M=0.001s

2023-06-07 09:10:25 (18.7 MB/s) - 'install_osm.sh' saved [10436/10436]

OK
Get:1 https://osm-download.etsi.org/repository/osm/debian/testing-daily testing InRelease [4086 B]
Hit:2 http://nova.clouds.archive.ubuntu.com/ubuntu focal InRelease
Hit:3 http://security.ubuntu.com/ubuntu focal-security InRelease
Hit:4 http://nova.clouds.archive.ubuntu.com/ubuntu focal-updates InRelease
Hit:5 http://nova.clouds.archive.ubuntu.com/ubuntu focal-backports InRelease
Get:6 https://osm-download.etsi.org/repository/osm/debian/testing-daily testing/devops amd64 Packages [501 B]
Fetched 4587 B in 1s (4556 B/s)
Reading package lists...
W: Conflicting distribution: https://osm-download.etsi.org/repository/osm/debian/testing-daily testing InRelease (expected testing but got )
[two further apt-get update passes repeat the same Hit: lines and the same "Conflicting distribution" warning]
Reading package lists...
Building dependency tree...
Reading state information...
The following NEW packages will be installed:
  osm-devops
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 18.0 MB of archives.
After this operation, 108 MB of additional disk space will be used.
Get:1 https://osm-download.etsi.org/repository/osm/debian/testing-daily testing/devops amd64 osm-devops all 12.0.0.post166-1 [18.0 MB]
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
	LANGUAGE = (unset),
	LC_ALL = (unset),
	LC_TIME = "tr_TR.UTF-8",
	LC_MONETARY = "tr_TR.UTF-8",
	LC_ADDRESS = "tr_TR.UTF-8",
	LC_TELEPHONE = "tr_TR.UTF-8",
	LC_NAME = "tr_TR.UTF-8",
	LC_MEASUREMENT = "tr_TR.UTF-8",
	LC_IDENTIFICATION = "tr_TR.UTF-8",
	LC_NUMERIC = "tr_TR.UTF-8",
	LC_PAPER = "tr_TR.UTF-8",
	LANG = "C.UTF-8"
    are supported and installed on your system.
perl: warning: Falling back to a fallback locale ("C.UTF-8").
locale: Cannot set LC_ALL to default locale: No such file or directory
Fetched 18.0 MB in 0s (47.7 MB/s)
Selecting previously unselected package osm-devops.
(Reading database ... 95072 files and directories currently installed.)
Preparing to unpack .../osm-devops_12.0.0.post166-1_all.deb ...
Unpacking osm-devops (12.0.0.post166-1) ...
Setting up osm-devops (12.0.0.post166-1) ...
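The perl/locale warnings above come from a `tr_TR.UTF-8` locale that was never generated on the host. A minimal sketch of the usual remedies (the `locale-gen` call and the `safe_locale_env` helper name are assumptions, not part of the log):

```shell
# Permanent fix (assumption): generate the locale the environment asks for.
#   sudo locale-gen tr_TR.UTF-8 && sudo update-locale
# Session-level workaround: pin every category to C.UTF-8 before re-running
# the installer. safe_locale_env is a hypothetical helper that prints the
# export statements to apply.
safe_locale_env() {
  printf 'export LANG=C.UTF-8\n'
  printf 'export LC_ALL=C.UTF-8\n'
}
# Usage: eval "$(safe_locale_env)" && ./vm-install-osm.sh
```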
## Wed Jun 7 09:10:41 UTC 2023 source: INFO: logging sourced
## Wed Jun 7 09:10:41 UTC 2023 source: INFO: config sourced
## Wed Jun 7 09:10:41 UTC 2023 source: INFO: container sourced
## Wed Jun 7 09:10:41 UTC 2023 source: INFO: git_functions sourced
## Wed Jun 7 09:10:41 UTC 2023 source: INFO: track sourced
snap "jq" is already installed, see 'snap help refresh'
Track start release: https://osm.etsi.org/InstallLog.php?&installation_id=1686129041-7M97j6zSKJwGBvh3&local_ts=1686129041&event=start&operation=release&value=testing-daily&comment=&tags=
Track start docker_tag: https://osm.etsi.org/InstallLog.php?&installation_id=1686129041-7M97j6zSKJwGBvh3&local_ts=1686129041&event=start&operation=docker_tag&value=testing-daily&comment=&tags=
Track start installation_type: https://osm.etsi.org/InstallLog.php?&installation_id=1686129041-7M97j6zSKJwGBvh3&local_ts=1686129041&event=start&operation=installation_type&value=Charmed&comment=&tags=
## Wed Jun 7 09:10:42 UTC 2023 source: INFO: logging sourced
## Wed Jun 7 09:10:42 UTC 2023 source: INFO: config sourced
## Wed Jun 7 09:10:42 UTC 2023 source: INFO: container sourced
## Wed Jun 7 09:10:42 UTC 2023 source: INFO: git_functions sourced
## Wed Jun 7 09:10:42 UTC 2023 source: INFO: track sourced
snap "microk8s" is already installed, see 'snap help refresh'
--advertise-address 172.21.249.248
Stopped.
microk8s is running
high-availability: no
  datastore master nodes: 127.0.0.1:19001
  datastore standby nodes: none
addons:
  enabled:
    ha-cluster           # (core) Configure high availability on the current node
    helm                 # (core) Helm - the package manager for Kubernetes
    helm3                # (core) Helm 3 - the package manager for Kubernetes
  disabled:
    cert-manager         # (core) Cloud native certificate management
    community            # (core) The community addons repository
    dashboard            # (core) The Kubernetes dashboard
    dns                  # (core) CoreDNS
    gpu                  # (core) Automatic enablement of Nvidia CUDA
    host-access          # (core) Allow Pods connecting to Host services smoothly
    hostpath-storage     # (core) Storage class; allocates storage from host directory
    ingress              # (core) Ingress controller for external access
    kube-ovn             # (core) An advanced network fabric for Kubernetes
    mayastor             # (core) OpenEBS MayaStor
    metallb              # (core) Loadbalancer for your Kubernetes cluster
    metrics-server       # (core) K8s Metrics Server for API access to service metrics
    minio                # (core) MinIO object storage
    observability        # (core) A lightweight observability stack for logs, traces and metrics
    prometheus           # (core) Prometheus operator for monitoring and logging
    rbac                 # (core) Role-Based Access Control for authorisation
    registry             # (core) Private image registry exposed on localhost:32000
    storage              # (core) Alias to hostpath-storage add-on, deprecated
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data:
      LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUREekNDQWZlZ0F3SUJBZ0lVV2JhZUkvYlo0WDlTQkUwclVML00veFdjSmpzd0RRWUpLb1pJaHZjTkFRRUwKQlFBd0Z6RVZNQk1HQTFVRUF3d01NVEF1TVRVeUxqRTRNeTR4TUI0WERUSXpNRFl3TnpBNU1EazBPVm9YRFRNegpNRFl3TkRBNU1EazBPVm93RnpFVk1CTUdBMVVFQXd3TU1UQXVNVFV5TGpFNE15NHhNSUlCSWpBTkJna3Foa2lHCjl3MEJBUUVGQUFPQ0FROEFNSUlCQ2dLQ0FRRUF4ZzN6U3NJNG1rWHBoaFhNVE41WEZ2cDNYemR5VWdBVUQ4TFMKMDdYT01lOFFnc1M5TFd6UVlmWUJ6azVXQTRpdlhTVVROTkRqc0FtVUlUSm5HbjZPSGR0NzJqOUh0ajlWbXY0cApaaXdkUDhkNEJQMWdjSHhxQmFrSHJIbWdvUUZOQmh4Z1kya0MzZUk1TVlidytzODd6TlpSNXplZDVnZlJFdndoClRFN1AvQm5NQTZkSUZPcW1uazVWVnlCamdoVE41dEZOYXRSNU1sUlZUbTdWMUxwOUJTWXVzY0wvUllkcWZxTGIKSUUxbm81ck5GOG8zcEQ2WWJnRkYzejNHOGExMVZBZzBNbXcydTkycWVER0ZWN2V1a2NCcGx3ZTA3Yys5V0tWawpYdmIrRjBPT0hYVU1CdlZXaFJWK0JZaSt5WHBZanJ6KzE4bEE2eFhrQjA5dUxYRnNsd0lEQVFBQm8xTXdVVEFkCkJnTlZIUTRFRmdRVWthU3c3RTZwdUU3V0RPeWJ0eGg4d3ZPWXI3b3dId1lEVlIwakJCZ3dGb0FVa2FTdzdFNnAKdUU3V0RPeWJ0eGg4d3ZPWXI3b3dEd1lEVlIwVEFRSC9CQVV3QXdFQi96QU5CZ2txaGtpRzl3MEJBUXNGQUFPQwpBUUVBZW5WeGFoMEViWEpLTUwwczhLRkhXbXkrcUlEaFNDMVRmZHZyOWlPclVLSmg5MzNLWEVUV3kwSFJBUDZyCkZsYnJ0RnBSS21mTWFEcVREaVFXQ3NBYVZuaCtDenZXaTFka3dRUWxOZnlJT25aMFk1WlpZdTIvT1hpQWVzVFUKZDZCMlhSaGQ5c0RlS01CVWlKeTBJcklydEZKUkV0aURPRTJycDBISmVPUnZpdFRYS2E1R1g1M1ptMkU1OEdpSwp3QlU1bUhYaEdCZGFvZUtnSmNXbkd6cG1FREpndmZ2akhkVCtucFp5VFpGTW5kTFJoLzA1QllJU2tIMzJnbXpSClBpdHBaRkx3bjd6S2lCZFpSWnlraTgwOW9Nc2FURU5rb2dlNnYyWVZadFVVUitFd21UL0RROHFtUWV0SytUMzAKLzVEeUNQaW81Nmh5NzZHT1RMcGR5MGxmTVE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://172.21.249.248:16443
  name: microk8s-cluster
contexts:
- context:
    cluster: microk8s-cluster
    user: admin
  name: microk8s
current-context: microk8s
kind: Config
preferences: {}
users:
- name: admin
  user:
    token: ZjB0QnVHZW1la0dYOVFwRXNkaXJ6ai9jRHZ5NjNxSDROd3NJSlpNMFloRT0K
Track k8scluster k8scluster_ok: https://osm.etsi.org/InstallLog.php?&installation_id=1686129041-7M97j6zSKJwGBvh3&local_ts=1686129059&event=k8scluster&operation=k8scluster_ok&value=&comment=&tags=
juju (2.9/stable) 2.9.42 from Canonical✓ installed
Track juju juju_ok: https://osm.etsi.org/InstallLog.php?&installation_id=1686129041-7M97j6zSKJwGBvh3&local_ts=1686129075&event=juju&operation=juju_ok&value=&comment=&tags=
Infer repository core for addon metallb
Enabling MetalLB
Applying Metallb manifest
customresourcedefinition.apiextensions.k8s.io/addresspools.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bfdprofiles.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bgpadvertisements.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bgppeers.metallb.io created
customresourcedefinition.apiextensions.k8s.io/communities.metallb.io created
customresourcedefinition.apiextensions.k8s.io/ipaddresspools.metallb.io created
customresourcedefinition.apiextensions.k8s.io/l2advertisements.metallb.io created
namespace/metallb-system created
serviceaccount/controller created
serviceaccount/speaker created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
role.rbac.authorization.k8s.io/controller created
role.rbac.authorization.k8s.io/pod-lister created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/controller created
secret/webhook-server-cert created
service/webhook-service created
rolebinding.rbac.authorization.k8s.io/pod-lister created
daemonset.apps/speaker created
deployment.apps/controller created
validatingwebhookconfiguration.admissionregistration.k8s.io/validating-webhook-configuration created
Waiting for Metallb controller to be ready.
error: timed out waiting for the condition on deployments/controller
MetalLB controller is still not ready
deployment.apps/controller condition met
ipaddresspool.metallb.io/default-addresspool created
l2advertisement.metallb.io/default-advertise-all-pools created
MetalLB is enabled
Infer repository core for addon ingress
Enabling Ingress
ingressclass.networking.k8s.io/public created
ingressclass.networking.k8s.io/nginx created
namespace/ingress created
serviceaccount/nginx-ingress-microk8s-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-microk8s-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-microk8s-role created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-microk8s created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-microk8s created
configmap/nginx-load-balancer-microk8s-conf created
configmap/nginx-ingress-tcp-microk8s-conf created
configmap/nginx-ingress-udp-microk8s-conf created
daemonset.apps/nginx-ingress-microk8s-controller created
Ingress is enabled
Infer repository core for addon hostpath-storage
Infer repository core for addon dns
Addon core/hostpath-storage is already enabled
Enabling DNS
Using host configuration from /run/systemd/resolve/resolv.conf
Applying manifest
serviceaccount/coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
clusterrole.rbac.authorization.k8s.io/coredns created
clusterrolebinding.rbac.authorization.k8s.io/coredns created
Restarting kubelet
DNS is enabled
Creating Juju controller "osm-vca" on microk8s/localhost
Bootstrap to Kubernetes cluster identified as microk8s/localhost
Fetching Juju Dashboard 0.8.1
Creating k8s resources for controller "controller-osm-vca"
Downloading images
Starting controller pod
Bootstrap agent now started
Contacting Juju controller at 172.21.249.248 to verify accessibility...
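The "timed out waiting for the condition" followed by "condition met" above shows the installer simply retrying its readiness check until the MetalLB controller pod came up. A hedged sketch of that retry pattern (the `wait_ready` helper is hypothetical; the kubectl invocation in the usage note reuses the `metallb-system`/`controller` names from the log):

```shell
# Bounded-retry wrapper: re-run a readiness check up to N times, sleeping
# between attempts, instead of failing on the first timeout.
wait_ready() {
  tries=$1; shift
  i=0
  while :; do
    if "$@"; then return 0; fi   # check passed
    i=$((i + 1))
    if [ "$i" -ge "$tries" ]; then return 1; fi   # out of attempts
    sleep 5
  done
}
# Usage (not run here; names taken from the log, flags are an assumption):
#   wait_ready 6 microk8s kubectl -n metallb-system rollout status deployment/controller --timeout=60s
```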
Bootstrap complete, controller "osm-vca" is now available in namespace "controller-osm-vca"

Now you can run
	juju add-model <model-name>
to create a new model to deploy k8s workloads.
Track bootstrap_k8s bootstrap_k8s_ok: https://osm.etsi.org/InstallLog.php?&installation_id=1686129041-7M97j6zSKJwGBvh3&local_ts=1686129271&event=bootstrap_k8s&operation=bootstrap_k8s_ok&value=&comment=&tags=
* Applying /etc/sysctl.d/10-console-messages.conf ...
kernel.printk = 4 4 1 7
* Applying /etc/sysctl.d/10-ipv6-privacy.conf ...
net.ipv6.conf.all.use_tempaddr = 2
net.ipv6.conf.default.use_tempaddr = 2
* Applying /etc/sysctl.d/10-kernel-hardening.conf ...
kernel.kptr_restrict = 1
* Applying /etc/sysctl.d/10-link-restrictions.conf ...
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/10-magic-sysrq.conf ...
kernel.sysrq = 176
* Applying /etc/sysctl.d/10-network-security.conf ...
net.ipv4.conf.default.rp_filter = 2
net.ipv4.conf.all.rp_filter = 2
* Applying /etc/sysctl.d/10-ptrace.conf ...
kernel.yama.ptrace_scope = 1
* Applying /etc/sysctl.d/10-zeropage.conf ...
vm.mmap_min_addr = 65536
* Applying /usr/lib/sysctl.d/50-default.conf ...
net.ipv4.conf.default.promote_secondaries = 1
sysctl: setting key "net.ipv4.conf.all.promote_secondaries": Invalid argument
net.ipv4.ping_group_range = 0 2147483647
net.core.default_qdisc = fq_codel
fs.protected_regular = 1
fs.protected_fifos = 1
* Applying /usr/lib/sysctl.d/50-pid-max.conf ...
kernel.pid_max = 4194304
* Applying /etc/sysctl.d/60-lxd-production.conf ...
fs.inotify.max_queued_events = 1048576
fs.inotify.max_user_instances = 1048576
fs.inotify.max_user_watches = 1048576
vm.max_map_count = 262144
kernel.dmesg_restrict = 1
net.ipv4.neigh.default.gc_thresh3 = 8192
net.ipv6.neigh.default.gc_thresh3 = 8192
sysctl: setting key "net.core.bpf_jit_limit": Invalid argument
kernel.keys.maxkeys = 2000
kernel.keys.maxbytes = 2000000
* Applying /etc/sysctl.d/99-cloudimg-ipv6.conf ...
net.ipv6.conf.all.use_tempaddr = 0
net.ipv6.conf.default.use_tempaddr = 0
* Applying /etc/sysctl.d/99-sysctl.conf ...
net.ipv4.ip_forward = 1
* Applying /usr/lib/sysctl.d/protect-links.conf ...
fs.protected_fifos = 1
fs.protected_hardlinks = 1
fs.protected_regular = 2
fs.protected_symlinks = 1
* Applying /etc/sysctl.conf ...
net.ipv4.ip_forward = 1
Reading package lists...
Building dependency tree...
Reading state information...
Package 'lxcfs' is not installed, so not removed
Package 'liblxc1' is not installed, so not removed
Package 'lxd' is not installed, so not removed
Package 'lxd-client' is not installed, so not removed
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
lxd (5.0/stable) 5.0.2-838e1b2 from Canonical✓ refreshed
Error: Failed to parse the preseed: yaml: unmarshal errors:
  line 22: field managed not found in type api.InitNetworksProjectPost
[perl locale warnings repeat here, identical to those during the osm-devops install above]
If this is your first time running LXD on this machine, you should also run: lxd init
To start your first container, try: lxc launch ubuntu:22.04
Or for a virtual machine: lxc launch ubuntu:22.04 --vm
Error: Device doesn't exist
Error: Network not found
Generating a RSA private key
........................................+++++
..................................................+++++
writing new private key to '/home/ubuntu/.osm/client.key'
-----
Cloud "lxd-cloud" added to controller "osm-vca".
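The preseed parse error above ("field managed not found in type api.InitNetworksProjectPost") suggests the generated LXD preseed carries a `managed:` key under `networks:` that this LXD 5.0 snap no longer accepts. A hedged workaround sketch, not the installer's own fix: strip the offending key before feeding the preseed to `lxd init --preseed` (the `strip_managed` helper and the `lxd-preseed.yaml` file name are assumptions):

```shell
# Delete any top-level-or-indented "managed:" lines from a preseed YAML.
strip_managed() {
  sed '/^[[:space:]]*managed:/d' "$1"
}
# Usage (not run here):
#   strip_managed lxd-preseed.yaml | sudo lxd init --preseed
```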
WARNING loading credentials: credentials for cloud lxd-cloud not found
To upload a credential to the controller for cloud "lxd-cloud", use
  * 'add-model' with --credential option or
  * 'add-credential -c lxd-cloud'.
Using cloud "lxd-cloud" from the controller to verify credentials.
Controller credential "lxd-cloud" for user "admin" for cloud "lxd-cloud" on controller "osm-vca" added.
For more information, see 'juju show-credential lxd-cloud lxd-cloud'.
Track bootstrap_lxd bootstrap_lxd_ok: https://osm.etsi.org/InstallLog.php?&installation_id=1686129041-7M97j6zSKJwGBvh3&local_ts=1686129297&event=bootstrap_lxd&operation=bootstrap_lxd_ok&value=&comment=&tags=
Creating OSM model
Added 'osm' model on microk8s/localhost with credential 'microk8s' for user 'admin'
Deploying OSM with charms
Creating Password Overlay
Located bundle "osm" in charm-hub, revision 440
Located charm "osm-grafana" in charm-hub, channel latest/stable
Located charm "nginx-ingress-integrator" in charm-hub, channel latest/stable
Located charm "kafka-k8s" in charm-hub, channel latest/stable
Located charm "osm-keystone" in charm-hub, channel latest/beta
Located charm "osm-lcm" in charm-hub, channel latest/beta
Located charm "charmed-osm-mariadb-k8s" in charm-hub, channel stable
Located charm "osm-mon" in charm-hub, channel latest/beta
Located charm "mongodb-k8s" in charm-hub, channel 5/edge
Located charm "osm-nbi" in charm-hub, channel latest/beta
Located charm "osm-ng-ui" in charm-hub, channel latest/beta
Located charm "osm-pol" in charm-hub, channel latest/beta
Located charm "osm-prometheus" in charm-hub, channel latest/stable
Located charm "osm-ro" in charm-hub, channel latest/beta
Located charm "osm-vca-integrator" in charm-hub, channel latest/beta
Located charm "zookeeper-k8s" in charm-hub, channel latest/stable
Executing changes:
- upload charm osm-grafana from charm-hub for series kubernetes from channel latest/stable with architecture=amd64
- deploy application grafana from charm-hub with 1 unit on kubernetes with latest/stable using osm-grafana
  added resource image
- set annotations for grafana
- upload charm nginx-ingress-integrator from charm-hub from channel latest/stable with architecture=amd64
- deploy application ingress from charm-hub with 1 unit with latest/stable using nginx-ingress-integrator
- set annotations for ingress
- upload charm kafka-k8s from charm-hub from channel latest/stable with architecture=amd64
- deploy application kafka from charm-hub with 1 unit with latest/stable using kafka-k8s
  added resource jmx-prometheus-jar
  added resource kafka-image
- set annotations for kafka
- upload charm osm-keystone from charm-hub from channel latest/beta with architecture=amd64
- deploy application keystone from charm-hub with 1 unit with latest/beta using osm-keystone
  added resource keystone-image
- set annotations for keystone
- upload charm osm-lcm from charm-hub from channel latest/beta with architecture=amd64
- deploy application lcm from charm-hub with 1 unit with latest/beta using osm-lcm
  added resource lcm-image
- set annotations for lcm
- upload charm charmed-osm-mariadb-k8s from charm-hub for series kubernetes with architecture=amd64
- deploy application mariadb from charm-hub with 1 unit on kubernetes using charmed-osm-mariadb-k8s
- set annotations for mariadb
- upload charm osm-mon from charm-hub from channel latest/beta with architecture=amd64
- deploy application mon from charm-hub with 1 unit with latest/beta using osm-mon
  added resource mon-image
- set annotations for mon
- upload charm mongodb-k8s from charm-hub for series kubernetes from channel 5/edge with architecture=amd64
- deploy application mongodb from charm-hub with 1 unit on kubernetes with 5/edge using mongodb-k8s
  added resource mongodb-image
- set annotations for mongodb
- upload charm osm-nbi from charm-hub from channel latest/beta with architecture=amd64
- deploy application nbi from charm-hub with 1 unit with latest/beta using osm-nbi
  added resource nbi-image
- set annotations for nbi
- upload charm osm-ng-ui from charm-hub from channel latest/beta with architecture=amd64
- deploy application ng-ui from charm-hub with 1 unit with latest/beta using osm-ng-ui
  added resource ng-ui-image
- set annotations for ng-ui
- upload charm osm-pol from charm-hub from channel latest/beta with architecture=amd64
- deploy application pol from charm-hub with 1 unit with latest/beta using osm-pol
  added resource pol-image
- set annotations for pol
- upload charm osm-prometheus from charm-hub for series kubernetes from channel latest/stable with architecture=amd64
- deploy application prometheus from charm-hub with 1 unit on kubernetes with latest/stable using osm-prometheus
  added resource backup-image
  added resource image
- set annotations for prometheus
- upload charm osm-ro from charm-hub from channel latest/beta with architecture=amd64
- deploy application ro from charm-hub with 1 unit with latest/beta using osm-ro
  added resource ro-image
- set annotations for ro
- upload charm osm-vca-integrator from charm-hub from channel latest/beta with architecture=amd64
- deploy application vca from charm-hub with 1 unit with latest/beta using osm-vca-integrator
- set annotations for vca
- upload charm zookeeper-k8s from charm-hub from channel latest/stable with architecture=amd64
- deploy application zookeeper from charm-hub with 1 unit with latest/stable using zookeeper-k8s
  added resource zookeeper-image
- set annotations for zookeeper
- add relation grafana:prometheus - prometheus:prometheus
- add relation kafka:zookeeper - zookeeper:zookeeper
- add relation keystone:db - mariadb:mysql
- add relation lcm:kafka - kafka:kafka
- add relation lcm:mongodb - mongodb:database
- add relation lcm:vca - vca:vca
- add relation ro:ro - lcm:ro
- add relation ro:kafka - kafka:kafka
- add relation ro:mongodb - mongodb:database
- add relation pol:kafka - kafka:kafka
- add relation pol:mongodb - mongodb:database
- add relation mon:mongodb - mongodb:database
- add relation mon:kafka - kafka:kafka
- add relation mon:vca - vca:vca
- add relation nbi:mongodb - mongodb:database
- add relation nbi:kafka - kafka:kafka
- add relation nbi:ingress - ingress:ingress
- add relation nbi:prometheus - prometheus:prometheus
- add relation nbi:keystone - keystone:keystone
- add relation mon:prometheus - prometheus:prometheus
- add relation ng-ui:nbi - nbi:nbi
- add relation ng-ui:ingress - ingress:ingress
- add relation mon:keystone - keystone:keystone
- add relation mariadb:mysql - pol:mysql
- add relation grafana:db - mariadb:mysql
Deploy of bundle completed.
Waiting for deployment to finish...
0 / 15 services active
[progress lines repeat for several minutes, fluctuating between 0 and 14 services active]
14 / 15 services active
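The repeating "N / 15 services active" lines come from the installer polling `juju status` until every application settles; note the log chunk ends still stuck at 14 / 15. A hedged sketch (not the installer's actual code) of how such a progress line could be derived from juju-status-style text on stdin; `progress_line` is a hypothetical helper:

```shell
# Count lines whose workload status column reads "active" and print the
# installer-style progress line. Assumes status text with space-separated
# columns, one application/unit per line.
progress_line() {
  total=$1
  active=$(grep -c ' active ' || true)   # || true guards the zero-match exit code
  printf '%s / %s services active\n' "$active" "$total"
}
# Usage: juju status | progress_line 15
```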