Failed: Console Output

Started by upstream project "buildall-stage_2-merge-v13" build number 318
originally caused by:
 Started by timer
 > git rev-parse --is-inside-work-tree # timeout=10
Setting origin to https://osm.etsi.org/gerrit/osm/LCM.git
 > git config remote.origin.url https://osm.etsi.org/gerrit/osm/LCM.git # timeout=10
Fetching origin...
Fetching upstream changes from origin
 > git --version # timeout=10
 > git config --get remote.origin.url # timeout=10
 > git fetch --tags --progress origin +refs/heads/*:refs/remotes/origin/*
Seen branch in repository origin/ELCM
Seen branch in repository origin/bug-585
Seen branch in repository origin/bug1511
Seen branch in repository origin/feature5837
Seen branch in repository origin/feature7106
Seen branch in repository origin/feature7184
Seen branch in repository origin/feature7928
Seen branch in repository origin/lcm-bug-585
Seen branch in repository origin/master
Seen branch in repository origin/n2vc
Seen branch in repository origin/netslice
Seen branch in repository origin/ng-ro-refactor
Seen branch in repository origin/paas
Seen branch in repository origin/sol006
Seen branch in repository origin/sol006v331
Seen branch in repository origin/v10.0
Seen branch in repository origin/v11.0
Seen branch in repository origin/v12.0
Seen branch in repository origin/v13.0
Seen branch in repository origin/v14.0
Seen branch in repository origin/v15.0
Seen branch in repository origin/v3.1
Seen branch in repository origin/v4.0
Seen branch in repository origin/v5.0
Seen branch in repository origin/v6.0
Seen branch in repository origin/v7.0
Seen branch in repository origin/v8.0
Seen branch in repository origin/v9.0
Seen 28 remote branches
Obtained Jenkinsfile from c96cb8bb9cef9b19c51baefe6a67c22fa8a71830
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] properties
[Pipeline] node
Running on osm-cicd-3 in /home/jenkins/workspace/LCM-stage_2-merge_v13.0
[Pipeline] {
[Pipeline] checkout
No credentials specified
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://osm.etsi.org/gerrit/osm/LCM.git # timeout=10
Fetching without tags
Fetching upstream changes from https://osm.etsi.org/gerrit/osm/LCM.git
 > git --version # timeout=10
 > git fetch --no-tags --force --progress https://osm.etsi.org/gerrit/osm/LCM.git +refs/heads/*:refs/remotes/origin/*
Checking out Revision c96cb8bb9cef9b19c51baefe6a67c22fa8a71830 (v13.0)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f c96cb8bb9cef9b19c51baefe6a67c22fa8a71830
Commit message: "Update requirements-dev to point to v13.0 branch"
 > git rev-list --no-walk c96cb8bb9cef9b19c51baefe6a67c22fa8a71830 # timeout=10
Running in /home/jenkins/workspace/LCM-stage_2-merge_v13.0/devops
[Pipeline] dir
[Pipeline] {
[Pipeline] git
No credentials specified
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://osm.etsi.org/gerrit/osm/devops # timeout=10
Fetching upstream changes from https://osm.etsi.org/gerrit/osm/devops
 > git --version # timeout=10
 > git fetch --tags --force --progress https://osm.etsi.org/gerrit/osm/devops +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/v13.0^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/v13.0^{commit} # timeout=10
Checking out Revision df75b7aed0e184455b9de743f82003a1a2513956 (refs/remotes/origin/v13.0)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f df75b7aed0e184455b9de743f82003a1a2513956
 > git branch -a -v --no-abbrev # timeout=10
 > git branch -D v13.0 # timeout=10
 > git checkout -b v13.0 df75b7aed0e184455b9de743f82003a1a2513956
Commit message: "Update gen-repo.sh to fix download from artifactory"
 > git rev-list --no-walk df75b7aed0e184455b9de743f82003a1a2513956 # timeout=10
[Pipeline] }
[Pipeline] // dir
[Pipeline] load
[Pipeline] { (devops/jenkins/ci-pipelines/ci_stage_2.groovy)
[Pipeline] }
[Pipeline] // load
[Pipeline] echo
do_stage_3= false
[Pipeline] load
[Pipeline] { (devops/jenkins/ci-pipelines/ci_helper.groovy)
[Pipeline] }
[Pipeline] // load
[Pipeline] stage
[Pipeline] { (Prepare)
[Pipeline] sh
[LCM-stage_2-merge_v13.0] Running shell script
+ env
JENKINS_HOME=/var/lib/jenkins
SSH_CLIENT=212.234.161.1 13726 22
USER=jenkins
RUN_CHANGES_DISPLAY_URL=https://osm.etsi.org/jenkins/job/LCM-stage_2-merge/job/v13.0/334/display/redirect?page=changes
GERRIT_PROJECT=osm/LCM
XDG_SESSION_TYPE=tty
SHLVL=0
NODE_LABELS=osm-cicd-3 osm3 stage_2
HUDSON_URL=https://osm.etsi.org/jenkins/
MOTD_SHOWN=pam
OLDPWD=/home/jenkins
HOME=/home/jenkins
BUILD_URL=https://osm.etsi.org/jenkins/job/LCM-stage_2-merge/job/v13.0/334/
HUDSON_COOKIE=8590dcde-ad53-4b03-a059-eae3d7614e3d
JENKINS_SERVER_COOKIE=durable-e6be83cbcca1697ffa83794be6eb7c0c
DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1001/bus
GERRIT_PATCHSET_REVISION=c96cb8bb9cef9b19c51baefe6a67c22fa8a71830
WORKSPACE=/home/jenkins/workspace/LCM-stage_2-merge_v13.0
LOGNAME=jenkins
NODE_NAME=osm-cicd-3
GERRIT_BRANCH=v13.0
_=/usr/bin/java
RUN_ARTIFACTS_DISPLAY_URL=https://osm.etsi.org/jenkins/job/LCM-stage_2-merge/job/v13.0/334/display/redirect?page=artifacts
XDG_SESSION_CLASS=user
EXECUTOR_NUMBER=2
XDG_SESSION_ID=144
RUN_TESTS_DISPLAY_URL=https://osm.etsi.org/jenkins/job/LCM-stage_2-merge/job/v13.0/334/display/redirect?page=tests
BUILD_DISPLAY_NAME=#334
PROJECT_URL_PREFIX=https://osm.etsi.org/gerrit
HUDSON_HOME=/var/lib/jenkins
JOB_BASE_NAME=v13.0
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
BUILD_ID=334
XDG_RUNTIME_DIR=/run/user/1001
BUILD_TAG=jenkins-LCM-stage_2-merge-v13.0-334
JENKINS_URL=https://osm.etsi.org/jenkins/
LANG=C.UTF-8
JOB_URL=https://osm.etsi.org/jenkins/job/LCM-stage_2-merge/job/v13.0/
BUILD_NUMBER=334
SHELL=/bin/bash
RUN_DISPLAY_URL=https://osm.etsi.org/jenkins/job/LCM-stage_2-merge/job/v13.0/334/display/redirect
ARTIFACTORY_SERVER=artifactory-osm
GERRIT_REFSPEC=refs/changes/16/13416/1
HUDSON_SERVER_COOKIE=6d3295a483c3e6d5
JOB_DISPLAY_URL=https://osm.etsi.org/jenkins/job/LCM-stage_2-merge/job/v13.0/display/redirect
JOB_NAME=LCM-stage_2-merge/v13.0
TEST_INSTALL=false
PWD=/home/jenkins/workspace/LCM-stage_2-merge_v13.0
SSH_CONNECTION=212.234.161.1 13726 172.21.249.3 22
BRANCH_NAME=v13.0
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Checkout)
[Pipeline] sh
[LCM-stage_2-merge_v13.0] Running shell script
+ git fetch --tags
[Pipeline] sh
[LCM-stage_2-merge_v13.0] Running shell script
+ git fetch origin refs/changes/16/13416/1
From https://osm.etsi.org/gerrit/osm/LCM
 * branch            refs/changes/16/13416/1 -> FETCH_HEAD
[Pipeline] sh
[LCM-stage_2-merge_v13.0] Running shell script
+ git checkout -f c96cb8bb9cef9b19c51baefe6a67c22fa8a71830
HEAD is now at c96cb8b Update requirements-dev to point to v13.0 branch
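
The two shell steps above fetch the Gerrit change under test (refspec refs/changes/16/13416/1) and pin the working tree to its revision. A minimal sketch for reproducing the same working tree from a fresh clone, using only values shown in this log:

  git clone https://osm.etsi.org/gerrit/osm/LCM && cd LCM
  # fetch the patchset under review, then check out its revision
  git fetch origin refs/changes/16/13416/1
  git checkout -f c96cb8bb9cef9b19c51baefe6a67c22fa8a71830   # equivalently: git checkout FETCH_HEAD
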
[Pipeline] sh
[LCM-stage_2-merge_v13.0] Running shell script
+ sudo git clean -dfx
Removing .cache/
Removing .coverage
Removing .eggs/
Removing .local/
Removing cover/
Removing coverage.xml
Removing nosetests.xml
Removing osm_lcm.egg-info/
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (License Scan)
[Pipeline] echo
skip the scan for merge
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Release Note Check)
[Pipeline] fileExists
[Pipeline] echo
No releasenote check present
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Docker-Build)
[Pipeline] sh
[LCM-stage_2-merge_v13.0] Running shell script
+ docker build --build-arg APT_PROXY=http://172.21.1.1:3142 -t osm/lcm-v13.0 .
Sending build context to Docker daemon  163.2MB

Step 1/6 : FROM ubuntu:20.04
 ---> f78909c2b360
Step 2/6 : ARG APT_PROXY
 ---> Using cache
 ---> 3f6949be98ab
Step 3/6 : RUN if [ ! -z $APT_PROXY ] ; then     echo "Acquire::http::Proxy \"$APT_PROXY\";" > /etc/apt/apt.conf.d/proxy.conf ;    echo "Acquire::https::Proxy \"$APT_PROXY\";" >> /etc/apt/apt.conf.d/proxy.conf ;    fi
 ---> Using cache
 ---> 83e4b76bb37a
Step 4/6 : RUN DEBIAN_FRONTEND=noninteractive apt-get update &&     DEBIAN_FRONTEND=noninteractive apt-get -y install         debhelper         dh-python         git         python3         python3-all         python3-dev         python3-setuptools
 ---> Using cache
 ---> af2ca96d6fd7
Step 5/6 : RUN python3 -m easy_install pip==21.3.1
 ---> Using cache
 ---> 47a2c0de14eb
Step 6/6 : RUN pip install tox==3.24.5
 ---> Using cache
 ---> b4175b532254
Successfully built b4175b532254
Successfully tagged osm/lcm-v13.0:latest
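
The six build steps come straight from the repository's Dockerfile, and Step 3 shows that the apt proxy is only configured when the APT_PROXY build-arg is non-empty, so the argument can simply be omitted outside the CI network. A local rebuild sketch, assuming the LCM checkout root as build context:

  # without the lab's apt cache proxy
  docker build -t osm/lcm-v13.0 .
  # or exactly as the pipeline ran it, if the proxy is reachable
  docker build --build-arg APT_PROXY=http://172.21.1.1:3142 -t osm/lcm-v13.0 .
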
[Pipeline] }
[Pipeline] // stage
[Pipeline] sh
[LCM-stage_2-merge_v13.0] Running shell script
+ id -u
[Pipeline] sh
[LCM-stage_2-merge_v13.0] Running shell script
+ id -g
[Pipeline] withDockerContainer
osm-cicd-3 does not seem to be running inside a container
$ docker run -t -d -u 1001:1001 -u root -w /home/jenkins/workspace/LCM-stage_2-merge_v13.0 -v /home/jenkins/workspace/LCM-stage_2-merge_v13.0:/home/jenkins/workspace/LCM-stage_2-merge_v13.0:rw,z -v /home/jenkins/workspace/LCM-stage_2-merge_v13.0@tmp:/home/jenkins/workspace/LCM-stage_2-merge_v13.0@tmp:rw,z -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** --entrypoint cat osm/lcm-v13.0
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Test)
[Pipeline] sh
[LCM-stage_2-merge_v13.0] Running shell script
+ groupadd -o -g 1001 -r jenkins
[Pipeline] sh
[LCM-stage_2-merge_v13.0] Running shell script
+ pwd
+ useradd -o -u 1001 -d /home/jenkins/workspace/LCM-stage_2-merge_v13.0 -r -g jenkins jenkins
[Pipeline] sh
[LCM-stage_2-merge_v13.0] Running shell script
+ echo #! /bin/sh
[Pipeline] sh
[LCM-stage_2-merge_v13.0] Running shell script
+ chmod 755 /usr/bin/mesg
[Pipeline] sh
[LCM-stage_2-merge_v13.0] Running shell script
+ runuser jenkins -c devops-stages/stage-test.sh
Launching tox
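
Because the container was started as root (note the -u root override in the docker run line above), the stage first recreates a jenkins group and user with the host's UID/GID 1001 and then drops privileges for the test run, presumably so files written to the bind-mounted workspace keep host-compatible ownership. Condensed from the steps above:

  groupadd -o -g 1001 -r jenkins
  useradd -o -u 1001 -d "$WORKSPACE" -r -g jenkins jenkins   # $WORKSPACE is the mounted job dir
  runuser jenkins -c devops-stages/stage-test.sh             # runs tox: flake8, black, safety, pylint
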
✔ OK flake8 in 23.062 seconds
flake8 create: /tmp/.tox/flake8
flake8 installdeps: flake8==5.0.4
flake8 develop-inst: /home/jenkins/workspace/LCM-stage_2-merge_v13.0
flake8 installed: flake8==5.0.4,mccabe==0.7.0,-e git+https://osm.etsi.org/gerrit/osm/LCM.git@c96cb8bb9cef9b19c51baefe6a67c22fa8a71830#egg=osm_lcm,pycodestyle==2.9.1,pyflakes==2.5.0
flake8 run-test-pre: PYTHONHASHSEED='3576291598'
flake8 run-test: commands[0] | flake8 osm_lcm/ setup.py

ERROR: invocation failed (exit code 1), logfile: /tmp/.tox/black/log/black-0.log
================================== log start ===================================
black create: /tmp/.tox/black
black installdeps: black
black installed: black==24.1.1,click==8.1.7,mypy-extensions==1.0.0,packaging==23.2,pathspec==0.12.1,platformdirs==4.1.0,tomli==2.0.1,typing_extensions==4.9.0
black run-test-pre: PYTHONHASHSEED='1218391369'
black run-test: commands[0] | black --check --diff osm_lcm/
--- /home/jenkins/workspace/LCM-stage_2-merge_v13.0/osm_lcm/frontend_pb2.py	2024-01-26 09:50:35.644553+00:00
+++ /home/jenkins/workspace/LCM-stage_2-merge_v13.0/osm_lcm/frontend_pb2.py	2024-01-28 09:51:15.636820+00:00
@@ -237,44 +237,44 @@
 PrimitiveRequest = _reflection.GeneratedProtocolMessageType(
     "PrimitiveRequest",
     (_message.Message,),
     {
         "DESCRIPTOR": _PRIMITIVEREQUEST,
-        "__module__": "osm_lcm.frontend_pb2"
+        "__module__": "osm_lcm.frontend_pb2",
         # @@protoc_insertion_point(class_scope:osm_ee.PrimitiveRequest)
     },
 )
 _sym_db.RegisterMessage(PrimitiveRequest)
 
 PrimitiveReply = _reflection.GeneratedProtocolMessageType(
     "PrimitiveReply",
     (_message.Message,),
     {
         "DESCRIPTOR": _PRIMITIVEREPLY,
-        "__module__": "osm_lcm.frontend_pb2"
+        "__module__": "osm_lcm.frontend_pb2",
         # @@protoc_insertion_point(class_scope:osm_ee.PrimitiveReply)
     },
 )
 _sym_db.RegisterMessage(PrimitiveReply)
 
 SshKeyRequest = _reflection.GeneratedProtocolMessageType(
     "SshKeyRequest",
     (_message.Message,),
     {
         "DESCRIPTOR": _SSHKEYREQUEST,
-        "__module__": "osm_lcm.frontend_pb2"
+        "__module__": "osm_lcm.frontend_pb2",
         # @@protoc_insertion_point(class_scope:osm_ee.SshKeyRequest)
     },
 )
 _sym_db.RegisterMessage(SshKeyRequest)
 
 SshKeyReply = _reflection.GeneratedProtocolMessageType(
     "SshKeyReply",
     (_message.Message,),
     {
         "DESCRIPTOR": _SSHKEYREPLY,
-        "__module__": "osm_lcm.frontend_pb2"
+        "__module__": "osm_lcm.frontend_pb2",
         # @@protoc_insertion_point(class_scope:osm_ee.SshKeyReply)
     },
 )
 _sym_db.RegisterMessage(SshKeyReply)
 
would reformat /home/jenkins/workspace/LCM-stage_2-merge_v13.0/osm_lcm/frontend_pb2.py
--- /home/jenkins/workspace/LCM-stage_2-merge_v13.0/osm_lcm/lcm.py	2024-01-26 09:50:35.644553+00:00
+++ /home/jenkins/workspace/LCM-stage_2-merge_v13.0/osm_lcm/lcm.py	2024-01-28 09:51:16.203866+00:00
@@ -95,15 +95,13 @@
         self.main_config.load_from_env()
         self.logger.critical("Loaded configuration:" + str(self.main_config.to_dict()))
         # TODO: check if lcm_hc.py is necessary
         self.health_check_file = get_health_check_file(self.main_config.to_dict())
         self.loop = loop or asyncio.get_event_loop()
-        self.ns = (
-            self.netslice
-        ) = (
-            self.vim
-        ) = self.wim = self.sdn = self.k8scluster = self.vca = self.k8srepo = None
+        self.ns = self.netslice = self.vim = self.wim = self.sdn = self.k8scluster = (
+            self.vca
+        ) = self.k8srepo = None
 
         # logging
         log_format_simple = (
             "%(asctime)s %(levelname)s %(name)s %(filename)s:%(lineno)s %(message)s"
         )
would reformat /home/jenkins/workspace/LCM-stage_2-merge_v13.0/osm_lcm/lcm.py
--- /home/jenkins/workspace/LCM-stage_2-merge_v13.0/osm_lcm/lcm_utils.py	2024-01-26 09:50:35.644553+00:00
+++ /home/jenkins/workspace/LCM-stage_2-merge_v13.0/osm_lcm/lcm_utils.py	2024-01-28 09:51:16.203747+00:00
@@ -125,23 +125,27 @@
 
     if base_folder.get("pkg-dir"):
         artifact_path = "{}/{}/{}/{}".format(
             base_folder["folder"].split(":")[0] + extension,
             base_folder["pkg-dir"],
-            "charms"
-            if charm_type in ("native_charm", "lxc_proxy_charm", "k8s_proxy_charm")
-            else "helm-charts",
+            (
+                "charms"
+                if charm_type in ("native_charm", "lxc_proxy_charm", "k8s_proxy_charm")
+                else "helm-charts"
+            ),
             charm_name,
         )
 
     else:
         # For SOL004 packages
         artifact_path = "{}/Scripts/{}/{}".format(
             base_folder["folder"].split(":")[0] + extension,
-            "charms"
-            if charm_type in ("native_charm", "lxc_proxy_charm", "k8s_proxy_charm")
-            else "helm-charts",
+            (
+                "charms"
+                if charm_type in ("native_charm", "lxc_proxy_charm", "k8s_proxy_charm")
+                else "helm-charts"
+            ),
             charm_name,
         )
 
     return artifact_path
 
would reformat /home/jenkins/workspace/LCM-stage_2-merge_v13.0/osm_lcm/lcm_utils.py
--- /home/jenkins/workspace/LCM-stage_2-merge_v13.0/osm_lcm/netslice.py	2024-01-26 09:50:35.648553+00:00
+++ /home/jenkins/workspace/LCM-stage_2-merge_v13.0/osm_lcm/netslice.py	2024-01-28 09:51:16.602934+00:00
@@ -504,13 +504,13 @@
                 raise LcmException("Timeout waiting nsi to be ready.")
 
             db_nsir_update["operational-status"] = "running"
             db_nsir_update["detailed-status"] = "done"
             db_nsir_update["config-status"] = "configured"
-            db_nsilcmop_update[
-                "operationState"
-            ] = nsilcmop_operation_state = "COMPLETED"
+            db_nsilcmop_update["operationState"] = nsilcmop_operation_state = (
+                "COMPLETED"
+            )
             db_nsilcmop_update["statusEnteredTime"] = time()
             db_nsilcmop_update["detailed-status"] = "done"
             return
 
         except (LcmException, DbException) as e:
@@ -538,13 +538,13 @@
                     db_nsir_update["config-status"] = "configured"
                 if db_nsilcmop:
                     db_nsilcmop_update["detailed-status"] = "FAILED {}: {}".format(
                         step, exc
                     )
-                    db_nsilcmop_update[
-                        "operationState"
-                    ] = nsilcmop_operation_state = "FAILED"
+                    db_nsilcmop_update["operationState"] = nsilcmop_operation_state = (
+                        "FAILED"
+                    )
                     db_nsilcmop_update["statusEnteredTime"] = time()
             try:
                 if db_nsir:
                     db_nsir_update["_admin.nsilcmop"] = None
                     self.update_db_2("nsis", nsir_id, db_nsir_update)
@@ -736,16 +736,16 @@
             RO_nsir_id = RO_delete_action = None
             for nsir_deployed_RO in get_iterable(nsir_deployed, "RO"):
                 RO_nsir_id = nsir_deployed_RO.get("netslice_scenario_id")
                 try:
                     if not self.ro_config["ng"]:
-                        step = db_nsir_update[
-                            "detailed-status"
-                        ] = "Deleting netslice-vld at RO"
-                        db_nsilcmop_update[
-                            "detailed-status"
-                        ] = "Deleting netslice-vld at RO"
+                        step = db_nsir_update["detailed-status"] = (
+                            "Deleting netslice-vld at RO"
+                        )
+                        db_nsilcmop_update["detailed-status"] = (
+                            "Deleting netslice-vld at RO"
+                        )
                         self.logger.debug(logging_text + step)
                         desc = await RO.delete("ns", RO_nsir_id)
                         RO_delete_action = desc["action_id"]
                         nsir_deployed_RO["vld_delete_action_id"] = RO_delete_action
                         nsir_deployed_RO["vld_status"] = "DELETING"
@@ -781,21 +781,21 @@
                     db_nsir_update["operational-status"] = "failed"
                     db_nsir_update["detailed-status"] = "Deletion errors " + "; ".join(
                         failed_detail
                     )
                     db_nsilcmop_update["detailed-status"] = "; ".join(failed_detail)
-                    db_nsilcmop_update[
-                        "operationState"
-                    ] = nsilcmop_operation_state = "FAILED"
+                    db_nsilcmop_update["operationState"] = nsilcmop_operation_state = (
+                        "FAILED"
+                    )
                     db_nsilcmop_update["statusEnteredTime"] = time()
                 else:
                     db_nsir_update["operational-status"] = "terminating"
                     db_nsir_update["config-status"] = "terminating"
                     db_nsir_update["_admin.nsiState"] = "NOT_INSTANTIATED"
-                    db_nsilcmop_update[
-                        "operationState"
-                    ] = nsilcmop_operation_state = "COMPLETED"
+                    db_nsilcmop_update["operationState"] = nsilcmop_operation_state = (
+                        "COMPLETED"
+                    )
                     db_nsilcmop_update["statusEnteredTime"] = time()
                     if db_nsilcmop["operationParams"].get("autoremove"):
                         autoremove = True
 
             db_nsir_update["detailed-status"] = "done"
@@ -830,13 +830,13 @@
                     db_nsir_update["operational-status"] = "failed"
                 if db_nsilcmop:
                     db_nsilcmop_update["detailed-status"] = "FAILED {}: {}".format(
                         step, exc
                     )
-                    db_nsilcmop_update[
-                        "operationState"
-                    ] = nsilcmop_operation_state = "FAILED"
+                    db_nsilcmop_update["operationState"] = nsilcmop_operation_state = (
+                        "FAILED"
+                    )
                     db_nsilcmop_update["statusEnteredTime"] = time()
             try:
                 if db_nsir:
                     db_nsir_update["_admin.deployed"] = nsir_deployed
                     db_nsir_update["_admin.nsilcmop"] = None
would reformat /home/jenkins/workspace/LCM-stage_2-merge_v13.0/osm_lcm/netslice.py
--- /home/jenkins/workspace/LCM-stage_2-merge_v13.0/osm_lcm/vim_sdn.py	2024-01-26 09:50:35.652554+00:00
+++ /home/jenkins/workspace/LCM-stage_2-merge_v13.0/osm_lcm/vim_sdn.py	2024-01-28 09:51:17.325324+00:00
@@ -1221,13 +1221,13 @@
                         )
                         db_k8scluster_update[
                             "_admin.{}.error_msg".format(task_name)
                         ] = None
                         db_k8scluster_update["_admin.{}.id".format(task_name)] = k8s_id
-                        db_k8scluster_update[
-                            "_admin.{}.created".format(task_name)
-                        ] = uninstall_sw
+                        db_k8scluster_update["_admin.{}.created".format(task_name)] = (
+                            uninstall_sw
+                        )
                         db_k8scluster_update[
                             "_admin.{}.operationalState".format(task_name)
                         ] = "ENABLED"
                 # update database
                 step = "Updating database for " + task_name
@@ -1347,13 +1347,13 @@
                 )
                 cluster_removed = await self.helm3_k8scluster.reset(
                     cluster_uuid=k8s_h3c_id, uninstall_sw=uninstall_sw
                 )
                 db_k8scluster_update["_admin.helm-chart-v3.id"] = None
-                db_k8scluster_update[
-                    "_admin.helm-chart-v3.operationalState"
-                ] = "DISABLED"
+                db_k8scluster_update["_admin.helm-chart-v3.operationalState"] = (
+                    "DISABLED"
+                )
 
             # Try to remove from cluster_inserted to clean old versions
             if k8s_hc_id and cluster_removed:
                 step = "Removing k8scluster='{}' from k8srepos".format(k8scluster_id)
                 self.logger.debug(logging_text + step)
would reformat /home/jenkins/workspace/LCM-stage_2-merge_v13.0/osm_lcm/vim_sdn.py
--- /home/jenkins/workspace/LCM-stage_2-merge_v13.0/osm_lcm/ns.py	2024-01-26 09:50:35.648553+00:00
+++ /home/jenkins/workspace/LCM-stage_2-merge_v13.0/osm_lcm/ns.py	2024-01-28 09:51:22.348622+00:00
@@ -671,13 +671,13 @@
                     if "status" not in vdur:
                         vdur["status"] = "ERROR"
                         vnfr_update["vdur.{}.status".format(vdu_index)] = "ERROR"
                         if error_text:
                             vdur["status-detailed"] = str(error_text)
-                            vnfr_update[
-                                "vdur.{}.status-detailed".format(vdu_index)
-                            ] = "ERROR"
+                            vnfr_update["vdur.{}.status-detailed".format(vdu_index)] = (
+                                "ERROR"
+                            )
                 self.update_db_2("vnfrs", db_vnfr["_id"], vnfr_update)
         except DbException as e:
             self.logger.error("Cannot update vnf. {}".format(e))
 
     def ns_update_vnfr(self, db_vnfrs, nsr_desc_RO):
@@ -875,13 +875,13 @@
                                 target_vld["vim_info"],
                                 (other_target_vim, param.replace("-", "_")),
                                 vim_net,
                             )
                     else:  # isinstance str
-                        target_vld["vim_info"][target_vim][
-                            param.replace("-", "_")
-                        ] = vld_params[param]
+                        target_vld["vim_info"][target_vim][param.replace("-", "_")] = (
+                            vld_params[param]
+                        )
             if vld_params.get("common_id"):
                 target_vld["common_id"] = vld_params.get("common_id")
 
         # modify target["ns"]["vld"] with instantiation parameters to override vnf vim-account
         def update_ns_vld_target(target, ns_params):
@@ -1141,13 +1141,13 @@
                     if "cidr" in ip_profile_source_data:
                         ip_profile_dest_data["subnet-address"] = ip_profile_source_data[
                             "cidr"
                         ]
                     if "gateway-ip" in ip_profile_source_data:
-                        ip_profile_dest_data[
-                            "gateway-address"
-                        ] = ip_profile_source_data["gateway-ip"]
+                        ip_profile_dest_data["gateway-address"] = (
+                            ip_profile_source_data["gateway-ip"]
+                        )
                     if "dhcp-enabled" in ip_profile_source_data:
                         ip_profile_dest_data["dhcp-params"] = {
                             "enabled": ip_profile_source_data["dhcp-enabled"]
                         }
 
@@ -1829,23 +1829,27 @@
             # Get artifact path
             if base_folder["pkg-dir"]:
                 artifact_path = "{}/{}/{}/{}".format(
                     base_folder["folder"],
                     base_folder["pkg-dir"],
-                    "charms"
-                    if vca_type
-                    in ("native_charm", "lxc_proxy_charm", "k8s_proxy_charm")
-                    else "helm-charts",
+                    (
+                        "charms"
+                        if vca_type
+                        in ("native_charm", "lxc_proxy_charm", "k8s_proxy_charm")
+                        else "helm-charts"
+                    ),
                     vca_name,
                 )
             else:
                 artifact_path = "{}/Scripts/{}/{}/".format(
                     base_folder["folder"],
-                    "charms"
-                    if vca_type
-                    in ("native_charm", "lxc_proxy_charm", "k8s_proxy_charm")
-                    else "helm-charts",
+                    (
+                        "charms"
+                        if vca_type
+                        in ("native_charm", "lxc_proxy_charm", "k8s_proxy_charm")
+                        else "helm-charts"
+                    ),
                     vca_name,
                 )
 
             self.logger.debug("Artifact path > {}".format(artifact_path))
 
@@ -2253,13 +2257,13 @@
         :param other_update: Other required changes at database if provided, will be cleared
         :return:
         """
         try:
             db_dict = other_update or {}
-            db_dict[
-                "_admin.nslcmop"
-            ] = current_operation_id  # for backward compatibility
+            db_dict["_admin.nslcmop"] = (
+                current_operation_id  # for backward compatibility
+            )
             db_dict["_admin.current-operation"] = current_operation_id
             db_dict["_admin.operation-type"] = (
                 current_operation if current_operation != "IDLE" else None
             )
             db_dict["currentOperation"] = current_operation
@@ -2337,13 +2341,13 @@
             db_path = "configurationStatus.{}.".format(vca_index)
             db_dict = other_update or {}
             if status:
                 db_dict[db_path + "status"] = status
             if element_under_configuration:
-                db_dict[
-                    db_path + "elementUnderConfiguration"
-                ] = element_under_configuration
+                db_dict[db_path + "elementUnderConfiguration"] = (
+                    element_under_configuration
+                )
             if element_type:
                 db_dict[db_path + "elementType"] = element_type
             self.update_db_2("nsrs", nsr_id, db_dict)
         except DbException as e:
             self.logger.warn(
@@ -3380,13 +3384,13 @@
                             # Mgmt service found, Obtain service ip
                             ip = service.get("external_ip", service.get("cluster_ip"))
                             if isinstance(ip, list) and len(ip) == 1:
                                 ip = ip[0]
 
-                            vnfr_update_dict[
-                                "kdur.{}.ip-address".format(kdu_index)
-                            ] = ip
+                            vnfr_update_dict["kdur.{}.ip-address".format(kdu_index)] = (
+                                ip
+                            )
 
                             # Check if must update also mgmt ip at the vnf
                             service_external_cp = mgmt_service.get(
                                 "external-connection-point-ref"
                             )
@@ -3516,13 +3520,13 @@
                             k8s_credentials, reuse_cluster_uuid=cluster_id
                         )
                         db_k8scluster_update = {}
                         db_k8scluster_update["_admin.helm-chart-v3.error_msg"] = None
                         db_k8scluster_update["_admin.helm-chart-v3.id"] = k8s_id
-                        db_k8scluster_update[
-                            "_admin.helm-chart-v3.created"
-                        ] = uninstall_sw
+                        db_k8scluster_update["_admin.helm-chart-v3.created"] = (
+                            uninstall_sw
+                        )
                         db_k8scluster_update[
                             "_admin.helm-chart-v3.operationalState"
                         ] = "ENABLED"
                         self.update_db_2(
                             "k8sclusters", cluster_id, db_k8scluster_update
@@ -3891,14 +3895,13 @@
                 nsr_id,
                 nslcmop_id,
                 "instantiate_N2VC-{}".format(vca_index),
                 task_n2vc,
             )
-            task_instantiation_info[
-                task_n2vc
-            ] = self.task_name_deploy_vca + " {}.{}".format(
-                member_vnf_index or "", vdu_id or ""
+            task_instantiation_info[task_n2vc] = (
+                self.task_name_deploy_vca
+                + " {}.{}".format(member_vnf_index or "", vdu_id or "")
             )
 
     @staticmethod
     def _create_nslcmop(nsr_id, operation, params):
         """
@@ -4281,13 +4284,13 @@
                 self.logger.debug(logging_text + stage[2])
                 self.update_db_2("nsrs", nsr_id, db_nsr_update)
                 self._write_op_status(nslcmop_id, stage)
                 desc = await self.RO.delete("ns", ro_nsr_id)
                 ro_delete_action = desc["action_id"]
-                db_nsr_update[
-                    "_admin.deployed.RO.nsr_delete_action_id"
-                ] = ro_delete_action
+                db_nsr_update["_admin.deployed.RO.nsr_delete_action_id"] = (
+                    ro_delete_action
+                )
                 db_nsr_update["_admin.deployed.RO.nsr_id"] = None
                 db_nsr_update["_admin.deployed.RO.nsr_status"] = "DELETED"
             if ro_delete_action:
                 # wait until NS is deleted from VIM
                 stage[2] = "Waiting ns deleted from VIM."
@@ -4404,14 +4407,14 @@
             for index, vnf_deployed in enumerate(nsr_deployed["RO"]["vnfd"]):
                 if not vnf_deployed or not vnf_deployed["id"]:
                     continue
                 try:
                     ro_vnfd_id = vnf_deployed["id"]
-                    stage[
-                        2
-                    ] = "Deleting member_vnf_index={} ro_vnfd_id={} from RO.".format(
-                        vnf_deployed["member-vnf-index"], ro_vnfd_id
+                    stage[2] = (
+                        "Deleting member_vnf_index={} ro_vnfd_id={} from RO.".format(
+                            vnf_deployed["member-vnf-index"], ro_vnfd_id
+                        )
                     )
                     db_nsr_update["detailed-status"] = " ".join(stage)
                     self.update_db_2("nsrs", nsr_id, db_nsr_update)
                     self._write_op_status(nslcmop_id, stage)
                     await self.RO.delete("vnfd", ro_vnfd_id)
@@ -4421,13 +4424,13 @@
                     db_nsr_update["_admin.deployed.RO.vnfd.{}.id".format(index)] = None
                 except Exception as e:
                     if (
                         isinstance(e, ROclient.ROClientException) and e.http_code == 404
                     ):  # not found
-                        db_nsr_update[
-                            "_admin.deployed.RO.vnfd.{}.id".format(index)
-                        ] = None
+                        db_nsr_update["_admin.deployed.RO.vnfd.{}.id".format(index)] = (
+                            None
+                        )
                         self.logger.debug(
                             logging_text
                             + "ro_vnfd_id={} already deleted ".format(ro_vnfd_id)
                         )
                     elif (
@@ -4515,13 +4518,13 @@
             for vnfr in db_vnfrs_list:
                 vnfd_id = vnfr["vnfd-id"]
                 if vnfd_id not in db_vnfds_from_id:
                     vnfd = self.db.get_one("vnfds", {"_id": vnfd_id})
                     db_vnfds_from_id[vnfd_id] = vnfd
-                db_vnfds_from_member_index[
-                    vnfr["member-vnf-index-ref"]
-                ] = db_vnfds_from_id[vnfd_id]
+                db_vnfds_from_member_index[vnfr["member-vnf-index-ref"]] = (
+                    db_vnfds_from_id[vnfd_id]
+                )
 
             # Destroy individual execution environments when there are terminating primitives.
             # Rest of EE will be deleted at once
             # TODO - check before calling _destroy_N2VC
             # if not operation_params.get("skip_terminate_primitives"):#
@@ -4533,13 +4536,15 @@
 
             for vca_index, vca in enumerate(get_iterable(nsr_deployed, "VCA")):
                 config_descriptor = None
                 vca_member_vnf_index = vca.get("member-vnf-index")
                 vca_id = self.get_vca_id(
-                    db_vnfrs_dict.get(vca_member_vnf_index)
-                    if vca_member_vnf_index
-                    else None,
+                    (
+                        db_vnfrs_dict.get(vca_member_vnf_index)
+                        if vca_member_vnf_index
+                        else None
+                    ),
                     db_nsr,
                 )
                 if not vca or not vca.get("ee_id"):
                     continue
                 if not vca.get("member-vnf-index"):
@@ -4644,13 +4649,13 @@
                         + "Unknown k8s deployment type {}".format(
                             kdu.get("k8scluster-type")
                         )
                     )
                     continue
-                tasks_dict_info[
-                    task_delete_kdu_instance
-                ] = "Terminating KDU '{}'".format(kdu.get("kdu-name"))
+                tasks_dict_info[task_delete_kdu_instance] = (
+                    "Terminating KDU '{}'".format(kdu.get("kdu-name"))
+                )
 
             # remove from RO
             stage[1] = "Deleting ns from VIM."
             if self.ro_config.ng:
                 task_delete_ro = asyncio.ensure_future(
@@ -5407,15 +5412,13 @@
                 logging_text + "Exit Exception {} {}".format(type(e).__name__, e),
                 exc_info=True,
             )
         finally:
             if exc:
-                db_nslcmop_update[
-                    "detailed-status"
-                ] = (
-                    detailed_status
-                ) = error_description_nslcmop = "FAILED {}: {}".format(step, exc)
+                db_nslcmop_update["detailed-status"] = detailed_status = (
+                    error_description_nslcmop
+                ) = "FAILED {}: {}".format(step, exc)
                 nslcmop_operation_state = "FAILED"
             if db_nsr:
                 self._write_ns_status(
                     nsr_id=nsr_id,
                     ns_state=db_nsr[
@@ -6238,15 +6241,13 @@
                 logging_text + "Exit Exception {} {}".format(type(e).__name__, e),
                 exc_info=True,
             )
         finally:
             if exc:
-                db_nslcmop_update[
-                    "detailed-status"
-                ] = (
-                    detailed_status
-                ) = error_description_nslcmop = "FAILED {}: {}".format(step, exc)
+                db_nslcmop_update["detailed-status"] = detailed_status = (
+                    error_description_nslcmop
+                ) = "FAILED {}: {}".format(step, exc)
                 nslcmop_operation_state = "FAILED"
                 db_nsr_update["operational-status"] = old_operational_status
             if db_nsr:
                 self._write_ns_status(
                     nsr_id=nsr_id,
@@ -6731,14 +6732,14 @@
                         and scaling_type == "SCALE_OUT"
                     ):
                         vnf_config_primitive = scaling_config_action[
                             "vnf-config-primitive-name-ref"
                         ]
-                        step = db_nslcmop_update[
-                            "detailed-status"
-                        ] = "executing pre-scale scaling-config-action '{}'".format(
-                            vnf_config_primitive
+                        step = db_nslcmop_update["detailed-status"] = (
+                            "executing pre-scale scaling-config-action '{}'".format(
+                                vnf_config_primitive
+                            )
                         )
 
                         # look for primitive
                         for config_primitive in (
                             get_configuration(db_vnfd, db_vnfd["id"]) or {}
@@ -6846,33 +6847,33 @@
             # PRE-SCALE END
 
             db_nsr_update[
                 "_admin.scaling-group.{}.nb-scale-op".format(admin_scale_index)
             ] = nb_scale_op
-            db_nsr_update[
-                "_admin.scaling-group.{}.time".format(admin_scale_index)
-            ] = time()
+            db_nsr_update["_admin.scaling-group.{}.time".format(admin_scale_index)] = (
+                time()
+            )
 
             # SCALE-IN VCA - BEGIN
             if vca_scaling_info:
-                step = db_nslcmop_update[
-                    "detailed-status"
-                ] = "Deleting the execution environments"
+                step = db_nslcmop_update["detailed-status"] = (
+                    "Deleting the execution environments"
+                )
                 scale_process = "VCA"
                 for vca_info in vca_scaling_info:
                     if vca_info["type"] == "delete" and not vca_info.get("osm_kdu_id"):
                         member_vnf_index = str(vca_info["member-vnf-index"])
                         self.logger.debug(
                             logging_text + "vdu info: {}".format(vca_info)
                         )
                         if vca_info.get("osm_vdu_id"):
                             vdu_id = vca_info["osm_vdu_id"]
                             vdu_index = int(vca_info["vdu_index"])
-                            stage[
-                                1
-                            ] = "Scaling member_vnf_index={}, vdu_id={}, vdu_index={} ".format(
-                                member_vnf_index, vdu_id, vdu_index
+                            stage[1] = (
+                                "Scaling member_vnf_index={}, vdu_id={}, vdu_index={} ".format(
+                                    member_vnf_index, vdu_id, vdu_index
+                                )
                             )
                         stage[2] = step = "Scaling in VCA"
                         self._write_op_status(op_id=nslcmop_id, stage=stage)
                         vca_update = db_nsr["_admin"]["deployed"]["VCA"]
                         config_update = db_nsr["configurationStatus"]
@@ -6980,13 +6981,13 @@
             if db_nsr_update:
                 self.update_db_2("nsrs", nsr_id, db_nsr_update)
 
             # SCALE-UP VCA - BEGIN
             if vca_scaling_info:
-                step = db_nslcmop_update[
-                    "detailed-status"
-                ] = "Creating new execution environments"
+                step = db_nslcmop_update["detailed-status"] = (
+                    "Creating new execution environments"
+                )
                 scale_process = "VCA"
                 for vca_info in vca_scaling_info:
                     if vca_info["type"] == "create" and not vca_info.get("osm_kdu_id"):
                         member_vnf_index = str(vca_info["member-vnf-index"])
                         self.logger.debug(
@@ -7044,14 +7045,14 @@
                                 db_vnfr, vdu_id, vdu_count_index=vdu_index
                             )
                             if descriptor_config:
                                 vdu_name = None
                                 kdu_name = None
-                                stage[
-                                    1
-                                ] = "Scaling member_vnf_index={}, vdu_id={}, vdu_index={} ".format(
-                                    member_vnf_index, vdu_id, vdu_index
+                                stage[1] = (
+                                    "Scaling member_vnf_index={}, vdu_id={}, vdu_index={} ".format(
+                                        member_vnf_index, vdu_id, vdu_index
+                                    )
                                 )
                                 stage[2] = step = "Scaling out VCA"
                                 self._write_op_status(op_id=nslcmop_id, stage=stage)
                                 self._deploy_n2vc(
                                     logging_text=logging_text
@@ -7093,14 +7094,14 @@
                         and scaling_type == "SCALE_OUT"
                     ):
                         vnf_config_primitive = scaling_config_action[
                             "vnf-config-primitive-name-ref"
                         ]
-                        step = db_nslcmop_update[
-                            "detailed-status"
-                        ] = "executing post-scale scaling-config-action '{}'".format(
-                            vnf_config_primitive
+                        step = db_nslcmop_update["detailed-status"] = (
+                            "executing post-scale scaling-config-action '{}'".format(
+                                vnf_config_primitive
+                            )
                         )
 
                         vnfr_params = {"VDU_SCALE_INFO": scaling_info}
                         if db_vnfr.get("additionalParamsForVnf"):
                             vnfr_params.update(db_vnfr["additionalParamsForVnf"])
@@ -7206,13 +7207,13 @@
                             raise LcmException(result_detail)
                         db_nsr_update["config-status"] = old_config_status
                         scale_process = None
             # POST-SCALE END
 
-            db_nsr_update[
-                "detailed-status"
-            ] = ""  # "scaled {} {}".format(scaling_group, scaling_type)
+            db_nsr_update["detailed-status"] = (
+                ""  # "scaled {} {}".format(scaling_group, scaling_type)
+            )
             db_nsr_update["operational-status"] = (
                 "running"
                 if old_operational_status == "failed"
                 else old_operational_status
             )
@@ -7254,27 +7255,27 @@
                     stage,
                     nslcmop_id,
                     nsr_id=nsr_id,
                 )
             if exc:
-                db_nslcmop_update[
-                    "detailed-status"
-                ] = error_description_nslcmop = "FAILED {}: {}".format(step, exc)
+                db_nslcmop_update["detailed-status"] = error_description_nslcmop = (
+                    "FAILED {}: {}".format(step, exc)
+                )
                 nslcmop_operation_state = "FAILED"
                 if db_nsr:
                     db_nsr_update["operational-status"] = old_operational_status
                     db_nsr_update["config-status"] = old_config_status
                     db_nsr_update["detailed-status"] = ""
                     if scale_process:
                         if "VCA" in scale_process:
                             db_nsr_update["config-status"] = "failed"
                         if "RO" in scale_process:
                             db_nsr_update["operational-status"] = "failed"
-                        db_nsr_update[
-                            "detailed-status"
-                        ] = "FAILED scaling nslcmop={} {}: {}".format(
-                            nslcmop_id, step, exc
+                        db_nsr_update["detailed-status"] = (
+                            "FAILED scaling nslcmop={} {}: {}".format(
+                                nslcmop_id, step, exc
+                            )
                         )
             else:
                 error_description_nslcmop = None
                 nslcmop_operation_state = "COMPLETED"
                 db_nslcmop_update["detailed-status"] = "Done"
@@ -7947,20 +7948,20 @@
                     stage,
                     nslcmop_id,
                     nsr_id=nsr_id,
                 )
             if exc:
-                db_nslcmop_update[
-                    "detailed-status"
-                ] = error_description_nslcmop = "FAILED {}: {}".format(step, exc)
+                db_nslcmop_update["detailed-status"] = error_description_nslcmop = (
+                    "FAILED {}: {}".format(step, exc)
+                )
                 nslcmop_operation_state = "FAILED"
                 if db_nsr:
                     db_nsr_update["operational-status"] = old_operational_status
                     db_nsr_update["config-status"] = old_config_status
-                    db_nsr_update[
-                        "detailed-status"
-                    ] = "FAILED healing nslcmop={} {}: {}".format(nslcmop_id, step, exc)
+                    db_nsr_update["detailed-status"] = (
+                        "FAILED healing nslcmop={} {}: {}".format(nslcmop_id, step, exc)
+                    )
                     for task, task_name in tasks_dict_info.items():
                         if not task.done() or task.cancelled() or task.exception():
                             if task_name.startswith(self.task_name_deploy_vca):
                                 # A N2VC task is pending
                                 db_nsr_update["config-status"] = "failed"
@@ -8248,14 +8249,13 @@
                 nsr_id,
                 nslcmop_id,
                 "instantiate_N2VC-{}".format(vca_index),
                 task_n2vc,
             )
-            task_instantiation_info[
-                task_n2vc
-            ] = self.task_name_deploy_vca + " {}.{}".format(
-                member_vnf_index or "", vdu_id or ""
+            task_instantiation_info[task_n2vc] = (
+                self.task_name_deploy_vca
+                + " {}.{}".format(member_vnf_index or "", vdu_id or "")
             )
 
     async def heal_N2VC(
         self,
         logging_text,
@@ -8320,23 +8320,27 @@
             # Get artifact path
             if base_folder["pkg-dir"]:
                 artifact_path = "{}/{}/{}/{}".format(
                     base_folder["folder"],
                     base_folder["pkg-dir"],
-                    "charms"
-                    if vca_type
-                    in ("native_charm", "lxc_proxy_charm", "k8s_proxy_charm")
-                    else "helm-charts",
+                    (
+                        "charms"
+                        if vca_type
+                        in ("native_charm", "lxc_proxy_charm", "k8s_proxy_charm")
+                        else "helm-charts"
+                    ),
                     vca_name,
                 )
             else:
                 artifact_path = "{}/Scripts/{}/{}/".format(
                     base_folder["folder"],
-                    "charms"
-                    if vca_type
-                    in ("native_charm", "lxc_proxy_charm", "k8s_proxy_charm")
-                    else "helm-charts",
+                    (
+                        "charms"
+                        if vca_type
+                        in ("native_charm", "lxc_proxy_charm", "k8s_proxy_charm")
+                        else "helm-charts"
+                    ),
                     vca_name,
                 )
 
             self.logger.debug("Artifact path > {}".format(artifact_path))
 
would reformat /home/jenkins/workspace/LCM-stage_2-merge_v13.0/osm_lcm/ns.py

Oh no! 💥 💔 💥
6 files would be reformatted, 34 files would be left unchanged.
ERROR: InvocationError for command /tmp/.tox/black/bin/black --check --diff osm_lcm/ (exited with code 1)

=================================== log end ====================================
✖ FAIL black in 23.802 seconds
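
The failure is formatting drift, not a code change: the tox black env installs black unpinned (installdeps: black, resolving here to 24.1.1), and the 2024 stable style shipped in black 24 prefers parenthesizing the right-hand side of long assignments over splitting the subscripted target, which is the pattern in every hunk above. A local reproduce-and-fix sketch from the same checkout:

  tox -e black        # re-runs: black --check --diff osm_lcm/ (reports only, exits 1 on diffs)
  black osm_lcm/      # the same tool without --check rewrites the six files in place
  # pinning the formatter (e.g. deps = black<24 in tox.ini) would instead freeze the old style
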
✔ OK safety in 49.296 seconds
safety create: /tmp/.tox/safety
safety installdeps: -r/home/jenkins/workspace/LCM-stage_2-merge_v13.0/requirements.txt, safety
safety develop-inst: /home/jenkins/workspace/LCM-stage_2-merge_v13.0
safety installed: aiohttp==3.7.4.post0,async-timeout==3.0.1,attrs==22.1.0,boltons==21.0.0,certifi==2023.11.17,chardet==4.0.0,charset-normalizer==3.3.2,checksumdir==1.2.0,click==8.1.7,config-man==0.0.4,dparse==0.6.3,face==22.0.0,glom==22.1.0,grpcio==1.50.0,grpcio-tools==1.48.1,grpclib==0.4.3,h2==4.1.0,hpack==4.0.0,hyperframe==6.0.1,idna==3.4,Jinja2==3.1.2,MarkupSafe==2.1.1,multidict==6.0.2,-e git+https://osm.etsi.org/gerrit/osm/LCM.git@c96cb8bb9cef9b19c51baefe6a67c22fa8a71830#egg=osm_lcm,packaging==21.3,protobuf==3.20.3,pydantic==1.10.2,pyparsing==3.1.1,PyYAML==5.4.1,requests==2.31.0,ruamel.yaml==0.18.5,ruamel.yaml.clib==0.2.8,safety==2.3.5,six==1.16.0,tomli==2.0.1,typing_extensions==4.4.0,urllib3==2.1.0,yarl==1.8.1
safety run-test-pre: PYTHONHASHSEED='2538429602'
safety run-test: commands[0] | - safety check --full-report
+==============================================================================+

                               /$$$$$$            /$$
                              /$$__  $$          | $$
           /$$$$$$$  /$$$$$$ | $$  \__//$$$$$$  /$$$$$$   /$$   /$$
          /$$_____/ |____  $$| $$$$   /$$__  $$|_  $$_/  | $$  | $$
         |  $$$$$$   /$$$$$$$| $$_/  | $$$$$$$$  | $$    | $$  | $$
          \____  $$ /$$__  $$| $$    | $$_____/  | $$ /$$| $$  | $$
          /$$$$$$$/|  $$$$$$$| $$    |  $$$$$$$  |  $$$$/|  $$$$$$$
         |_______/  \_______/|__/     \_______/   \___/   \____  $$
                                                          /$$  | $$
                                                         |  $$$$$$/
  by pyup.io                                              \______/

+==============================================================================+

 REPORT 

  Safety is using PyUp's free open-source vulnerability database. This
data is 30 days old and limited. 
  For real-time enhanced vulnerability data, fix recommendations, severity
reporting, cybersecurity support, team and project policy management and more
sign up at https://pyup.io or email sales@pyup.io

  Safety v2.3.5 is scanning for Vulnerabilities...
  Scanning dependencies in your environment:

  -> /home/jenkins/workspace/LCM-stage_2-merge_v13.0
  -> /tmp/.tox/safety/lib/python3.8/site-packages

  Using non-commercial database
  Found and scanned 41 packages
  Timestamp 2024-01-28 09:51:48
  11 vulnerabilities found
  0 vulnerabilities ignored

+==============================================================================+
 VULNERABILITIES FOUND 
+==============================================================================+

-> Vulnerability found in aiohttp version 3.7.4.post0
   Vulnerability ID: 62583
   Affected spec: <3.9.0
   ADVISORY: Aiohttp 3.9.0 includes a fix for CVE-2023-49082: Improper
   validation makes it possible for an attacker to modify the HTTP request (e.g.
   insert a new header) or even create a new HTTP request if the attacker
   controls the HTTP method. The vulnerability occurs only if the attacker can
   control the HTTP method (GET, POST etc.) of the request. If the attacker can
   control the HTTP version of the request it will be able to modify the request
   (request smuggling).https://github.com/aio-
   libs/aiohttp/security/advisories/GHSA-qvrw-v9rv-5rjx
   CVE-2023-49082
   For more information, please visit
   https://data.safetycli.com/v/62583/f17


-> Vulnerability found in aiohttp version 3.7.4.post0
   Vulnerability ID: 62582
   Affected spec: <3.9.0
   ADVISORY: Aiohttp 3.9.0 includes a fix for CVE-2023-49081: Improper
   validation made it possible for an attacker to modify the HTTP request (e.g.
   to insert a new header) or create a new HTTP request if the attacker controls
   the HTTP version. The vulnerability only occurs if the attacker can control
   the HTTP version of the request.https://github.com/aio-
   libs/aiohttp/security/advisories/GHSA-q3qx-c6g2-7pw2
   CVE-2023-49081
   For more information, please visit
   https://data.safetycli.com/v/62582/f17


-> Vulnerability found in aiohttp version 3.7.4.post0
   Vulnerability ID: 59725
   Affected spec: <=3.8.4
   ADVISORY: Aiohttp 3.8.5 includes a fix for CVE-2023-37276: Sending a
   crafted HTTP request will cause the server to misinterpret one of the HTTP
   header values leading to HTTP request smuggling.https://github.com/aio-libs/a
   iohttp/commit/9337fb3f2ab2b5f38d7e98a194bde6f7e3d16c40https://github.com/aio-
   libs/aiohttp/security/advisories/GHSA-45c4-8wx5-qw6w
   CVE-2023-37276
   For more information, please visit
   https://data.safetycli.com/v/59725/f17


-> Vulnerability found in aiohttp version 3.7.4.post0
   Vulnerability ID: 42692
   Affected spec: <3.8.0
   ADVISORY: Aiohttp 3.8.0 adds validation of HTTP header keys and
   values to prevent header injection.https://github.com/aio-
   libs/aiohttp/issues/4818
   PVE-2021-42692
   For more information, please visit
   https://data.safetycli.com/v/42692/f17


-> Vulnerability found in aiohttp version 3.7.4.post0
   Vulnerability ID: 62327
   Affected spec: <3.8.0
   ADVISORY: Aiohttp 3.8.0 includes a fix for CVE-2023-47641: Affected
   versions of aiohttp have a security vulnerability regarding the inconsistent
   interpretation of the http protocol. HTTP/1.1 is a persistent protocol, if
   both Content-Length(CL) and Transfer-Encoding(TE) header values are present
   it can lead to incorrect interpretation of two entities that parse the HTTP
   and we can poison other sockets with this incorrect interpretation. A
   possible Proof-of-Concept (POC) would be a configuration with a reverse
   proxy(frontend) that accepts both CL and TE headers and aiohttp as backend.
   As aiohttp parses anything with chunked, we can pass a chunked123 as TE, the
   frontend entity will ignore this header and will parse Content-Length. The
   impact of this vulnerability is that it is possible to bypass any proxy rule,
   poisoning sockets to other users like passing Authentication Headers, also if
   it is present an Open Redirect an attacker could combine it to redirect
   random users to another website and log the request.https://github.com/aio-
   libs/aiohttp/security/advisories/GHSA-xx9p-xxvh-7g8j
   CVE-2023-47641
   For more information, please visit
   https://data.safetycli.com/v/62327/f17


-> Vulnerability found in aiohttp version 3.7.4.post0
   Vulnerability ID: 62326
   Affected spec: <3.8.6
   ADVISORY: Aiohttp 3.8.6 includes a fix for CVE-2023-47627: The HTTP
   parser in AIOHTTP has numerous problems with header parsing, which could lead
   to request smuggling. This parser is only used when AIOHTTP_NO_EXTENSIONS is
   enabled (or not using a prebuilt wheel).https://github.com/aio-
   libs/aiohttp/security/advisories/GHSA-gfw2-4jvh-wgfg
   CVE-2023-47627
   For more information, please visit
   https://data.safetycli.com/v/62326/f17


-> Vulnerability found in grpcio version 1.50.0
   Vulnerability ID: 61191
   Affected spec: <1.53.2
   ADVISORY: Grpcio 1.53.2, 1.54.3, 1.55.3 and 1.56.2 include a fix for
   CVE-2023-4785: lack of error handling in the TCP server in Google's gRPC,
   starting with version 1.23 on POSIX-compatible platforms (e.g. Linux),
   allows an attacker to cause a denial of service by initiating a
   significant number of connections to the server. Note that gRPC C++,
   Python, and Ruby are affected, but gRPC Java and Go are NOT affected.
   https://github.com/grpc/grpc/pull/33656
   CVE-2023-4785
   For more information, please visit
   https://data.safetycli.com/v/61191/f17
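
The failure mode here is file-descriptor exhaustion on POSIX servers.
Independent of the grpcio upgrade, a deployment can at least make its limit
explicit and observable; a sketch using the standard resource module (POSIX
only, matching the platforms the advisory names):

    import resource

    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    print(f"open-file limit: soft={soft} hard={hard}")
    # More headroom delays exhaustion under a connection flood, but only the
    # grpcio fix makes the server survive accept() failures gracefully.
    resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))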


-> Vulnerability found in grpcio version 1.50.0
   Vulnerability ID: 59869
   Affected spec: <1.53.0
   ADVISORY: Grpcio 1.53.0 includes a fix for a connection-confusion
   vulnerability. When the gRPC HTTP/2 stack raised a header-size-exceeded
   error, it skipped parsing the rest of the HPACK frame. This caused any
   HPACK table mutations to also be skipped, resulting in a
   desynchronization of the HPACK tables between sender and receiver. If
   leveraged, say, between a proxy and a backend, this could lead to requests
   from the proxy being interpreted as containing headers from different
   proxy clients, an information leak that can be used for privilege
   escalation or data exfiltration.
   https://github.com/advisories/GHSA-cfgp-2977-2fmm
   CVE-2023-32731
   For more information, please visit
   https://data.safetycli.com/v/59869/f17
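
The desynchronization is easy to reproduce with the hpack package (which this
job happens to install): if a receiver misses a header block, later blocks
that index the sender's dynamic table no longer decode. A sketch, assuming
the encoder indexes the repeated header (its default behaviour):

    from hpack import Encoder, Decoder
    from hpack.exceptions import HPACKError

    enc, dec = Encoder(), Decoder()
    first = enc.encode([("x-user", "alice")])   # inserts the pair into the
    second = enc.encode([("x-user", "alice")])  # dynamic table; this block is
                                                # typically just a table index
    try:
        dec.decode(second)  # decoder never saw `first`, so its table is empty
    except HPACKError as exc:
        print("desynchronized HPACK tables:", exc)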


-> Vulnerability found in grpcio version 1.50.0
   Vulnerability ID: 59867
   Affected spec: <1.53.0
   ADVISORY: Grpcio 1.53.0 includes a fix for a Reachable Assertion
   vulnerability. https://github.com/advisories/GHSA-6628-q6j9-w8vg
   CVE-2023-1428
   For more information, please visit
   https://data.safetycli.com/v/59867/f17


-> Vulnerability found in grpcio version 1.50.0
   Vulnerability ID: 59868
   Affected spec: <1.53.0
   ADVISORY: Grpcio 1.53.0 includes a fix for a connection-termination
   vulnerability. Prior versions contain a flaw whereby a client can cause
   termination of the connection between an HTTP/2 proxy and a gRPC server:
   a base64 encoding error for "-bin" suffixed headers results in a
   disconnection by the gRPC server, although such headers are typically
   allowed by HTTP/2 proxies.
   https://github.com/advisories/GHSA-9hxf-ppjv-w6rq
   CVE-2023-32732
   For more information, please visit
   https://data.safetycli.com/v/59868/f17
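
For context on the "-bin" convention the advisory refers to, an illustrative
client-side snippet (the channel target is hypothetical): gRPC metadata keys
ending in -bin carry binary values that grpcio base64-encodes on the wire,
and a malformed encoding of such a header is what made pre-1.53.0 servers
drop the whole connection:

    import grpc

    channel = grpc.insecure_channel("backend.example:50051")
    # "-bin" keys take bytes; grpcio performs the base64 (de)serialization.
    # Pre-1.53.0 servers answered a malformed encoding by terminating the
    # connection, which HTTP/2 proxies would typically have let through.
    metadata = (("trace-context-bin", b"\x00\x01\x02"),)
    # metadata is passed per call, e.g. stub.Method(request, metadata=metadata)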


-> Vulnerability found in pydantic version 1.10.2
   Vulnerability ID: 61416
   Affected spec: <1.10.13
   ADVISORY: Pydantic 1.10.13 and 2.4.0 include a fix for a regular
   expression denial-of-service (ReDoS) vulnerability.
   https://github.com/pydantic/pydantic/pull/7360
   https://github.com/pydantic/pydantic/pull/7673
   PVE-2023-61416
   For more information, please visit
   https://data.safetycli.com/v/61416/f17
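
ReDoS in a few lines, as a generic illustration (this is the textbook
catastrophic-backtracking shape, not the actual expression patched in
pydantic):

    import re
    import time

    evil = re.compile(r"^(a+)+$")   # nested quantifiers backtrack exponentially
    subject = "a" * 22 + "!"        # a near-miss input forces the worst case

    start = time.perf_counter()
    evil.match(subject)             # returns None, slowly
    print(f"{time.perf_counter() - start:.2f}s; each extra 'a' roughly doubles it")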

 Scan was completed. 11 vulnerabilities were found. 

+==============================================================================+
   REMEDIATIONS

  11 vulnerabilities were found in 3 packages. For detailed remediation & fix 
  recommendations, upgrade to a commercial license. 

+==============================================================================+

  Safety is using PyUp's free open-source vulnerability database. This
data is 30 days old and limited. 
  For real-time enhanced vulnerability data, fix recommendations, severity
reporting, cybersecurity support, team and project policy management and more
sign up at https://pyup.io or email sales@pyup.io

+==============================================================================+

✔ OK pylint in 2 minutes, 15.102 seconds
pylint create: /tmp/.tox/pylint
pylint installdeps: -r/home/jenkins/workspace/LCM-stage_2-merge_v13.0/requirements.txt, -r/home/jenkins/workspace/LCM-stage_2-merge_v13.0/requirements-dev.txt, -r/home/jenkins/workspace/LCM-stage_2-merge_v13.0/requirements-test.txt, pylint
pylint develop-inst: /home/jenkins/workspace/LCM-stage_2-merge_v13.0
pylint installed: aiohttp==3.7.4.post0,aiokafka==0.7.2,astroid==3.0.2,async-timeout==3.0.1,asynctest==0.13.0,attrs==22.1.0,bcrypt==4.0.1,boltons==21.0.0,cachetools==5.2.0,certifi==2022.9.24,cffi==1.15.1,chardet==4.0.0,charset-normalizer==2.1.1,checksumdir==1.2.0,config-man==0.0.4,coverage==6.5.0,cryptography==38.0.1,dataclasses==0.6,dill==0.3.8,face==22.0.0,glom==22.1.0,google-auth==2.12.0,grpcio==1.50.0,grpcio-tools==1.48.1,grpclib==0.4.3,h2==4.1.0,hpack==4.0.0,hyperframe==6.0.1,idna==3.4,isort==5.13.2,Jinja2==3.1.2,juju==3.0.0,jujubundlelib==0.5.7,kafka-python==2.0.2,kubernetes==24.2.0,macaroonbakery==1.3.1,MarkupSafe==2.1.1,mccabe==0.7.0,mock==4.0.3,motor==1.3.1,multidict==6.0.2,mypy-extensions==0.4.3,N2VC @ git+https://osm.etsi.org/gerrit/osm/N2VC.git@6eac0f117eca51b7970926f92f0b5a8abdd9ba4a,nose2==0.12.0,oauthlib==3.2.1,osm-common @ git+https://osm.etsi.org/gerrit/osm/common.git@303ffe4f33c7a0fcc6b5c267d402c0e7d44e5a57,-e git+https://osm.etsi.org/gerrit/osm/LCM.git@c96cb8bb9cef9b19c51baefe6a67c22fa8a71830#egg=osm_lcm,paramiko==2.11.0,platformdirs==4.1.0,protobuf==3.20.3,pyasn1==0.4.8,pyasn1-modules==0.2.8,pycparser==2.21,pycrypto==2.6.1,pydantic==1.10.2,pylint==3.0.3,pymacaroons==0.13.0,pymongo==3.12.3,PyNaCl==1.5.0,pyRFC3339==1.1,python-dateutil==2.8.2,pytz==2022.4,PyYAML==5.4.1,requests==2.28.1,requests-oauthlib==1.3.1,retrying-async==2.0.0,rsa==4.9,six==1.16.0,theblues==0.5.2,tomli==2.0.1,tomlkit==0.12.3,toposort==1.7,typing-inspect==0.8.0,typing_extensions==4.4.0,urllib3==1.26.12,websocket-client==1.4.1,websockets==7.0,yarl==1.8.1
pylint run-test-pre: PYTHONHASHSEED='727850233'
pylint run-test: commands[0] | - pylint -E osm_lcm
************* Module osm_lcm.ROclient
osm_lcm/ROclient.py:1271:25: E1101: Instance of 'ROClient' has no 'parse' member; maybe '_parse'? (no-member)
osm_lcm/ROclient.py:1314:25: E1101: Instance of 'ROClient' has no 'parse' member; maybe '_parse'? (no-member)
osm_lcm/ROclient.py:1366:32: E1120: No value for argument 'session' in method call (no-value-for-parameter)
osm_lcm/ROclient.py:1376:25: E1101: Instance of 'ROClient' has no 'get_datacenter' member; maybe '_get_datacenter'? (no-member)
************* Module osm_lcm.lcm
osm_lcm/lcm.py:482:28: E1101: Instance of 'Lcm' has no 'lcm_ns_tasks' member (no-member)
osm_lcm/lcm.py:544:28: E1101: Instance of 'Lcm' has no 'lcm_netslice_tasks' member (no-member)
************* Module osm_lcm.vim_sdn
osm_lcm/vim_sdn.py:1046:16: E1137: 'db_sdn' does not support item assignment (unsupported-assignment-operation)
osm_lcm/vim_sdn.py:1047:16: E1137: 'db_sdn' does not support item assignment (unsupported-assignment-operation)
************* Module osm_lcm.osm_config
osm_lcm/osm_config.py:16:0: E0611: No name 'BaseModel' in module 'pydantic' (no-name-in-module)
osm_lcm/osm_config.py:31:4: E0213: Method 'parse_services' should have "self" as first argument (no-self-argument)
************* Module osm_lcm.ns
osm_lcm/ns.py:1399:48: E1101: Instance of 'Exception' has no 'http_code' member (no-member)
osm_lcm/ns.py:1406:50: E1101: Instance of 'Exception' has no 'http_code' member (no-member)
osm_lcm/ns.py:1675:44: E1101: Instance of 'NgRoClient' has no 'create_action' member (no-member)
osm_lcm/ns.py:4284:29: E1121: Too many positional arguments for method call (too-many-function-args)
osm_lcm/ns.py:4307:33: E1101: Instance of 'NgRoClient' has no 'show' member (no-member)
osm_lcm/ns.py:4317:48: E1101: Instance of 'NgRoClient' has no 'check_action_status' member (no-member)
osm_lcm/ns.py:4347:62: E1101: Instance of 'Exception' has no 'http_code' member (no-member)
osm_lcm/ns.py:4356:62: E1101: Instance of 'Exception' has no 'http_code' member (no-member)
osm_lcm/ns.py:4377:22: E1121: Too many positional arguments for method call (too-many-function-args)
osm_lcm/ns.py:4384:66: E1101: Instance of 'Exception' has no 'http_code' member (no-member)
osm_lcm/ns.py:4391:66: E1101: Instance of 'Exception' has no 'http_code' member (no-member)
osm_lcm/ns.py:4417:26: E1121: Too many positional arguments for method call (too-many-function-args)
osm_lcm/ns.py:4424:70: E1101: Instance of 'Exception' has no 'http_code' member (no-member)
osm_lcm/ns.py:4434:70: E1101: Instance of 'Exception' has no 'http_code' member (no-member)
osm_lcm/ns.py:5398:71: E0601: Using variable 'step' before assignment (used-before-assignment)
************* Module osm_lcm.data_utils.database.vim_account
osm_lcm/data_utils/database/vim_account.py:32:4: E0213: Method 'get_vim_account_with_id' should have "self" as first argument (no-self-argument)
osm_lcm/data_utils/database/vim_account.py:37:4: E0211: Method 'initialize_db' has no argument (no-method-argument)
************* Module osm_lcm.data_utils.database.wim_account
osm_lcm/data_utils/database/wim_account.py:31:4: E0211: Method 'initialize_db' has no argument (no-method-argument)
osm_lcm/data_utils/database/wim_account.py:34:4: E0213: Method 'get_wim_account_with_id' should have "self" as first argument (no-self-argument)
osm_lcm/data_utils/database/wim_account.py:43:4: E0211: Method 'get_all_wim_accounts' has no argument (no-method-argument)
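
The messages above fall into a few recurring buckets. E0211/E0213 usually
mean a method was intended as a @staticmethod or @classmethod (on pydantic
models, @validator methods are a known pylint false positive, as is the
E0611 on BaseModel); E1101 with a "maybe '_parse'?" hint points at a mistyped
public name. Illustrative fixes on a hypothetical class, not the osm_lcm
sources:

    class AccountDB:
        @staticmethod
        def initialize_db():          # E0211: a no-argument method -> staticmethod
            ...

        @classmethod
        def get_account_with_id(cls, account_id):  # E0213: first argument -> cls
            ...

        def _parse(self, raw):
            ...

        def load(self, raw):
            return self._parse(raw)   # E1101 fires when the call site says
                                      # self.parse(...) but only _parse exists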

✔ OK cover in 2 minutes, 28.927 seconds
cover create: /tmp/.tox/cover
cover installdeps: -r/home/jenkins/workspace/LCM-stage_2-merge_v13.0/requirements.txt, -r/home/jenkins/workspace/LCM-stage_2-merge_v13.0/requirements-dev.txt, -r/home/jenkins/workspace/LCM-stage_2-merge_v13.0/requirements-test.txt
cover develop-inst: /home/jenkins/workspace/LCM-stage_2-merge_v13.0
cover installed: aiohttp==3.7.4.post0,aiokafka==0.7.2,async-timeout==3.0.1,asynctest==0.13.0,attrs==22.1.0,bcrypt==4.0.1,boltons==21.0.0,cachetools==5.2.0,certifi==2022.9.24,cffi==1.15.1,chardet==4.0.0,charset-normalizer==2.1.1,checksumdir==1.2.0,config-man==0.0.4,coverage==6.5.0,cryptography==38.0.1,dataclasses==0.6,face==22.0.0,glom==22.1.0,google-auth==2.12.0,grpcio==1.50.0,grpcio-tools==1.48.1,grpclib==0.4.3,h2==4.1.0,hpack==4.0.0,hyperframe==6.0.1,idna==3.4,Jinja2==3.1.2,juju==3.0.0,jujubundlelib==0.5.7,kafka-python==2.0.2,kubernetes==24.2.0,macaroonbakery==1.3.1,MarkupSafe==2.1.1,mock==4.0.3,motor==1.3.1,multidict==6.0.2,mypy-extensions==0.4.3,N2VC @ git+https://osm.etsi.org/gerrit/osm/N2VC.git@6eac0f117eca51b7970926f92f0b5a8abdd9ba4a,nose2==0.12.0,oauthlib==3.2.1,osm-common @ git+https://osm.etsi.org/gerrit/osm/common.git@303ffe4f33c7a0fcc6b5c267d402c0e7d44e5a57,-e git+https://osm.etsi.org/gerrit/osm/LCM.git@c96cb8bb9cef9b19c51baefe6a67c22fa8a71830#egg=osm_lcm,paramiko==2.11.0,protobuf==3.20.3,pyasn1==0.4.8,pyasn1-modules==0.2.8,pycparser==2.21,pycrypto==2.6.1,pydantic==1.10.2,pymacaroons==0.13.0,pymongo==3.12.3,PyNaCl==1.5.0,pyRFC3339==1.1,python-dateutil==2.8.2,pytz==2022.4,PyYAML==5.4.1,requests==2.28.1,requests-oauthlib==1.3.1,retrying-async==2.0.0,rsa==4.9,six==1.16.0,theblues==0.5.2,toposort==1.7,typing-inspect==0.8.0,typing_extensions==4.4.0,urllib3==1.26.12,websocket-client==1.4.1,websockets==7.0,yarl==1.8.1
cover run-test-pre: PYTHONHASHSEED='2190516959'
cover run-test: commands[0] | sh -c 'rm -f nosetests.xml'
cover run-test: commands[1] | coverage erase
cover run-test: commands[2] | nose2 -C --coverage osm_lcm
.ERROR:lcm.vca:Task vca_create=id Failed with exception: failed
ERROR:lcm.vca:Task vca_create=id Cannot update database: database exception failed
..ERROR:lcm.vca:Task vca_delete=id Failed with exception: failed deleting
ERROR:lcm.vca:Task vca_delete=id Cannot update database: database exception failed
..DEBUG:test_lcm_helm_conn:Initialize helm N2VC connector
DEBUG:test_lcm_helm_conn:initial vca_config: {'host': None, 'port': None, 'user': None, 'secret': None, 'cloud': None, 'k8s_cloud': None, 'helmpath': '/usr/local/bin/helm', 'helm3path': '/usr/local/bin/helm3', 'kubectlpath': '/usr/bin/kubectl', 'jujupath': None, 'public_key': None, 'ca_cert': None, 'api_proxy': None, 'apt_mirror': None, 'eegrpcinittimeout': None, 'eegrpctimeout': None, 'eegrpc_tls_enforce': False, 'loglevel': 'DEBUG', 'logfile': None, 'ca_store': '/etc/ssl/certs/osm-ca.crt', 'kubectl_osm_namespace': 'osm', 'kubectl_osm_cluster_name': '_system-osm-k8s', 'helm_ee_service_port': 50050, 'helm_max_initial_retry_time': 600, 'helm_max_retry_time': 30, 'helm_ee_retry_delay': 10}
DEBUG:test_lcm_helm_conn:Initial retry time: 600
DEBUG:test_lcm_helm_conn:Retry time: 30
INFO:test_lcm_helm_conn:Helm N2VC connector initialized
INFO:test_lcm_helm_conn:create_execution_environment: namespace: testnamespace, artifact_path: helm_sample_charm, chart_model: {}, db_dict: helm_sample_charm, reuse_ee_id: None
DEBUG:test_lcm_helm_conn:install helm chart: /helm_sample_charm
.DEBUG:test_lcm_helm_conn:Initialize helm N2VC connector
DEBUG:test_lcm_helm_conn:initial vca_config: {'host': None, 'port': None, 'user': None, 'secret': None, 'cloud': None, 'k8s_cloud': None, 'helmpath': '/usr/local/bin/helm', 'helm3path': '/usr/local/bin/helm3', 'kubectlpath': '/usr/bin/kubectl', 'jujupath': None, 'public_key': None, 'ca_cert': None, 'api_proxy': None, 'apt_mirror': None, 'eegrpcinittimeout': None, 'eegrpctimeout': None, 'eegrpc_tls_enforce': False, 'loglevel': 'DEBUG', 'logfile': None, 'ca_store': '/etc/ssl/certs/osm-ca.crt', 'kubectl_osm_namespace': 'osm', 'kubectl_osm_cluster_name': '_system-osm-k8s', 'helm_ee_service_port': 50050, 'helm_max_initial_retry_time': 600, 'helm_max_retry_time': 30, 'helm_ee_retry_delay': 10}
DEBUG:test_lcm_helm_conn:Initial retry time: 600
DEBUG:test_lcm_helm_conn:Retry time: 30
INFO:test_lcm_helm_conn:Helm N2VC connector initialized
INFO:test_lcm_helm_conn:ee_id: helm-v3:osm.helm_sample_charm_0001
INFO:test_lcm_helm_conn:ee_id: helm-v3:osm.helm_sample_charm_0001 deleted
.DEBUG:test_lcm_helm_conn:Initialize helm N2VC connector
DEBUG:test_lcm_helm_conn:initial vca_config: {'host': None, 'port': None, 'user': None, 'secret': None, 'cloud': None, 'k8s_cloud': None, 'helmpath': '/usr/local/bin/helm', 'helm3path': '/usr/local/bin/helm3', 'kubectlpath': '/usr/bin/kubectl', 'jujupath': None, 'public_key': None, 'ca_cert': None, 'api_proxy': None, 'apt_mirror': None, 'eegrpcinittimeout': None, 'eegrpctimeout': None, 'eegrpc_tls_enforce': False, 'loglevel': 'DEBUG', 'logfile': None, 'ca_store': '/etc/ssl/certs/osm-ca.crt', 'kubectl_osm_namespace': 'osm', 'kubectl_osm_cluster_name': '_system-osm-k8s', 'helm_ee_service_port': 50050, 'helm_max_initial_retry_time': 600, 'helm_max_retry_time': 30, 'helm_ee_retry_delay': 10}
DEBUG:test_lcm_helm_conn:Initial retry time: 600
DEBUG:test_lcm_helm_conn:Retry time: 30
INFO:test_lcm_helm_conn:Helm N2VC connector initialized
DEBUG:test_lcm_helm_conn:Execute config primitive
INFO:test_lcm_helm_conn:exec primitive for ee_id : osm.helm_sample_charm_0001, primitive_name: config, params_dict: {'ssh-host-name': 'host1'}, db_dict: None
DEBUG:test_lcm_helm_conn:Executed config primitive ee_id_ osm.helm_sample_charm_0001, status: OK, message: CONFIG OK
.DEBUG:test_lcm_helm_conn:Initialize helm N2VC connector
DEBUG:test_lcm_helm_conn:initial vca_config: {'host': None, 'port': None, 'user': None, 'secret': None, 'cloud': None, 'k8s_cloud': None, 'helmpath': '/usr/local/bin/helm', 'helm3path': '/usr/local/bin/helm3', 'kubectlpath': '/usr/bin/kubectl', 'jujupath': None, 'public_key': None, 'ca_cert': None, 'api_proxy': None, 'apt_mirror': None, 'eegrpcinittimeout': None, 'eegrpctimeout': None, 'eegrpc_tls_enforce': False, 'loglevel': 'DEBUG', 'logfile': None, 'ca_store': '/etc/ssl/certs/osm-ca.crt', 'kubectl_osm_namespace': 'osm', 'kubectl_osm_cluster_name': '_system-osm-k8s', 'helm_ee_service_port': 50050, 'helm_max_initial_retry_time': 600, 'helm_max_retry_time': 30, 'helm_ee_retry_delay': 10}
DEBUG:test_lcm_helm_conn:Initial retry time: 600
DEBUG:test_lcm_helm_conn:Retry time: 30
INFO:test_lcm_helm_conn:Helm N2VC connector initialized
INFO:test_lcm_helm_conn:exec primitive for ee_id : osm.helm_sample_charm_0001, primitive_name: sleep, params_dict: {}, db_dict: None
DEBUG:test_lcm_helm_conn:Executed primitive sleep ee_id_ osm.helm_sample_charm_0001, status: OK, message: test-ok
.DEBUG:test_lcm_helm_conn:Initialize helm N2VC connector
DEBUG:test_lcm_helm_conn:initial vca_config: {'host': None, 'port': None, 'user': None, 'secret': None, 'cloud': None, 'k8s_cloud': None, 'helmpath': '/usr/local/bin/helm', 'helm3path': '/usr/local/bin/helm3', 'kubectlpath': '/usr/bin/kubectl', 'jujupath': None, 'public_key': None, 'ca_cert': None, 'api_proxy': None, 'apt_mirror': None, 'eegrpcinittimeout': None, 'eegrpctimeout': None, 'eegrpc_tls_enforce': False, 'loglevel': 'DEBUG', 'logfile': None, 'ca_store': '/etc/ssl/certs/osm-ca.crt', 'kubectl_osm_namespace': 'osm', 'kubectl_osm_cluster_name': '_system-osm-k8s', 'helm_ee_service_port': 50050, 'helm_max_initial_retry_time': 600, 'helm_max_retry_time': 30, 'helm_ee_retry_delay': 10}
DEBUG:test_lcm_helm_conn:Initial retry time: 600
DEBUG:test_lcm_helm_conn:Retry time: 30
INFO:test_lcm_helm_conn:Helm N2VC connector initialized
INFO:test_lcm_helm_conn:get_ee_ssh_public_key: ee_id: osm.helm_sample_charm_0001, db_dict: {}
...................................DEBUG:lcm.roclient:GET http://h//ns/v1/deploy/f48163a6-c807-47bc-9682-f72caef5af85/<MagicMock name='mock.vertical_scale().__getitem__()' id='139634154405504'>
.CRITICAL:lcm:Loaded configuration:{'globalConfig': {'loglevel': 'DEBUG', 'logfile': None, 'nologging': False}, 'timeout': {'nsi_deploy': 7200, 'vca_on_error': 300, 'ns_deploy': 7200, 'ns_terminate': 1800, 'ns_heal': 1800, 'charm_delete': 600, 'primitive': 1800, 'ns_update': 1800, 'progress_primitive': 600, 'migrate': 1800, 'operate': 1800, 'verticalscale': 1800}, 'RO': {'host': 'ro', 'ng': True, 'port': 9090, 'uri': 'h', 'tenant': 'osm', 'loglevel': 'DEBUG', 'logfile': None, 'logger_name': 'lcm.roclient'}, 'VCA': {'host': 'vca', 'port': 17070, 'user': 'admin', 'secret': 'secret', 'cloud': 'localhost', 'k8s_cloud': 'k8scloud', 'helmpath': '/usr/local/bin/helm', 'helm3path': '/usr/local/bin/helm3', 'kubectlpath': '/usr/bin/kubectl', 'jujupath': '/usr/local/bin/juju', 'public_key': None, 'ca_cert': None, 'api_proxy': None, 'apt_mirror': None, 'eegrpcinittimeout': None, 'eegrpctimeout': None, 'eegrpc_tls_enforce': False, 'loglevel': 'DEBUG', 'logfile': None, 'ca_store': '/etc/ssl/certs/osm-ca.crt', 'kubectl_osm_namespace': 'osm', 'kubectl_osm_cluster_name': '_system-osm-k8s', 'helm_ee_service_port': 50050, 'helm_max_initial_retry_time': 600, 'helm_max_retry_time': 30, 'helm_ee_retry_delay': 10}, 'database': {'driver': 'memory', 'host': None, 'port': 27017, 'uri': None, 'name': 'osm', 'replicaset': None, 'user': None, 'password': None, 'commonkey': None, 'loglevel': 'DEBUG', 'logfile': None, 'logger_name': 'lcm.db'}, 'storage': {'driver': 'local', 'path': '/tmp/storage', 'loglevel': 'DEBUG', 'logfile': None, 'logger_name': 'lcm.fs', 'collection': None, 'uri': None}, 'message': {'driver': 'local', 'path': '/tmp/kafka', 'host': 'kafka', 'port': 9092, 'loglevel': 'DEBUG', 'logfile': None, 'group_id': 'lcm-server', 'logger_name': 'lcm.msg'}, 'tsdb': {'driver': 'prometheus', 'path': '/tmp/prometheus', 'uri': 'http://prometheus:9090/', 'loglevel': 'DEBUG', 'logfile': None, 'logger_name': 'lcm.prometheus'}}
2024-01-28T09:53:14 CRITICAL lcm lcm.py:142 starting osm/lcm version 13.0.2+gc96cb8b 2020-04-08
CRITICAL:lcm:starting osm/lcm version 13.0.2+gc96cb8b 2020-04-08
.2024-01-28T09:53:14 CRITICAL lcm lcm.py:96 Loaded configuration:{'globalConfig': {'loglevel': 'DEBUG', 'logfile': None, 'nologging': False}, 'timeout': {'nsi_deploy': 7200, 'vca_on_error': 300, 'ns_deploy': 7200, 'ns_terminate': 1800, 'ns_heal': 1800, 'charm_delete': 600, 'primitive': 1800, 'ns_update': 1800, 'progress_primitive': 600, 'migrate': 1800, 'operate': 1800, 'verticalscale': 1800}, 'RO': {'host': 'ro', 'ng': True, 'port': 9090, 'uri': 'h', 'tenant': 'osm', 'loglevel': 'DEBUG', 'logfile': None, 'logger_name': 'lcm.roclient'}, 'VCA': {'host': 'vca', 'port': 17070, 'user': 'admin', 'secret': 'secret', 'cloud': 'localhost', 'k8s_cloud': 'k8scloud', 'helmpath': '/usr/local/bin/helm', 'helm3path': '/usr/local/bin/helm3', 'kubectlpath': '/usr/bin/kubectl', 'jujupath': '/usr/local/bin/juju', 'public_key': None, 'ca_cert': None, 'api_proxy': None, 'apt_mirror': None, 'eegrpcinittimeout': None, 'eegrpctimeout': None, 'eegrpc_tls_enforce': False, 'loglevel': 'DEBUG', 'logfile': None, 'ca_store': '/etc/ssl/certs/osm-ca.crt', 'kubectl_osm_namespace': 'osm', 'kubectl_osm_cluster_name': '_system-osm-k8s', 'helm_ee_service_port': 50050, 'helm_max_initial_retry_time': 600, 'helm_max_retry_time': 30, 'helm_ee_retry_delay': 10}, 'database': {'driver': 'memory', 'host': None, 'port': 27017, 'uri': None, 'name': 'osm', 'replicaset': None, 'user': None, 'password': None, 'commonkey': None, 'loglevel': 'DEBUG', 'logfile': None, 'logger_name': 'lcm.db'}, 'storage': {'driver': 'local', 'path': '/tmp/storage', 'loglevel': 'DEBUG', 'logfile': None, 'logger_name': 'lcm.fs', 'collection': None, 'uri': None}, 'message': {'driver': 'local', 'path': '/tmp/kafka', 'host': 'kafka', 'port': 9092, 'loglevel': 'DEBUG', 'logfile': None, 'group_id': 'lcm-server', 'logger_name': 'lcm.msg'}, 'tsdb': {'driver': 'prometheus', 'path': '/tmp/prometheus', 'uri': 'http://prometheus:9090/', 'loglevel': 'DEBUG', 'logfile': None, 'logger_name': 'lcm.prometheus'}}
CRITICAL:lcm:Loaded configuration:{'globalConfig': {'loglevel': 'DEBUG', 'logfile': None, 'nologging': False}, 'timeout': {'nsi_deploy': 7200, 'vca_on_error': 300, 'ns_deploy': 7200, 'ns_terminate': 1800, 'ns_heal': 1800, 'charm_delete': 600, 'primitive': 1800, 'ns_update': 1800, 'progress_primitive': 600, 'migrate': 1800, 'operate': 1800, 'verticalscale': 1800}, 'RO': {'host': 'ro', 'ng': True, 'port': 9090, 'uri': 'h', 'tenant': 'osm', 'loglevel': 'DEBUG', 'logfile': None, 'logger_name': 'lcm.roclient'}, 'VCA': {'host': 'vca', 'port': 17070, 'user': 'admin', 'secret': 'secret', 'cloud': 'localhost', 'k8s_cloud': 'k8scloud', 'helmpath': '/usr/local/bin/helm', 'helm3path': '/usr/local/bin/helm3', 'kubectlpath': '/usr/bin/kubectl', 'jujupath': '/usr/local/bin/juju', 'public_key': None, 'ca_cert': None, 'api_proxy': None, 'apt_mirror': None, 'eegrpcinittimeout': None, 'eegrpctimeout': None, 'eegrpc_tls_enforce': False, 'loglevel': 'DEBUG', 'logfile': None, 'ca_store': '/etc/ssl/certs/osm-ca.crt', 'kubectl_osm_namespace': 'osm', 'kubectl_osm_cluster_name': '_system-osm-k8s', 'helm_ee_service_port': 50050, 'helm_max_initial_retry_time': 600, 'helm_max_retry_time': 30, 'helm_ee_retry_delay': 10}, 'database': {'driver': 'memory', 'host': None, 'port': 27017, 'uri': None, 'name': 'osm', 'replicaset': None, 'user': None, 'password': None, 'commonkey': None, 'loglevel': 'DEBUG', 'logfile': None, 'logger_name': 'lcm.db'}, 'storage': {'driver': 'local', 'path': '/tmp/storage', 'loglevel': 'DEBUG', 'logfile': None, 'logger_name': 'lcm.fs', 'collection': None, 'uri': None}, 'message': {'driver': 'local', 'path': '/tmp/kafka', 'host': 'kafka', 'port': 9092, 'loglevel': 'DEBUG', 'logfile': None, 'group_id': 'lcm-server', 'logger_name': 'lcm.msg'}, 'tsdb': {'driver': 'prometheus', 'path': '/tmp/prometheus', 'uri': 'http://prometheus:9090/', 'loglevel': 'DEBUG', 'logfile': None, 'logger_name': 'lcm.prometheus'}}
2024-01-28T09:53:14 CRITICAL lcm lcm.py:142 starting osm/lcm version 13.0.2+gc96cb8b 2020-04-08
CRITICAL:lcm:starting osm/lcm version 13.0.2+gc96cb8b 2020-04-08
.2024-01-28T09:53:14 CRITICAL lcm lcm.py:96 Loaded configuration:{'globalConfig': {'loglevel': 'DEBUG', 'logfile': None, 'nologging': False}, 'timeout': {'nsi_deploy': 7200, 'vca_on_error': 300, 'ns_deploy': 7200, 'ns_terminate': 1800, 'ns_heal': 1800, 'charm_delete': 600, 'primitive': 1800, 'ns_update': 1800, 'progress_primitive': 600, 'migrate': 1800, 'operate': 1800, 'verticalscale': 1800}, 'RO': {'host': 'ro', 'ng': True, 'port': 9090, 'uri': 'h', 'tenant': 'osm', 'loglevel': 'DEBUG', 'logfile': None, 'logger_name': 'lcm.roclient'}, 'VCA': {'host': 'vca', 'port': 17070, 'user': 'admin', 'secret': 'secret', 'cloud': 'localhost', 'k8s_cloud': 'k8scloud', 'helmpath': '/usr/local/bin/helm', 'helm3path': '/usr/local/bin/helm3', 'kubectlpath': '/usr/bin/kubectl', 'jujupath': '/usr/local/bin/juju', 'public_key': None, 'ca_cert': None, 'api_proxy': None, 'apt_mirror': None, 'eegrpcinittimeout': None, 'eegrpctimeout': None, 'eegrpc_tls_enforce': False, 'loglevel': 'DEBUG', 'logfile': None, 'ca_store': '/etc/ssl/certs/osm-ca.crt', 'kubectl_osm_namespace': 'osm', 'kubectl_osm_cluster_name': '_system-osm-k8s', 'helm_ee_service_port': 50050, 'helm_max_initial_retry_time': 600, 'helm_max_retry_time': 30, 'helm_ee_retry_delay': 10}, 'database': {'driver': 'memory', 'host': None, 'port': 27017, 'uri': None, 'name': 'osm', 'replicaset': None, 'user': None, 'password': None, 'commonkey': None, 'loglevel': 'DEBUG', 'logfile': None, 'logger_name': 'lcm.db'}, 'storage': {'driver': 'local', 'path': '/tmp/storage', 'loglevel': 'DEBUG', 'logfile': None, 'logger_name': 'lcm.fs', 'collection': None, 'uri': None}, 'message': {'driver': 'local', 'path': '/tmp/kafka', 'host': 'kafka', 'port': 9092, 'loglevel': 'DEBUG', 'logfile': None, 'group_id': 'lcm-server', 'logger_name': 'lcm.msg'}, 'tsdb': {'driver': 'prometheus', 'path': '/tmp/prometheus', 'uri': 'http://prometheus:9090/', 'loglevel': 'DEBUG', 'logfile': None, 'logger_name': 'lcm.prometheus'}}
CRITICAL:lcm:Loaded configuration:{'globalConfig': {'loglevel': 'DEBUG', 'logfile': None, 'nologging': False}, 'timeout': {'nsi_deploy': 7200, 'vca_on_error': 300, 'ns_deploy': 7200, 'ns_terminate': 1800, 'ns_heal': 1800, 'charm_delete': 600, 'primitive': 1800, 'ns_update': 1800, 'progress_primitive': 600, 'migrate': 1800, 'operate': 1800, 'verticalscale': 1800}, 'RO': {'host': 'ro', 'ng': True, 'port': 9090, 'uri': 'h', 'tenant': 'osm', 'loglevel': 'DEBUG', 'logfile': None, 'logger_name': 'lcm.roclient'}, 'VCA': {'host': 'vca', 'port': 17070, 'user': 'admin', 'secret': 'secret', 'cloud': 'localhost', 'k8s_cloud': 'k8scloud', 'helmpath': '/usr/local/bin/helm', 'helm3path': '/usr/local/bin/helm3', 'kubectlpath': '/usr/bin/kubectl', 'jujupath': '/usr/local/bin/juju', 'public_key': None, 'ca_cert': None, 'api_proxy': None, 'apt_mirror': None, 'eegrpcinittimeout': None, 'eegrpctimeout': None, 'eegrpc_tls_enforce': False, 'loglevel': 'DEBUG', 'logfile': None, 'ca_store': '/etc/ssl/certs/osm-ca.crt', 'kubectl_osm_namespace': 'osm', 'kubectl_osm_cluster_name': '_system-osm-k8s', 'helm_ee_service_port': 50050, 'helm_max_initial_retry_time': 600, 'helm_max_retry_time': 30, 'helm_ee_retry_delay': 10}, 'database': {'driver': 'memory', 'host': None, 'port': 27017, 'uri': None, 'name': 'osm', 'replicaset': None, 'user': None, 'password': None, 'commonkey': None, 'loglevel': 'DEBUG', 'logfile': None, 'logger_name': 'lcm.db'}, 'storage': {'driver': 'local', 'path': '/tmp/storage', 'loglevel': 'DEBUG', 'logfile': None, 'logger_name': 'lcm.fs', 'collection': None, 'uri': None}, 'message': {'driver': 'local', 'path': '/tmp/kafka', 'host': 'kafka', 'port': 9092, 'loglevel': 'DEBUG', 'logfile': None, 'group_id': 'lcm-server', 'logger_name': 'lcm.msg'}, 'tsdb': {'driver': 'prometheus', 'path': '/tmp/prometheus', 'uri': 'http://prometheus:9090/', 'loglevel': 'DEBUG', 'logfile': None, 'logger_name': 'lcm.prometheus'}}
2024-01-28T09:53:14 CRITICAL lcm lcm.py:142 starting osm/lcm version 13.0.2+gc96cb8b 2020-04-08
CRITICAL:lcm:starting osm/lcm version 13.0.2+gc96cb8b 2020-04-08
.2024-01-28T09:53:14 CRITICAL lcm lcm.py:96 Loaded configuration:{'globalConfig': {'loglevel': 'DEBUG', 'logfile': None, 'nologging': False}, 'timeout': {'nsi_deploy': 7200, 'vca_on_error': 300, 'ns_deploy': 7200, 'ns_terminate': 1800, 'ns_heal': 1800, 'charm_delete': 600, 'primitive': 1800, 'ns_update': 1800, 'progress_primitive': 600, 'migrate': 1800, 'operate': 1800, 'verticalscale': 1800}, 'RO': {'host': 'ro', 'ng': True, 'port': 9090, 'uri': 'h', 'tenant': 'osm', 'loglevel': 'DEBUG', 'logfile': None, 'logger_name': 'lcm.roclient'}, 'VCA': {'host': 'vca', 'port': 17070, 'user': 'admin', 'secret': 'secret', 'cloud': 'localhost', 'k8s_cloud': 'k8scloud', 'helmpath': '/usr/local/bin/helm', 'helm3path': '/usr/local/bin/helm3', 'kubectlpath': '/usr/bin/kubectl', 'jujupath': '/usr/local/bin/juju', 'public_key': None, 'ca_cert': None, 'api_proxy': None, 'apt_mirror': None, 'eegrpcinittimeout': None, 'eegrpctimeout': None, 'eegrpc_tls_enforce': False, 'loglevel': 'DEBUG', 'logfile': None, 'ca_store': '/etc/ssl/certs/osm-ca.crt', 'kubectl_osm_namespace': 'osm', 'kubectl_osm_cluster_name': '_system-osm-k8s', 'helm_ee_service_port': 50050, 'helm_max_initial_retry_time': 600, 'helm_max_retry_time': 30, 'helm_ee_retry_delay': 10}, 'database': {'driver': 'memory', 'host': None, 'port': 27017, 'uri': None, 'name': 'osm', 'replicaset': None, 'user': None, 'password': None, 'commonkey': None, 'loglevel': 'DEBUG', 'logfile': None, 'logger_name': 'lcm.db'}, 'storage': {'driver': 'local', 'path': '/tmp/storage', 'loglevel': 'DEBUG', 'logfile': None, 'logger_name': 'lcm.fs', 'collection': None, 'uri': None}, 'message': {'driver': 'local', 'path': '/tmp/kafka', 'host': 'kafka', 'port': 9092, 'loglevel': 'DEBUG', 'logfile': None, 'group_id': 'lcm-server', 'logger_name': 'lcm.msg'}, 'tsdb': {'driver': 'prometheus', 'path': '/tmp/prometheus', 'uri': 'http://prometheus:9090/', 'loglevel': 'DEBUG', 'logfile': None, 'logger_name': 'lcm.prometheus'}}
CRITICAL:lcm:Loaded configuration:{'globalConfig': {'loglevel': 'DEBUG', 'logfile': None, 'nologging': False}, 'timeout': {'nsi_deploy': 7200, 'vca_on_error': 300, 'ns_deploy': 7200, 'ns_terminate': 1800, 'ns_heal': 1800, 'charm_delete': 600, 'primitive': 1800, 'ns_update': 1800, 'progress_primitive': 600, 'migrate': 1800, 'operate': 1800, 'verticalscale': 1800}, 'RO': {'host': 'ro', 'ng': True, 'port': 9090, 'uri': 'h', 'tenant': 'osm', 'loglevel': 'DEBUG', 'logfile': None, 'logger_name': 'lcm.roclient'}, 'VCA': {'host': 'vca', 'port': 17070, 'user': 'admin', 'secret': 'secret', 'cloud': 'localhost', 'k8s_cloud': 'k8scloud', 'helmpath': '/usr/local/bin/helm', 'helm3path': '/usr/local/bin/helm3', 'kubectlpath': '/usr/bin/kubectl', 'jujupath': '/usr/local/bin/juju', 'public_key': None, 'ca_cert': None, 'api_proxy': None, 'apt_mirror': None, 'eegrpcinittimeout': None, 'eegrpctimeout': None, 'eegrpc_tls_enforce': False, 'loglevel': 'DEBUG', 'logfile': None, 'ca_store': '/etc/ssl/certs/osm-ca.crt', 'kubectl_osm_namespace': 'osm', 'kubectl_osm_cluster_name': '_system-osm-k8s', 'helm_ee_service_port': 50050, 'helm_max_initial_retry_time': 600, 'helm_max_retry_time': 30, 'helm_ee_retry_delay': 10}, 'database': {'driver': 'memory', 'host': None, 'port': 27017, 'uri': None, 'name': 'osm', 'replicaset': None, 'user': None, 'password': None, 'commonkey': None, 'loglevel': 'DEBUG', 'logfile': None, 'logger_name': 'lcm.db'}, 'storage': {'driver': 'local', 'path': '/tmp/storage', 'loglevel': 'DEBUG', 'logfile': None, 'logger_name': 'lcm.fs', 'collection': None, 'uri': None}, 'message': {'driver': 'local', 'path': '/tmp/kafka', 'host': 'kafka', 'port': 9092, 'loglevel': 'DEBUG', 'logfile': None, 'group_id': 'lcm-server', 'logger_name': 'lcm.msg'}, 'tsdb': {'driver': 'prometheus', 'path': '/tmp/prometheus', 'uri': 'http://prometheus:9090/', 'loglevel': 'DEBUG', 'logfile': None, 'logger_name': 'lcm.prometheus'}}
2024-01-28T09:53:14 CRITICAL lcm lcm.py:142 starting osm/lcm version 13.0.2+gc96cb8b 2020-04-08
CRITICAL:lcm:starting osm/lcm version 13.0.2+gc96cb8b 2020-04-08
2024-01-28T09:53:14 CRITICAL lcm lcm.py:638 unknown topic kafka and command 'ping'
CRITICAL:lcm:unknown topic kafka and command 'ping'
..........................
----------------------------------------------------------------------
Ran 74 tests in 36.245s

OK
Name                                          Stmts   Miss  Cover
-----------------------------------------------------------------
osm_lcm/ROclient.py                             824    759     8%
osm_lcm/__init__.py                               7      2    71%
osm_lcm/data_utils/__init__.py                    0      0   100%
osm_lcm/data_utils/database/__init__.py           0      0   100%
osm_lcm/data_utils/database/database.py          26      8    69%
osm_lcm/data_utils/database/vim_account.py        9      0   100%
osm_lcm/data_utils/database/wim_account.py       21     13    38%
osm_lcm/data_utils/dict_utils.py                  7      2    71%
osm_lcm/data_utils/filesystem/__init__.py         0      0   100%
osm_lcm/data_utils/filesystem/filesystem.py      26      9    65%
osm_lcm/data_utils/lcm_config.py                156      4    97%
osm_lcm/data_utils/list_utils.py                  5      0   100%
osm_lcm/data_utils/nsd.py                        16      7    56%
osm_lcm/data_utils/nsr.py                        17      4    76%
osm_lcm/data_utils/vca.py                        97     42    57%
osm_lcm/data_utils/vim.py                        28     13    54%
osm_lcm/data_utils/vnfd.py                       78     41    47%
osm_lcm/data_utils/vnfr.py                       41     13    68%
osm_lcm/data_utils/wim.py                        72     55    24%
osm_lcm/frontend_grpc.py                         20      6    70%
osm_lcm/frontend_pb2.py                          27      0   100%
osm_lcm/lcm.py                                  497    363    27%
osm_lcm/lcm_hc.py                                36     15    58%
osm_lcm/lcm_helm_conn.py                        300    157    48%
osm_lcm/lcm_utils.py                            303    157    48%
osm_lcm/netslice.py                             440    425     3%
osm_lcm/ng_ro.py                                205    151    26%
osm_lcm/ns.py                                  3629   2697    26%
osm_lcm/osm_config.py                            21      0   100%
osm_lcm/prometheus.py                            11      2    82%
osm_lcm/tests/test_db_descriptors.py             13      0   100%
osm_lcm/tests/test_lcm.py                        58      1    98%
osm_lcm/tests/test_lcm_hc.py                     43      0   100%
osm_lcm/tests/test_lcm_helm_conn.py              79      1    99%
osm_lcm/tests/test_lcm_utils.py                 293     13    96%
osm_lcm/tests/test_ns.py                        662     50    92%
osm_lcm/tests/test_osm_config.py                  7      0   100%
osm_lcm/tests/test_prometheus.py                 12      1    92%
osm_lcm/tests/test_vim_sdn.py                    74      0   100%
osm_lcm/vim_sdn.py                              934    823    12%
-----------------------------------------------------------------
TOTAL                                          9094   5834    36%

cover run-test: commands[3] | coverage report '--omit=*tests*'
Name                                          Stmts   Miss  Cover
-----------------------------------------------------------------
osm_lcm/ROclient.py                             824    759     8%
osm_lcm/__init__.py                               7      2    71%
osm_lcm/data_utils/__init__.py                    0      0   100%
osm_lcm/data_utils/database/__init__.py           0      0   100%
osm_lcm/data_utils/database/database.py          26      8    69%
osm_lcm/data_utils/database/vim_account.py        9      0   100%
osm_lcm/data_utils/database/wim_account.py       21     13    38%
osm_lcm/data_utils/dict_utils.py                  7      2    71%
osm_lcm/data_utils/filesystem/__init__.py         0      0   100%
osm_lcm/data_utils/filesystem/filesystem.py      26      9    65%
osm_lcm/data_utils/lcm_config.py                156      4    97%
osm_lcm/data_utils/list_utils.py                  5      0   100%
osm_lcm/data_utils/nsd.py                        16      7    56%
osm_lcm/data_utils/nsr.py                        17      4    76%
osm_lcm/data_utils/vca.py                        97     42    57%
osm_lcm/data_utils/vim.py                        28     13    54%
osm_lcm/data_utils/vnfd.py                       78     41    47%
osm_lcm/data_utils/vnfr.py                       41     13    68%
osm_lcm/data_utils/wim.py                        72     55    24%
osm_lcm/frontend_grpc.py                         20      6    70%
osm_lcm/frontend_pb2.py                          27      0   100%
osm_lcm/lcm.py                                  497    363    27%
osm_lcm/lcm_hc.py                                36     15    58%
osm_lcm/lcm_helm_conn.py                        300    157    48%
osm_lcm/lcm_utils.py                            303    157    48%
osm_lcm/netslice.py                             440    425     3%
osm_lcm/ng_ro.py                                205    151    26%
osm_lcm/ns.py                                  3629   2697    26%
osm_lcm/osm_config.py                            21      0   100%
osm_lcm/prometheus.py                            11      2    82%
osm_lcm/vim_sdn.py                              934    823    12%
-----------------------------------------------------------------
TOTAL                                          7853   5768    27%
cover run-test: commands[4] | coverage html -d ./cover '--omit=*tests*'
Wrote HTML report to ./cover/index.html
cover run-test: commands[5] | coverage xml -o coverage.xml '--omit=*tests*'
Wrote XML report to coverage.xml
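
The five cover commands map one-to-one onto the coverage.py API, which can be
handy when reproducing the gate outside tox; a sketch (the paths match this
job, the test-runner call is elided):

    import coverage

    cov = coverage.Coverage()
    cov.erase()                                               # commands[1]
    cov.start()
    # ... run the suite here (this job uses: nose2 -C --coverage osm_lcm)
    cov.stop()
    cov.save()
    cov.report(omit=["*tests*"])                              # commands[3]
    cov.html_report(directory="./cover", omit=["*tests*"])    # commands[4]
    cov.xml_report(outfile="coverage.xml", omit=["*tests*"])  # commands[5]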

___________________________________ summary ____________________________________
ERROR:   black: parallel child exit code 1
  cover: commands succeeded
  flake8: commands succeeded
  pylint: commands succeeded
  safety: commands succeeded
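
The failing gate is black's formatting check; the functional envs all passed.
Assuming the usual check-mode invocation (the exact tox command may differ),
the gate can be reproduced and fixed locally:

    import subprocess

    # --check --diff lists files black would reformat (exit code 1 on any
    # finding); dropping both flags rewrites the files in place. "osm_lcm"
    # is the package linted above.
    subprocess.run(["black", "--check", "--diff", "osm_lcm"])
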
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
$ docker stop --time=1 6a43533c2dacae622a1c9abed510ff68b27c62f08bca1d5cf5749e6a02af16a2
$ docker rm -f 6a43533c2dacae622a1c9abed510ff68b27c62f08bca1d5cf5749e6a02af16a2
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 1
Finished: FAILURE