Update from master 86/13286/3
author Dario Faccin <dario.faccin@canonical.com>
Wed, 15 Feb 2023 08:29:55 +0000 (09:29 +0100)
committer Mark Beierl <mark.beierl@canonical.com>
Wed, 26 Apr 2023 22:22:50 +0000 (22:22 +0000)
Merged the following from master into paas branch:

Add OSM-POL integration tests

Change-Id: I140b9eb271c0f03520660b676e075b3f0d62a128
Signed-off-by: Dario Faccin <dario.faccin@canonical.com>
Add OSM-MON integration tests

Change-Id: I3199869880d0c9ce0784dcc623c844dd39f1180a
Signed-off-by: Dario Faccin <dario.faccin@canonical.com>
Bug 2218: Fix command for `juju run-action`

Change-Id: Ife2e8e9f532f3c67c7e2f71d3f77d3e4e7dc5257
Signed-off-by: Daniel Arndt <daniel.arndt@canonical.com>
Update the artifacts stored in stage2

This change updates the patterns of the artifacts to be stored by the
method `archive` in `ci_helper.groovy`.

The patterns "dists/*.gz" and "dists/*Packages", corresponding to index
files for Debian repos, are no longer required.

The pattern "dist/*.whl", corresponding to Python wheel files, is now
required, since it is an additional artifact generated in stage2.

Change-Id: Id87fcb98b2d79a9bd0b64fdaca44da8acd9e1cb1
Signed-off-by: garciadeblas <gerardo.garciadeblas@telefonica.com>
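The effect of the pattern change can be sketched with Python's fnmatch. Note this is only an illustration with hypothetical artifact names: the real matching is done by Jenkins' Ant-style globs in `ci_helper.groovy`, which treat nested paths differently from fnmatch.

```python
from fnmatch import fnmatch

# Hypothetical stage2 artifacts, for illustration only
artifacts = [
    "dists/Release.gz",                         # debian repo index, no longer archived
    "dists/Packages",                           # debian repo index, no longer archived
    "dist/osm_common-10.0.0-py3-none-any.whl",  # python wheel, now archived
]

# The pattern newly added to the archive step
new_pattern = "dist/*.whl"

archived = [a for a in artifacts if fnmatch(a, new_pattern)]
print(archived)  # only the wheel file matches
```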
Integration of OSM Charms with new MongoDB

Change-Id: I9e723dc94ff4c5b7e691179be4e9e3c7b43b6ab0
Signed-off-by: Dario Faccin <dario.faccin@canonical.com>
Charm cleanup

Removal of obsolete charm code

Change-Id: Ifc5e83457cf580d8b236a636328470c527c5c3a9
Signed-off-by: Mark Beierl <mark.beierl@canonical.com>
Integration tests for NG UI charm

Change-Id: I3c8958d54aeed84faf1ed2194bc818c1691cf755
Signed-off-by: Daniel Arndt <daniel.arndt@canonical.com>
Fix unit tests for NG-UI charm

Change-Id: If5b98446bb589a3346bcaf1d260a3ad2c5affd3b
Signed-off-by: Daniel Arndt <daniel.arndt@canonical.com>
Set K8s 1.26 in charmed OSM installation

The `storage` addon is deprecated: it is replaced by `hostpath-storage`

Change-Id: I11dd6fc2c18f89c289ad80da696929a7c0236d63
Signed-off-by: Patricia Reinoso <patricia.reinoso@canonical.com>
Remove duplicated lines in Airflow Dockerfile

Change-Id: Iaeb200d498c01e53a7748293d39b6d9a0ba3cfa9
Signed-off-by: garciadeblas <gerardo.garciadeblas@telefonica.com>
Fix docker tag in stage3 to coexist with periodic clean-up

Change-Id: I1ce9a5de84e0bcedd7abaecfa0fb6d753b853cb7
Signed-off-by: garciadeblas <gerardo.garciadeblas@telefonica.com>
Pin Charmed Operator Framework version for charms

Change-Id: Iff5659151e5678298b72e54b7b22a375bc7b7ebf
Signed-off-by: Dario Faccin <dario.faccin@canonical.com>
Update base image for Airflow to 2.5.2

Change-Id: Id73a0de10b80a4154e1816c5695d3c96de1b03fe
Signed-off-by: garciadeblas <gerardo.garciadeblas@telefonica.com>
Update base image for Airflow to support Python 3.10

Change-Id: I4d0bd5be38faff10de4bd2dbaaa9a6010ab12732
Signed-off-by: garciadeblas <gerardo.garciadeblas@telefonica.com>
Remove checks for copyright in charms

This patch removes the flake8 copyright plugin and configuration.

Change-Id: I65e362748e16efbc48055370f8f1590d4910c000
Signed-off-by: Dario Faccin <dario.faccin@canonical.com>
Update bundle (standalone and HA) to use MongoDB charm from edge channel

Change-Id: Ie60a105a58c5838db90129f1d6d896907675a405
Signed-off-by: Dario Faccin <dario.faccin@canonical.com>
Update Dockerfile and stage-test script to run tests for charms

This patch updates the Dockerfile to use Ubuntu 20.04 as the base for
building and testing charms.
It also updates the stage-test script to execute the tests for charms;
tests are executed only for the charms modified by the review.
Finally, it updates the tox configuration for charms, setting the Python
interpreter to python3.8.

Change-Id: Ib9046b78d6520188cc51ac776fe60ea16479f11c
Signed-off-by: Dario Faccin <dario.faccin@canonical.com>
Adding documentation to OSM bundles

Change-Id: I94b2d7467f4fba40b625acaf545dc20fc6079f8c
Signed-off-by: Guillermo Calvino <guillermo.calvino@canonical.com>
Partial revert of 13026

The *.gz and *Packages patterns are actually used in the creation of
the Debian repository for the installers.

Change-Id: I56ba0ce478fba9bcaeb58d6f2abaf235a4eab78a
Signed-off-by: Mark Beierl <mark.beierl@canonical.com>
Minor indentation fixes in MON and POL K8s manifests

Change-Id: Ib96f1655df650587fc6255d5f98986e1332bbb2f
Signed-off-by: garciadeblas <gerardo.garciadeblas@telefonica.com>
Integration tests for VCA Integrator Operator

Change-Id: I2bc362961edb19f3a0696c779aa9eeaacc361572
Signed-off-by: Dario Faccin <dario.faccin@canonical.com>
Signed-off-by: Mark Beierl <mark.beierl@canonical.com>
LCM integration tests: use RO charm from charmhub instead of building it
locally

Change-Id: I3c1aba9227d9ef5c28f559447da63035214c8ea1
Signed-off-by: Dario Faccin <dario.faccin@canonical.com>
Feature 10981: installation of AlertManager as part of NG-SA

Change-Id: I99bb5785081df4395be336f323d5d4ac3dfd68b6
Signed-off-by: garciadeblas <gerardo.garciadeblas@telefonica.com>
Feature 10981: installation of webhook translator as part of NG-SA

Change-Id: I5318460103a6b89b37931bf661618251a3837d04
Signed-off-by: garciadeblas <gerardo.garciadeblas@telefonica.com>
Remove unnecessary Makefile related to old docker image build process

Change-Id: Icc304cfe7124979584405ec6635ce2c7a9861eac
Signed-off-by: garciadeblas <gerardo.garciadeblas@telefonica.com>
Update tools/local-build.sh to run python http server instead of qhttp

Change-Id: Id9857656e18e1487da7123e076bf00c0b9869d25
Signed-off-by: garciadeblas <gerardo.garciadeblas@telefonica.com>
Add Dockerfile for Webhook translator

Change-Id: Id9a787e0fd3fd953b1b2ace190cdca6a77199f27
Signed-off-by: garciadeblas <gerardo.garciadeblas@telefonica.com>
Replace OSM_STACK_NAME by OSM_NAMESPACE in installers scripts

Change-Id: I5ce4bdc392fd64b4bed7479768b91adba53c67e4
Signed-off-by: garciadeblas <gerardo.garciadeblas@telefonica.com>
Update helm version to 3.11.3

Change-Id: Ic95f32cd1fc311bf93a817da90f48a17d7c2bd13
Signed-off-by: garciadeblas <gerardo.garciadeblas@telefonica.com>
Add nohup to http.server in tools/local-build.sh

Change-Id: Ic21b33c22c069d6145ba9d60c7e3cebb75f99664
Signed-off-by: garciadeblas <gerardo.garciadeblas@telefonica.com>
Feature 10981: auto-scaling alerts rules for AlertManager

Change-Id: I7e8c3f7b1dd3201b75848ae6264eaa2375a5b06b
Signed-off-by: aguilard <e.dah.tid@telefonica.com>
Feature 10981: fix CMD in webhook Dockerfile

Change-Id: If8332c12c2f065c0a4d195873e24a98aa34b0ed4
Signed-off-by: aguilard <e.dah.tid@telefonica.com>
Feature 10981: remove mon and pol for ng-sa installation

This change removes the deployment of POL for the NG-SA installation.
In addition, it deploys a reduced MON, which will only run
mon-dashboarder. A new K8s manifest file (ng-mon.yaml) has been created
for this purpose.

Change-Id: I243a2710d7b883d505ff4b4d012f7d67920f0e73
Signed-off-by: garciadeblas <gerardo.garciadeblas@telefonica.com>
Feature 10981: extended Prometheus sidecar to dump alerts rules in config files

Change-Id: Ic454c894b60d0b2b88b6ea81ca35a0bf4d54ebac
Signed-off-by: aguilard <e.dah.tid@telefonica.com>
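The core of the sidecar extension is visible in the app.py diff below; a minimal runnable sketch of the rule-group generation follows (the group-name format and the `prometheus_config` key are taken from the diff; the alert contents are illustrative). The timestamped group name keeps successive refreshes from colliding:

```python
from datetime import datetime

def generate_alert_groups(alerts, base_config):
    """Append stored OSM alert rules to a Prometheus rules config (sketch of the sidecar logic)."""
    config = dict(base_config) if base_config else {}
    config.setdefault("groups", [])
    timestamp = datetime.now().strftime("%Y%m%d%H%M%S")
    group = {"name": f"_osm_alert_rules_{timestamp}_", "rules": []}
    for alert in alerts:
        # Only alerts that carry a ready-made Prometheus rule are dumped
        if "prometheus_config" in alert:
            group["rules"].append(alert["prometheus_config"])
    if group["rules"]:
        config["groups"].append(group)
    return config

alerts = [{"prometheus_config": {"alert": "vdu_down", "expr": "vm_status == 0"}}]
result = generate_alert_groups(alerts, {})
print(result["groups"][0]["rules"])
```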
OSM DB Update Charm

Initial load of code for the osm-update-db-operator charm

Change-Id: I2884249efaaa86f614df6c286a69f3546489b523
Signed-off-by: Mark Beierl <mark.beierl@canonical.com>
Improve stage-test script: Split charms list according to tox envlist.

For newer charms, the tox envlist includes lint, unit and integration: for these charms, only lint and unit tests are executed.
For older charms, the tox envlist includes black, cover, flake8, pylint, yamllint and safety: for these charms, all tests are executed.

Change-Id: I6cfbe129440be1665f63572a1879060eccd822fd
Signed-off-by: Dario Faccin <dario.faccin@canonical.com>
Signed-off-by: Mark Beierl <mark.beierl@canonical.com>
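The split above (implemented in shell in the stage-test diff below) can be sketched in Python; the envlist contents come from the commit message, while the helper name is illustrative:

```python
NEW_ENVS = {"lint", "unit", "integration"}

def tox_args(envlist):
    """Return the tox invocation arguments for a charm, mirroring the stage-test split."""
    if NEW_ENVS.issubset(envlist):
        # Newer charms: run only lint and unit, skipping the slow integration env
        return ["-e", "lint,unit"]
    # Older charms (black, cover, flake8, pylint, yamllint, safety): run the full envlist
    return []

print(tox_args(["lint", "unit", "integration"]))                        # ['-e', 'lint,unit']
print(tox_args(["black", "cover", "flake8", "pylint", "yamllint", "safety"]))  # []
```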
248 files changed:
Dockerfile
devops-stages/stage-test.sh
docker/Airflow/Dockerfile
docker/MON/scripts/dashboarder-start.sh [new file with mode: 0644]
docker/Prometheus/Dockerfile
docker/Prometheus/src/app.py
docker/Webhook/Dockerfile [new file with mode: 0644]
docker/mk/Makefile.include [deleted file]
installers/charm/build.sh [deleted file]
installers/charm/bundles/osm-ha/bundle.yaml
installers/charm/bundles/osm/bundle.yaml
installers/charm/interfaces/keystone/interface.yaml [deleted file]
installers/charm/interfaces/keystone/provides.py [deleted file]
installers/charm/interfaces/keystone/requires.py [deleted file]
installers/charm/interfaces/osm-nbi/README.md [deleted file]
installers/charm/interfaces/osm-nbi/copyright [deleted file]
installers/charm/interfaces/osm-nbi/interface.yaml [deleted file]
installers/charm/interfaces/osm-nbi/provides.py [deleted file]
installers/charm/interfaces/osm-nbi/requires.py [deleted file]
installers/charm/interfaces/osm-ro/README.md [deleted file]
installers/charm/interfaces/osm-ro/copyright [deleted file]
installers/charm/interfaces/osm-ro/interface.yaml [deleted file]
installers/charm/interfaces/osm-ro/provides.py [deleted file]
installers/charm/interfaces/osm-ro/requires.py [deleted file]
installers/charm/juju-simplestreams-operator/pyproject.toml
installers/charm/juju-simplestreams-operator/requirements.txt
installers/charm/juju-simplestreams-operator/tox.ini
installers/charm/layers/osm-common/README.md [deleted file]
installers/charm/layers/osm-common/layer.yaml [deleted file]
installers/charm/layers/osm-common/lib/charms/osm/k8s.py [deleted file]
installers/charm/layers/osm-common/metadata.yaml [deleted file]
installers/charm/layers/osm-common/reactive/osm_common.py [deleted file]
installers/charm/lcm/.gitignore [deleted file]
installers/charm/lcm/.jujuignore [deleted file]
installers/charm/lcm/.yamllint.yaml [deleted file]
installers/charm/lcm/README.md [deleted file]
installers/charm/lcm/charmcraft.yaml [deleted file]
installers/charm/lcm/config.yaml [deleted file]
installers/charm/lcm/lib/charms/kafka_k8s/v0/kafka.py [deleted file]
installers/charm/lcm/metadata.yaml [deleted file]
installers/charm/lcm/requirements-test.txt [deleted file]
installers/charm/lcm/requirements.txt [deleted file]
installers/charm/lcm/src/charm.py [deleted file]
installers/charm/lcm/src/pod_spec.py [deleted file]
installers/charm/lcm/tests/__init__.py [deleted file]
installers/charm/lcm/tests/test_charm.py [deleted file]
installers/charm/lcm/tests/test_pod_spec.py [deleted file]
installers/charm/lcm/tox.ini [deleted file]
installers/charm/lint.sh [deleted file]
installers/charm/mon/.gitignore [deleted file]
installers/charm/mon/.jujuignore [deleted file]
installers/charm/mon/.yamllint.yaml [deleted file]
installers/charm/mon/README.md [deleted file]
installers/charm/mon/charmcraft.yaml [deleted file]
installers/charm/mon/config.yaml [deleted file]
installers/charm/mon/lib/charms/kafka_k8s/v0/kafka.py [deleted file]
installers/charm/mon/metadata.yaml [deleted file]
installers/charm/mon/requirements-test.txt [deleted file]
installers/charm/mon/requirements.txt [deleted file]
installers/charm/mon/src/charm.py [deleted file]
installers/charm/mon/src/pod_spec.py [deleted file]
installers/charm/mon/tests/__init__.py [deleted file]
installers/charm/mon/tests/test_charm.py [deleted file]
installers/charm/mon/tests/test_pod_spec.py [deleted file]
installers/charm/mon/tox.ini [deleted file]
installers/charm/nbi/.gitignore [deleted file]
installers/charm/nbi/.jujuignore [deleted file]
installers/charm/nbi/.yamllint.yaml [deleted file]
installers/charm/nbi/README.md [deleted file]
installers/charm/nbi/charmcraft.yaml [deleted file]
installers/charm/nbi/config.yaml [deleted file]
installers/charm/nbi/lib/charms/kafka_k8s/v0/kafka.py [deleted file]
installers/charm/nbi/metadata.yaml [deleted file]
installers/charm/nbi/requirements-test.txt [deleted file]
installers/charm/nbi/requirements.txt [deleted file]
installers/charm/nbi/src/charm.py [deleted file]
installers/charm/nbi/src/pod_spec.py [deleted file]
installers/charm/nbi/tests/__init__.py [deleted file]
installers/charm/nbi/tests/test_charm.py [deleted file]
installers/charm/nbi/tests/test_pod_spec.py [deleted file]
installers/charm/nbi/tox.ini [deleted file]
installers/charm/ng-ui/.gitignore [deleted file]
installers/charm/ng-ui/.jujuignore [deleted file]
installers/charm/ng-ui/.yamllint.yaml [deleted file]
installers/charm/ng-ui/README.md [deleted file]
installers/charm/ng-ui/charmcraft.yaml [deleted file]
installers/charm/ng-ui/config.yaml [deleted file]
installers/charm/ng-ui/metadata.yaml [deleted file]
installers/charm/ng-ui/requirements-test.txt [deleted file]
installers/charm/ng-ui/requirements.txt [deleted file]
installers/charm/ng-ui/src/charm.py [deleted file]
installers/charm/ng-ui/src/pod_spec.py [deleted file]
installers/charm/ng-ui/templates/default.template [deleted file]
installers/charm/ng-ui/tests/__init__.py [deleted file]
installers/charm/ng-ui/tests/test_charm.py [deleted file]
installers/charm/ng-ui/tox.ini [deleted file]
installers/charm/osm-lcm/config.yaml
installers/charm/osm-lcm/lib/charms/data_platform_libs/v0/data_interfaces.py [new file with mode: 0644]
installers/charm/osm-lcm/lib/charms/osm_libs/v0/utils.py
installers/charm/osm-lcm/lib/charms/osm_ro/v0/ro.py
installers/charm/osm-lcm/metadata.yaml
installers/charm/osm-lcm/pyproject.toml
installers/charm/osm-lcm/requirements.txt
installers/charm/osm-lcm/src/charm.py
installers/charm/osm-lcm/tests/integration/test_charm.py
installers/charm/osm-lcm/tests/unit/test_charm.py
installers/charm/osm-lcm/tox.ini
installers/charm/osm-mon/config.yaml
installers/charm/osm-mon/lib/charms/data_platform_libs/v0/data_interfaces.py [new file with mode: 0644]
installers/charm/osm-mon/metadata.yaml
installers/charm/osm-mon/pyproject.toml
installers/charm/osm-mon/requirements.txt
installers/charm/osm-mon/src/charm.py
installers/charm/osm-mon/tests/integration/test_charm.py [new file with mode: 0644]
installers/charm/osm-mon/tests/unit/test_charm.py
installers/charm/osm-mon/tox.ini
installers/charm/osm-nbi/config.yaml
installers/charm/osm-nbi/lib/charms/data_platform_libs/v0/data_interfaces.py [new file with mode: 0644]
installers/charm/osm-nbi/lib/charms/osm_libs/v0/utils.py
installers/charm/osm-nbi/metadata.yaml
installers/charm/osm-nbi/pyproject.toml
installers/charm/osm-nbi/requirements.txt
installers/charm/osm-nbi/src/charm.py
installers/charm/osm-nbi/tests/integration/test_charm.py
installers/charm/osm-nbi/tests/unit/test_charm.py
installers/charm/osm-nbi/tox.ini
installers/charm/osm-ng-ui/pyproject.toml
installers/charm/osm-ng-ui/requirements.txt
installers/charm/osm-ng-ui/src/charm.py
installers/charm/osm-ng-ui/tests/integration/test_charm.py [new file with mode: 0644]
installers/charm/osm-ng-ui/tests/unit/test_charm.py
installers/charm/osm-ng-ui/tox.ini
installers/charm/osm-nglcm/lib/charms/data_platform_libs/v0/data_interfaces.py [new file with mode: 0644]
installers/charm/osm-nglcm/lib/charms/osm_libs/v0/utils.py
installers/charm/osm-nglcm/metadata.yaml
installers/charm/osm-nglcm/requirements.txt
installers/charm/osm-nglcm/src/charm.py
installers/charm/osm-nglcm/src/legacy_interfaces.py [deleted file]
installers/charm/osm-nglcm/tests/unit/test_charm.py
installers/charm/osm-nglcm/tox.ini
installers/charm/osm-pol/config.yaml
installers/charm/osm-pol/lib/charms/data_platform_libs/v0/data_interfaces.py [new file with mode: 0644]
installers/charm/osm-pol/metadata.yaml
installers/charm/osm-pol/pyproject.toml
installers/charm/osm-pol/requirements.txt
installers/charm/osm-pol/src/charm.py
installers/charm/osm-pol/tests/integration/test_charm.py [new file with mode: 0644]
installers/charm/osm-pol/tests/unit/test_charm.py
installers/charm/osm-pol/tox.ini
installers/charm/osm-ro/lib/charms/data_platform_libs/v0/data_interfaces.py [new file with mode: 0644]
installers/charm/osm-ro/metadata.yaml
installers/charm/osm-ro/pyproject.toml
installers/charm/osm-ro/requirements.txt
installers/charm/osm-ro/src/charm.py
installers/charm/osm-ro/tests/integration/test_charm.py
installers/charm/osm-ro/tests/unit/test_charm.py
installers/charm/osm-ro/tox.ini
installers/charm/osm-temporal/lib/charms/osm_libs/v0/utils.py
installers/charm/osm-temporal/src/charm.py
installers/charm/osm-temporal/src/legacy_interfaces.py
installers/charm/osm-temporal/tests/unit/test_charm.py
installers/charm/osm-update-db-operator/.gitignore [new file with mode: 0644]
installers/charm/osm-update-db-operator/.jujuignore [new file with mode: 0644]
installers/charm/osm-update-db-operator/CONTRIBUTING.md [new file with mode: 0644]
installers/charm/osm-update-db-operator/LICENSE [new file with mode: 0644]
installers/charm/osm-update-db-operator/README.md [new file with mode: 0644]
installers/charm/osm-update-db-operator/actions.yaml [new file with mode: 0644]
installers/charm/osm-update-db-operator/charmcraft.yaml [new file with mode: 0644]
installers/charm/osm-update-db-operator/config.yaml [new file with mode: 0644]
installers/charm/osm-update-db-operator/metadata.yaml [new file with mode: 0644]
installers/charm/osm-update-db-operator/pyproject.toml [new file with mode: 0644]
installers/charm/osm-update-db-operator/requirements.txt [new file with mode: 0644]
installers/charm/osm-update-db-operator/src/charm.py [new file with mode: 0755]
installers/charm/osm-update-db-operator/src/db_upgrade.py [new file with mode: 0644]
installers/charm/osm-update-db-operator/tests/integration/test_charm.py [new file with mode: 0644]
installers/charm/osm-update-db-operator/tests/unit/test_charm.py [new file with mode: 0644]
installers/charm/osm-update-db-operator/tests/unit/test_db_upgrade.py [new file with mode: 0644]
installers/charm/osm-update-db-operator/tox.ini [new file with mode: 0644]
installers/charm/pla/.gitignore [deleted file]
installers/charm/pla/.jujuignore [deleted file]
installers/charm/pla/.yamllint.yaml [deleted file]
installers/charm/pla/README.md [deleted file]
installers/charm/pla/charmcraft.yaml [deleted file]
installers/charm/pla/config.yaml [deleted file]
installers/charm/pla/lib/charms/kafka_k8s/v0/kafka.py [deleted file]
installers/charm/pla/metadata.yaml [deleted file]
installers/charm/pla/requirements-test.txt [deleted file]
installers/charm/pla/requirements.txt [deleted file]
installers/charm/pla/src/charm.py [deleted file]
installers/charm/pla/tests/__init__.py [deleted file]
installers/charm/pla/tests/test_charm.py [deleted file]
installers/charm/pla/tox.ini [deleted file]
installers/charm/pol/.gitignore [deleted file]
installers/charm/pol/.jujuignore [deleted file]
installers/charm/pol/.yamllint.yaml [deleted file]
installers/charm/pol/README.md [deleted file]
installers/charm/pol/charmcraft.yaml [deleted file]
installers/charm/pol/config.yaml [deleted file]
installers/charm/pol/lib/charms/kafka_k8s/v0/kafka.py [deleted file]
installers/charm/pol/metadata.yaml [deleted file]
installers/charm/pol/requirements-test.txt [deleted file]
installers/charm/pol/requirements.txt [deleted file]
installers/charm/pol/src/charm.py [deleted file]
installers/charm/pol/src/pod_spec.py [deleted file]
installers/charm/pol/tests/__init__.py [deleted file]
installers/charm/pol/tests/test_charm.py [deleted file]
installers/charm/pol/tests/test_pod_spec.py [deleted file]
installers/charm/pol/tox.ini [deleted file]
installers/charm/release_edge.sh [deleted file]
installers/charm/ro/.gitignore [deleted file]
installers/charm/ro/.jujuignore [deleted file]
installers/charm/ro/.yamllint.yaml [deleted file]
installers/charm/ro/README.md [deleted file]
installers/charm/ro/charmcraft.yaml [deleted file]
installers/charm/ro/config.yaml [deleted file]
installers/charm/ro/lib/charms/kafka_k8s/v0/kafka.py [deleted file]
installers/charm/ro/metadata.yaml [deleted file]
installers/charm/ro/requirements-test.txt [deleted file]
installers/charm/ro/requirements.txt [deleted file]
installers/charm/ro/src/charm.py [deleted file]
installers/charm/ro/src/pod_spec.py [deleted file]
installers/charm/ro/tests/__init__.py [deleted file]
installers/charm/ro/tests/test_charm.py [deleted file]
installers/charm/ro/tests/test_pod_spec.py [deleted file]
installers/charm/ro/tox.ini [deleted file]
installers/charm/update-bundle-revisions.sh [deleted file]
installers/charm/vca-integrator-operator/charmcraft.yaml
installers/charm/vca-integrator-operator/pyproject.toml
installers/charm/vca-integrator-operator/requirements.txt
installers/charm/vca-integrator-operator/tests/integration/test_charm.py
installers/charm/vca-integrator-operator/tox.ini
installers/charmed_install.sh
installers/docker/osm_pods/mon.yaml
installers/docker/osm_pods/ng-mon.yaml [new file with mode: 0644]
installers/docker/osm_pods/ng-prometheus.yaml
installers/docker/osm_pods/pol.yaml
installers/docker/osm_pods/webhook-translator.yaml [new file with mode: 0644]
installers/full_install_osm.sh
installers/helm/values/airflow-values.yaml
installers/helm/values/alertmanager-values.yaml [new file with mode: 0644]
installers/install_juju.sh
installers/install_kubeadm_cluster.sh
installers/install_ngsa.sh
installers/uninstall_osm.sh
jenkins/ci-pipelines/ci_helper.groovy
jenkins/ci-pipelines/ci_stage_2.groovy
jenkins/ci-pipelines/ci_stage_3.groovy
tools/local-build.sh

index 931da3e..dda7a41 100644 (file)
@@ -24,7 +24,7 @@
 #   devops-stages/stage-build.sh
 #
 
-FROM ubuntu:18.04
+FROM ubuntu:20.04
 
 ARG APT_PROXY
 RUN if [ ! -z $APT_PROXY ] ; then \
@@ -37,13 +37,13 @@ RUN DEBIAN_FRONTEND=noninteractive apt-get update && \
         debhelper \
         dh-make \
         git \
-        python3.8 \
+        python3 \
         python3-all \
         python3-dev \
         python3-setuptools
 
-RUN python3 -m easy_install pip==21.0.1
-RUN pip3 install tox==3.22.0
+RUN python3 -m easy_install pip==21.3.1
+RUN pip install tox==3.24.5
 
 ENV LC_ALL C.UTF-8
 ENV LANG C.UTF-8
index ae8f541..af5953a 100755 (executable)
 
 set -eu
 
-if [ $(git diff --name-only origin/$GERRIT_BRANCH -- installers/charm/ |wc -l) -eq 0 ]; then
-    exit 0
-fi
-
 CURRENT_DIR=`pwd`
 
 # Execute tests for charms
 CHARM_PATH="./installers/charm"
-CHARM_NAMES=""
-for charm in $CHARM_NAMES; do
-    cd $CHARM_PATH/$charm
-    TOX_PARALLEL_NO_SPINNER=1 tox --parallel=auto
-    cd $CURRENT_DIR
+NEW_CHARMS_NAMES="osm-lcm osm-mon osm-nbi osm-ng-ui osm-pol osm-ro vca-integrator-operator"
+OLD_CHARMS_NAMES="keystone prometheus grafana"
+for charm in $NEW_CHARMS_NAMES; do
+    if [ $(git diff --name-only "origin/${GERRIT_BRANCH}" -- "installers/charm/${charm}" | wc -l) -ne 0 ]; then
+        echo "Running tox for ${charm}"
+        cd "${CHARM_PATH}/${charm}"
+        TOX_PARALLEL_NO_SPINNER=1 tox -e lint,unit --parallel=auto
+        cd "${CURRENT_DIR}"
+    fi
+done
+for charm in $OLD_CHARMS_NAMES; do
+    if [ $(git diff --name-only "origin/${GERRIT_BRANCH}" -- "installers/charm/${charm}" | wc -l) -ne 0 ]; then
+        echo "Running tox for ${charm}"
+        cd "${CHARM_PATH}/${charm}"
+        TOX_PARALLEL_NO_SPINNER=1 tox --parallel=auto
+        cd "${CURRENT_DIR}"
+    fi
 done
index 2727977..bf72444 100644 (file)
@@ -15,7 +15,7 @@
 # limitations under the License.
 #######################################################################################
 
-FROM apache/airflow:2.3.0-python3.8
+FROM apache/airflow:2.5.2-python3.10
 USER root
 RUN DEBIAN_FRONTEND=noninteractive apt-get --yes update && \
     DEBIAN_FRONTEND=noninteractive apt-get --yes install \
@@ -33,14 +33,10 @@ RUN mkdir /tmp/osm
 RUN dpkg-deb -x osm_common.deb /tmp/osm
 RUN dpkg-deb -x osm_ngsa.deb /tmp/osm
 
-RUN mv /tmp/osm/usr/lib/python3/dist-packages/* /home/airflow/.local/lib/python3.8/site-packages/
+RUN mv /tmp/osm/usr/lib/python3/dist-packages/* /home/airflow/.local/lib/python3.10/site-packages/
 RUN rm -rf /tmp/osm
 
 RUN pip3 install \
-    -r /home/airflow/.local/lib/python3.8/site-packages/osm_common/requirements.txt \
-    -r /home/airflow/.local/lib/python3.8/site-packages/osm_ngsa/requirements.txt
-
-RUN pip3 install \
-    -r /home/airflow/.local/lib/python3.8/site-packages/osm_common/requirements.txt \
-    -r /home/airflow/.local/lib/python3.8/site-packages/osm_ngsa/requirements.txt
+    -r /home/airflow/.local/lib/python3.10/site-packages/osm_common/requirements.txt \
+    -r /home/airflow/.local/lib/python3.10/site-packages/osm_ngsa/requirements.txt
 
diff --git a/docker/MON/scripts/dashboarder-start.sh b/docker/MON/scripts/dashboarder-start.sh
new file mode 100644 (file)
index 0000000..171f75d
--- /dev/null
@@ -0,0 +1,18 @@
+#######################################################################################
+# Copyright ETSI Contributors and Others.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+# implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#######################################################################################
+
+osm-mon-dashboarder
index 228f597..1bdbaf9 100644 (file)
@@ -22,6 +22,8 @@ ENV PROMETHEUS_URL http://prometheus:9090
 ENV MONGODB_URL mongodb://mongo:27017
 ENV PROMETHEUS_CONFIG_FILE /etc/prometheus/prometheus.yml
 ENV PROMETHEUS_BASE_CONFIG_FILE /etc/prometheus_base/prometheus.yml
+ENV PROMETHEUS_ALERTS_FILE /etc/prometheus/osm_alert_rules.yml
+ENV PROMETHEUS_BASE_ALERTS_FILE /etc/prometheus_base/osm_alert_rules.yml
 ENV TARGET_DATABASE osm
 
 WORKDIR /code
index b06f448..36b7b52 100755 (executable)
 # contact: fbravo@whitestack.com
 ##
 
-import os
-import pymongo
-import yaml
 import aiohttp
 import asyncio
+from bson.json_util import dumps
+from bson import ObjectId
 import copy
+from datetime import datetime
 import json
+import os
+import pymongo
 import time
-from bson.json_util import dumps
-from bson import ObjectId
+import yaml
 
 # Env variables
 mongodb_url = os.environ["MONGODB_URL"]
 target_database = os.environ["TARGET_DATABASE"]
 prometheus_config_file = os.environ["PROMETHEUS_CONFIG_FILE"]
 prometheus_base_config_file = os.environ["PROMETHEUS_BASE_CONFIG_FILE"]
+prometheus_alerts_file = os.environ["PROMETHEUS_ALERTS_FILE"]
+prometheus_base_alerts_file = os.environ["PROMETHEUS_BASE_ALERTS_FILE"]
+
 prometheus_url = os.environ["PROMETHEUS_URL"]
 
 
@@ -45,6 +49,10 @@ def get_jobs(client):
     return json.loads(dumps(client[target_database].prometheus_jobs.find({})))
 
 
+def get_alerts(client):
+    return json.loads(dumps(client[target_database].alerts.find({"prometheus_config": {"$exists": True}})))
+
+
 def save_successful_jobs(client, jobs):
     for job in jobs:
         client[target_database].prometheus_jobs.update_one(
@@ -88,6 +96,29 @@ def generate_prometheus_config(prometheus_jobs, config_file_path):
     return config_file_yaml
 
 
+def generate_prometheus_alerts(prometheus_alerts, config_file_path):
+    with open(config_file_path, encoding="utf-8", mode="r") as config_file:
+        config_file_yaml = yaml.safe_load(config_file)
+    if config_file_yaml is None:
+        config_file_yaml = {}
+    if "groups" not in config_file_yaml:
+        config_file_yaml["groups"] = []
+
+    timestamp = datetime.now().strftime("%Y%m%d%H%M%S")
+    group = {
+        "name": f"_osm_alert_rules_{timestamp}_",
+        "rules": [],
+    }
+    for alert in prometheus_alerts:
+        if "prometheus_config" in alert:
+            group["rules"].append(alert["prometheus_config"])
+
+    if group["rules"]:
+        config_file_yaml["groups"].append(group)
+
+    return config_file_yaml
+
+
 async def reload_prometheus_config(prom_url):
     async with aiohttp.ClientSession() as session:
         async with session.post(prom_url + "/-/reload") as resp:
@@ -131,7 +162,7 @@ async def validate_configuration(prom_url, new_config):
 
 async def main_task(client):
     stored_jobs = get_jobs(client)
-    print(f"Jobs detected : {len(stored_jobs):d}")
+    print(f"Jobs detected: {len(stored_jobs):d}")
     generated_prometheus_config = generate_prometheus_config(
         stored_jobs, prometheus_base_config_file
     )
@@ -141,6 +172,20 @@ async def main_task(client):
     print(yaml.safe_dump(generated_prometheus_config))
     config_file.write(yaml.safe_dump(generated_prometheus_config))
     config_file.close()
+
+    if os.path.isfile(prometheus_base_alerts_file):
+        stored_alerts = get_alerts(client)
+        print(f"Alerts read: {len(stored_alerts):d}")
+        generated_prometheus_alerts = generate_prometheus_alerts(
+            stored_alerts, prometheus_base_alerts_file
+        )
+        print(f"Writing new alerts file to {prometheus_alerts_file}")
+        config_file = open(prometheus_alerts_file, "w")
+        config_file.truncate(0)
+        print(yaml.safe_dump(generated_prometheus_alerts))
+        config_file.write(yaml.safe_dump(generated_prometheus_alerts))
+        config_file.close()
+
     print("New config written, updating prometheus")
     update_resp = await reload_prometheus_config(prometheus_url)
     is_valid = await validate_configuration(prometheus_url, generated_prometheus_config)
@@ -161,9 +206,9 @@ async def main():
     # Initial loop. First refresh of prometheus config file
     first_refresh_completed = False
     tries = 1
-    while tries <= 3:
+    while tries <= 3 and first_refresh_completed == False:
         try:
-            print("Refreshing prometheus config file for first time")
+            print("Generating prometheus config files")
             await main_task(client)
             first_refresh_completed = True
         except Exception as error:
@@ -179,23 +224,21 @@ async def main():
     while True:
         try:
             # Needs mongodb in replica mode as this feature relies in OpLog
-            change_stream = client[target_database].prometheus_jobs.watch(
+            change_stream = client[target_database].watch(
                 [
                     {
                         "$match": {
-                            # If you want to modify a particular job,
-                            # delete and insert it again
-                            "operationType": {"$in": ["insert", "delete"]}
+                            "operationType": {"$in": ["insert", "delete"]},
+                            "ns.coll": { "$in": ["prometheus_jobs", "alerts"]},
                         }
                     }
                 ]
             )
 
             # Single thread, no race conditions and ops are queued up in order
-            print("Listening to changes in prometheus jobs collection")
+            print("Listening to changes in prometheus jobs and alerts collections")
             for change in change_stream:
-                print("Change detected, updating prometheus config")
-                print(f"{change}")
+                print("Changes detected, updating prometheus config")
                 await main_task(client)
                 print()
         except Exception as error:
diff --git a/docker/Webhook/Dockerfile b/docker/Webhook/Dockerfile
new file mode 100644 (file)
index 0000000..73e1bd0
--- /dev/null
@@ -0,0 +1,78 @@
+#######################################################################################
+# Copyright ETSI Contributors and Others.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+# implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#######################################################################################
+
+FROM ubuntu:20.04 as INSTALL
+
+ARG APT_PROXY
+RUN if [ -n "$APT_PROXY" ] ; then \
+    echo "Acquire::http::Proxy \"$APT_PROXY\";" > /etc/apt/apt.conf.d/proxy.conf ;\
+    echo "Acquire::https::Proxy \"$APT_PROXY\";" >> /etc/apt/apt.conf.d/proxy.conf ;\
+    fi
+
+RUN DEBIAN_FRONTEND=noninteractive apt-get --yes update && \
+    DEBIAN_FRONTEND=noninteractive apt-get --yes install \
+    gcc=4:9.3.* \
+    python3=3.8.* \
+    python3-dev=3.8.* \
+    python3-pip=20.0.2* \
+    python3-setuptools=45.2.* \
+    curl=7.68.*
+
+ARG PYTHON3_OSM_WEBHOOK_TRANSLATOR_URL
+
+RUN curl $PYTHON3_OSM_WEBHOOK_TRANSLATOR_URL -o osm_webhook_translator.deb
+RUN dpkg -i ./osm_webhook_translator.deb
+
+RUN pip3 install \
+    -r /usr/lib/python3/dist-packages/osm_webhook_translator/requirements.txt
+
+#######################################################################################
+FROM ubuntu:20.04 as FINAL
+
+ARG APT_PROXY
+RUN if [ -n "$APT_PROXY" ] ; then \
+    echo "Acquire::http::Proxy \"$APT_PROXY\";" > /etc/apt/apt.conf.d/proxy.conf ;\
+    echo "Acquire::https::Proxy \"$APT_PROXY\";" >> /etc/apt/apt.conf.d/proxy.conf ;\
+    fi
+
+RUN DEBIAN_FRONTEND=noninteractive apt-get --yes update && \
+    DEBIAN_FRONTEND=noninteractive apt-get --yes install \
+    python3-minimal=3.8.* \
+    && rm -rf /var/lib/apt/lists/*
+
+RUN rm -f /etc/apt/apt.conf.d/proxy.conf
+
+COPY --from=INSTALL /usr/lib/python3/dist-packages /usr/lib/python3/dist-packages
+COPY --from=INSTALL /usr/local/lib/python3.8/dist-packages /usr/local/lib/python3.8/dist-packages
+COPY --from=INSTALL /usr/local/bin/uvicorn /usr/local/bin/uvicorn
+
+# Creating the user for the app
+RUN groupadd -g 1000 appuser && \
+    useradd -u 1000 -g 1000 -d /app appuser && \
+    mkdir -p /app/osm_webhook_translator && \
+    chown -R appuser:appuser /app
+
+WORKDIR /app/osm_webhook_translator
+
+# Changing the security context
+USER appuser
+
+EXPOSE 9998
+
+CMD ["uvicorn", "osm_webhook_translator.main:app", "--host", "0.0.0.0", "--port", "80"]
+
+
diff --git a/docker/mk/Makefile.include b/docker/mk/Makefile.include
deleted file mode 100644 (file)
index 1fd6dcd..0000000
+++ /dev/null
@@ -1,89 +0,0 @@
-#
-#   Copyright 2020 ETSI
-#
-#   Licensed under the Apache License, Version 2.0 (the "License");
-#   you may not use this file except in compliance with the License.
-#   You may obtain a copy of the License at
-#
-#       http://www.apache.org/licenses/LICENSE-2.0
-#
-#   Unless required by applicable law or agreed to in writing, software
-#   distributed under the License is distributed on an "AS IS" BASIS,
-#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-#   See the License for the specific language governing permissions and
-#   limitations under the License.
-#
-TOPDIR=$(shell readlink -f .|sed -e 's/\/docker\/.*//')
-MKINCLUDE=$(TOPDIR)/docker/mk
-MKBUILD=$(TOPDIR)/docker/build
-
-all: build
-
-TAG ?= 6
-
-REPOSITORY_BASE ?= http://osm-download.etsi.org/repository/osm/debian
-RELEASE         ?= ReleaseNINE-daily
-REPOSITORY_KEY  ?= OSM%20ETSI%20Release%20Key.gpg
-REPOSITORY      ?= testing
-NO_CACHE        ?= --no-cache
-DOCKER_REGISTRY     ?= ""
-
-LOWER_MDG = $(shell echo $(MDG) | tr '[:upper:]' '[:lower:]')
-
-CONTAINER_NAME ?= $(LOWER_MDG)
-
-CMD_DOCKER_ARGS ?= -q
-DOCKER_ARGS     = $(CMD_DOCKER_ARGS)
-
-DEPS := MON IM LCM RO common osmclient devops NBI policy-module Keystone N2VC lightui ngui PLA tests Prometheus
-
-DEPS_TARGETS = $(addprefix $(MKBUILD)/.dep_, $(DEPS))
-
-Q=@
-
-$(MKBUILD):
-       $Qmkdir -p $(MKBUILD)
-
-$(MKBUILD)/.dep_policy-module:
-       $Q$(MKINCLUDE)/get_version.sh -r $(REPOSITORY) -R $(RELEASE) -k $(REPOSITORY_KEY) -u $(REPOSITORY_BASE) -m POL -p policy-module > $@
-
-$(MKBUILD)/.dep_lightui:
-       $Q$(MKINCLUDE)/get_version.sh -r $(REPOSITORY) -R $(RELEASE) -k $(REPOSITORY_KEY) -u $(REPOSITORY_BASE) -m LW-UI -p lightui > $@
-
-$(MKBUILD)/.dep_ngui:
-       $Q$(MKINCLUDE)/get_version.sh -r $(REPOSITORY) -R $(RELEASE) -k $(REPOSITORY_KEY) -u $(REPOSITORY_BASE) -m NG-UI -p ngui > $@
-
-$(MKBUILD)/.dep_%:
-       $Q$(MKINCLUDE)/get_version.sh -r $(REPOSITORY) -R $(RELEASE) -k $(REPOSITORY_KEY) -u $(REPOSITORY_BASE) -m $* > $@
-
-build: $(MKBUILD) $(DEPS_TARGETS)
-       $Qdocker build -t opensourcemano/$(LOWER_MDG):$(TAG) \
-                   --build-arg RELEASE=$(RELEASE) \
-                   --build-arg REPOSITORY=$(REPOSITORY) \
-                   --build-arg REPOSITORY_KEY=$(REPOSITORY_KEY) \
-                   --build-arg REPOSITORY_BASE=$(REPOSITORY_BASE) \
-                   --build-arg MON_VERSION==$(shell cat $(MKBUILD)/.dep_MON) \
-                   --build-arg IM_VERSION==$(shell cat $(MKBUILD)/.dep_IM) \
-                   --build-arg RO_VERSION==$(shell cat $(MKBUILD)/.dep_RO) \
-                   --build-arg LCM_VERSION==$(shell cat $(MKBUILD)/.dep_LCM) \
-                   --build-arg COMMON_VERSION==$(shell cat $(MKBUILD)/.dep_common) \
-                   --build-arg OSMCLIENT_VERSION==$(shell cat $(MKBUILD)/.dep_osmclient) \
-                   --build-arg NBI_VERSION==$(shell cat $(MKBUILD)/.dep_NBI) \
-                   --build-arg POL_VERSION==$(shell cat $(MKBUILD)/.dep_policy-module) \
-                   --build-arg PLA_VERSION==$(shell cat $(MKBUILD)/.dep_PLA) \
-                   --build-arg DEVOPS_VERSION==$(shell cat $(MKBUILD)/.dep_devops) \
-                   --build-arg N2VC_VERSION==$(shell cat $(MKBUILD)/.dep_N2VC) \
-                   --build-arg NGUI_VERSION==$(shell cat $(MKBUILD)/.dep_ngui) \
-                   --build-arg NGSA_VERSION==$(shell cat $(MKBUILD)/.dep_ngsa) \
-                   --build-arg TESTS_VERSION==$(shell cat $(MKBUILD)/.dep_tests) \
-                   --build-arg CACHE_DATE==$(shell date -uI) \
-                   $(DOCKER_ARGS) .
-
-clean:
-       rm -f $(MKBUILD)/.dep*
-
-tag:
-       docker tag opensourcemano/$(CONTAINER_NAME):$(INPUT_TAG) $(DOCKER_REGISTRY)opensourcemano/$(LOWER_MDG):$(TAG)
-
-push: tag
-       docker push $(DOCKER_REGISTRY)opensourcemano/$(LOWER_MDG):$(TAG)
diff --git a/installers/charm/build.sh b/installers/charm/build.sh
deleted file mode 100755 (executable)
index 459da13..0000000
+++ /dev/null
@@ -1,28 +0,0 @@
-#!/bin/bash
-# Copyright 2020 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-#     Unless required by applicable law or agreed to in writing, software
-#     distributed under the License is distributed on an "AS IS" BASIS,
-#     WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-#     See the License for the specific language governing permissions and
-#     limitations under the License.
-
-function build() {
-    cd $1 && tox -qe build && cd ..
-}
-
-charms="ro nbi pla pol mon lcm ng-ui grafana prometheus mongodb-exporter kafka-exporter mysqld-exporter"
-if [ -z `which charmcraft` ]; then
-    sudo snap install charmcraft --classic
-fi
-
-for charm_directory in $charms; do
-    build $charm_directory
-done
-wait
\ No newline at end of file
index 08cd281..8fd1e15 100644 (file)
@@ -13,6 +13,7 @@
 #     limitations under the License.
 name: osm-ha
 bundle: kubernetes
+docs: https://discourse.charmhub.io/t/osm-docs-index/8806
 description: |
   **A high-available Charmed OSM cluster**
 
@@ -55,7 +56,7 @@ applications:
       ha-mode: true
   mongodb:
     charm: mongodb-k8s
-    channel: latest/stable
+    channel: latest/edge
     scale: 3
     series: kubernetes
     storage:
index 4718b91..e2c336d 100644 (file)
@@ -13,6 +13,7 @@
 #     limitations under the License.
 name: osm
 bundle: kubernetes
+docs: https://discourse.charmhub.io/t/osm-docs-index/8806
 description: |
   **Single instance Charmed OSM**
 
@@ -53,7 +54,7 @@ applications:
       user: mano
   mongodb:
     charm: mongodb-k8s
-    channel: latest/stable
+    channel: latest/edge
     scale: 1
     series: kubernetes
     storage:
@@ -144,11 +145,11 @@ applications:
       keystone-image: opensourcemano/keystone:testing-daily
   temporal:
     charm: osm-temporal
-    channel: latest/edge/paas
+    channel: latest/edge
     series: focal
     scale: 1
     resources:
-      temporal-server-image: temporalio/auto-setup:1.19.0
+      temporal-server-image: temporalio/auto-setup:1.20.0
 relations:
   - - grafana:prometheus
     - prometheus:prometheus
diff --git a/installers/charm/interfaces/keystone/interface.yaml b/installers/charm/interfaces/keystone/interface.yaml
deleted file mode 100644 (file)
index be1d09b..0000000
+++ /dev/null
@@ -1,16 +0,0 @@
-# Copyright 2020 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-#     Unless required by applicable law or agreed to in writing, software
-#     distributed under the License is distributed on an "AS IS" BASIS,
-#     WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-#     See the License for the specific language governing permissions and
-#     limitations under the License.
-name: keystone
-summary: Keystone Interface
-version: 1
diff --git a/installers/charm/interfaces/keystone/provides.py b/installers/charm/interfaces/keystone/provides.py
deleted file mode 100644 (file)
index bda5d2f..0000000
+++ /dev/null
@@ -1,63 +0,0 @@
-# Copyright 2020 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-#     Unless required by applicable law or agreed to in writing, software
-#     distributed under the License is distributed on an "AS IS" BASIS,
-#     WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-#     See the License for the specific language governing permissions and
-#     limitations under the License.
-from charms.reactive import Endpoint
-from charms.reactive import when
-from charms.reactive import set_flag, clear_flag
-
-
-class KeystoneProvides(Endpoint):
-    @when("endpoint.{endpoint_name}.joined")
-    def _joined(self):
-        set_flag(self.expand_name("{endpoint_name}.joined"))
-
-    @when("endpoint.{endpoint_name}.changed")
-    def _changed(self):
-        set_flag(self.expand_name("{endpoint_name}.ready"))
-
-    @when("endpoint.{endpoint_name}.departed")
-    def _departed(self):
-        set_flag(self.expand_name("{endpoint_name}.departed"))
-        clear_flag(self.expand_name("{endpoint_name}.joined"))
-
-    def publish_info(
-        self,
-        host,
-        port,
-        keystone_db_password,
-        region_id,
-        user_domain_name,
-        project_domain_name,
-        admin_username,
-        admin_password,
-        admin_project_name,
-        username,
-        password,
-        service,
-    ):
-        for relation in self.relations:
-            relation.to_publish["host"] = host
-            relation.to_publish["port"] = port
-            relation.to_publish["keystone_db_password"] = keystone_db_password
-            relation.to_publish["region_id"] = region_id
-            relation.to_publish["user_domain_name"] = user_domain_name
-            relation.to_publish["project_domain_name"] = project_domain_name
-            relation.to_publish["admin_username"] = admin_username
-            relation.to_publish["admin_password"] = admin_password
-            relation.to_publish["admin_project_name"] = admin_project_name
-            relation.to_publish["username"] = username
-            relation.to_publish["password"] = password
-            relation.to_publish["service"] = service
-
-    def mark_complete(self):
-        clear_flag(self.expand_name("{endpoint_name}.joined"))
diff --git a/installers/charm/interfaces/keystone/requires.py b/installers/charm/interfaces/keystone/requires.py
deleted file mode 100644 (file)
index c0d8d47..0000000
+++ /dev/null
@@ -1,72 +0,0 @@
-# Copyright 2020 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-#     Unless required by applicable law or agreed to in writing, software
-#     distributed under the License is distributed on an "AS IS" BASIS,
-#     WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-#     See the License for the specific language governing permissions and
-#     limitations under the License.
-from charms.reactive import Endpoint
-from charms.reactive import when
-from charms.reactive import set_flag, clear_flag
-
-
-class KeystoneRequires(Endpoint):
-    @when("endpoint.{endpoint_name}.joined")
-    def _joined(self):
-        set_flag(self.expand_name("{endpoint_name}.joined"))
-
-    @when("endpoint.{endpoint_name}.changed")
-    def _changed(self):
-        if len(self.keystones()) > 0:
-            set_flag(self.expand_name("{endpoint_name}.ready"))
-        else:
-            clear_flag(self.expand_name("{endpoint_name}.ready"))
-
-    @when("endpoint.{endpoint_name}.departed")
-    def _departed(self):
-        set_flag(self.expand_name("{endpoint_name}.departed"))
-        clear_flag(self.expand_name("{endpoint_name}.joined"))
-        clear_flag(self.expand_name("{endpoint_name}.ready"))
-
-    def keystones(self):
-        """
-        Return Keystone Data:
-        [{
-            'host': <host>,
-            'port': <port>,
-            'keystone_db_password: <keystone_db_password>,
-            'region_id: <region_id>,
-            'admin_username: <admin_username>,
-            'admin_password: <admin_password>,
-            'admin_project_name: <admin_project_name>,
-            'username: <username>,
-            'password: <password>,
-            'service: <service>
-        }]
-        """
-        keystones = []
-        for relation in self.relations:
-            for unit in relation.units:
-                data = {
-                    "host": unit.received["host"],
-                    "port": unit.received["port"],
-                    "keystone_db_password": unit.received["keystone_db_password"],
-                    "region_id": unit.received["region_id"],
-                    "user_domain_name": unit.received["user_domain_name"],
-                    "project_domain_name": unit.received["project_domain_name"],
-                    "admin_username": unit.received["admin_username"],
-                    "admin_password": unit.received["admin_password"],
-                    "admin_project_name": unit.received["admin_project_name"],
-                    "username": unit.received["username"],
-                    "password": unit.received["password"],
-                    "service": unit.received["service"],
-                }
-                if all(data.values()):
-                    keystones.append(data)
-        return keystones
diff --git a/installers/charm/interfaces/osm-nbi/README.md b/installers/charm/interfaces/osm-nbi/README.md
deleted file mode 100644 (file)
index 8fb9523..0000000
+++ /dev/null
@@ -1,63 +0,0 @@
-<!--
-Copyright 2020 Canonical Ltd.
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-    http://www.apache.org/licenses/LICENSE-2.0
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License. -->
-
-# Overview
-
-This interface layer handles communication between Mongodb and its clients.
-
-## Usage
-
-### Provides
-
-To implement this relation to offer an nbi:
-
-In your charm's metadata.yaml:
-
-```yaml
-provides:
-    nbi:
-        interface: osm-nbi
-```
-
-reactive/mynbi.py:
-
-```python
-@when('nbi.joined')
-def send_config(nbi):
-    nbi.send_connection(
-        unit_get('private-address'),
-        get_nbi_port()
-    )
-```
-
-### Requires
-
-If you would like to use an nbi from your charm:
-
-metadata.yaml:
-
-```yaml
-requires:
-    nbi:
-        interface: osm-nbi
-```
-
-reactive/mycharm.py:
-
-```python
-@when('nbi.ready')
-def nbi_ready():
-    nbi = endpoint_from_flag('nbi.ready')
-    if nbi:
-        for unit in nbi.nbis():
-            add_nbi(unit['host'], unit['port'])
-```
diff --git a/installers/charm/interfaces/osm-nbi/copyright b/installers/charm/interfaces/osm-nbi/copyright
deleted file mode 100644 (file)
index dd9405e..0000000
+++ /dev/null
@@ -1,16 +0,0 @@
-Format: http://dep.debian.net/deps/dep5/
-
-Files: *
-Copyright: Copyright 2020, Canonical Ltd., All Rights Reserved.
-License: Apache License 2.0
- Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License.
- You may obtain a copy of the License at
- .
-     http://www.apache.org/licenses/LICENSE-2.0
- .
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
\ No newline at end of file
diff --git a/installers/charm/interfaces/osm-nbi/interface.yaml b/installers/charm/interfaces/osm-nbi/interface.yaml
deleted file mode 100644 (file)
index ec8ee86..0000000
+++ /dev/null
@@ -1,16 +0,0 @@
-# Copyright 2020 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-#     Unless required by applicable law or agreed to in writing, software
-#     distributed under the License is distributed on an "AS IS" BASIS,
-#     WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-#     See the License for the specific language governing permissions and
-#     limitations under the License.
-name: osm-nbi
-summary: Interface for relating to a OSM Northbound Interface
-maintainer: '"Adam Israel" <adam@adamisrael.com>'
diff --git a/installers/charm/interfaces/osm-nbi/provides.py b/installers/charm/interfaces/osm-nbi/provides.py
deleted file mode 100644 (file)
index 7ff3199..0000000
+++ /dev/null
@@ -1,44 +0,0 @@
-#!/usr/bin/python
-# Copyright 2020 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-#     Unless required by applicable law or agreed to in writing, software
-#     distributed under the License is distributed on an "AS IS" BASIS,
-#     WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-#     See the License for the specific language governing permissions and
-#     limitations under the License.
-
-from charms.reactive import RelationBase
-from charms.reactive import hook
-from charms.reactive import scopes
-
-
-class OsmNBIProvides(RelationBase):
-    scope = scopes.GLOBAL
-
-    @hook("{provides:osm-nbi}-relation-joined")
-    def joined(self):
-        self.set_state("{relation_name}.joined")
-
-    @hook("{provides:osm-nbi}-relation-changed")
-    def changed(self):
-        self.set_state("{relation_name}.ready")
-
-    @hook("{provides:osm-nbi}-relation-{broken,departed}")
-    def broken_departed(self):
-        self.remove_state("{relation_name}.ready")
-        self.remove_state("{relation_name}.joined")
-
-    @hook("{provides:osm-nbi}-relation-broken")
-    def broken(self):
-        self.set_state("{relation_name}.removed")
-
-    def send_connection(self, host, port=9999):
-        conv = self.conversation()
-        conv.set_remote("host", host)
-        conv.set_remote("port", port)
diff --git a/installers/charm/interfaces/osm-nbi/requires.py b/installers/charm/interfaces/osm-nbi/requires.py
deleted file mode 100644 (file)
index a5e8e29..0000000
+++ /dev/null
@@ -1,56 +0,0 @@
-# Copyright 2020 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-#     Unless required by applicable law or agreed to in writing, software
-#     distributed under the License is distributed on an "AS IS" BASIS,
-#     WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-#     See the License for the specific language governing permissions and
-#     limitations under the License.
-
-from charms.reactive import RelationBase
-from charms.reactive import hook
-from charms.reactive import scopes
-
-
-class OsmNBIRequires(RelationBase):
-    scope = scopes.GLOBAL
-
-    @hook("{requires:osm-nbi}-relation-joined")
-    def joined(self):
-        conv = self.conversation()
-        conv.set_state("{relation_name}.joined")
-
-    @hook("{requires:osm-nbi}-relation-changed")
-    def changed(self):
-        conv = self.conversation()
-        if self.nbis():
-            conv.set_state("{relation_name}.ready")
-        else:
-            conv.remove_state("{relation_name}.ready")
-
-    @hook("{requires:osm-nbi}-relation-departed")
-    def departed(self):
-        conv = self.conversation()
-        conv.remove_state("{relation_name}.ready")
-        conv.remove_state("{relation_name}.joined")
-
-    def nbis(self):
-        """Return the NBI's host and port.
-
-        [{
-            'host': <host>,
-            'port': <port>,
-        }]
-        """
-        nbis = []
-        for conv in self.conversations():
-            port = conv.get_remote("port")
-            host = conv.get_remote("host") or conv.get_remote("private-address")
-            if host and port:
-                nbis.append({"host": host, "port": port})
-        return nbis
diff --git a/installers/charm/interfaces/osm-ro/README.md b/installers/charm/interfaces/osm-ro/README.md
deleted file mode 100644 (file)
index eb6413a..0000000
+++ /dev/null
@@ -1,63 +0,0 @@
-<!--
-Copyright 2020 Canonical Ltd.
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-    http://www.apache.org/licenses/LICENSE-2.0
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License. -->
-
-# Overview
-
-This interface layer handles communication between OSM's RO and its clients.
-
-## Usage
-
-### Provides
-
-To implement this relation to offer an ro:
-
-In your charm's metadata.yaml:
-
-```yaml
-provides:
-    ro:
-        interface: osm-ro
-```
-
-reactive/myro.py:
-
-```python
-@when('ro.joined')
-def send_config(ro):
-    ro.send_connection(
-        unit_get('private-address'),
-        get_ro_port()
-    )
-```
-
-### Requires
-
-If you would like to use a rodb from your charm:
-
-metadata.yaml:
-
-```yaml
-requires:
-    ro:
-        interface: osm-ro
-```
-
-reactive/mycharm.py:
-
-```python
-@when('ro.ready')
-def ro_ready():
-    ro = endpoint_from_flag('ro.ready')
-    if ro:
-        for unit in ro.ros():
-            add_ro(unit['host'], unit['port'])
-```
diff --git a/installers/charm/interfaces/osm-ro/copyright b/installers/charm/interfaces/osm-ro/copyright
deleted file mode 100644 (file)
index 9270d6c..0000000
+++ /dev/null
@@ -1,16 +0,0 @@
-Format: http://dep.debian.net/deps/dep5/
-
-Files: *
-Copyright: Copyright 2020, Canonical Ltd., All Rights Reserved.
-License: Apache License 2.0
- Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License.
- You may obtain a copy of the License at
- .
-     http://www.apache.org/licenses/LICENSE-2.0
- .
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
diff --git a/installers/charm/interfaces/osm-ro/interface.yaml b/installers/charm/interfaces/osm-ro/interface.yaml
deleted file mode 100644 (file)
index 9a12872..0000000
+++ /dev/null
@@ -1,16 +0,0 @@
-# Copyright 2020 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-#     Unless required by applicable law or agreed to in writing, software
-#     distributed under the License is distributed on an "AS IS" BASIS,
-#     WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-#     See the License for the specific language governing permissions and
-#     limitations under the License.
-name: osm-ro
-summary: Interface for relating to a OSM Resource Orchestrator
-maintainer: '"Adam Israel" <adam@adamisrael.com>'
diff --git a/installers/charm/interfaces/osm-ro/provides.py b/installers/charm/interfaces/osm-ro/provides.py
deleted file mode 100644 (file)
index f577319..0000000
+++ /dev/null
@@ -1,44 +0,0 @@
-#!/usr/bin/python
-# Copyright 2020 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-#     Unless required by applicable law or agreed to in writing, software
-#     distributed under the License is distributed on an "AS IS" BASIS,
-#     WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-#     See the License for the specific language governing permissions and
-#     limitations under the License.
-
-from charms.reactive import RelationBase
-from charms.reactive import hook
-from charms.reactive import scopes
-
-
-class OsmROProvides(RelationBase):
-    scope = scopes.GLOBAL
-
-    @hook("{provides:osm-ro}-relation-joined")
-    def joined(self):
-        self.set_state("{relation_name}.joined")
-
-    @hook("{provides:osm-ro}-relation-changed")
-    def changed(self):
-        self.set_state("{relation_name}.ready")
-
-    @hook("{provides:osm-ro}-relation-{broken,departed}")
-    def broken_departed(self):
-        self.remove_state("{relation_name}.ready")
-        self.remove_state("{relation_name}.joined")
-
-    @hook("{provides:osm-ro}-relation-broken")
-    def broken(self):
-        self.set_state("{relation_name}.removed")
-
-    def send_connection(self, host, port=9090):
-        conv = self.conversation()
-        conv.set_remote("host", host)
-        conv.set_remote("port", port)
diff --git a/installers/charm/interfaces/osm-ro/requires.py b/installers/charm/interfaces/osm-ro/requires.py
deleted file mode 100644 (file)
index fc8f0f4..0000000
+++ /dev/null
@@ -1,56 +0,0 @@
-# Copyright 2020 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-#     Unless required by applicable law or agreed to in writing, software
-#     distributed under the License is distributed on an "AS IS" BASIS,
-#     WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-#     See the License for the specific language governing permissions and
-#     limitations under the License.
-
-from charms.reactive import RelationBase
-from charms.reactive import hook
-from charms.reactive import scopes
-
-
-class OsmRORequires(RelationBase):
-    scope = scopes.GLOBAL
-
-    @hook("{requires:osm-ro}-relation-joined")
-    def joined(self):
-        conv = self.conversation()
-        conv.set_state("{relation_name}.joined")
-
-    @hook("{requires:osm-ro}-relation-changed")
-    def changed(self):
-        conv = self.conversation()
-        if self.ros():
-            conv.set_state("{relation_name}.ready")
-        else:
-            conv.remove_state("{relation_name}.ready")
-
-    @hook("{requires:osm-ro}-relation-departed")
-    def departed(self):
-        conv = self.conversation()
-        conv.remove_state("{relation_name}.ready")
-        conv.remove_state("{relation_name}.joined")
-
-    def ros(self):
-        """Return the NBI's host and port.
-
-        [{
-            'host': <host>,
-            'port': <port>,
-        }]
-        """
-        ros = []
-        for conv in self.conversations():
-            port = conv.get_remote("port")
-            host = conv.get_remote("host") or conv.get_remote("private-address")
-            if host and port:
-                ros.append({"host": host, "port": port})
-        return ros
index d0d4a5b..16cf0f4 100644 (file)
@@ -50,7 +50,3 @@ ignore = ["W503", "E501", "D107"]
 # D100, D101, D102, D103: Ignore missing docstrings in tests
 per-file-ignores = ["tests/*:D100,D101,D102,D103,D104"]
 docstring-convention = "google"
-# Check for properly formatted copyright header in each file
-copyright-check = "True"
-copyright-author = "Canonical Ltd."
-copyright-regexp = "Copyright\\s\\d{4}([-,]\\d{4})*\\s+%(author)s"
index cb303a3..398d4ad 100644 (file)
@@ -17,7 +17,7 @@
 #
 # To get in touch with the maintainers, please contact:
 # osm-charmers@lists.launchpad.net
-ops >= 1.2.0
+ops < 2.2
 lightkube
 lightkube-models
 # git+https://github.com/charmed-osm/config-validator/
index 275137c..0268da8 100644 (file)
@@ -53,7 +53,6 @@ deps =
     black
     flake8
     flake8-docstrings
-    flake8-copyright
     flake8-builtins
     pyproject-flake8
     pep8-naming
diff --git a/installers/charm/layers/osm-common/README.md b/installers/charm/layers/osm-common/README.md
deleted file mode 100644 (file)
index c55b97b..0000000
+++ /dev/null
@@ -1,17 +0,0 @@
-<!-- Copyright 2020 Canonical Ltd.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
-    http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License. -->
-
-# README
-
-WIP. Layer to share common functionality to write/deploy k8s charms for OSM demo
diff --git a/installers/charm/layers/osm-common/layer.yaml b/installers/charm/layers/osm-common/layer.yaml
deleted file mode 100644 (file)
index 6e8379a..0000000
+++ /dev/null
@@ -1,13 +0,0 @@
-# Copyright 2020 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-#     Unless required by applicable law or agreed to in writing, software
-#     distributed under the License is distributed on an "AS IS" BASIS,
-#     WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-#     See the License for the specific language governing permissions and
-#     limitations under the License.
\ No newline at end of file
diff --git a/installers/charm/layers/osm-common/lib/charms/osm/k8s.py b/installers/charm/layers/osm-common/lib/charms/osm/k8s.py
deleted file mode 100644 (file)
index 9735517..0000000
+++ /dev/null
@@ -1,76 +0,0 @@
-# Copyright 2020 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-#     Unless required by applicable law or agreed to in writing, software
-#     distributed under the License is distributed on an "AS IS" BASIS,
-#     WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-#     See the License for the specific language governing permissions and
-#     limitations under the License.
-
-from charmhelpers.core.hookenv import (
-    network_get,
-    relation_id,
-    log,
-)
-
-
-def get_service_ip(endpoint):
-    try:
-        info = network_get(endpoint, relation_id())
-        if 'ingress-addresses' in info:
-            addr = info['ingress-addresses'][0]
-            if len(addr):
-                return addr
-        else:
-            log("No ingress-addresses: {}".format(info))
-    except Exception as e:
-        log("Caught exception checking for service IP: {}".format(e))
-
-    return None
-
-
-def is_pod_up(endpoint):
-    """Check to see if the pod of a relation is up.
-
-    application-vimdb: 19:29:10 INFO unit.vimdb/0.juju-log network info
-
-    In the example below:
-    - 10.1.1.105 is the address of the application pod.
-    - 10.152.183.199 is the service cluster ip
-
-    {
-        'bind-addresses': [{
-            'macaddress': '',
-            'interfacename': '',
-            'addresses': [{
-                'hostname': '',
-                'address': '10.1.1.105',
-                'cidr': ''
-            }]
-        }],
-        'egress-subnets': [
-            '10.152.183.199/32'
-        ],
-        'ingress-addresses': [
-            '10.152.183.199',
-            '10.1.1.105'
-        ]
-    }
-    """
-    try:
-        info = network_get(endpoint, relation_id())
-
-        # Check to see if the pod has been assigned its internal and
-        # external IPs
-        for ingress in info['ingress-addresses']:
-            if len(ingress) == 0:
-                return False
-    except:
-        return False
-
-    return True
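The removed `is_pod_up()` helper above reduces to a single predicate: the pod counts as up only when every entry in the `ingress-addresses` list returned by `network_get()` is non-empty. A minimal stand-in (the name `pod_is_up` and the inlined payload are illustrative, taken from the docstring's sample) sketches that check outside the charmhelpers framework:

```python
# Sample network_get() payload, copied from the deleted docstring above.
info = {
    "bind-addresses": [
        {
            "macaddress": "",
            "interfacename": "",
            "addresses": [{"hostname": "", "address": "10.1.1.105", "cidr": ""}],
        }
    ],
    "egress-subnets": ["10.152.183.199/32"],
    "ingress-addresses": ["10.152.183.199", "10.1.1.105"],
}


def pod_is_up(network_info: dict) -> bool:
    """Mirror the deleted loop: any empty ingress address means 'not up yet'."""
    try:
        return all(len(addr) > 0 for addr in network_info["ingress-addresses"])
    except (KeyError, TypeError):
        # Matches the original's broad except: treat malformed data as "down".
        return False


print(pod_is_up(info))  # → True
```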
diff --git a/installers/charm/layers/osm-common/metadata.yaml b/installers/charm/layers/osm-common/metadata.yaml
deleted file mode 100644 (file)
index 6e8379a..0000000
+++ /dev/null
@@ -1,13 +0,0 @@
-# Copyright 2020 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-#     Unless required by applicable law or agreed to in writing, software
-#     distributed under the License is distributed on an "AS IS" BASIS,
-#     WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-#     See the License for the specific language governing permissions and
-#     limitations under the License.
\ No newline at end of file
diff --git a/installers/charm/layers/osm-common/reactive/osm_common.py b/installers/charm/layers/osm-common/reactive/osm_common.py
deleted file mode 100644 (file)
index 6e8379a..0000000
+++ /dev/null
@@ -1,13 +0,0 @@
-# Copyright 2020 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-#     Unless required by applicable law or agreed to in writing, software
-#     distributed under the License is distributed on an "AS IS" BASIS,
-#     WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-#     See the License for the specific language governing permissions and
-#     limitations under the License.
\ No newline at end of file
diff --git a/installers/charm/lcm/.gitignore b/installers/charm/lcm/.gitignore
deleted file mode 100644 (file)
index 2885df2..0000000
+++ /dev/null
@@ -1,30 +0,0 @@
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-venv
-.vscode
-build
-*.charm
-.coverage
-coverage.xml
-.stestr
-cover
-release
\ No newline at end of file
diff --git a/installers/charm/lcm/.jujuignore b/installers/charm/lcm/.jujuignore
deleted file mode 100644 (file)
index 3ae3e7d..0000000
+++ /dev/null
@@ -1,34 +0,0 @@
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-venv
-.vscode
-build
-*.charm
-.coverage
-coverage.xml
-.gitignore
-.stestr
-cover
-release
-tests/
-requirements*
-tox.ini
diff --git a/installers/charm/lcm/.yamllint.yaml b/installers/charm/lcm/.yamllint.yaml
deleted file mode 100644 (file)
index d71fb69..0000000
+++ /dev/null
@@ -1,34 +0,0 @@
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
----
-extends: default
-
-yaml-files:
-  - "*.yaml"
-  - "*.yml"
-  - ".yamllint"
-ignore: |
-  .tox
-  cover/
-  build/
-  venv
-  release/
diff --git a/installers/charm/lcm/README.md b/installers/charm/lcm/README.md
deleted file mode 100644 (file)
index 1a6cd74..0000000
+++ /dev/null
@@ -1,23 +0,0 @@
-<!-- Copyright 2020 Canonical Ltd.
-
-Licensed under the Apache License, Version 2.0 (the "License"); you may
-not use this file except in compliance with the License. You may obtain
-a copy of the License at
-
-        http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-License for the specific language governing permissions and limitations
-under the License.
-
-For those usages not covered by the Apache License, Version 2.0 please
-contact: legal@canonical.com
-
-To get in touch with the maintainers, please contact:
-osm-charmers@lists.launchpad.net -->
-
-# LCM operator Charm for Kubernetes
-
-## Requirements
diff --git a/installers/charm/lcm/charmcraft.yaml b/installers/charm/lcm/charmcraft.yaml
deleted file mode 100644 (file)
index 0a285a9..0000000
+++ /dev/null
@@ -1,37 +0,0 @@
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-type: charm
-bases:
-  - build-on:
-      - name: ubuntu
-        channel: "20.04"
-        architectures: ["amd64"]
-    run-on:
-      - name: ubuntu
-        channel: "20.04"
-        architectures:
-          - amd64
-          - aarch64
-          - arm64
-parts:
-  charm:
-    build-packages: [git]
diff --git a/installers/charm/lcm/config.yaml b/installers/charm/lcm/config.yaml
deleted file mode 100644 (file)
index 709a8ca..0000000
+++ /dev/null
@@ -1,318 +0,0 @@
-# Copyright 2020 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-options:
-  vca_host:
-    type: string
-    description: "The VCA host."
-  vca_port:
-    type: int
-    description: "The VCA port."
-  vca_user:
-    type: string
-    description: "The VCA user name."
-  vca_secret:
-    type: string
-    description: "The VCA user secret."
-  vca_pubkey:
-    type: string
-    description: "The VCA public key."
-  vca_cacert:
-    type: string
-    description: "The VCA cacert."
-  vca_apiproxy:
-    type: string
-    description: "The VCA api proxy (native charms)"
-  vca_cloud:
-    type: string
-    description: "The VCA lxd cloud name"
-  vca_k8s_cloud:
-    type: string
-    description: "The VCA K8s cloud name"
-  database_commonkey:
-    description: Database common key
-    type: string
-    default: osm
-  mongodb_uri:
-    type: string
-    description: MongoDB URI (external database)
-  log_level:
-    description: "Log Level"
-    type: string
-    default: "INFO"
-  vca_model_config_agent_metadata_url:
-    description: The URL of the private stream.
-    type: string
-  vca_model_config_agent_stream:
-    description: |
-      The stream to use for deploy/upgrades of agents.
-      See additional info below.
-    type: string
-  vca_model_config_apt_ftp_proxy:
-    description: The APT FTP proxy for the model.
-    type: string
-  vca_model_config_apt_http_proxy:
-    description: The APT HTTP proxy for the model.
-    type: string
-  vca_model_config_apt_https_proxy:
-    description: The APT HTTPS proxy for the model.
-    type: string
-  vca_model_config_apt_mirror:
-    description: The APT mirror for the model.
-    type: string
-  vca_model_config_apt_no_proxy:
-    description: The APT no proxy for the model.
-    type: string
-  vca_model_config_automatically_retry_hooks:
-    description: Set the policy on retrying failed hooks.
-    type: boolean
-  vca_model_config_backup_dir:
-    description: Backup directory
-    type: string
-  vca_model_config_cloudinit_userdata:
-    description: Cloudinit userdata
-    type: string
-  vca_model_config_container_image_metadata_url:
-    description: |
-      Corresponds to 'image-metadata-url' (see below) for cloud-hosted
-      KVM guests or LXD containers. Not needed for the localhost cloud.
-    type: string
-  vca_model_config_container_image_stream:
-    description: |
-      Corresponds to 'image-stream' (see below) for cloud-hosted KVM
-      guests or LXD containers. Not needed for the localhost cloud.
-    type: string
-  vca_model_config_container_inherit_properties:
-    description: |
-      Set parameters to be inherited from a machine to its hosted
-      containers (KVM or LXD).
-    type: string
-  vca_model_config_container_networking_method:
-    description: |
-      The FAN networking mode to use. Default values can be provider-specific.
-    type: string
-  vca_model_config_default_series:
-    description: The default series of Ubuntu to use for deploying charms.
-    type: string
-  vca_model_config_default_space:
-    description: |
-      The space used as the default binding when deploying charms.
-      Will be "alpha" by default.
-    type: string
-  vca_model_config_development:
-    description: Set whether the model is in development mode.
-    type: boolean
-  vca_model_config_disable_network_management:
-    description: |
-      Set whether to give network control to the provider instead
-      of Juju controlling configuration.
-    type: boolean
-  vca_model_config_egress_subnets:
-    description: Egress subnets
-    type: string
-  vca_model_config_enable_os_refresh_update:
-    description: |
-      Set whether newly provisioned instances should run their
-      respective OS's update capability.
-    type: boolean
-  vca_model_config_enable_os_upgrade:
-    description: |
-      Set whether newly provisioned instances should run their
-      respective OS's upgrade capability.
-    type: boolean
-  vca_model_config_fan_config:
-    description: |
-      The FAN overlay and underlay networks in
-      CIDR notation (space-separated).
-    type: string
-  vca_model_config_firewall_mode:
-    description: The mode to use for network firewalling.
-    type: string
-  vca_model_config_ftp_proxy:
-    description: |
-      The FTP proxy value to configure on instances,
-      in the FTP_PROXY environment variable.
-    type: string
-  vca_model_config_http_proxy:
-    description: |
-      The HTTP proxy value to configure on instances,
-      in the HTTP_PROXY environment variable.
-    type: string
-  vca_model_config_https_proxy:
-    description: |
-      The HTTPS proxy value to configure on instances,
-      in the HTTPS_PROXY environment variable.
-    type: string
-  vca_model_config_ignore_machine_addresses:
-    description: |
-      When true, the machine worker will not look up
-      or discover any machine addresses.
-    type: boolean
-  vca_model_config_image_metadata_url:
-    description: |
-      The URL at which the metadata used to locate
-      OS image ids is located.
-    type: string
-  vca_model_config_image_stream:
-    description: |
-      The simplestreams stream used to identify which image
-      ids to search when starting an instance.
-    type: string
-  vca_model_config_juju_ftp_proxy:
-    description: The charm-centric FTP proxy value.
-    type: string
-  vca_model_config_juju_http_proxy:
-    description: The charm-centric HTTP proxy value.
-    type: string
-  vca_model_config_juju_https_proxy:
-    description: The charm-centric HTTPS proxy value.
-    type: string
-  vca_model_config_juju_no_proxy:
-    description: The charm-centric no-proxy value.
-    type: string
-  vca_model_config_logforward_enabled:
-    description: Set whether the log forward function is enabled.
-    type: boolean
-  vca_model_config_logging_config:
-    description: |
-      The configuration string to use when configuring Juju agent logging
-    type: string
-  vca_model_config_lxd_snap_channel:
-    description: LXD snap channel
-    type: string
-  vca_model_config_max_action_results_age:
-    description: The maximum age for status action results entries
-    type: string
-  vca_model_config_max_action_results_size:
-    description: The maximum size for status action results entries
-    type: string
-  vca_model_config_max_status_history_age:
-    description: |
-      The maximum age for status history entries before they are pruned,
-      in a human-readable time format.
-    type: string
-  vca_model_config_max_status_history_size:
-    description: |
-      The maximum size for the status history collection,
-      in human-readable memory format.
-    type: string
-  vca_model_config_net_bond_reconfigure_delay:
-    description: Net bond reconfigure delay
-    type: int
-  vca_model_config_no_proxy:
-    description: List of domain addresses not to be proxied (comma-separated).
-    type: string
-  vca_model_config_provisioner_harvest_mode:
-    description: Set what to do with unknown machines.
-    type: string
-  vca_model_config_proxy_ssh:
-    description: |
-      Set whether SSH commands should be proxied through the API server.
-    type: boolean
-  vca_model_config_snap_http_proxy:
-    description: The snap-centric HTTP proxy value.
-    type: string
-  vca_model_config_snap_https_proxy:
-    description: The snap-centric HTTPS proxy value.
-    type: string
-  vca_model_config_snap_store_assertions:
-    description: |
-      The collection of snap store assertions.
-      Each entry should contain the snap store ID.
-    type: string
-  vca_model_config_snap_store_proxy:
-    description: The snap store ID.
-    type: string
-  vca_model_config_snap_store_proxy_url:
-    description: The snap store proxy url
-    type: string
-  vca_model_config_ssl_hostname_verification:
-    description: Set whether SSL hostname verification is enabled.
-    type: boolean
-  vca_model_config_test_mode:
-    description: |
-      Set whether the model is intended for testing.
-      If true, accessing the charm store does not affect
-      statistical data of the store.
-    type: boolean
-  vca_model_config_transmit_vendor_metrics:
-    description: |
-      Set whether the controller will send metrics collected from
-      this model for use in anonymized aggregate analytics.
-    type: boolean
-  vca_model_config_update_status_hook_interval:
-    description: |
-      The run frequency of the update-status hook.
-      The value has a random +/- 20% offset applied to avoid hooks
-      for all units firing at once. Value change only honoured
-      during controller and model creation
-      (bootstrap --config and add-model --config).
-    type: string
-  vca_stablerepourl:
-    description: Stable repository URL for Helm charts
-    type: string
-    default: https://charts.helm.sh/stable
-  vca_helm_ca_certs:
-    description: CA certificates to validate access to Helm repository
-    type: string
-    default: ""
-  image_pull_policy:
-    type: string
-    description: |
-      ImagePullPolicy configuration for the pod.
-      Possible values: always, ifnotpresent, never
-    default: always
-  debug_mode:
-    description: |
-      If true, debug mode is activated. It means that the service will not run,
-      and instead, the command for the container will be a `sleep infinity`.
-      Note: If enabled, security_context will be disabled.
-    type: boolean
-    default: false
-  debug_pubkey:
-    description: |
-      Public SSH key that will be injected to the application pod.
-    type: string
-  debug_lcm_local_path:
-    description: |
-      Local full path to the LCM project.
-
-      The path will be mounted to the docker image,
-      which means changes during the debugging will be saved in your local path.
-    type: string
-  debug_n2vc_local_path:
-    description: |
-      Local full path to the N2VC project.
-
-      The path will be mounted to the docker image,
-      which means changes during the debugging will be saved in your local path.
-    type: string
-  debug_common_local_path:
-    description: |
-      Local full path to the COMMON project.
-
-      The path will be mounted to the docker image,
-      which means changes during the debugging will be saved in your local path.
-    type: string
-  security_context:
-    description: Enables the security context of the pods
-    type: boolean
-    default: false
diff --git a/installers/charm/lcm/lib/charms/kafka_k8s/v0/kafka.py b/installers/charm/lcm/lib/charms/kafka_k8s/v0/kafka.py
deleted file mode 100644 (file)
index 1baf9a8..0000000
+++ /dev/null
@@ -1,207 +0,0 @@
-# Copyright 2022 Canonical Ltd.
-# See LICENSE file for licensing details.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-# implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""Kafka library.
-
-This [library](https://juju.is/docs/sdk/libraries) implements both sides of the
-`kafka` [interface](https://juju.is/docs/sdk/relations).
-
-The *provider* side of this interface is implemented by the
-[kafka-k8s Charmed Operator](https://charmhub.io/kafka-k8s).
-
-Any Charmed Operator that *requires* Kafka for providing its
-service should implement the *requirer* side of this interface.
-
-In a nutshell using this library to implement a Charmed Operator *requiring*
-Kafka would look like
-
-```
-$ charmcraft fetch-lib charms.kafka_k8s.v0.kafka
-```
-
-`metadata.yaml`:
-
-```
-requires:
-  kafka:
-    interface: kafka
-    limit: 1
-```
-
-`src/charm.py`:
-
-```
-from charms.kafka_k8s.v0.kafka import KafkaEvents, KafkaRequires
-from ops.charm import CharmBase
-
-
-class MyCharm(CharmBase):
-
-    on = KafkaEvents()
-
-    def __init__(self, *args):
-        super().__init__(*args)
-        self.kafka = KafkaRequires(self)
-        self.framework.observe(
-            self.on.kafka_available,
-            self._on_kafka_available,
-        )
-        self.framework.observe(
-            self.on.kafka_broken,
-            self._on_kafka_broken,
-        )
-
-    def _on_kafka_available(self, event):
-        # Get Kafka host and port
-        host: str = self.kafka.host
-        port: int = self.kafka.port
-        # host => "kafka-k8s"
-        # port => 9092
-
-    def _on_kafka_broken(self, event):
-        # Stop service
-        # ...
-        self.unit.status = BlockedStatus("need kafka relation")
-```
-
-You can file bugs
-[here](https://github.com/charmed-osm/kafka-k8s-operator/issues)!
-"""
-
-from typing import Optional
-
-from ops.charm import CharmBase, CharmEvents
-from ops.framework import EventBase, EventSource, Object
-
-# The unique Charmhub library identifier, never change it
-from ops.model import Relation
-
-LIBID = "eacc8c85082347c9aae740e0220b8376"
-
-# Increment this major API version when introducing breaking changes
-LIBAPI = 0
-
-# Increment this PATCH version before using `charmcraft publish-lib` or reset
-# to 0 if you are raising the major API version
-LIBPATCH = 3
-
-
-KAFKA_HOST_APP_KEY = "host"
-KAFKA_PORT_APP_KEY = "port"
-
-
-class _KafkaAvailableEvent(EventBase):
-    """Event emitted when Kafka is available."""
-
-
-class _KafkaBrokenEvent(EventBase):
-    """Event emitted when Kafka relation is broken."""
-
-
-class KafkaEvents(CharmEvents):
-    """Kafka events.
-
-    This class defines the events that Kafka can emit.
-
-    Events:
-        kafka_available (_KafkaAvailableEvent)
-    """
-
-    kafka_available = EventSource(_KafkaAvailableEvent)
-    kafka_broken = EventSource(_KafkaBrokenEvent)
-
-
-class KafkaRequires(Object):
-    """Requires-side of the Kafka relation."""
-
-    def __init__(self, charm: CharmBase, endpoint_name: str = "kafka") -> None:
-        super().__init__(charm, endpoint_name)
-        self.charm = charm
-        self._endpoint_name = endpoint_name
-
-        # Observe relation events
-        event_observe_mapping = {
-            charm.on[self._endpoint_name].relation_changed: self._on_relation_changed,
-            charm.on[self._endpoint_name].relation_broken: self._on_relation_broken,
-        }
-        for event, observer in event_observe_mapping.items():
-            self.framework.observe(event, observer)
-
-    def _on_relation_changed(self, event) -> None:
-        if event.relation.app and all(
-            key in event.relation.data[event.relation.app]
-            for key in (KAFKA_HOST_APP_KEY, KAFKA_PORT_APP_KEY)
-        ):
-            self.charm.on.kafka_available.emit()
-
-    def _on_relation_broken(self, _) -> None:
-        self.charm.on.kafka_broken.emit()
-
-    @property
-    def host(self) -> str:
-        relation: Relation = self.model.get_relation(self._endpoint_name)
-        return (
-            relation.data[relation.app].get(KAFKA_HOST_APP_KEY)
-            if relation and relation.app
-            else None
-        )
-
-    @property
-    def port(self) -> int:
-        relation: Relation = self.model.get_relation(self._endpoint_name)
-        return (
-            int(relation.data[relation.app].get(KAFKA_PORT_APP_KEY))
-            if relation and relation.app
-            else None
-        )
-
-
-class KafkaProvides(Object):
-    """Provides-side of the Kafka relation."""
-
-    def __init__(self, charm: CharmBase, endpoint_name: str = "kafka") -> None:
-        super().__init__(charm, endpoint_name)
-        self._endpoint_name = endpoint_name
-
-    def set_host_info(self, host: str, port: int, relation: Optional[Relation] = None) -> None:
-        """Set Kafka host and port.
-
-        This function writes in the application data of the relation, therefore,
-        only the unit leader can call it.
-
-        Args:
-            host (str): Kafka hostname or IP address.
-            port (int): Kafka port.
-            relation (Optional[Relation]): Relation to update.
-                                           If not specified, all relations will be updated.
-
-        Raises:
-            Exception: if a non-leader unit calls this function.
-        """
-        if not self.model.unit.is_leader():
-            raise Exception("only the leader can set host information.")
-
-        if relation:
-            self._update_relation_data(host, port, relation)
-            return
-
-        for relation in self.model.relations[self._endpoint_name]:
-            self._update_relation_data(host, port, relation)
-
-    def _update_relation_data(self, host: str, port: int, relation: Relation) -> None:
-        """Update data in relation if needed."""
-        relation.data[self.model.app][KAFKA_HOST_APP_KEY] = host
-        relation.data[self.model.app][KAFKA_PORT_APP_KEY] = str(port)
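The heart of the removed kafka library is a small data contract: the provider writes `host` and `port` into the application databag, and the requirer reads them back, converting the port to `int` because Juju relation-data values are always strings. A framework-free sketch of that contract (function names here are illustrative, not part of the real library) under the assumption the databag behaves like a plain dict:

```python
# Keys used in application relation data, as in the deleted kafka.py.
KAFKA_HOST_APP_KEY = "host"
KAFKA_PORT_APP_KEY = "port"


def set_host_info(app_data: dict, host: str, port: int) -> None:
    """Write side: mirrors KafkaProvides._update_relation_data."""
    app_data[KAFKA_HOST_APP_KEY] = host
    app_data[KAFKA_PORT_APP_KEY] = str(port)  # databag values must be strings


def get_host_port(app_data: dict):
    """Read side: mirrors the KafkaRequires.host / .port properties."""
    host = app_data.get(KAFKA_HOST_APP_KEY)
    port = app_data.get(KAFKA_PORT_APP_KEY)
    return host, (int(port) if port is not None else None)


app_data = {}
set_host_info(app_data, "kafka-k8s", 9092)
print(get_host_port(app_data))  # → ('kafka-k8s', 9092)
```

This also shows why `KafkaRequires._on_relation_changed` waits for both keys before emitting `kafka_available`: until the provider has written them, the requirer would read `(None, None)`.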
diff --git a/installers/charm/lcm/metadata.yaml b/installers/charm/lcm/metadata.yaml
deleted file mode 100644 (file)
index e81cdd9..0000000
+++ /dev/null
@@ -1,50 +0,0 @@
-# Copyright 2020 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-name: osm-lcm
-summary: OSM Lifecycle Management (LCM)
-description: |
-  A CAAS charm to deploy OSM's Lifecycle Management (LCM).
-series:
-  - kubernetes
-tags:
-  - kubernetes
-  - osm
-  - lcm
-min-juju-version: 2.8.0
-deployment:
-  type: stateless
-  service: cluster
-resources:
-  image:
-    type: oci-image
-    description: OSM docker image for LCM
-    upstream-source: "opensourcemano/lcm:latest"
-requires:
-  kafka:
-    interface: kafka
-    limit: 1
-  mongodb:
-    interface: mongodb
-    limit: 1
-  ro:
-    interface: http
-    limit: 1
diff --git a/installers/charm/lcm/requirements-test.txt b/installers/charm/lcm/requirements-test.txt
deleted file mode 100644 (file)
index cf61dd4..0000000
+++ /dev/null
@@ -1,20 +0,0 @@
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-mock==4.0.3
diff --git a/installers/charm/lcm/requirements.txt b/installers/charm/lcm/requirements.txt
deleted file mode 100644 (file)
index 1a8928c..0000000
+++ /dev/null
@@ -1,22 +0,0 @@
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-git+https://github.com/charmed-osm/ops-lib-charmed-osm/@master
\ No newline at end of file
diff --git a/installers/charm/lcm/src/charm.py b/installers/charm/lcm/src/charm.py
deleted file mode 100755 (executable)
index 5319763..0000000
+++ /dev/null
@@ -1,573 +0,0 @@
-#!/usr/bin/env python3
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-# pylint: disable=E0213
-
-
-import logging
-from typing import NoReturn, Optional
-
-
-from charms.kafka_k8s.v0.kafka import KafkaEvents, KafkaRequires
-from ops.main import main
-from opslib.osm.charm import CharmedOsmBase, RelationsMissing
-from opslib.osm.interfaces.http import HttpClient
-from opslib.osm.interfaces.mongo import MongoClient
-from opslib.osm.pod import ContainerV3Builder, PodRestartPolicy, PodSpecV3Builder
-from opslib.osm.validator import ModelValidator, validator
-
-
-logger = logging.getLogger(__name__)
-
-PORT = 9999
-
-
-class ConfigModel(ModelValidator):
-    vca_host: Optional[str]
-    vca_port: Optional[int]
-    vca_user: Optional[str]
-    vca_secret: Optional[str]
-    vca_pubkey: Optional[str]
-    vca_cacert: Optional[str]
-    vca_cloud: Optional[str]
-    vca_k8s_cloud: Optional[str]
-    database_commonkey: str
-    mongodb_uri: Optional[str]
-    log_level: str
-    vca_apiproxy: Optional[str]
-    # Model-config options
-    vca_model_config_agent_metadata_url: Optional[str]
-    vca_model_config_agent_stream: Optional[str]
-    vca_model_config_apt_ftp_proxy: Optional[str]
-    vca_model_config_apt_http_proxy: Optional[str]
-    vca_model_config_apt_https_proxy: Optional[str]
-    vca_model_config_apt_mirror: Optional[str]
-    vca_model_config_apt_no_proxy: Optional[str]
-    vca_model_config_automatically_retry_hooks: Optional[bool]
-    vca_model_config_backup_dir: Optional[str]
-    vca_model_config_cloudinit_userdata: Optional[str]
-    vca_model_config_container_image_metadata_url: Optional[str]
-    vca_model_config_container_image_stream: Optional[str]
-    vca_model_config_container_inherit_properties: Optional[str]
-    vca_model_config_container_networking_method: Optional[str]
-    vca_model_config_default_series: Optional[str]
-    vca_model_config_default_space: Optional[str]
-    vca_model_config_development: Optional[bool]
-    vca_model_config_disable_network_management: Optional[bool]
-    vca_model_config_egress_subnets: Optional[str]
-    vca_model_config_enable_os_refresh_update: Optional[bool]
-    vca_model_config_enable_os_upgrade: Optional[bool]
-    vca_model_config_fan_config: Optional[str]
-    vca_model_config_firewall_mode: Optional[str]
-    vca_model_config_ftp_proxy: Optional[str]
-    vca_model_config_http_proxy: Optional[str]
-    vca_model_config_https_proxy: Optional[str]
-    vca_model_config_ignore_machine_addresses: Optional[bool]
-    vca_model_config_image_metadata_url: Optional[str]
-    vca_model_config_image_stream: Optional[str]
-    vca_model_config_juju_ftp_proxy: Optional[str]
-    vca_model_config_juju_http_proxy: Optional[str]
-    vca_model_config_juju_https_proxy: Optional[str]
-    vca_model_config_juju_no_proxy: Optional[str]
-    vca_model_config_logforward_enabled: Optional[bool]
-    vca_model_config_logging_config: Optional[str]
-    vca_model_config_lxd_snap_channel: Optional[str]
-    vca_model_config_max_action_results_age: Optional[str]
-    vca_model_config_max_action_results_size: Optional[str]
-    vca_model_config_max_status_history_age: Optional[str]
-    vca_model_config_max_status_history_size: Optional[str]
-    vca_model_config_net_bond_reconfigure_delay: Optional[str]
-    vca_model_config_no_proxy: Optional[str]
-    vca_model_config_provisioner_harvest_mode: Optional[str]
-    vca_model_config_proxy_ssh: Optional[bool]
-    vca_model_config_snap_http_proxy: Optional[str]
-    vca_model_config_snap_https_proxy: Optional[str]
-    vca_model_config_snap_store_assertions: Optional[str]
-    vca_model_config_snap_store_proxy: Optional[str]
-    vca_model_config_snap_store_proxy_url: Optional[str]
-    vca_model_config_ssl_hostname_verification: Optional[bool]
-    vca_model_config_test_mode: Optional[bool]
-    vca_model_config_transmit_vendor_metrics: Optional[bool]
-    vca_model_config_update_status_hook_interval: Optional[str]
-    vca_stablerepourl: Optional[str]
-    vca_helm_ca_certs: Optional[str]
-    image_pull_policy: str
-    debug_mode: bool
-    security_context: bool
-
-    @validator("log_level")
-    def validate_log_level(cls, v):
-        if v not in {"INFO", "DEBUG"}:
-            raise ValueError("value must be INFO or DEBUG")
-        return v
-
-    @validator("mongodb_uri")
-    def validate_mongodb_uri(cls, v):
-        if v and not v.startswith("mongodb://"):
-            raise ValueError("mongodb_uri is not properly formed")
-        return v
-
-    @validator("image_pull_policy")
-    def validate_image_pull_policy(cls, v):
-        values = {
-            "always": "Always",
-            "ifnotpresent": "IfNotPresent",
-            "never": "Never",
-        }
-        v = v.lower()
-        if v not in values.keys():
-            raise ValueError("value must be always, ifnotpresent or never")
-        return values[v]
-
-
-class LcmCharm(CharmedOsmBase):
-    on = KafkaEvents()
-
-    def __init__(self, *args) -> NoReturn:
-        super().__init__(
-            *args,
-            oci_image="image",
-            vscode_workspace=VSCODE_WORKSPACE,
-        )
-        if self.config.get("debug_mode"):
-            self.enable_debug_mode(
-                pubkey=self.config.get("debug_pubkey"),
-                hostpaths={
-                    "LCM": {
-                        "hostpath": self.config.get("debug_lcm_local_path"),
-                        "container-path": "/usr/lib/python3/dist-packages/osm_lcm",
-                    },
-                    "N2VC": {
-                        "hostpath": self.config.get("debug_n2vc_local_path"),
-                        "container-path": "/usr/lib/python3/dist-packages/n2vc",
-                    },
-                    "osm_common": {
-                        "hostpath": self.config.get("debug_common_local_path"),
-                        "container-path": "/usr/lib/python3/dist-packages/osm_common",
-                    },
-                },
-            )
-        self.kafka = KafkaRequires(self)
-        self.framework.observe(self.on.kafka_available, self.configure_pod)
-        self.framework.observe(self.on.kafka_broken, self.configure_pod)
-
-        self.mongodb_client = MongoClient(self, "mongodb")
-        self.framework.observe(self.on["mongodb"].relation_changed, self.configure_pod)
-        self.framework.observe(self.on["mongodb"].relation_broken, self.configure_pod)
-
-        self.ro_client = HttpClient(self, "ro")
-        self.framework.observe(self.on["ro"].relation_changed, self.configure_pod)
-        self.framework.observe(self.on["ro"].relation_broken, self.configure_pod)
-
-    def _check_missing_dependencies(self, config: ConfigModel):
-        missing_relations = []
-
-        if not self.kafka.host or not self.kafka.port:
-            missing_relations.append("kafka")
-        if not config.mongodb_uri and self.mongodb_client.is_missing_data_in_unit():
-            missing_relations.append("mongodb")
-        if self.ro_client.is_missing_data_in_app():
-            missing_relations.append("ro")
-
-        if missing_relations:
-            raise RelationsMissing(missing_relations)
-
-    def build_pod_spec(self, image_info):
-        # Validate config
-        config = ConfigModel(**dict(self.config))
-
-        if config.mongodb_uri and not self.mongodb_client.is_missing_data_in_unit():
-            raise Exception("Mongodb data cannot be provided via config and relation")
-
-        # Check relations
-        self._check_missing_dependencies(config)
-
-        security_context_enabled = (
-            config.security_context if not config.debug_mode else False
-        )
-
-        # Create Builder for the PodSpec
-        pod_spec_builder = PodSpecV3Builder(
-            enable_security_context=security_context_enabled
-        )
-
-        # Add secrets to the pod
-        mongodb_secret_name = f"{self.app.name}-mongodb-secret"
-        pod_spec_builder.add_secret(
-            mongodb_secret_name,
-            {
-                "uri": config.mongodb_uri or self.mongodb_client.connection_string,
-                "commonkey": config.database_commonkey,
-                "helm_ca_certs": config.vca_helm_ca_certs,
-            },
-        )
-
-        # Build Container
-        container_builder = ContainerV3Builder(
-            self.app.name,
-            image_info,
-            config.image_pull_policy,
-            run_as_non_root=security_context_enabled,
-        )
-        container_builder.add_port(name=self.app.name, port=PORT)
-        container_builder.add_envs(
-            {
-                # General configuration
-                "ALLOW_ANONYMOUS_LOGIN": "yes",
-                "OSMLCM_GLOBAL_LOGLEVEL": config.log_level,
-                # RO configuration
-                "OSMLCM_RO_HOST": self.ro_client.host,
-                "OSMLCM_RO_PORT": self.ro_client.port,
-                "OSMLCM_RO_TENANT": "osm",
-                # Kafka configuration
-                "OSMLCM_MESSAGE_DRIVER": "kafka",
-                "OSMLCM_MESSAGE_HOST": self.kafka.host,
-                "OSMLCM_MESSAGE_PORT": self.kafka.port,
-                # Database configuration
-                "OSMLCM_DATABASE_DRIVER": "mongo",
-                # Storage configuration
-                "OSMLCM_STORAGE_DRIVER": "mongo",
-                "OSMLCM_STORAGE_PATH": "/app/storage",
-                "OSMLCM_STORAGE_COLLECTION": "files",
-                "OSMLCM_VCA_STABLEREPOURL": config.vca_stablerepourl,
-            }
-        )
-        container_builder.add_secret_envs(
-            secret_name=mongodb_secret_name,
-            envs={
-                "OSMLCM_DATABASE_URI": "uri",
-                "OSMLCM_DATABASE_COMMONKEY": "commonkey",
-                "OSMLCM_STORAGE_URI": "uri",
-                "OSMLCM_VCA_HELM_CA_CERTS": "helm_ca_certs",
-            },
-        )
-        if config.vca_host:
-            vca_secret_name = f"{self.app.name}-vca-secret"
-            pod_spec_builder.add_secret(
-                vca_secret_name,
-                {
-                    "host": config.vca_host,
-                    "port": str(config.vca_port),
-                    "user": config.vca_user,
-                    "pubkey": config.vca_pubkey,
-                    "secret": config.vca_secret,
-                    "cacert": config.vca_cacert,
-                    "cloud": config.vca_cloud,
-                    "k8s_cloud": config.vca_k8s_cloud,
-                },
-            )
-            container_builder.add_secret_envs(
-                secret_name=vca_secret_name,
-                envs={
-                    # VCA configuration
-                    "OSMLCM_VCA_HOST": "host",
-                    "OSMLCM_VCA_PORT": "port",
-                    "OSMLCM_VCA_USER": "user",
-                    "OSMLCM_VCA_PUBKEY": "pubkey",
-                    "OSMLCM_VCA_SECRET": "secret",
-                    "OSMLCM_VCA_CACERT": "cacert",
-                    "OSMLCM_VCA_CLOUD": "cloud",
-                    "OSMLCM_VCA_K8S_CLOUD": "k8s_cloud",
-                },
-            )
-            if config.vca_apiproxy:
-                container_builder.add_env("OSMLCM_VCA_APIPROXY", config.vca_apiproxy)
-
-            model_config_envs = {
-                f"OSMLCM_{k.upper()}": v
-                for k, v in self.config.items()
-                if k.startswith("vca_model_config")
-            }
-            if model_config_envs:
-                container_builder.add_envs(model_config_envs)
-        container = container_builder.build()
-
-        # Add container to pod spec
-        pod_spec_builder.add_container(container)
-
-        # Add restart policy
-        restart_policy = PodRestartPolicy()
-        restart_policy.add_secrets()
-        pod_spec_builder.set_restart_policy(restart_policy)
-
-        return pod_spec_builder.build()
-
-
-VSCODE_WORKSPACE = {
-    "folders": [
-        {"path": "/usr/lib/python3/dist-packages/osm_lcm"},
-        {"path": "/usr/lib/python3/dist-packages/n2vc"},
-        {"path": "/usr/lib/python3/dist-packages/osm_common"},
-    ],
-    "settings": {},
-    "launch": {
-        "version": "0.2.0",
-        "configurations": [
-            {
-                "name": "LCM",
-                "type": "python",
-                "request": "launch",
-                "module": "osm_lcm.lcm",
-                "justMyCode": False,
-            }
-        ],
-    },
-}
-
-
-if __name__ == "__main__":
-    main(LcmCharm)
-
-
-# class ConfigurePodEvent(EventBase):
-#     """Configure Pod event"""
-
-#     pass
-
-
-# class LcmEvents(CharmEvents):
-#     """LCM Events"""
-
-#     configure_pod = EventSource(ConfigurePodEvent)
-
-
-# class LcmCharm(CharmBase):
-#     """LCM Charm."""
-
-#     state = StoredState()
-#     on = LcmEvents()
-
-#     def __init__(self, *args) -> NoReturn:
-#         """LCM Charm constructor."""
-#         super().__init__(*args)
-
-#         # Internal state initialization
-#         self.state.set_default(pod_spec=None)
-
-#         # Message bus data initialization
-#         self.state.set_default(message_host=None)
-#         self.state.set_default(message_port=None)
-
-#         # Database data initialization
-#         self.state.set_default(database_uri=None)
-
-#         # RO data initialization
-#         self.state.set_default(ro_host=None)
-#         self.state.set_default(ro_port=None)
-
-#         self.port = LCM_PORT
-#         self.image = OCIImageResource(self, "image")
-
-#         # Registering regular events
-#         self.framework.observe(self.on.start, self.configure_pod)
-#         self.framework.observe(self.on.config_changed, self.configure_pod)
-#         self.framework.observe(self.on.upgrade_charm, self.configure_pod)
-
-#         # Registering custom internal events
-#         self.framework.observe(self.on.configure_pod, self.configure_pod)
-
-#         # Registering required relation events
-#         self.framework.observe(
-#             self.on.kafka_relation_changed, self._on_kafka_relation_changed
-#         )
-#         self.framework.observe(
-#             self.on.mongodb_relation_changed, self._on_mongodb_relation_changed
-#         )
-#         self.framework.observe(
-#             self.on.ro_relation_changed, self._on_ro_relation_changed
-#         )
-
-#         # Registering required relation broken events
-#         self.framework.observe(
-#             self.on.kafka_relation_broken, self._on_kafka_relation_broken
-#         )
-#         self.framework.observe(
-#             self.on.mongodb_relation_broken, self._on_mongodb_relation_broken
-#         )
-#         self.framework.observe(
-#             self.on.ro_relation_broken, self._on_ro_relation_broken
-#         )
-
-#     def _on_kafka_relation_changed(self, event: EventBase) -> NoReturn:
-#         """Reads information about the kafka relation.
-
-#         Args:
-#             event (EventBase): Kafka relation event.
-#         """
-#         message_host = event.relation.data[event.unit].get("host")
-#         message_port = event.relation.data[event.unit].get("port")
-
-#         if (
-#             message_host
-#             and message_port
-#             and (
-#                 self.state.message_host != message_host
-#                 or self.state.message_port != message_port
-#             )
-#         ):
-#             self.state.message_host = message_host
-#             self.state.message_port = message_port
-#             self.on.configure_pod.emit()
-
-#     def _on_kafka_relation_broken(self, event: EventBase) -> NoReturn:
-#         """Clears data from kafka relation.
-
-#         Args:
-#             event (EventBase): Kafka relation event.
-#         """
-#         self.state.message_host = None
-#         self.state.message_port = None
-#         self.on.configure_pod.emit()
-
-#     def _on_mongodb_relation_changed(self, event: EventBase) -> NoReturn:
-#         """Reads information about the DB relation.
-
-#         Args:
-#             event (EventBase): DB relation event.
-#         """
-#         database_uri = event.relation.data[event.unit].get("connection_string")
-
-#         if database_uri and self.state.database_uri != database_uri:
-#             self.state.database_uri = database_uri
-#             self.on.configure_pod.emit()
-
-#     def _on_mongodb_relation_broken(self, event: EventBase) -> NoReturn:
-#         """Clears data from mongodb relation.
-
-#         Args:
-#             event (EventBase): DB relation event.
-#         """
-#         self.state.database_uri = None
-#         self.on.configure_pod.emit()
-
-#     def _on_ro_relation_changed(self, event: EventBase) -> NoReturn:
-#         """Reads information about the RO relation.
-
-#         Args:
-#             event (EventBase): RO relation event.
-#         """
-#         ro_host = event.relation.data[event.unit].get("host")
-#         ro_port = event.relation.data[event.unit].get("port")
-
-#         if (
-#             ro_host
-#             and ro_port
-#             and (self.state.ro_host != ro_host or self.state.ro_port != ro_port)
-#         ):
-#             self.state.ro_host = ro_host
-#             self.state.ro_port = ro_port
-#             self.on.configure_pod.emit()
-
-#     def _on_ro_relation_broken(self, event: EventBase) -> NoReturn:
-#         """Clears data from ro relation.
-
-#         Args:
-#             event (EventBase): RO relation event.
-#         """
-#         self.state.ro_host = None
-#         self.state.ro_port = None
-#         self.on.configure_pod.emit()
-
-#     def _missing_relations(self) -> str:
-#         """Checks if there are missing relations.
-
-#         Returns:
-#             str: string with missing relations
-#         """
-#         data_status = {
-#             "kafka": self.state.message_host,
-#             "mongodb": self.state.database_uri,
-#             "ro": self.state.ro_host,
-#         }
-
-#         missing_relations = [k for k, v in data_status.items() if not v]
-
-#         return ", ".join(missing_relations)
-
-#     @property
-#     def relation_state(self) -> Dict[str, Any]:
-#         """Collects relation state configuration for pod spec assembly.
-
-#         Returns:
-#             Dict[str, Any]: relation state information.
-#         """
-#         relation_state = {
-#             "message_host": self.state.message_host,
-#             "message_port": self.state.message_port,
-#             "database_uri": self.state.database_uri,
-#             "ro_host": self.state.ro_host,
-#             "ro_port": self.state.ro_port,
-#         }
-
-#         return relation_state
-
-#     def configure_pod(self, event: EventBase) -> NoReturn:
-#         """Assemble the pod spec and apply it, if possible.
-
-#         Args:
-#             event (EventBase): Hook or Relation event that started the
-#                                function.
-#         """
-#         if missing := self._missing_relations():
-#             self.unit.status = BlockedStatus(
-#                 "Waiting for {0} relation{1}".format(
-#                     missing, "s" if "," in missing else ""
-#                 )
-#             )
-#             return
-
-#         if not self.unit.is_leader():
-#             self.unit.status = ActiveStatus("ready")
-#             return
-
-#         self.unit.status = MaintenanceStatus("Assembling pod spec")
-
-#         # Fetch image information
-#         try:
-#             self.unit.status = MaintenanceStatus("Fetching image information")
-#             image_info = self.image.fetch()
-#         except OCIImageResourceError:
-#             self.unit.status = BlockedStatus("Error fetching image information")
-#             return
-
-#         try:
-#             pod_spec = make_pod_spec(
-#                 image_info,
-#                 self.model.config,
-#                 self.relation_state,
-#                 self.model.app.name,
-#                 self.port,
-#             )
-#         except ValueError as exc:
-#             logger.exception("Config/Relation data validation error")
-#             self.unit.status = BlockedStatus(str(exc))
-#             return
-
-#         if self.state.pod_spec != pod_spec:
-#             self.model.pod.set_spec(pod_spec)
-#             self.state.pod_spec = pod_spec
-
-#         self.unit.status = ActiveStatus("ready")
-
-
-# if __name__ == "__main__":
-#     main(LcmCharm)
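The deleted charm forwards every `vca_model_config_*` charm option to the workload as an `OSMLCM_`-prefixed environment variable via a dict comprehension. The pattern is self-contained and worth isolating; the sample config values below are illustrative, not taken from a real deployment:

```python
# Derive workload env vars from charm config: every key starting with
# "vca_model_config" is upper-cased and prefixed with "OSMLCM_".
config = {
    "vca_model_config_http_proxy": "http://proxy:3128",
    "vca_model_config_no_proxy": "10.0.0.0/8",
    "log_level": "INFO",  # not a model-config option, so it is filtered out
}

model_config_envs = {
    "OSMLCM_{}".format(k.upper()): v
    for k, v in config.items()
    if k.startswith("vca_model_config")
}

print(model_config_envs)
# {'OSMLCM_VCA_MODEL_CONFIG_HTTP_PROXY': 'http://proxy:3128',
#  'OSMLCM_VCA_MODEL_CONFIG_NO_PROXY': '10.0.0.0/8'}
```

This keeps the charm agnostic to the full list of Juju model-config knobs: adding a new `vca_model_config_*` option in `config.yaml` needs no code change.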
diff --git a/installers/charm/lcm/src/pod_spec.py b/installers/charm/lcm/src/pod_spec.py
deleted file mode 100644 (file)
index 8709f4f..0000000
+++ /dev/null
@@ -1,237 +0,0 @@
-#!/usr/bin/env python3
-# Copyright 2020 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-import logging
-from typing import Any, Dict, List, NoReturn
-
-logger = logging.getLogger(__name__)
-
-
-def _validate_data(
-    config_data: Dict[str, Any], relation_data: Dict[str, Any]
-) -> NoReturn:
-    """Validate input data.
-
-    Args:
-        config_data (Dict[str, Any]): configuration data.
-        relation_data (Dict[str, Any]): relation data.
-    """
-    config_validators = {
-        "database_commonkey": lambda value, _: (
-            isinstance(value, str) and len(value) > 1
-        ),
-        "log_level": lambda value, _: (
-            isinstance(value, str) and value in ("INFO", "DEBUG")
-        ),
-        "vca_host": lambda value, _: isinstance(value, str) and len(value) > 1,
-        "vca_port": lambda value, _: isinstance(value, int) and value > 0,
-        "vca_user": lambda value, _: isinstance(value, str) and len(value) > 1,
-        "vca_pubkey": lambda value, _: isinstance(value, str) and len(value) > 1,
-        "vca_password": lambda value, _: isinstance(value, str) and len(value) > 1,
-        "vca_cacert": lambda value, _: isinstance(value, str),
-        "vca_cloud": lambda value, _: isinstance(value, str) and len(value) > 1,
-        "vca_k8s_cloud": lambda value, _: isinstance(value, str) and len(value) > 1,
-        "vca_apiproxy": lambda value, _: (isinstance(value, str) and len(value) > 1)
-        if value
-        else True,
-    }
-    relation_validators = {
-        "ro_host": lambda value, _: isinstance(value, str) and len(value) > 1,
-        "ro_port": lambda value, _: isinstance(value, int) and value > 0,
-        "message_host": lambda value, _: isinstance(value, str) and len(value) > 1,
-        "message_port": lambda value, _: isinstance(value, int) and value > 0,
-        "database_uri": lambda value, _: isinstance(value, str) and len(value) > 1,
-    }
-    problems = []
-
-    for key, validator in config_validators.items():
-        valid = validator(config_data.get(key), config_data)
-
-        if not valid:
-            problems.append(key)
-
-    for key, validator in relation_validators.items():
-        valid = validator(relation_data.get(key), relation_data)
-
-        if not valid:
-            problems.append(key)
-
-    if len(problems) > 0:
-        raise ValueError("Errors found in: {}".format(", ".join(problems)))
-
-
-def _make_pod_ports(port: int) -> List[Dict[str, Any]]:
-    """Generate pod ports details.
-
-    Args:
-        port (int): port to expose.
-
-    Returns:
-        List[Dict[str, Any]]: pod port details.
-    """
-    return [{"name": "lcm", "containerPort": port, "protocol": "TCP"}]
-
-
-def _make_pod_envconfig(
-    config: Dict[str, Any], relation_state: Dict[str, Any]
-) -> Dict[str, Any]:
-    """Generate pod environment configuration.
-
-    Args:
-        config (Dict[str, Any]): configuration information.
-        relation_state (Dict[str, Any]): relation state information.
-
-    Returns:
-        Dict[str, Any]: pod environment configuration.
-    """
-    envconfig = {
-        # General configuration
-        "ALLOW_ANONYMOUS_LOGIN": "yes",
-        "OSMLCM_GLOBAL_LOGLEVEL": config["log_level"],
-        # RO configuration
-        "OSMLCM_RO_HOST": relation_state["ro_host"],
-        "OSMLCM_RO_PORT": relation_state["ro_port"],
-        "OSMLCM_RO_TENANT": "osm",
-        # Kafka configuration
-        "OSMLCM_MESSAGE_DRIVER": "kafka",
-        "OSMLCM_MESSAGE_HOST": relation_state["message_host"],
-        "OSMLCM_MESSAGE_PORT": relation_state["message_port"],
-        # Database configuration
-        "OSMLCM_DATABASE_DRIVER": "mongo",
-        "OSMLCM_DATABASE_URI": relation_state["database_uri"],
-        "OSMLCM_DATABASE_COMMONKEY": config["database_commonkey"],
-        # Storage configuration
-        "OSMLCM_STORAGE_DRIVER": "mongo",
-        "OSMLCM_STORAGE_PATH": "/app/storage",
-        "OSMLCM_STORAGE_COLLECTION": "files",
-        "OSMLCM_STORAGE_URI": relation_state["database_uri"],
-        # VCA configuration
-        "OSMLCM_VCA_HOST": config["vca_host"],
-        "OSMLCM_VCA_PORT": config["vca_port"],
-        "OSMLCM_VCA_USER": config["vca_user"],
-        "OSMLCM_VCA_PUBKEY": config["vca_pubkey"],
-        "OSMLCM_VCA_SECRET": config["vca_password"],
-        "OSMLCM_VCA_CACERT": config["vca_cacert"],
-        "OSMLCM_VCA_CLOUD": config["vca_cloud"],
-        "OSMLCM_VCA_K8S_CLOUD": config["vca_k8s_cloud"],
-    }
-
-    if "vca_apiproxy" in config and config["vca_apiproxy"]:
-        envconfig["OSMLCM_VCA_APIPROXY"] = config["vca_apiproxy"]
-
-    return envconfig
-
-
-def _make_startup_probe() -> Dict[str, Any]:
-    """Generate startup probe.
-
-    Returns:
-        Dict[str, Any]: startup probe.
-    """
-    return {
-        "exec": {"command": ["/usr/bin/pgrep python3"]},
-        "initialDelaySeconds": 60,
-        "timeoutSeconds": 5,
-    }
-
-
-def _make_readiness_probe(port: int) -> Dict[str, Any]:
-    """Generate readiness probe.
-
-    Args:
-        port (int): port for the HTTP readiness check.
-
-    Returns:
-        Dict[str, Any]: readiness probe.
-    """
-    return {
-        "httpGet": {
-            "path": "/osm/",
-            "port": port,
-        },
-        "initialDelaySeconds": 45,
-        "timeoutSeconds": 5,
-    }
-
-
-def _make_liveness_probe(port: int) -> Dict[str, Any]:
-    """Generate liveness probe.
-
-    Args:
-        port (int): port for the HTTP liveness check.
-
-    Returns:
-        Dict[str, Any]: liveness probe.
-    """
-    return {
-        "httpGet": {
-            "path": "/osm/",
-            "port": port,
-        },
-        "initialDelaySeconds": 45,
-        "timeoutSeconds": 5,
-    }
-
-
-def make_pod_spec(
-    image_info: Dict[str, str],
-    config: Dict[str, Any],
-    relation_state: Dict[str, Any],
-    app_name: str = "lcm",
-    port: int = 9999,
-) -> Dict[str, Any]:
-    """Generate the pod spec information.
-
-    Args:
-        image_info (Dict[str, str]): Object provided by
-                                     OCIImageResource("image").fetch().
-        config (Dict[str, Any]): Configuration information.
-        relation_state (Dict[str, Any]): Relation state information.
-        app_name (str, optional): Application name. Defaults to "lcm".
-        port (int, optional): Port for the container. Defaults to 9999.
-
-    Returns:
-        Dict[str, Any]: Pod spec dictionary for the charm.
-    """
-    if not image_info:
-        return None
-
-    _validate_data(config, relation_state)
-
-    ports = _make_pod_ports(port)
-    env_config = _make_pod_envconfig(config, relation_state)
-
-    return {
-        "version": 3,
-        "containers": [
-            {
-                "name": app_name,
-                "imageDetails": image_info,
-                "imagePullPolicy": "Always",
-                "ports": ports,
-                "envConfig": env_config,
-            }
-        ],
-        "kubernetesResources": {
-            "ingressResources": [],
-        },
-    }
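The `_validate_data` helper above drives validation from a table of per-key predicates and reports all failing keys at once rather than stopping at the first error. A minimal standalone sketch of that pattern (the validator entries shown are a subset of the charm's, and the names are illustrative):

```python
from typing import Any, Callable, Dict

# Each validator receives the value and the full data dict and returns
# True when the value is acceptable.
Validators = Dict[str, Callable[[Any, Dict[str, Any]], bool]]

config_validators: Validators = {
    "log_level": lambda v, _: isinstance(v, str) and v in ("INFO", "DEBUG"),
    "vca_port": lambda v, _: isinstance(v, int) and v > 0,
    "vca_host": lambda v, _: isinstance(v, str) and len(v) > 1,
}


def validate(data: Dict[str, Any], validators: Validators) -> None:
    # Collect every failing key so the user sees all problems in one pass.
    problems = [k for k, check in validators.items() if not check(data.get(k), data)]
    if problems:
        raise ValueError("Errors found in: {}".format(", ".join(problems)))


validate({"log_level": "INFO", "vca_port": 17070, "vca_host": "10.0.0.1"},
         config_validators)  # passes silently
```

Missing keys fail naturally because `data.get(k)` yields `None`, which no predicate accepts; this is why optional keys in the original table guard with `if value else True`.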
diff --git a/installers/charm/lcm/tests/__init__.py b/installers/charm/lcm/tests/__init__.py
deleted file mode 100644 (file)
index 446d5ce..0000000
+++ /dev/null
@@ -1,40 +0,0 @@
-#!/usr/bin/env python3
-# Copyright 2020 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-"""Init mocking for unit tests."""
-
-import sys
-
-
-import mock
-
-
-class OCIImageResourceErrorMock(Exception):
-    pass
-
-
-sys.path.append("src")
-
-oci_image = mock.MagicMock()
-oci_image.OCIImageResourceError = OCIImageResourceErrorMock
-sys.modules["oci_image"] = oci_image
-sys.modules["oci_image"].OCIImageResource().fetch.return_value = {}
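The deleted `tests/__init__.py` stubs out the `oci_image` package by planting a `MagicMock` in `sys.modules` before the charm code imports it, so unit tests never need the real dependency. The same technique in isolation:

```python
import sys
from unittest import mock

# Stand-in exception so tests can still catch OCIImageResourceError.
class OCIImageResourceErrorMock(Exception):
    pass

# Register the mock module *before* anything imports oci_image.
oci_image = mock.MagicMock()
oci_image.OCIImageResourceError = OCIImageResourceErrorMock
sys.modules["oci_image"] = oci_image
oci_image.OCIImageResource().fetch.return_value = {}

# Any later "import oci_image" now resolves to the mock:
import oci_image as imported  # noqa: E402

assert imported.OCIImageResource().fetch() == {}
```

Because `MagicMock` returns the same child mock for repeated attribute access and calls, configuring `OCIImageResource().fetch.return_value` once makes every `fetch()` in the code under test return `{}`.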
diff --git a/installers/charm/lcm/tests/test_charm.py b/installers/charm/lcm/tests/test_charm.py
deleted file mode 100644 (file)
index aa11a74..0000000
+++ /dev/null
@@ -1,462 +0,0 @@
-#!/usr/bin/env python3
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-import sys
-from typing import NoReturn
-import unittest
-
-from charm import LcmCharm
-import mock
-from ops.model import ActiveStatus, BlockedStatus
-from ops.testing import Harness
-
-
-class TestCharm(unittest.TestCase):
-    """LCM Charm unit tests."""
-
-    def setUp(self) -> NoReturn:
-        """Test setup"""
-        self.image_info = sys.modules["oci_image"].OCIImageResource().fetch()
-        self.harness = Harness(LcmCharm)
-        self.harness.set_leader(is_leader=True)
-        self.harness.begin()
-        self.config = {
-            "vca_host": "192.168.0.13",
-            "vca_port": 17070,
-            "vca_user": "admin",
-            "vca_secret": "admin",
-            "vca_pubkey": "key",
-            "vca_cacert": "cacert",
-            "vca_cloud": "cloud",
-            "vca_k8s_cloud": "k8scloud",
-            "database_commonkey": "commonkey",
-            "mongodb_uri": "",
-            "log_level": "INFO",
-        }
-        self.harness.update_config(self.config)
-
-    def test_config_changed_no_relations(
-        self,
-    ) -> NoReturn:
-        """Test config changed without any relations."""
-
-        self.harness.charm.on.config_changed.emit()
-
-        # Assertions
-        self.assertIsInstance(self.harness.charm.unit.status, BlockedStatus)
-        self.assertTrue(
-            all(
-                relation in self.harness.charm.unit.status.message
-                for relation in ["mongodb", "kafka", "ro"]
-            )
-        )
-
-    def test_config_changed_non_leader(
-        self,
-    ) -> NoReturn:
-        """Test config changed when the unit is not the leader."""
-        self.harness.set_leader(is_leader=False)
-        self.harness.charm.on.config_changed.emit()
-
-        # Assertions
-        self.assertIsInstance(self.harness.charm.unit.status, ActiveStatus)
-
-    def test_with_relations_and_mongodb_config(
-        self,
-    ) -> NoReturn:
-        """Test with relations and mongodb config."""
-        self.initialize_kafka_relation()
-        self.initialize_mongo_config()
-        self.initialize_ro_relation()
-        # Verifying status
-        self.assertNotIsInstance(self.harness.charm.unit.status, BlockedStatus)
-
-    def test_with_relations(
-        self,
-    ) -> NoReturn:
-        """Test with relations (internal)."""
-        self.initialize_kafka_relation()
-        self.initialize_mongo_relation()
-        self.initialize_ro_relation()
-        # Verifying status
-        self.assertNotIsInstance(self.harness.charm.unit.status, BlockedStatus)
-
-    def test_exception_mongodb_relation_and_config(
-        self,
-    ) -> NoReturn:
-        """Test with both the mongodb relation and config set. Must fail."""
-        self.initialize_mongo_relation()
-        self.initialize_mongo_config()
-        # Verifying status
-        self.assertIsInstance(self.harness.charm.unit.status, BlockedStatus)
-
-    # def test_build_pod_spec(
-    #     self,
-    # ) -> NoReturn:
-    #     expected_config = {
-    #         "OSMLCM_GLOBAL_LOGLEVEL": self.config["log_level"],
-    #         "OSMLCM_DATABASE_COMMONKEY": self.config["database_commonkey"],
-    #     }
-    #     expected_config.update(
-    #         {
-    #             f"OSMLCM_{k.upper()}": v
-    #             for k, v in self.config.items()
-    #             if k.startswith("vca_")
-    #         }
-    #     )
-    #     self.harness.charm._check_missing_dependencies = mock.Mock()
-    #     pod_spec = self.harness.charm.build_pod_spec(
-    #         {"imageDetails": {"imagePath": "lcm-image"}}
-    #     )
-    #     actual_config = pod_spec["containers"][0]["envConfig"]
-
-    #     self.assertDictContainsSubset(
-    #         expected_config,
-    #         actual_config,
-    #     )
-    #     for config_key in actual_config:
-    #         self.assertNotIn("VCA_MODEL_CONFIG", config_key)
-
-    def test_build_pod_spec_with_model_config(
-        self,
-    ) -> NoReturn:
-        self.harness.update_config(
-            {
-                "vca_model_config_agent_metadata_url": "string",
-                "vca_model_config_agent_stream": "string",
-                "vca_model_config_apt_ftp_proxy": "string",
-                "vca_model_config_apt_http_proxy": "string",
-                "vca_model_config_apt_https_proxy": "string",
-                "vca_model_config_apt_mirror": "string",
-                "vca_model_config_apt_no_proxy": "string",
-                "vca_model_config_automatically_retry_hooks": False,
-                "vca_model_config_backup_dir": "string",
-                "vca_model_config_cloudinit_userdata": "string",
-                "vca_model_config_container_image_metadata_url": "string",
-                "vca_model_config_container_image_stream": "string",
-                "vca_model_config_container_inherit_properties": "string",
-                "vca_model_config_container_networking_method": "string",
-                "vca_model_config_default_series": "string",
-                "vca_model_config_default_space": "string",
-                "vca_model_config_development": False,
-                "vca_model_config_disable_network_management": False,
-                "vca_model_config_egress_subnets": "string",
-                "vca_model_config_enable_os_refresh_update": False,
-                "vca_model_config_enable_os_upgrade": False,
-                "vca_model_config_fan_config": "string",
-                "vca_model_config_firewall_mode": "string",
-                "vca_model_config_ftp_proxy": "string",
-                "vca_model_config_http_proxy": "string",
-                "vca_model_config_https_proxy": "string",
-                "vca_model_config_ignore_machine_addresses": False,
-                "vca_model_config_image_metadata_url": "string",
-                "vca_model_config_image_stream": "string",
-                "vca_model_config_juju_ftp_proxy": "string",
-                "vca_model_config_juju_http_proxy": "string",
-                "vca_model_config_juju_https_proxy": "string",
-                "vca_model_config_juju_no_proxy": "string",
-                "vca_model_config_logforward_enabled": False,
-                "vca_model_config_logging_config": "string",
-                "vca_model_config_lxd_snap_channel": "string",
-                "vca_model_config_max_action_results_age": "string",
-                "vca_model_config_max_action_results_size": "string",
-                "vca_model_config_max_status_history_age": "string",
-                "vca_model_config_max_status_history_size": "string",
-                "vca_model_config_net_bond_reconfigure_delay": "string",
-                "vca_model_config_no_proxy": "string",
-                "vca_model_config_provisioner_harvest_mode": "string",
-                "vca_model_config_proxy_ssh": False,
-                "vca_model_config_snap_http_proxy": "string",
-                "vca_model_config_snap_https_proxy": "string",
-                "vca_model_config_snap_store_assertions": "string",
-                "vca_model_config_snap_store_proxy": "string",
-                "vca_model_config_snap_store_proxy_url": "string",
-                "vca_model_config_ssl_hostname_verification": False,
-                "vca_model_config_test_mode": False,
-                "vca_model_config_transmit_vendor_metrics": False,
-                "vca_model_config_update_status_hook_interval": "string",
-            }
-        )
-        expected_config = {
-            f"OSMLCM_{k.upper()}": v
-            for k, v in self.config.items()
-            if k.startswith("vca_model_config_")
-        }
-
-        self.harness.charm._check_missing_dependencies = mock.Mock()
-        pod_spec = self.harness.charm.build_pod_spec(
-            {"imageDetails": {"imagePath": "lcm-image"}}
-        )
-        actual_config = pod_spec["containers"][0]["envConfig"]
-
-        self.assertDictContainsSubset(
-            expected_config,
-            actual_config,
-        )
-
-    def initialize_kafka_relation(self):
-        kafka_relation_id = self.harness.add_relation("kafka", "kafka")
-        self.harness.add_relation_unit(kafka_relation_id, "kafka/0")
-        self.harness.update_relation_data(
-            kafka_relation_id, "kafka", {"host": "kafka", "port": 9092}
-        )
-
-    def initialize_mongo_config(self):
-        self.harness.update_config({"mongodb_uri": "mongodb://mongo:27017"})
-
-    def initialize_mongo_relation(self):
-        mongodb_relation_id = self.harness.add_relation("mongodb", "mongodb")
-        self.harness.add_relation_unit(mongodb_relation_id, "mongodb/0")
-        self.harness.update_relation_data(
-            mongodb_relation_id,
-            "mongodb/0",
-            {"connection_string": "mongodb://mongo:27017"},
-        )
-
-    def initialize_ro_relation(self):
-        http_relation_id = self.harness.add_relation("ro", "ro")
-        self.harness.add_relation_unit(http_relation_id, "ro/0")
-        self.harness.update_relation_data(
-            http_relation_id,
-            "ro",
-            {"host": "ro", "port": 9090},
-        )
-
-
-if __name__ == "__main__":
-    unittest.main()
-
-
-# class TestCharm(unittest.TestCase):
-#     """LCM Charm unit tests."""
-
-#     def setUp(self) -> NoReturn:
-#         """Test setup"""
-#         self.harness = Harness(LcmCharm)
-#         self.harness.set_leader(is_leader=True)
-#         self.harness.begin()
-
-#     def test_on_start_without_relations(self) -> NoReturn:
-#         """Test installation without any relation."""
-#         self.harness.charm.on.start.emit()
-
-#         # Verifying status
-#         self.assertIsInstance(self.harness.charm.unit.status, BlockedStatus)
-
-#         # Verifying status message
-#         self.assertGreater(len(self.harness.charm.unit.status.message), 0)
-#         self.assertTrue(
-#             self.harness.charm.unit.status.message.startswith("Waiting for ")
-#         )
-#         self.assertIn("kafka", self.harness.charm.unit.status.message)
-#         self.assertIn("mongodb", self.harness.charm.unit.status.message)
-#         self.assertIn("ro", self.harness.charm.unit.status.message)
-#         self.assertTrue(self.harness.charm.unit.status.message.endswith(" relations"))
-
-#     def test_on_start_with_relations(self) -> NoReturn:
-#         """Test deployment without keystone."""
-#         expected_result = {
-#             "version": 3,
-#             "containers": [
-#                 {
-#                     "name": "lcm",
-#                     "imageDetails": self.harness.charm.image.fetch(),
-#                     "imagePullPolicy": "Always",
-#                     "ports": [
-#                         {
-#                             "name": "lcm",
-#                             "containerPort": 9999,
-#                             "protocol": "TCP",
-#                         }
-#                     ],
-#                     "envConfig": {
-#                         "ALLOW_ANONYMOUS_LOGIN": "yes",
-#                         "OSMLCM_GLOBAL_LOGLEVEL": "INFO",
-#                         "OSMLCM_RO_HOST": "ro",
-#                         "OSMLCM_RO_PORT": 9090,
-#                         "OSMLCM_RO_TENANT": "osm",
-#                         "OSMLCM_MESSAGE_DRIVER": "kafka",
-#                         "OSMLCM_MESSAGE_HOST": "kafka",
-#                         "OSMLCM_MESSAGE_PORT": 9092,
-#                         "OSMLCM_DATABASE_DRIVER": "mongo",
-#                         "OSMLCM_DATABASE_URI": "mongodb://mongo:27017",
-#                         "OSMLCM_DATABASE_COMMONKEY": "osm",
-#                         "OSMLCM_STORAGE_DRIVER": "mongo",
-#                         "OSMLCM_STORAGE_PATH": "/app/storage",
-#                         "OSMLCM_STORAGE_COLLECTION": "files",
-#                         "OSMLCM_STORAGE_URI": "mongodb://mongo:27017",
-#                         "OSMLCM_VCA_HOST": "admin",
-#                         "OSMLCM_VCA_PORT": 17070,
-#                         "OSMLCM_VCA_USER": "admin",
-#                         "OSMLCM_VCA_PUBKEY": "secret",
-#                         "OSMLCM_VCA_SECRET": "secret",
-#                         "OSMLCM_VCA_CACERT": "",
-#                         "OSMLCM_VCA_CLOUD": "localhost",
-#                         "OSMLCM_VCA_K8S_CLOUD": "k8scloud",
-#                     },
-#                 }
-#             ],
-#             "kubernetesResources": {"ingressResources": []},
-#         }
-
-#         self.harness.charm.on.start.emit()
-
-#         # Check if kafka datastore is initialized
-#         self.assertIsNone(self.harness.charm.state.message_host)
-#         self.assertIsNone(self.harness.charm.state.message_port)
-
-#         # Check if mongodb datastore is initialized
-#         self.assertIsNone(self.harness.charm.state.database_uri)
-
-#         # Check if RO datastore is initialized
-#         self.assertIsNone(self.harness.charm.state.ro_host)
-#         self.assertIsNone(self.harness.charm.state.ro_port)
-
-#         # Initializing the kafka relation
-#         kafka_relation_id = self.harness.add_relation("kafka", "kafka")
-#         self.harness.add_relation_unit(kafka_relation_id, "kafka/0")
-#         self.harness.update_relation_data(
-#             kafka_relation_id, "kafka/0", {"host": "kafka", "port": 9092}
-#         )
-
-#         # Initializing the mongo relation
-#         mongodb_relation_id = self.harness.add_relation("mongodb", "mongodb")
-#         self.harness.add_relation_unit(mongodb_relation_id, "mongodb/0")
-#         self.harness.update_relation_data(
-#             mongodb_relation_id,
-#             "mongodb/0",
-#             {"connection_string": "mongodb://mongo:27017"},
-#         )
-
-#         # Initializing the RO relation
-#         ro_relation_id = self.harness.add_relation("ro", "ro")
-#         self.harness.add_relation_unit(ro_relation_id, "ro/0")
-#         self.harness.update_relation_data(
-#             ro_relation_id, "ro/0", {"host": "ro", "port": 9090}
-#         )
-
-#         # Checking if kafka data is stored
-#         self.assertEqual(self.harness.charm.state.message_host, "kafka")
-#         self.assertEqual(self.harness.charm.state.message_port, 9092)
-
-#         # Checking if mongodb data is stored
-#         self.assertEqual(self.harness.charm.state.database_uri, "mongodb://mongo:27017")
-
-#         # Checking if RO data is stored
-#         self.assertEqual(self.harness.charm.state.ro_host, "ro")
-#         self.assertEqual(self.harness.charm.state.ro_port, 9090)
-
-#         # Verifying status
-#         self.assertNotIsInstance(self.harness.charm.unit.status, BlockedStatus)
-
-#         pod_spec, _ = self.harness.get_pod_spec()
-
-#         self.assertDictEqual(expected_result, pod_spec)
-
-#     def test_on_kafka_relation_unit_changed(self) -> NoReturn:
-#         """Test to see if kafka relation is updated."""
-#         self.harness.charm.on.start.emit()
-
-#         self.assertIsNone(self.harness.charm.state.message_host)
-#         self.assertIsNone(self.harness.charm.state.message_port)
-
-#         relation_id = self.harness.add_relation("kafka", "kafka")
-#         self.harness.add_relation_unit(relation_id, "kafka/0")
-#         self.harness.update_relation_data(
-#             relation_id, "kafka/0", {"host": "kafka", "port": 9092}
-#         )
-
-#         self.assertEqual(self.harness.charm.state.message_host, "kafka")
-#         self.assertEqual(self.harness.charm.state.message_port, 9092)
-
-#         # Verifying status
-#         self.assertIsInstance(self.harness.charm.unit.status, BlockedStatus)
-
-#         # Verifying status message
-#         self.assertGreater(len(self.harness.charm.unit.status.message), 0)
-#         self.assertTrue(
-#             self.harness.charm.unit.status.message.startswith("Waiting for ")
-#         )
-#         self.assertNotIn("kafka", self.harness.charm.unit.status.message)
-#         self.assertIn("mongodb", self.harness.charm.unit.status.message)
-#         self.assertIn("ro", self.harness.charm.unit.status.message)
-#         self.assertTrue(self.harness.charm.unit.status.message.endswith(" relations"))
-
-#     def test_on_mongodb_unit_relation_changed(self) -> NoReturn:
-#         """Test to see if mongodb relation is updated."""
-#         self.harness.charm.on.start.emit()
-
-#         self.assertIsNone(self.harness.charm.state.database_uri)
-
-#         relation_id = self.harness.add_relation("mongodb", "mongodb")
-#         self.harness.add_relation_unit(relation_id, "mongodb/0")
-#         self.harness.update_relation_data(
-#             relation_id, "mongodb/0", {"connection_string": "mongodb://mongo:27017"}
-#         )
-
-#         self.assertEqual(self.harness.charm.state.database_uri, "mongodb://mongo:27017")
-
-#         # Verifying status
-#         self.assertIsInstance(self.harness.charm.unit.status, BlockedStatus)
-
-#         # Verifying status message
-#         self.assertGreater(len(self.harness.charm.unit.status.message), 0)
-#         self.assertTrue(
-#             self.harness.charm.unit.status.message.startswith("Waiting for ")
-#         )
-#         self.assertIn("kafka", self.harness.charm.unit.status.message)
-#         self.assertNotIn("mongodb", self.harness.charm.unit.status.message)
-#         self.assertIn("ro", self.harness.charm.unit.status.message)
-#         self.assertTrue(self.harness.charm.unit.status.message.endswith(" relations"))
-
-#     def test_on_ro_unit_relation_changed(self) -> NoReturn:
-#         """Test to see if RO relation is updated."""
-#         self.harness.charm.on.start.emit()
-
-#         self.assertIsNone(self.harness.charm.state.ro_host)
-#         self.assertIsNone(self.harness.charm.state.ro_port)
-
-#         relation_id = self.harness.add_relation("ro", "ro")
-#         self.harness.add_relation_unit(relation_id, "ro/0")
-#         self.harness.update_relation_data(
-#             relation_id, "ro/0", {"host": "ro", "port": 9090}
-#         )
-
-#         self.assertEqual(self.harness.charm.state.ro_host, "ro")
-#         self.assertEqual(self.harness.charm.state.ro_port, 9090)
-
-#         # Verifying status
-#         self.assertIsInstance(self.harness.charm.unit.status, BlockedStatus)
-
-#         # Verifying status message
-#         self.assertGreater(len(self.harness.charm.unit.status.message), 0)
-#         self.assertTrue(
-#             self.harness.charm.unit.status.message.startswith("Waiting for ")
-#         )
-#         self.assertIn("kafka", self.harness.charm.unit.status.message)
-#         self.assertIn("mongodb", self.harness.charm.unit.status.message)
-#         self.assertNotIn("ro", self.harness.charm.unit.status.message)
-#         self.assertTrue(self.harness.charm.unit.status.message.endswith(" relations"))
-
-
-# if __name__ == "__main__":
-#     unittest.main()
diff --git a/installers/charm/lcm/tests/test_pod_spec.py b/installers/charm/lcm/tests/test_pod_spec.py
deleted file mode 100644 (file)
index c74fb10..0000000
+++ /dev/null
@@ -1,426 +0,0 @@
-#!/usr/bin/env python3
-# Copyright 2020 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-from typing import NoReturn
-import unittest
-
-import pod_spec
-
-
-class TestPodSpec(unittest.TestCase):
-    """Pod spec unit tests."""
-
-    def test_make_pod_ports(self) -> NoReturn:
-        """Testing make pod ports."""
-        port = 9999
-
-        expected_result = [
-            {
-                "name": "lcm",
-                "containerPort": port,
-                "protocol": "TCP",
-            }
-        ]
-
-        pod_ports = pod_spec._make_pod_ports(9999)
-
-        self.assertListEqual(expected_result, pod_ports)
-
-    def test_make_pod_envconfig_without_vca_apiproxy(self) -> NoReturn:
-        """Testing make pod envconfig without vca_apiproxy configuration."""
-        config = {
-            "database_commonkey": "commonkey",
-            "log_level": "INFO",
-            "vca_host": "vca",
-            "vca_port": 1212,
-            "vca_user": "vca_user",
-            "vca_pubkey": "vca_pubkey",
-            "vca_password": "vca_password",
-            "vca_cacert": "vca_cacert",
-            "vca_cloud": "vca_cloud",
-            "vca_k8s_cloud": "vca_k8s_cloud",
-        }
-        relation_state = {
-            "message_host": "kafka",
-            "message_port": 2181,
-            "database_uri": "mongodb://mongo",
-            "ro_host": "ro",
-            "ro_port": 9090,
-        }
-
-        expected_result = {
-            "ALLOW_ANONYMOUS_LOGIN": "yes",
-            "OSMLCM_GLOBAL_LOGLEVEL": config["log_level"],
-            "OSMLCM_RO_HOST": relation_state["ro_host"],
-            "OSMLCM_RO_PORT": relation_state["ro_port"],
-            "OSMLCM_RO_TENANT": "osm",
-            "OSMLCM_MESSAGE_DRIVER": "kafka",
-            "OSMLCM_MESSAGE_HOST": relation_state["message_host"],
-            "OSMLCM_MESSAGE_PORT": relation_state["message_port"],
-            "OSMLCM_DATABASE_DRIVER": "mongo",
-            "OSMLCM_DATABASE_URI": relation_state["database_uri"],
-            "OSMLCM_DATABASE_COMMONKEY": config["database_commonkey"],
-            "OSMLCM_STORAGE_DRIVER": "mongo",
-            "OSMLCM_STORAGE_PATH": "/app/storage",
-            "OSMLCM_STORAGE_COLLECTION": "files",
-            "OSMLCM_STORAGE_URI": relation_state["database_uri"],
-            "OSMLCM_VCA_HOST": config["vca_host"],
-            "OSMLCM_VCA_PORT": config["vca_port"],
-            "OSMLCM_VCA_USER": config["vca_user"],
-            "OSMLCM_VCA_PUBKEY": config["vca_pubkey"],
-            "OSMLCM_VCA_SECRET": config["vca_password"],
-            "OSMLCM_VCA_CACERT": config["vca_cacert"],
-            "OSMLCM_VCA_CLOUD": config["vca_cloud"],
-            "OSMLCM_VCA_K8S_CLOUD": config["vca_k8s_cloud"],
-        }
-
-        pod_envconfig = pod_spec._make_pod_envconfig(config, relation_state)
-
-        self.assertDictEqual(expected_result, pod_envconfig)
-
-    def test_make_pod_envconfig_with_vca_apiproxy(self) -> NoReturn:
-        """Testing make pod envconfig with vca_apiproxy configuration."""
-        config = {
-            "database_commonkey": "commonkey",
-            "log_level": "INFO",
-            "vca_host": "vca",
-            "vca_port": 1212,
-            "vca_user": "vca_user",
-            "vca_pubkey": "vca_pubkey",
-            "vca_password": "vca_password",
-            "vca_cacert": "vca_cacert",
-            "vca_cloud": "vca_cloud",
-            "vca_k8s_cloud": "vca_k8s_cloud",
-            "vca_apiproxy": "vca_apiproxy",
-        }
-        relation_state = {
-            "message_host": "kafka",
-            "message_port": 2181,
-            "database_uri": "mongodb://mongo",
-            "ro_host": "ro",
-            "ro_port": 9090,
-        }
-
-        expected_result = {
-            "ALLOW_ANONYMOUS_LOGIN": "yes",
-            "OSMLCM_GLOBAL_LOGLEVEL": config["log_level"],
-            "OSMLCM_RO_HOST": relation_state["ro_host"],
-            "OSMLCM_RO_PORT": relation_state["ro_port"],
-            "OSMLCM_RO_TENANT": "osm",
-            "OSMLCM_MESSAGE_DRIVER": "kafka",
-            "OSMLCM_MESSAGE_HOST": relation_state["message_host"],
-            "OSMLCM_MESSAGE_PORT": relation_state["message_port"],
-            "OSMLCM_DATABASE_DRIVER": "mongo",
-            "OSMLCM_DATABASE_URI": relation_state["database_uri"],
-            "OSMLCM_DATABASE_COMMONKEY": config["database_commonkey"],
-            "OSMLCM_STORAGE_DRIVER": "mongo",
-            "OSMLCM_STORAGE_PATH": "/app/storage",
-            "OSMLCM_STORAGE_COLLECTION": "files",
-            "OSMLCM_STORAGE_URI": relation_state["database_uri"],
-            "OSMLCM_VCA_HOST": config["vca_host"],
-            "OSMLCM_VCA_PORT": config["vca_port"],
-            "OSMLCM_VCA_USER": config["vca_user"],
-            "OSMLCM_VCA_PUBKEY": config["vca_pubkey"],
-            "OSMLCM_VCA_SECRET": config["vca_password"],
-            "OSMLCM_VCA_CACERT": config["vca_cacert"],
-            "OSMLCM_VCA_CLOUD": config["vca_cloud"],
-            "OSMLCM_VCA_K8S_CLOUD": config["vca_k8s_cloud"],
-            "OSMLCM_VCA_APIPROXY": config["vca_apiproxy"],
-        }
-
-        pod_envconfig = pod_spec._make_pod_envconfig(config, relation_state)
-
-        self.assertDictEqual(expected_result, pod_envconfig)
-
-    def test_make_startup_probe(self) -> NoReturn:
-        """Testing make startup probe."""
-        expected_result = {
-            "exec": {"command": ["/usr/bin/pgrep python3"]},
-            "initialDelaySeconds": 60,
-            "timeoutSeconds": 5,
-        }
-
-        startup_probe = pod_spec._make_startup_probe()
-
-        self.assertDictEqual(expected_result, startup_probe)
-
-    def test_make_readiness_probe(self) -> NoReturn:
-        """Testing make readiness probe."""
-        port = 9999
-
-        expected_result = {
-            "httpGet": {
-                "path": "/osm/",
-                "port": port,
-            },
-            "initialDelaySeconds": 45,
-            "timeoutSeconds": 5,
-        }
-
-        readiness_probe = pod_spec._make_readiness_probe(port)
-
-        self.assertDictEqual(expected_result, readiness_probe)
-
-    def test_make_liveness_probe(self) -> NoReturn:
-        """Testing make liveness probe."""
-        port = 9999
-
-        expected_result = {
-            "httpGet": {
-                "path": "/osm/",
-                "port": port,
-            },
-            "initialDelaySeconds": 45,
-            "timeoutSeconds": 5,
-        }
-
-        liveness_probe = pod_spec._make_liveness_probe(port)
-
-        self.assertDictEqual(expected_result, liveness_probe)
-
-    def test_make_pod_spec(self) -> NoReturn:
-        """Testing make pod spec."""
-        image_info = {"upstream-source": "opensourcemano/lcm:8"}
-        config = {
-            "database_commonkey": "commonkey",
-            "log_level": "INFO",
-            "vca_host": "vca",
-            "vca_port": 1212,
-            "vca_user": "vca_user",
-            "vca_pubkey": "vca_pubkey",
-            "vca_password": "vca_password",
-            "vca_cacert": "vca_cacert",
-            "vca_cloud": "vca_cloud",
-            "vca_k8s_cloud": "vca_k8s_cloud",
-            "vca_apiproxy": "vca_apiproxy",
-        }
-        relation_state = {
-            "message_host": "kafka",
-            "message_port": 2181,
-            "database_uri": "mongodb://mongo",
-            "ro_host": "ro",
-            "ro_port": 9090,
-        }
-        app_name = "lcm"
-        port = 9999
-
-        expected_result = {
-            "version": 3,
-            "containers": [
-                {
-                    "name": app_name,
-                    "imageDetails": image_info,
-                    "imagePullPolicy": "Always",
-                    "ports": [
-                        {
-                            "name": app_name,
-                            "containerPort": port,
-                            "protocol": "TCP",
-                        }
-                    ],
-                    "envConfig": {
-                        "ALLOW_ANONYMOUS_LOGIN": "yes",
-                        "OSMLCM_GLOBAL_LOGLEVEL": config["log_level"],
-                        "OSMLCM_RO_HOST": relation_state["ro_host"],
-                        "OSMLCM_RO_PORT": relation_state["ro_port"],
-                        "OSMLCM_RO_TENANT": "osm",
-                        "OSMLCM_MESSAGE_DRIVER": "kafka",
-                        "OSMLCM_MESSAGE_HOST": relation_state["message_host"],
-                        "OSMLCM_MESSAGE_PORT": relation_state["message_port"],
-                        "OSMLCM_DATABASE_DRIVER": "mongo",
-                        "OSMLCM_DATABASE_URI": relation_state["database_uri"],
-                        "OSMLCM_DATABASE_COMMONKEY": config["database_commonkey"],
-                        "OSMLCM_STORAGE_DRIVER": "mongo",
-                        "OSMLCM_STORAGE_PATH": "/app/storage",
-                        "OSMLCM_STORAGE_COLLECTION": "files",
-                        "OSMLCM_STORAGE_URI": relation_state["database_uri"],
-                        "OSMLCM_VCA_HOST": config["vca_host"],
-                        "OSMLCM_VCA_PORT": config["vca_port"],
-                        "OSMLCM_VCA_USER": config["vca_user"],
-                        "OSMLCM_VCA_PUBKEY": config["vca_pubkey"],
-                        "OSMLCM_VCA_SECRET": config["vca_password"],
-                        "OSMLCM_VCA_CACERT": config["vca_cacert"],
-                        "OSMLCM_VCA_CLOUD": config["vca_cloud"],
-                        "OSMLCM_VCA_K8S_CLOUD": config["vca_k8s_cloud"],
-                        "OSMLCM_VCA_APIPROXY": config["vca_apiproxy"],
-                    },
-                }
-            ],
-            "kubernetesResources": {"ingressResources": []},
-        }
-
-        spec = pod_spec.make_pod_spec(
-            image_info, config, relation_state, app_name, port
-        )
-
-        self.assertDictEqual(expected_result, spec)
-
-    def test_make_pod_spec_without_vca_apiproxy(self) -> NoReturn:
-        """Testing make pod spec without vca_apiproxy."""
-        image_info = {"upstream-source": "opensourcemano/lcm:8"}
-        config = {
-            "database_commonkey": "commonkey",
-            "log_level": "INFO",
-            "vca_host": "vca",
-            "vca_port": 1212,
-            "vca_user": "vca_user",
-            "vca_pubkey": "vca_pubkey",
-            "vca_password": "vca_password",
-            "vca_cacert": "vca_cacert",
-            "vca_cloud": "vca_cloud",
-            "vca_k8s_cloud": "vca_k8s_cloud",
-        }
-        relation_state = {
-            "message_host": "kafka",
-            "message_port": 2181,
-            "database_uri": "mongodb://mongo",
-            "ro_host": "ro",
-            "ro_port": 9090,
-        }
-        app_name = "lcm"
-        port = 9999
-
-        expected_result = {
-            "version": 3,
-            "containers": [
-                {
-                    "name": app_name,
-                    "imageDetails": image_info,
-                    "imagePullPolicy": "Always",
-                    "ports": [
-                        {
-                            "name": app_name,
-                            "containerPort": port,
-                            "protocol": "TCP",
-                        }
-                    ],
-                    "envConfig": {
-                        "ALLOW_ANONYMOUS_LOGIN": "yes",
-                        "OSMLCM_GLOBAL_LOGLEVEL": config["log_level"],
-                        "OSMLCM_RO_HOST": relation_state["ro_host"],
-                        "OSMLCM_RO_PORT": relation_state["ro_port"],
-                        "OSMLCM_RO_TENANT": "osm",
-                        "OSMLCM_MESSAGE_DRIVER": "kafka",
-                        "OSMLCM_MESSAGE_HOST": relation_state["message_host"],
-                        "OSMLCM_MESSAGE_PORT": relation_state["message_port"],
-                        "OSMLCM_DATABASE_DRIVER": "mongo",
-                        "OSMLCM_DATABASE_URI": relation_state["database_uri"],
-                        "OSMLCM_DATABASE_COMMONKEY": config["database_commonkey"],
-                        "OSMLCM_STORAGE_DRIVER": "mongo",
-                        "OSMLCM_STORAGE_PATH": "/app/storage",
-                        "OSMLCM_STORAGE_COLLECTION": "files",
-                        "OSMLCM_STORAGE_URI": relation_state["database_uri"],
-                        "OSMLCM_VCA_HOST": config["vca_host"],
-                        "OSMLCM_VCA_PORT": config["vca_port"],
-                        "OSMLCM_VCA_USER": config["vca_user"],
-                        "OSMLCM_VCA_PUBKEY": config["vca_pubkey"],
-                        "OSMLCM_VCA_SECRET": config["vca_password"],
-                        "OSMLCM_VCA_CACERT": config["vca_cacert"],
-                        "OSMLCM_VCA_CLOUD": config["vca_cloud"],
-                        "OSMLCM_VCA_K8S_CLOUD": config["vca_k8s_cloud"],
-                    },
-                }
-            ],
-            "kubernetesResources": {"ingressResources": []},
-        }
-
-        spec = pod_spec.make_pod_spec(
-            image_info, config, relation_state, app_name, port
-        )
-
-        self.assertDictEqual(expected_result, spec)
-
-    def test_make_pod_spec_without_image_info(self) -> NoReturn:
-        """Testing make pod spec without image_info."""
-        image_info = None
-        config = {
-            "database_commonkey": "commonkey",
-            "log_level": "INFO",
-            "vca_host": "vca",
-            "vca_port": 1212,
-            "vca_user": "vca_user",
-            "vca_pubkey": "vca_pubkey",
-            "vca_password": "vca_password",
-            "vca_cacert": "vca_cacert",
-            "vca_cloud": "vca_cloud",
-            "vca_k8s_cloud": "vca_k8s_cloud",
-            "vca_apiproxy": "vca_apiproxy",
-        }
-        relation_state = {
-            "message_host": "kafka",
-            "message_port": 2181,
-            "database_uri": "mongodb://mongo",
-            "ro_host": "ro",
-            "ro_port": 9090,
-        }
-        app_name = "lcm"
-        port = 9999
-
-        spec = pod_spec.make_pod_spec(
-            image_info, config, relation_state, app_name, port
-        )
-
-        self.assertIsNone(spec)
-
-    def test_make_pod_spec_without_config(self) -> NoReturn:
-        """Testing make pod spec without config."""
-        image_info = {"upstream-source": "opensourcemano/lcm:8"}
-        config = {}
-        relation_state = {
-            "message_host": "kafka",
-            "message_port": 2181,
-            "database_uri": "mongodb://mongo",
-            "ro_host": "ro",
-            "ro_port": 9090,
-        }
-        app_name = "lcm"
-        port = 9999
-
-        with self.assertRaises(ValueError):
-            pod_spec.make_pod_spec(image_info, config, relation_state, app_name, port)
-
-    def test_make_pod_spec_without_relation_state(self) -> NoReturn:
-        """Testing make pod spec without relation_state."""
-        image_info = {"upstream-source": "opensourcemano/lcm:8"}
-        config = {
-            "database_commonkey": "commonkey",
-            "log_level": "INFO",
-            "vca_host": "vca",
-            "vca_port": 1212,
-            "vca_user": "vca_user",
-            "vca_pubkey": "vca_pubkey",
-            "vca_password": "vca_password",
-            "vca_cacert": "vca_cacert",
-            "vca_cloud": "vca_cloud",
-            "vca_k8s_cloud": "vca_k8s_cloud",
-            "vca_apiproxy": "vca_apiproxy",
-        }
-        relation_state = {}
-        app_name = "lcm"
-        port = 9999
-
-        with self.assertRaises(ValueError):
-            pod_spec.make_pod_spec(image_info, config, relation_state, app_name, port)
-
-
-if __name__ == "__main__":
-    unittest.main()
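Taken together, the deleted tests above pin down the contract of `pod_spec.make_pod_spec`: return `None` when `image_info` is missing, raise `ValueError` when `config` or `relation_state` is empty, and otherwise build a v3 pod spec. A minimal stand-alone sketch of that contract (reconstructed from the tests, not the actual `pod_spec` module; only the invariant fields are shown):

```python
from typing import Optional


def make_pod_spec(image_info, config, relation_state, app_name, port) -> Optional[dict]:
    """Sketch of the validation behaviour exercised by the deleted unit tests."""
    if not image_info:
        # The tests expect None (not an exception) when no image is supplied.
        return None
    if not config or not relation_state:
        raise ValueError("config and relation_state must be non-empty")
    return {
        "version": 3,
        "containers": [
            {
                "name": app_name,
                "imageDetails": image_info,
                "imagePullPolicy": "Always",
                "ports": [
                    {"name": app_name, "containerPort": port, "protocol": "TCP"}
                ],
            }
        ],
        "kubernetesResources": {"ingressResources": []},
    }
```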
diff --git a/installers/charm/lcm/tox.ini b/installers/charm/lcm/tox.ini
deleted file mode 100644 (file)
index f3c9144..0000000
+++ /dev/null
@@ -1,128 +0,0 @@
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-#######################################################################################
-
-[tox]
-envlist = black, cover, flake8, pylint, yamllint, safety
-skipsdist = true
-
-[tox:jenkins]
-toxworkdir = /tmp/.tox
-
-[testenv]
-basepython = python3.8
-setenv =
-  VIRTUAL_ENV={envdir}
-  PYTHONPATH = {toxinidir}:{toxinidir}/lib:{toxinidir}/src
-  PYTHONDONTWRITEBYTECODE = 1
-deps =  -r{toxinidir}/requirements.txt
-
-
-#######################################################################################
-[testenv:black]
-deps = black
-commands =
-        black --check --diff src/ tests/
-
-
-#######################################################################################
-[testenv:cover]
-deps =  {[testenv]deps}
-        -r{toxinidir}/requirements-test.txt
-        coverage
-        nose2
-commands =
-        sh -c 'rm -f nosetests.xml'
-        coverage erase
-        nose2 -C --coverage src
-        coverage report --omit='*tests*'
-        coverage html -d ./cover --omit='*tests*'
-        coverage xml -o coverage.xml --omit=*tests*
-whitelist_externals = sh
-
-
-#######################################################################################
-[testenv:flake8]
-deps =  flake8
-        flake8-import-order
-commands =
-        flake8 src/ tests/
-
-
-#######################################################################################
-[testenv:pylint]
-deps =  {[testenv]deps}
-        -r{toxinidir}/requirements-test.txt
-        pylint==2.10.2
-commands =
-    pylint -E src/ tests/
-
-
-#######################################################################################
-[testenv:safety]
-setenv =
-        LC_ALL=C.UTF-8
-        LANG=C.UTF-8
-deps =  {[testenv]deps}
-        safety
-commands =
-        - safety check --full-report
-
-
-#######################################################################################
-[testenv:yamllint]
-deps =  {[testenv]deps}
-        -r{toxinidir}/requirements-test.txt
-        yamllint
-commands = yamllint .
-
-#######################################################################################
-[testenv:build]
-passenv=HTTP_PROXY HTTPS_PROXY NO_PROXY
-whitelist_externals =
-  charmcraft
-  sh
-commands =
-  charmcraft pack
-  sh -c 'ubuntu_version=20.04; \
-        architectures="amd64-aarch64-arm64"; \
-        charm_name=`cat metadata.yaml | grep -E "^name: " | cut -f 2 -d " "`; \
-        mv $charm_name"_ubuntu-"$ubuntu_version-$architectures.charm $charm_name.charm'
-
-#######################################################################################
-[flake8]
-ignore =
-        W291,
-        W293,
-        W503,
-        E123,
-        E125,
-        E226,
-        E241,
-exclude =
-        .git,
-        __pycache__,
-        .tox,
-max-line-length = 120
-show-source = True
-builtins = _
-max-complexity = 10
-import-order-style = google
diff --git a/installers/charm/lint.sh b/installers/charm/lint.sh
deleted file mode 100755 (executable)
index 3c42dd1..0000000
+++ /dev/null
@@ -1,32 +0,0 @@
-#!/bin/bash
-# Copyright 2020 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-#     Unless required by applicable law or agreed to in writing, software
-#     distributed under the License is distributed on an "AS IS" BASIS,
-#     WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-#     See the License for the specific language governing permissions and
-#     limitations under the License.
-
-set -eux
-
-function lint() {
-    cd $1
-    tox -e lint
-    cd ..
-}
-
-lint 'lcm-k8s'
-lint 'mon-k8s'
-lint 'nbi-k8s'
-lint 'pol-k8s'
-lint 'ro-k8s'
-lint 'ui-k8s'
-lint 'keystone'
-lint 'ng-ui'
-lint 'pla'
\ No newline at end of file
diff --git a/installers/charm/mon/.gitignore b/installers/charm/mon/.gitignore
deleted file mode 100644 (file)
index 2885df2..0000000
+++ /dev/null
@@ -1,30 +0,0 @@
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-venv
-.vscode
-build
-*.charm
-.coverage
-coverage.xml
-.stestr
-cover
-release
\ No newline at end of file
diff --git a/installers/charm/mon/.jujuignore b/installers/charm/mon/.jujuignore
deleted file mode 100644 (file)
index 3ae3e7d..0000000
+++ /dev/null
@@ -1,34 +0,0 @@
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-venv
-.vscode
-build
-*.charm
-.coverage
-coverage.xml
-.gitignore
-.stestr
-cover
-release
-tests/
-requirements*
-tox.ini
diff --git a/installers/charm/mon/.yamllint.yaml b/installers/charm/mon/.yamllint.yaml
deleted file mode 100644 (file)
index d71fb69..0000000
+++ /dev/null
@@ -1,34 +0,0 @@
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
----
-extends: default
-
-yaml-files:
-  - "*.yaml"
-  - "*.yml"
-  - ".yamllint"
-ignore: |
-  .tox
-  cover/
-  build/
-  venv
-  release/
diff --git a/installers/charm/mon/README.md b/installers/charm/mon/README.md
deleted file mode 100644 (file)
index 216a784..0000000
+++ /dev/null
@@ -1,23 +0,0 @@
-<!-- Copyright 2020 Canonical Ltd.
-
-Licensed under the Apache License, Version 2.0 (the "License"); you may
-not use this file except in compliance with the License. You may obtain
-a copy of the License at
-
-        http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-License for the specific language governing permissions and limitations
-under the License.
-
-For those usages not covered by the Apache License, Version 2.0 please
-contact: legal@canonical.com
-
-To get in touch with the maintainers, please contact:
-osm-charmers@lists.launchpad.net -->
-
-# MON operator Charm for Kubernetes
-
-## Requirements
diff --git a/installers/charm/mon/charmcraft.yaml b/installers/charm/mon/charmcraft.yaml
deleted file mode 100644 (file)
index 0a285a9..0000000
+++ /dev/null
@@ -1,37 +0,0 @@
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-type: charm
-bases:
-  - build-on:
-      - name: ubuntu
-        channel: "20.04"
-        architectures: ["amd64"]
-    run-on:
-      - name: ubuntu
-        channel: "20.04"
-        architectures:
-          - amd64
-          - aarch64
-          - arm64
-parts:
-  charm:
-    build-packages: [git]
diff --git a/installers/charm/mon/config.yaml b/installers/charm/mon/config.yaml
deleted file mode 100644 (file)
index 04f52c0..0000000
+++ /dev/null
@@ -1,135 +0,0 @@
-# Copyright 2020 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-options:
-  openstack_default_granularity:
-    description: OpenStack default granularity
-    type: int
-    default: 300
-  global_request_timeout:
-    description: Global request timeout
-    type: int
-    default: 10
-  log_level:
-    description: Log level
-    type: string
-    default: INFO
-  database_commonkey:
-    description: Database common key
-    type: string
-    default: osm
-  mongodb_uri:
-    type: string
-    description: MongoDB URI (external database)
-  collector_interval:
-    description: Collector interval
-    type: int
-    default: 30
-  evaluator_interval:
-    description: Evaluator interval
-    type: int
-    default: 30
-  vca_host:
-    type: string
-    description: "The VCA host."
-    default: "admin"
-  vca_user:
-    type: string
-    description: "The VCA user name."
-    default: "admin"
-  vca_secret:
-    type: string
-    description: "The VCA user password."
-    default: "secret"
-  vca_cacert:
-    type: string
-    description: "The VCA cacert."
-    default: ""
-  grafana_url:
-    description: Grafana URL
-    type: string
-    default: http://grafana:3000
-  grafana_user:
-    description: Grafana user
-    type: string
-    default: admin
-  grafana_password:
-    description: Grafana password
-    type: string
-    default: admin
-  keystone_enabled:
-    description: MON will use Keystone backend
-    type: boolean
-    default: false
-  certificates:
-    type: string
-    description: |
-      comma-separated list of <name>:<content> certificates.
-      Where:
-        name: name of the file for the certificate
-        content: base64 content of the certificate
-      The path for the files is /certs.
-  image_pull_policy:
-    type: string
-    description: |
-      ImagePullPolicy configuration for the pod.
-      Possible values: always, ifnotpresent, never
-    default: always
-  debug_mode:
-    description: |
-      If true, debug mode is activated. It means that the service will not run,
-      and instead, the command for the container will be a `sleep infinity`.
-      Note: If enabled, security_context will be disabled.
-    type: boolean
-    default: false
-  debug_pubkey:
-    description: |
-      Public SSH key that will be injected to the application pod.
-    type: string
-  debug_mon_local_path:
-    description: |
-      Local full path to the MON project.
-
-      The path will be mounted to the docker image,
-      which means changes during the debugging will be saved in your local path.
-    type: string
-  debug_n2vc_local_path:
-    description: |
-      Local full path to the N2VC project.
-
-      The path will be mounted to the docker image,
-      which means changes during the debugging will be saved in your local path.
-    type: string
-  debug_common_local_path:
-    description: |
-      Local full path to the COMMON project.
-
-      The path will be mounted to the docker image,
-      which means changes during the debugging will be saved in your local path.
-    type: string
-  security_context:
-    description: Enables the security context of the pods
-    type: boolean
-    default: false
-  vm_infra_metrics:
-    description: Enables querying the VIMs asking for the status of the VMs
-    type: boolean
-    default: true
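The `certificates` option above encodes each file as a `<name>:<base64 content>` pair in a comma-separated list. A sketch of building and parsing such a value, combining the behaviour of the `_extract_certificates` and `decode` helpers found in the MON charm source elsewhere in this diff (function names here are illustrative):

```python
import base64


def encode_cert(name: str, pem_text: str) -> str:
    # One config entry: "<name>:<base64 content>".
    content = base64.b64encode(pem_text.encode("utf-8")).decode("utf-8")
    return f"{name}:{content}"


def extract_certificates(certs_config: str) -> dict:
    """Parse the comma-separated config value into {name: decoded content}."""
    certificates = {}
    for cert in certs_config.split(",") if certs_config else []:
        name, content = cert.split(":", 1)
        if not name or not content:
            raise ValueError("certificate name and content must be non-empty")
        certificates[name] = base64.b64decode(content).decode("utf-8")
    return certificates
```

The charm mounts the parsed files under `/certs`, as the option description states.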
diff --git a/installers/charm/mon/lib/charms/kafka_k8s/v0/kafka.py b/installers/charm/mon/lib/charms/kafka_k8s/v0/kafka.py
deleted file mode 100644 (file)
index 1baf9a8..0000000
+++ /dev/null
@@ -1,207 +0,0 @@
-# Copyright 2022 Canonical Ltd.
-# See LICENSE file for licensing details.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-# implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""Kafka library.
-
-This [library](https://juju.is/docs/sdk/libraries) implements both sides of the
-`kafka` [interface](https://juju.is/docs/sdk/relations).
-
-The *provider* side of this interface is implemented by the
-[kafka-k8s Charmed Operator](https://charmhub.io/kafka-k8s).
-
-Any Charmed Operator that *requires* Kafka for providing its
-service should implement the *requirer* side of this interface.
-
-In a nutshell, using this library to implement a Charmed Operator *requiring*
-Kafka would look like:
-
-```
-$ charmcraft fetch-lib charms.kafka_k8s.v0.kafka
-```
-
-`metadata.yaml`:
-
-```
-requires:
-  kafka:
-    interface: kafka
-    limit: 1
-```
-
-`src/charm.py`:
-
-```
-from charms.kafka_k8s.v0.kafka import KafkaEvents, KafkaRequires
-from ops.charm import CharmBase
-
-
-class MyCharm(CharmBase):
-
-    on = KafkaEvents()
-
-    def __init__(self, *args):
-        super().__init__(*args)
-        self.kafka = KafkaRequires(self)
-        self.framework.observe(
-            self.on.kafka_available,
-            self._on_kafka_available,
-        )
-        self.framework.observe(
-            self.on.kafka_broken,
-            self._on_kafka_broken,
-        )
-
-    def _on_kafka_available(self, event):
-        # Get Kafka host and port
-        host: str = self.kafka.host
-        port: int = self.kafka.port
-        # host => "kafka-k8s"
-        # port => 9092
-
-    def _on_kafka_broken(self, event):
-        # Stop service
-        # ...
-        self.unit.status = BlockedStatus("need kafka relation")
-```
-
-You can file bugs
-[here](https://github.com/charmed-osm/kafka-k8s-operator/issues)!
-"""
-
-from typing import Optional
-
-from ops.charm import CharmBase, CharmEvents
-from ops.framework import EventBase, EventSource, Object
-
-# The unique Charmhub library identifier, never change it
-from ops.model import Relation
-
-LIBID = "eacc8c85082347c9aae740e0220b8376"
-
-# Increment this major API version when introducing breaking changes
-LIBAPI = 0
-
-# Increment this PATCH version before using `charmcraft publish-lib` or reset
-# to 0 if you are raising the major API version
-LIBPATCH = 3
-
-
-KAFKA_HOST_APP_KEY = "host"
-KAFKA_PORT_APP_KEY = "port"
-
-
-class _KafkaAvailableEvent(EventBase):
-    """Event emitted when Kafka is available."""
-
-
-class _KafkaBrokenEvent(EventBase):
-    """Event emitted when Kafka relation is broken."""
-
-
-class KafkaEvents(CharmEvents):
-    """Kafka events.
-
-    This class defines the events that Kafka can emit.
-
-    Events:
-        kafka_available (_KafkaAvailableEvent)
-    """
-
-    kafka_available = EventSource(_KafkaAvailableEvent)
-    kafka_broken = EventSource(_KafkaBrokenEvent)
-
-
-class KafkaRequires(Object):
-    """Requires-side of the Kafka relation."""
-
-    def __init__(self, charm: CharmBase, endpoint_name: str = "kafka") -> None:
-        super().__init__(charm, endpoint_name)
-        self.charm = charm
-        self._endpoint_name = endpoint_name
-
-        # Observe relation events
-        event_observe_mapping = {
-            charm.on[self._endpoint_name].relation_changed: self._on_relation_changed,
-            charm.on[self._endpoint_name].relation_broken: self._on_relation_broken,
-        }
-        for event, observer in event_observe_mapping.items():
-            self.framework.observe(event, observer)
-
-    def _on_relation_changed(self, event) -> None:
-        if event.relation.app and all(
-            key in event.relation.data[event.relation.app]
-            for key in (KAFKA_HOST_APP_KEY, KAFKA_PORT_APP_KEY)
-        ):
-            self.charm.on.kafka_available.emit()
-
-    def _on_relation_broken(self, _) -> None:
-        self.charm.on.kafka_broken.emit()
-
-    @property
-    def host(self) -> str:
-        relation: Relation = self.model.get_relation(self._endpoint_name)
-        return (
-            relation.data[relation.app].get(KAFKA_HOST_APP_KEY)
-            if relation and relation.app
-            else None
-        )
-
-    @property
-    def port(self) -> int:
-        relation: Relation = self.model.get_relation(self._endpoint_name)
-        return (
-            int(relation.data[relation.app].get(KAFKA_PORT_APP_KEY))
-            if relation and relation.app
-            else None
-        )
-
-
-class KafkaProvides(Object):
-    """Provides-side of the Kafka relation."""
-
-    def __init__(self, charm: CharmBase, endpoint_name: str = "kafka") -> None:
-        super().__init__(charm, endpoint_name)
-        self._endpoint_name = endpoint_name
-
-    def set_host_info(self, host: str, port: int, relation: Optional[Relation] = None) -> None:
-        """Set Kafka host and port.
-
-        This function writes in the application data of the relation, therefore,
-        only the unit leader can call it.
-
-        Args:
-            host (str): Kafka hostname or IP address.
-            port (int): Kafka port.
-            relation (Optional[Relation]): Relation to update.
-                                           If not specified, all relations will be updated.
-
-        Raises:
-            Exception: if a non-leader unit calls this function.
-        """
-        if not self.model.unit.is_leader():
-            raise Exception("only the leader can set host information.")
-
-        if relation:
-            self._update_relation_data(host, port, relation)
-            return
-
-        for relation in self.model.relations[self._endpoint_name]:
-            self._update_relation_data(host, port, relation)
-
-    def _update_relation_data(self, host: str, port: int, relation: Relation) -> None:
-        """Update data in relation if needed."""
-        relation.data[self.model.app][KAFKA_HOST_APP_KEY] = host
-        relation.data[self.model.app][KAFKA_PORT_APP_KEY] = str(port)
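One detail of the deleted library worth keeping in mind when re-implementing it: Juju relation data bags hold only strings, so `KafkaProvides` writes `str(port)` and `KafkaRequires.port` converts back with `int(...)`. A dependency-free sketch of that round-trip (plain dicts stand in for Juju relation data; function names are illustrative):

```python
KAFKA_HOST_APP_KEY = "host"
KAFKA_PORT_APP_KEY = "port"


def write_host_info(relation_data: dict, host: str, port: int) -> None:
    # Relation data bags only store strings, hence the str() conversion.
    relation_data[KAFKA_HOST_APP_KEY] = host
    relation_data[KAFKA_PORT_APP_KEY] = str(port)


def read_host_info(relation_data: dict):
    # Mirrors KafkaRequires.host/.port: None when the key is absent,
    # int() conversion for the port otherwise.
    host = relation_data.get(KAFKA_HOST_APP_KEY)
    raw_port = relation_data.get(KAFKA_PORT_APP_KEY)
    port = int(raw_port) if raw_port is not None else None
    return host, port
```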
diff --git a/installers/charm/mon/metadata.yaml b/installers/charm/mon/metadata.yaml
deleted file mode 100644 (file)
index f3c3990..0000000
+++ /dev/null
@@ -1,49 +0,0 @@
-# Copyright 2020 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-name: osm-mon
-summary: OSM Monitoring (MON)
-description: |
-  A CAAS charm to deploy OSM's Monitoring (MON).
-series:
-  - kubernetes
-tags:
-  - kubernetes
-  - osm
-  - mon
-min-juju-version: 2.8.0
-deployment:
-  type: stateless
-  service: cluster
-resources:
-  image:
-    type: oci-image
-    description: OSM docker image for MON
-    upstream-source: "opensourcemano/mon:latest"
-requires:
-  kafka:
-    interface: kafka
-  mongodb:
-    interface: mongodb
-  prometheus:
-    interface: prometheus
-  keystone:
-    interface: keystone
diff --git a/installers/charm/mon/requirements-test.txt b/installers/charm/mon/requirements-test.txt
deleted file mode 100644 (file)
index cf61dd4..0000000
+++ /dev/null
@@ -1,20 +0,0 @@
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-mock==4.0.3
diff --git a/installers/charm/mon/requirements.txt b/installers/charm/mon/requirements.txt
deleted file mode 100644 (file)
index 1a8928c..0000000
+++ /dev/null
@@ -1,22 +0,0 @@
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-git+https://github.com/charmed-osm/ops-lib-charmed-osm/@master
\ No newline at end of file
diff --git a/installers/charm/mon/src/charm.py b/installers/charm/mon/src/charm.py
deleted file mode 100755 (executable)
index 9ad49ad..0000000
+++ /dev/null
@@ -1,395 +0,0 @@
-#!/usr/bin/env python3
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-# pylint: disable=E0213
-
-
-import base64
-import logging
-from typing import NoReturn, Optional
-
-
-from charms.kafka_k8s.v0.kafka import KafkaEvents, KafkaRequires
-from ops.main import main
-from opslib.osm.charm import CharmedOsmBase, RelationsMissing
-from opslib.osm.interfaces.keystone import KeystoneClient
-from opslib.osm.interfaces.mongo import MongoClient
-from opslib.osm.interfaces.prometheus import PrometheusClient
-from opslib.osm.pod import (
-    ContainerV3Builder,
-    FilesV3Builder,
-    PodRestartPolicy,
-    PodSpecV3Builder,
-)
-from opslib.osm.validator import ModelValidator, validator
-
-
-logger = logging.getLogger(__name__)
-
-PORT = 8000
-
-
-def _check_certificate_data(name: str, content: str):
-    if not name or not content:
-        raise ValueError("certificate name and content must be a non-empty string")
-
-
-def _extract_certificates(certs_config: str):
-    certificates = {}
-    if certs_config:
-        cert_list = certs_config.split(",")
-        for cert in cert_list:
-            name, content = cert.split(":")
-            _check_certificate_data(name, content)
-            certificates[name] = content
-    return certificates
-
-
-def decode(content: str):
-    return base64.b64decode(content.encode("utf-8")).decode("utf-8")
-
-
-class ConfigModel(ModelValidator):
-    keystone_enabled: bool
-    vca_host: str
-    vca_user: str
-    vca_secret: str
-    vca_cacert: str
-    database_commonkey: str
-    mongodb_uri: Optional[str]
-    log_level: str
-    openstack_default_granularity: int
-    global_request_timeout: int
-    collector_interval: int
-    vm_infra_metrics: bool
-    evaluator_interval: int
-    grafana_url: str
-    grafana_user: str
-    grafana_password: str
-    certificates: Optional[str]
-    image_pull_policy: str
-    debug_mode: bool
-    security_context: bool
-
-    @validator("log_level")
-    def validate_log_level(cls, v):
-        if v not in {"INFO", "DEBUG"}:
-            raise ValueError("value must be INFO or DEBUG")
-        return v
-
-    @validator("certificates")
-    def validate_certificates(cls, v):
-        # Raises an exception if it cannot extract the certificates
-        _extract_certificates(v)
-        return v
-
-    @validator("mongodb_uri")
-    def validate_mongodb_uri(cls, v):
-        if v and not v.startswith("mongodb://"):
-            raise ValueError("mongodb_uri is not properly formed")
-        return v
-
-    @validator("image_pull_policy")
-    def validate_image_pull_policy(cls, v):
-        values = {
-            "always": "Always",
-            "ifnotpresent": "IfNotPresent",
-            "never": "Never",
-        }
-        v = v.lower()
-        if v not in values.keys():
-            raise ValueError("value must be always, ifnotpresent or never")
-        return values[v]
-
-    @property
-    def certificates_dict(self):
-        return _extract_certificates(self.certificates) if self.certificates else {}
-
-
-class MonCharm(CharmedOsmBase):
-    on = KafkaEvents()
-
-    def __init__(self, *args) -> NoReturn:
-        super().__init__(
-            *args,
-            oci_image="image",
-            vscode_workspace=VSCODE_WORKSPACE,
-        )
-        if self.config.get("debug_mode"):
-            self.enable_debug_mode(
-                pubkey=self.config.get("debug_pubkey"),
-                hostpaths={
-                    "MON": {
-                        "hostpath": self.config.get("debug_mon_local_path"),
-                        "container-path": "/usr/lib/python3/dist-packages/osm_mon",
-                    },
-                    "N2VC": {
-                        "hostpath": self.config.get("debug_n2vc_local_path"),
-                        "container-path": "/usr/lib/python3/dist-packages/n2vc",
-                    },
-                    "osm_common": {
-                        "hostpath": self.config.get("debug_common_local_path"),
-                        "container-path": "/usr/lib/python3/dist-packages/osm_common",
-                    },
-                },
-            )
-        self.kafka = KafkaRequires(self)
-        self.framework.observe(self.on.kafka_available, self.configure_pod)
-        self.framework.observe(self.on.kafka_broken, self.configure_pod)
-
-        self.mongodb_client = MongoClient(self, "mongodb")
-        self.framework.observe(self.on["mongodb"].relation_changed, self.configure_pod)
-        self.framework.observe(self.on["mongodb"].relation_broken, self.configure_pod)
-
-        self.prometheus_client = PrometheusClient(self, "prometheus")
-        self.framework.observe(
-            self.on["prometheus"].relation_changed, self.configure_pod
-        )
-        self.framework.observe(
-            self.on["prometheus"].relation_broken, self.configure_pod
-        )
-
-        self.keystone_client = KeystoneClient(self, "keystone")
-        self.framework.observe(self.on["keystone"].relation_changed, self.configure_pod)
-        self.framework.observe(self.on["keystone"].relation_broken, self.configure_pod)
-
-    def _check_missing_dependencies(self, config: ConfigModel):
-        missing_relations = []
-
-        if not self.kafka.host or not self.kafka.port:
-            missing_relations.append("kafka")
-        if not config.mongodb_uri and self.mongodb_client.is_missing_data_in_unit():
-            missing_relations.append("mongodb")
-        if self.prometheus_client.is_missing_data_in_app():
-            missing_relations.append("prometheus")
-        if config.keystone_enabled:
-            if self.keystone_client.is_missing_data_in_app():
-                missing_relations.append("keystone")
-
-        if missing_relations:
-            raise RelationsMissing(missing_relations)
-
-    def _build_cert_files(
-        self,
-        config: ConfigModel,
-    ):
-        cert_files_builder = FilesV3Builder()
-        for name, content in config.certificates_dict.items():
-            cert_files_builder.add_file(name, decode(content), mode=0o600)
-        return cert_files_builder.build()
-
-    def build_pod_spec(self, image_info):
-        # Validate config
-        config = ConfigModel(**dict(self.config))
-
-        if config.mongodb_uri and not self.mongodb_client.is_missing_data_in_unit():
-            raise Exception("Mongodb data cannot be provided via config and relation")
-
-        # Check relations
-        self._check_missing_dependencies(config)
-
-        security_context_enabled = (
-            config.security_context if not config.debug_mode else False
-        )
-
-        # Create Builder for the PodSpec
-        pod_spec_builder = PodSpecV3Builder(
-            enable_security_context=security_context_enabled
-        )
-
-        # Add secrets to the pod
-        mongodb_secret_name = f"{self.app.name}-mongodb-secret"
-        pod_spec_builder.add_secret(
-            mongodb_secret_name,
-            {
-                "uri": config.mongodb_uri or self.mongodb_client.connection_string,
-                "commonkey": config.database_commonkey,
-            },
-        )
-        grafana_secret_name = f"{self.app.name}-grafana-secret"
-        pod_spec_builder.add_secret(
-            grafana_secret_name,
-            {
-                "url": config.grafana_url,
-                "user": config.grafana_user,
-                "password": config.grafana_password,
-            },
-        )
-
-        vca_secret_name = f"{self.app.name}-vca-secret"
-        pod_spec_builder.add_secret(
-            vca_secret_name,
-            {
-                "host": config.vca_host,
-                "user": config.vca_user,
-                "secret": config.vca_secret,
-                "cacert": config.vca_cacert,
-            },
-        )
-
-        # Build Container
-        container_builder = ContainerV3Builder(
-            self.app.name,
-            image_info,
-            config.image_pull_policy,
-            run_as_non_root=security_context_enabled,
-        )
-        certs_files = self._build_cert_files(config)
-
-        if certs_files:
-            container_builder.add_volume_config("certs", "/certs", certs_files)
-
-        container_builder.add_port(name=self.app.name, port=PORT)
-        container_builder.add_envs(
-            {
-                # General configuration
-                "ALLOW_ANONYMOUS_LOGIN": "yes",
-                "OSMMON_OPENSTACK_DEFAULT_GRANULARITY": config.openstack_default_granularity,
-                "OSMMON_GLOBAL_REQUEST_TIMEOUT": config.global_request_timeout,
-                "OSMMON_GLOBAL_LOGLEVEL": config.log_level,
-                "OSMMON_COLLECTOR_INTERVAL": config.collector_interval,
-                "OSMMON_COLLECTOR_VM_INFRA_METRICS": config.vm_infra_metrics,
-                "OSMMON_EVALUATOR_INTERVAL": config.evaluator_interval,
-                # Kafka configuration
-                "OSMMON_MESSAGE_DRIVER": "kafka",
-                "OSMMON_MESSAGE_HOST": self.kafka.host,
-                "OSMMON_MESSAGE_PORT": self.kafka.port,
-                # Database configuration
-                "OSMMON_DATABASE_DRIVER": "mongo",
-                # Prometheus configuration
-                "OSMMON_PROMETHEUS_URL": f"http://{self.prometheus_client.hostname}:{self.prometheus_client.port}",
-            }
-        )
-        prometheus_user = self.prometheus_client.user
-        prometheus_password = self.prometheus_client.password
-        if prometheus_user and prometheus_password:
-            container_builder.add_envs(
-                {
-                    "OSMMON_PROMETHEUS_USER": prometheus_user,
-                    "OSMMON_PROMETHEUS_PASSWORD": prometheus_password,
-                }
-            )
-        container_builder.add_secret_envs(
-            secret_name=mongodb_secret_name,
-            envs={
-                "OSMMON_DATABASE_URI": "uri",
-                "OSMMON_DATABASE_COMMONKEY": "commonkey",
-            },
-        )
-        container_builder.add_secret_envs(
-            secret_name=vca_secret_name,
-            envs={
-                "OSMMON_VCA_HOST": "host",
-                "OSMMON_VCA_USER": "user",
-                "OSMMON_VCA_SECRET": "secret",
-                "OSMMON_VCA_CACERT": "cacert",
-            },
-        )
-        container_builder.add_secret_envs(
-            secret_name=grafana_secret_name,
-            envs={
-                "OSMMON_GRAFANA_URL": "url",
-                "OSMMON_GRAFANA_USER": "user",
-                "OSMMON_GRAFANA_PASSWORD": "password",
-            },
-        )
-        if config.keystone_enabled:
-            keystone_secret_name = f"{self.app.name}-keystone-secret"
-            pod_spec_builder.add_secret(
-                keystone_secret_name,
-                {
-                    "url": self.keystone_client.host,
-                    "user_domain": self.keystone_client.user_domain_name,
-                    "project_domain": self.keystone_client.project_domain_name,
-                    "service_username": self.keystone_client.username,
-                    "service_password": self.keystone_client.password,
-                    "service_project": self.keystone_client.service,
-                },
-            )
-            container_builder.add_env("OSMMON_KEYSTONE_ENABLED", True)
-            container_builder.add_secret_envs(
-                secret_name=keystone_secret_name,
-                envs={
-                    "OSMMON_KEYSTONE_URL": "url",
-                    "OSMMON_KEYSTONE_DOMAIN_NAME": "user_domain",
-                    "OSMMON_KEYSTONE_PROJECT_DOMAIN_NAME": "project_domain",
-                    "OSMMON_KEYSTONE_SERVICE_USER": "service_username",
-                    "OSMMON_KEYSTONE_SERVICE_PASSWORD": "service_password",
-                    "OSMMON_KEYSTONE_SERVICE_PROJECT": "service_project",
-                },
-            )
-        container = container_builder.build()
-
-        # Add restart policy
-        restart_policy = PodRestartPolicy()
-        restart_policy.add_secrets()
-        pod_spec_builder.set_restart_policy(restart_policy)
-
-        # Add container to pod spec
-        pod_spec_builder.add_container(container)
-
-        return pod_spec_builder.build()
-
-
-VSCODE_WORKSPACE = {
-    "folders": [
-        {"path": "/usr/lib/python3/dist-packages/osm_mon"},
-        {"path": "/usr/lib/python3/dist-packages/osm_common"},
-        {"path": "/usr/lib/python3/dist-packages/n2vc"},
-    ],
-    "settings": {},
-    "launch": {
-        "version": "0.2.0",
-        "configurations": [
-            {
-                "name": "MON Server",
-                "type": "python",
-                "request": "launch",
-                "module": "osm_mon.cmd.mon_server",
-                "justMyCode": False,
-            },
-            {
-                "name": "MON evaluator",
-                "type": "python",
-                "request": "launch",
-                "module": "osm_mon.cmd.mon_evaluator",
-                "justMyCode": False,
-            },
-            {
-                "name": "MON collector",
-                "type": "python",
-                "request": "launch",
-                "module": "osm_mon.cmd.mon_collector",
-                "justMyCode": False,
-            },
-            {
-                "name": "MON dashboarder",
-                "type": "python",
-                "request": "launch",
-                "module": "osm_mon.cmd.mon_dashboarder",
-                "justMyCode": False,
-            },
-        ],
-    },
-}
-if __name__ == "__main__":
-    main(MonCharm)
diff --git a/installers/charm/mon/src/pod_spec.py b/installers/charm/mon/src/pod_spec.py
deleted file mode 100644 (file)
index dcadfc0..0000000
+++ /dev/null
@@ -1,231 +0,0 @@
-#!/usr/bin/env python3
-# Copyright 2020 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-import logging
-from typing import Any, Dict, List
-
-logger = logging.getLogger(__name__)
-
-
-def _validate_data(
-    config_data: Dict[str, Any], relation_data: Dict[str, Any]
-) -> None:
-    """Validate input data.
-
-    Args:
-        config_data (Dict[str, Any]): configuration data.
-        relation_data (Dict[str, Any]): relation data.
-
-    Raises:
-        ValueError: if any config or relation value fails validation.
-    """
-    config_validators = {
-        "openstack_default_granularity": lambda value, _: (
-            isinstance(value, int) and value > 0
-        ),
-        "global_request_timeout": lambda value, _: isinstance(value, int) and value > 0,
-        "log_level": lambda value, _: (
-            isinstance(value, str) and value in ("INFO", "DEBUG")
-        ),
-        "collector_interval": lambda value, _: isinstance(value, int) and value > 0,
-        "evaluator_interval": lambda value, _: isinstance(value, int) and value > 0,
-        "database_commonkey": lambda value, _: (
-            isinstance(value, str) and len(value) > 0
-        ),
-        "vca_host": lambda value, _: isinstance(value, str) and len(value) > 0,
-        "vca_user": lambda value, _: isinstance(value, str) and len(value) > 0,
-        "vca_password": lambda value, _: isinstance(value, str) and len(value) > 0,
-        "vca_cacert": lambda value, _: isinstance(value, str),
-    }
-    relation_validators = {
-        "message_host": lambda value, _: isinstance(value, str) and len(value) > 0,
-        "message_port": lambda value, _: isinstance(value, int) and value > 0,
-        "database_uri": lambda value, _: (
-            isinstance(value, str) and value.startswith("mongodb://")
-        ),
-        "prometheus_host": lambda value, _: isinstance(value, str) and len(value) > 0,
-        "prometheus_port": lambda value, _: isinstance(value, int) and value > 0,
-    }
-    problems = []
-
-    for key, validator in config_validators.items():
-        valid = validator(config_data.get(key), config_data)
-
-        if not valid:
-            problems.append(key)
-
-    for key, validator in relation_validators.items():
-        valid = validator(relation_data.get(key), relation_data)
-
-        if not valid:
-            problems.append(key)
-
-    if len(problems) > 0:
-        raise ValueError("Errors found in: {}".format(", ".join(problems)))
-
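The validation above uses a dictionary of per-field lambda checks and collects every failing key before raising, so one error message reports all problems at once. A condensed, self-contained sketch of that pattern (field names taken from the validators above; only illustrative):

```python
from typing import Any, Dict

# Condensed version of the validator-dictionary pattern used by _validate_data.
validators = {
    "log_level": lambda v, _: isinstance(v, str) and v in ("INFO", "DEBUG"),
    "collector_interval": lambda v, _: isinstance(v, int) and v > 0,
}

def validate(data: Dict[str, Any]) -> None:
    # Collect every failing key instead of stopping at the first one.
    problems = [key for key, check in validators.items() if not check(data.get(key), data)]
    if problems:
        raise ValueError("Errors found in: {}".format(", ".join(problems)))

# Valid data passes silently.
validate({"log_level": "INFO", "collector_interval": 30})

# Invalid data raises, naming the offending key(s).
try:
    validate({"log_level": "TRACE", "collector_interval": 30})
    raise AssertionError("expected ValueError")
except ValueError as err:
    assert "log_level" in str(err)
```

Reporting all failing keys in one pass is what lets the charm surface a complete `BlockedStatus` message rather than forcing the operator to fix config values one at a time.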
-
-def _make_pod_ports(port: int) -> List[Dict[str, Any]]:
-    """Generate pod ports details.
-
-    Args:
-        port (int): port to expose.
-
-    Returns:
-        List[Dict[str, Any]]: pod port details.
-    """
-    return [{"name": "mon", "containerPort": port, "protocol": "TCP"}]
-
-
-def _make_pod_envconfig(
-    config: Dict[str, Any], relation_state: Dict[str, Any]
-) -> Dict[str, Any]:
-    """Generate pod environment configuration.
-
-    Args:
-        config (Dict[str, Any]): configuration information.
-        relation_state (Dict[str, Any]): relation state information.
-
-    Returns:
-        Dict[str, Any]: pod environment configuration.
-    """
-    envconfig = {
-        # General configuration
-        "ALLOW_ANONYMOUS_LOGIN": "yes",
-        "OSMMON_OPENSTACK_DEFAULT_GRANULARITY": config["openstack_default_granularity"],
-        "OSMMON_GLOBAL_REQUEST_TIMEOUT": config["global_request_timeout"],
-        "OSMMON_GLOBAL_LOGLEVEL": config["log_level"],
-        "OSMMON_COLLECTOR_INTERVAL": config["collector_interval"],
-        "OSMMON_EVALUATOR_INTERVAL": config["evaluator_interval"],
-        # Kafka configuration
-        "OSMMON_MESSAGE_DRIVER": "kafka",
-        "OSMMON_MESSAGE_HOST": relation_state["message_host"],
-        "OSMMON_MESSAGE_PORT": relation_state["message_port"],
-        # Database configuration
-        "OSMMON_DATABASE_DRIVER": "mongo",
-        "OSMMON_DATABASE_URI": relation_state["database_uri"],
-        "OSMMON_DATABASE_COMMONKEY": config["database_commonkey"],
-        # Prometheus configuration
-        "OSMMON_PROMETHEUS_URL": f"http://{relation_state['prometheus_host']}:{relation_state['prometheus_port']}",
-        # VCA configuration
-        "OSMMON_VCA_HOST": config["vca_host"],
-        "OSMMON_VCA_USER": config["vca_user"],
-        "OSMMON_VCA_SECRET": config["vca_password"],
-        "OSMMON_VCA_CACERT": config["vca_cacert"],
-    }
-
-    return envconfig
-
-
-def _make_startup_probe() -> Dict[str, Any]:
-    """Generate startup probe.
-
-    Returns:
-        Dict[str, Any]: startup probe.
-    """
-    return {
-        "exec": {"command": ["/usr/bin/pgrep", "python3"]},
-        "initialDelaySeconds": 60,
-        "timeoutSeconds": 5,
-    }
-
-
-def _make_readiness_probe(port: int) -> Dict[str, Any]:
-    """Generate readiness probe.
-
-    Args:
-        port (int): port to probe over TCP.
-
-    Returns:
-        Dict[str, Any]: readiness probe.
-    """
-    return {
-        "tcpSocket": {
-            "port": port,
-        },
-        "periodSeconds": 10,
-        "timeoutSeconds": 5,
-        "successThreshold": 1,
-        "failureThreshold": 3,
-    }
-
-
-def _make_liveness_probe(port: int) -> Dict[str, Any]:
-    """Generate liveness probe.
-
-    Args:
-        port (int): port to probe over TCP.
-
-    Returns:
-        Dict[str, Any]: liveness probe.
-    """
-    return {
-        "tcpSocket": {
-            "port": port,
-        },
-        "initialDelaySeconds": 45,
-        "periodSeconds": 10,
-        "timeoutSeconds": 5,
-        "successThreshold": 1,
-        "failureThreshold": 3,
-    }
-
-
-def make_pod_spec(
-    image_info: Dict[str, str],
-    config: Dict[str, Any],
-    relation_state: Dict[str, Any],
-    app_name: str = "mon",
-    port: int = 8000,
-) -> Dict[str, Any]:
-    """Generate the pod spec information.
-
-    Args:
-        image_info (Dict[str, str]): Object provided by
-                                     OCIImageResource("image").fetch().
-        config (Dict[str, Any]): Configuration information.
-        relation_state (Dict[str, Any]): Relation state information.
-        app_name (str, optional): Application name. Defaults to "mon".
-        port (int, optional): Port for the container. Defaults to 8000.
-
-    Returns:
-        Dict[str, Any]: Pod spec dictionary for the charm.
-    """
-    if not image_info:
-        return None
-
-    _validate_data(config, relation_state)
-
-    ports = _make_pod_ports(port)
-    env_config = _make_pod_envconfig(config, relation_state)
-
-    return {
-        "version": 3,
-        "containers": [
-            {
-                "name": app_name,
-                "imageDetails": image_info,
-                "imagePullPolicy": "Always",
-                "ports": ports,
-                "envConfig": env_config,
-            }
-        ],
-        "kubernetesResources": {
-            "ingressResources": [],
-        },
-    }
diff --git a/installers/charm/mon/tests/__init__.py b/installers/charm/mon/tests/__init__.py
deleted file mode 100644 (file)
index 446d5ce..0000000
+++ /dev/null
@@ -1,40 +0,0 @@
-#!/usr/bin/env python3
-# Copyright 2020 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-"""Init mocking for unit tests."""
-
-import sys
-
-
-import mock
-
-
-class OCIImageResourceErrorMock(Exception):
-    pass
-
-
-sys.path.append("src")
-
-oci_image = mock.MagicMock()
-oci_image.OCIImageResourceError = OCIImageResourceErrorMock
-sys.modules["oci_image"] = oci_image
-sys.modules["oci_image"].OCIImageResource().fetch.return_value = {}
diff --git a/installers/charm/mon/tests/test_charm.py b/installers/charm/mon/tests/test_charm.py
deleted file mode 100644 (file)
index e9748d3..0000000
+++ /dev/null
@@ -1,411 +0,0 @@
-#!/usr/bin/env python3
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-import base64
-import sys
-from typing import NoReturn
-import unittest
-
-from charm import MonCharm
-from ops.model import ActiveStatus, BlockedStatus
-from ops.testing import Harness
-
-
-def encode(content: str):
-    return base64.b64encode(content.encode("ascii")).decode("utf-8")
-
-
-certificate_pem = encode(
-    """
------BEGIN CERTIFICATE-----
-MIIDazCCAlOgAwIBAgIUf1b0s3UKtrxHXH2rge7UaQyfJAMwDQYJKoZIhvcNAQEL
-BQAwRTELMAkGA1UEBhMCQVUxEzARBgNVBAgMClNvbWUtU3RhdGUxITAfBgNVBAoM
-GEludGVybmV0IFdpZGdpdHMgUHR5IEx0ZDAeFw0yMTAzMjIxNzEyMjdaFw0zMTAz
-MjAxNzEyMjdaMEUxCzAJBgNVBAYTAkFVMRMwEQYDVQQIDApTb21lLVN0YXRlMSEw
-HwYDVQQKDBhJbnRlcm5ldCBXaWRnaXRzIFB0eSBMdGQwggEiMA0GCSqGSIb3DQEB
-AQUAA4IBDwAwggEKAoIBAQCgCfCBgYAN6ON0yHDXuW407rFtJVRf0u46Jrp0Dk7J
-kkSZ1e7Kq14r7yFHazEBWv78oOdwBocvWrd8leLuf3bYGcHR65hRy6A/fbYm5Aje
-cKpwlFwaqfR4BLelwJl79jZ2rJX738cCBVrIk1nAVdOxGrXV4MTWUaKR2c+uKKvc
-OKRT+5VqCeP4N5FWeATZ/KqGu8uV9E9WhFgwIZyStemLyLaDbn5PmAQ6S9oeR5jJ
-o2gEEp/lDKvsqOWs76KFumSKa9hQs5Dw2lj0mb1UoyYK1gYc4ubzVChJadv44AU8
-MYtIjlFn1X1P+RjaKZNUIAGXkoLwYn6SizF6y6LiuFS9AgMBAAGjUzBRMB0GA1Ud
-DgQWBBRl+/23CB+FXczeAZRQyYcfOdy9YDAfBgNVHSMEGDAWgBRl+/23CB+FXcze
-AZRQyYcfOdy9YDAPBgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3DQEBCwUAA4IBAQAd
-dkeDym6lRN8kWFtfu3IyiLF8G8sn91qNbH3Yr4TuTBhgcjYyW6PgisSbrNgA9ysE
-GoaF7ohb8GeVfCsQdK23+NpAlj/+DZ3OnGcxwXj1RUAz4yr9kanV1yuEtr1q2xJI
-UaECWr8HZlwGBAKNTGx2EXT2/2aFzgULpDcxzTKD+MRpKpMUrWhf9ULvVrclvHWe
-POLYhobUFuBHuo6rt5Rcq16j67zCX9EVTlAE3o2OECIWByK22sXdeOidYMpTkl4q
-8FrOqjNsx5d+SBPJBv/pqtBm4bA47Vx1P8tbWOQ4bXS0UmXgwpeBOU/O/ot30+KS
-JnKEy+dYyvVBKg77sRHw
------END CERTIFICATE-----
-"""
-)
-
-
-class TestCharm(unittest.TestCase):
-    """MON Charm unit tests."""
-
-    def setUp(self) -> NoReturn:
-        """Test setup"""
-        self.image_info = sys.modules["oci_image"].OCIImageResource().fetch()
-        self.harness = Harness(MonCharm)
-        self.harness.set_leader(is_leader=True)
-        self.harness.begin()
-        self.config = {
-            "vca_host": "192.168.0.13",
-            "vca_user": "admin",
-            "vca_secret": "admin",
-            "vca_cacert": "cacert",
-            "database_commonkey": "commonkey",
-            "mongodb_uri": "",
-            "log_level": "INFO",
-            "openstack_default_granularity": 10,
-            "global_request_timeout": 10,
-            "collector_interval": 30,
-            "evaluator_interval": 30,
-            "keystone_enabled": True,
-            "certificates": f"cert1:{certificate_pem}",
-        }
-        self.harness.update_config(self.config)
-
-    def test_config_changed_no_relations(
-        self,
-    ) -> NoReturn:
-        """Test config changed without any relations."""
-
-        self.harness.charm.on.config_changed.emit()
-
-        # Assertions
-        self.assertIsInstance(self.harness.charm.unit.status, BlockedStatus)
-        self.assertTrue(
-            all(
-                relation in self.harness.charm.unit.status.message
-                for relation in ["mongodb", "kafka", "prometheus", "keystone"]
-            )
-        )
-
-    def test_config_changed_non_leader(
-        self,
-    ) -> NoReturn:
-        """Test config changed when the unit is not the leader."""
-        self.harness.set_leader(is_leader=False)
-        self.harness.charm.on.config_changed.emit()
-
-        # Assertions
-        self.assertIsInstance(self.harness.charm.unit.status, ActiveStatus)
-
-    def test_with_relations_and_mongodb_config(
-        self,
-    ) -> NoReturn:
-        """Test with relations and mongodb URI provided via config."""
-        self.initialize_kafka_relation()
-        self.initialize_mongo_config()
-        self.initialize_prometheus_relation()
-        self.initialize_keystone_relation()
-        # Verifying status
-        self.assertNotIsInstance(self.harness.charm.unit.status, BlockedStatus)
-
-    def test_with_relations(
-        self,
-    ) -> NoReturn:
-        """Test with relations (internal)."""
-        self.initialize_kafka_relation()
-        self.initialize_mongo_relation()
-        self.initialize_prometheus_relation()
-        self.initialize_keystone_relation()
-        # Verifying status
-        self.assertNotIsInstance(self.harness.charm.unit.status, BlockedStatus)
-
-    def test_exception_mongodb_relation_and_config(
-        self,
-    ) -> NoReturn:
-        """Test that providing mongodb via both relation and config is blocked."""
-        self.initialize_mongo_relation()
-        self.initialize_mongo_config()
-        # Verifying status
-        self.assertIsInstance(self.harness.charm.unit.status, BlockedStatus)
-
-    def initialize_kafka_relation(self):
-        kafka_relation_id = self.harness.add_relation("kafka", "kafka")
-        self.harness.add_relation_unit(kafka_relation_id, "kafka/0")
-        self.harness.update_relation_data(
-            kafka_relation_id, "kafka", {"host": "kafka", "port": 9092}
-        )
-
-    def initialize_mongo_config(self):
-        self.harness.update_config({"mongodb_uri": "mongodb://mongo:27017"})
-
-    def initialize_mongo_relation(self):
-        mongodb_relation_id = self.harness.add_relation("mongodb", "mongodb")
-        self.harness.add_relation_unit(mongodb_relation_id, "mongodb/0")
-        self.harness.update_relation_data(
-            mongodb_relation_id,
-            "mongodb/0",
-            {"connection_string": "mongodb://mongo:27017"},
-        )
-
-    def initialize_prometheus_relation(self):
-        prometheus_relation_id = self.harness.add_relation("prometheus", "prometheus")
-        self.harness.add_relation_unit(prometheus_relation_id, "prometheus/0")
-        self.harness.update_relation_data(
-            prometheus_relation_id,
-            "prometheus",
-            {"hostname": "prometheus", "port": 9090},
-        )
-
-    def initialize_keystone_relation(self):
-        keystone_relation_id = self.harness.add_relation("keystone", "keystone")
-        self.harness.add_relation_unit(keystone_relation_id, "keystone/0")
-        self.harness.update_relation_data(
-            keystone_relation_id,
-            "keystone",
-            {
-                "host": "host",
-                "port": 5000,
-                "user_domain_name": "ud",
-                "project_domain_name": "pd",
-                "username": "u",
-                "password": "p",
-                "service": "s",
-                "keystone_db_password": "something",
-                "region_id": "something",
-                "admin_username": "something",
-                "admin_password": "something",
-                "admin_project_name": "something",
-            },
-        )
-
-
-if __name__ == "__main__":
-    unittest.main()
-
-
-# class TestCharm(unittest.TestCase):
-#     """MON Charm unit tests."""
-
-#     def setUp(self) -> NoReturn:
-#         """Test setup"""
-#         self.harness = Harness(MonCharm)
-#         self.harness.set_leader(is_leader=True)
-#         self.harness.begin()
-
-#     def test_on_start_without_relations(self) -> NoReturn:
-#         """Test installation without any relation."""
-#         self.harness.charm.on.start.emit()
-
-#         # Verifying status
-#         self.assertIsInstance(self.harness.charm.unit.status, BlockedStatus)
-
-#         # Verifying status message
-#         self.assertGreater(len(self.harness.charm.unit.status.message), 0)
-#         self.assertTrue(
-#             self.harness.charm.unit.status.message.startswith("Waiting for ")
-#         )
-#         self.assertIn("kafka", self.harness.charm.unit.status.message)
-#         self.assertIn("mongodb", self.harness.charm.unit.status.message)
-#         self.assertIn("prometheus", self.harness.charm.unit.status.message)
-#         self.assertTrue(self.harness.charm.unit.status.message.endswith(" relations"))
-
-#     def test_on_start_with_relations(self) -> NoReturn:
-#         """Test deployment without keystone."""
-#         expected_result = {
-#             "version": 3,
-#             "containers": [
-#                 {
-#                     "name": "mon",
-#                     "imageDetails": self.harness.charm.image.fetch(),
-#                     "imagePullPolicy": "Always",
-#                     "ports": [
-#                         {
-#                             "name": "mon",
-#                             "containerPort": 8000,
-#                             "protocol": "TCP",
-#                         }
-#                     ],
-#                     "envConfig": {
-#                         "ALLOW_ANONYMOUS_LOGIN": "yes",
-#                         "OSMMON_OPENSTACK_DEFAULT_GRANULARITY": 300,
-#                         "OSMMON_GLOBAL_REQUEST_TIMEOUT": 10,
-#                         "OSMMON_GLOBAL_LOGLEVEL": "INFO",
-#                         "OSMMON_COLLECTOR_INTERVAL": 30,
-#                         "OSMMON_EVALUATOR_INTERVAL": 30,
-#                         "OSMMON_MESSAGE_DRIVER": "kafka",
-#                         "OSMMON_MESSAGE_HOST": "kafka",
-#                         "OSMMON_MESSAGE_PORT": 9092,
-#                         "OSMMON_DATABASE_DRIVER": "mongo",
-#                         "OSMMON_DATABASE_URI": "mongodb://mongo:27017",
-#                         "OSMMON_DATABASE_COMMONKEY": "osm",
-#                         "OSMMON_PROMETHEUS_URL": "http://prometheus:9090",
-#                         "OSMMON_VCA_HOST": "admin",
-#                         "OSMMON_VCA_USER": "admin",
-#                         "OSMMON_VCA_SECRET": "secret",
-#                         "OSMMON_VCA_CACERT": "",
-#                     },
-#                 }
-#             ],
-#             "kubernetesResources": {"ingressResources": []},
-#         }
-
-#         self.harness.charm.on.start.emit()
-
-#         # Check if kafka datastore is initialized
-#         self.assertIsNone(self.harness.charm.state.message_host)
-#         self.assertIsNone(self.harness.charm.state.message_port)
-
-#         # Check if mongodb datastore is initialized
-#         self.assertIsNone(self.harness.charm.state.database_uri)
-
-#         # Check if prometheus datastore is initialized
-#         self.assertIsNone(self.harness.charm.state.prometheus_host)
-#         self.assertIsNone(self.harness.charm.state.prometheus_port)
-
-#         # Initializing the kafka relation
-#         kafka_relation_id = self.harness.add_relation("kafka", "kafka")
-#         self.harness.add_relation_unit(kafka_relation_id, "kafka/0")
-#         self.harness.update_relation_data(
-#             kafka_relation_id, "kafka/0", {"host": "kafka", "port": 9092}
-#         )
-
-#         # Initializing the mongo relation
-#         mongodb_relation_id = self.harness.add_relation("mongodb", "mongodb")
-#         self.harness.add_relation_unit(mongodb_relation_id, "mongodb/0")
-#         self.harness.update_relation_data(
-#             mongodb_relation_id,
-#             "mongodb/0",
-#             {"connection_string": "mongodb://mongo:27017"},
-#         )
-
-#         # Initializing the prometheus relation
-#         prometheus_relation_id = self.harness.add_relation("prometheus", "prometheus")
-#         self.harness.add_relation_unit(prometheus_relation_id, "prometheus/0")
-#         self.harness.update_relation_data(
-#             prometheus_relation_id,
-#             "prometheus",
-#             {"hostname": "prometheus", "port": 9090},
-#         )
-
-#         # Checking if kafka data is stored
-#         self.assertEqual(self.harness.charm.state.message_host, "kafka")
-#         self.assertEqual(self.harness.charm.state.message_port, 9092)
-
-#         # Checking if mongodb data is stored
-#         self.assertEqual(self.harness.charm.state.database_uri, "mongodb://mongo:27017")
-
-#         # Checking if prometheus data is stored
-#         self.assertEqual(self.harness.charm.state.prometheus_host, "prometheus")
-#         self.assertEqual(self.harness.charm.state.prometheus_port, 9090)
-
-#         # Verifying status
-#         self.assertNotIsInstance(self.harness.charm.unit.status, BlockedStatus)
-
-#         pod_spec, _ = self.harness.get_pod_spec()
-
-#         self.assertDictEqual(expected_result, pod_spec)
-
-#     def test_on_kafka_unit_relation_changed(self) -> NoReturn:
-#         """Test to see if kafka relation is updated."""
-#         self.harness.charm.on.start.emit()
-
-#         self.assertIsNone(self.harness.charm.state.message_host)
-#         self.assertIsNone(self.harness.charm.state.message_port)
-
-#         relation_id = self.harness.add_relation("kafka", "kafka")
-#         self.harness.add_relation_unit(relation_id, "kafka/0")
-#         self.harness.update_relation_data(
-#             relation_id, "kafka/0", {"host": "kafka", "port": 9092}
-#         )
-
-#         self.assertEqual(self.harness.charm.state.message_host, "kafka")
-#         self.assertEqual(self.harness.charm.state.message_port, 9092)
-
-#         # Verifying status
-#         self.assertIsInstance(self.harness.charm.unit.status, BlockedStatus)
-
-#         # Verifying status message
-#         self.assertGreater(len(self.harness.charm.unit.status.message), 0)
-#         self.assertTrue(
-#             self.harness.charm.unit.status.message.startswith("Waiting for ")
-#         )
-#         self.assertNotIn("kafka", self.harness.charm.unit.status.message)
-#         self.assertIn("mongodb", self.harness.charm.unit.status.message)
-#         self.assertIn("prometheus", self.harness.charm.unit.status.message)
-#         self.assertTrue(self.harness.charm.unit.status.message.endswith(" relations"))
-
-#     def test_on_mongodb_unit_relation_changed(self) -> NoReturn:
-#         """Test to see if mongodb relation is updated."""
-#         self.harness.charm.on.start.emit()
-
-#         self.assertIsNone(self.harness.charm.state.database_uri)
-
-#         relation_id = self.harness.add_relation("mongodb", "mongodb")
-#         self.harness.add_relation_unit(relation_id, "mongodb/0")
-#         self.harness.update_relation_data(
-#             relation_id, "mongodb/0", {"connection_string": "mongodb://mongo:27017"}
-#         )
-
-#         self.assertEqual(self.harness.charm.state.database_uri, "mongodb://mongo:27017")
-
-#         # Verifying status
-#         self.assertIsInstance(self.harness.charm.unit.status, BlockedStatus)
-
-#         # Verifying status message
-#         self.assertGreater(len(self.harness.charm.unit.status.message), 0)
-#         self.assertTrue(
-#             self.harness.charm.unit.status.message.startswith("Waiting for ")
-#         )
-#         self.assertIn("kafka", self.harness.charm.unit.status.message)
-#         self.assertNotIn("mongodb", self.harness.charm.unit.status.message)
-#         self.assertIn("prometheus", self.harness.charm.unit.status.message)
-#         self.assertTrue(self.harness.charm.unit.status.message.endswith(" relations"))
-
-#     def test_on_prometheus_unit_relation_changed(self) -> NoReturn:
-#         """Test to see if prometheus relation is updated."""
-#         self.harness.charm.on.start.emit()
-
-#         self.assertIsNone(self.harness.charm.state.prometheus_host)
-#         self.assertIsNone(self.harness.charm.state.prometheus_port)
-
-#         relation_id = self.harness.add_relation("prometheus", "prometheus")
-#         self.harness.add_relation_unit(relation_id, "prometheus/0")
-#         self.harness.update_relation_data(
-#             relation_id, "prometheus", {"hostname": "prometheus", "port": 9090}
-#         )
-
-#         self.assertEqual(self.harness.charm.state.prometheus_host, "prometheus")
-#         self.assertEqual(self.harness.charm.state.prometheus_port, 9090)
-
-#         # Verifying status
-#         self.assertIsInstance(self.harness.charm.unit.status, BlockedStatus)
-
-#         # Verifying status message
-#         self.assertGreater(len(self.harness.charm.unit.status.message), 0)
-#         self.assertTrue(
-#             self.harness.charm.unit.status.message.startswith("Waiting for ")
-#         )
-#         self.assertIn("kafka", self.harness.charm.unit.status.message)
-#         self.assertIn("mongodb", self.harness.charm.unit.status.message)
-#         self.assertNotIn("prometheus", self.harness.charm.unit.status.message)
-#         self.assertTrue(self.harness.charm.unit.status.message.endswith(" relations"))
-
-
-# if __name__ == "__main__":
-#     unittest.main()
diff --git a/installers/charm/mon/tests/test_pod_spec.py b/installers/charm/mon/tests/test_pod_spec.py
deleted file mode 100644 (file)
index 86a3d16..0000000
+++ /dev/null
@@ -1,295 +0,0 @@
-#!/usr/bin/env python3
-# Copyright 2020 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-from typing import NoReturn
-import unittest
-
-import pod_spec
-
-
-class TestPodSpec(unittest.TestCase):
-    """Pod spec unit tests."""
-
-    def test_make_pod_ports(self) -> NoReturn:
-        """Testing make pod ports."""
-        port = 8000
-
-        expected_result = [
-            {
-                "name": "mon",
-                "containerPort": port,
-                "protocol": "TCP",
-            }
-        ]
-
-        pod_ports = pod_spec._make_pod_ports(port)
-
-        self.assertListEqual(expected_result, pod_ports)
-
-    def test_make_pod_envconfig(self) -> NoReturn:
-        """Testing make pod envconfig."""
-        config = {
-            "openstack_default_granularity": 300,
-            "global_request_timeout": 10,
-            "log_level": "INFO",
-            "database_commonkey": "osm",
-            "collector_interval": 30,
-            "evaluator_interval": 30,
-            "vca_host": "admin",
-            "vca_user": "admin",
-            "vca_password": "secret",
-            "vca_cacert": "",
-        }
-        relation_state = {
-            "message_host": "kafka",
-            "message_port": 9090,
-            "database_uri": "mongodb://mongo",
-            "prometheus_host": "prometheus",
-            "prometheus_port": 9082,
-        }
-
-        expected_result = {
-            "ALLOW_ANONYMOUS_LOGIN": "yes",
-            "OSMMON_OPENSTACK_DEFAULT_GRANULARITY": config[
-                "openstack_default_granularity"
-            ],
-            "OSMMON_GLOBAL_REQUEST_TIMEOUT": config["global_request_timeout"],
-            "OSMMON_GLOBAL_LOGLEVEL": config["log_level"],
-            "OSMMON_COLLECTOR_INTERVAL": config["collector_interval"],
-            "OSMMON_EVALUATOR_INTERVAL": config["evaluator_interval"],
-            "OSMMON_MESSAGE_DRIVER": "kafka",
-            "OSMMON_MESSAGE_HOST": relation_state["message_host"],
-            "OSMMON_MESSAGE_PORT": relation_state["message_port"],
-            "OSMMON_DATABASE_DRIVER": "mongo",
-            "OSMMON_DATABASE_URI": relation_state["database_uri"],
-            "OSMMON_DATABASE_COMMONKEY": config["database_commonkey"],
-            "OSMMON_PROMETHEUS_URL": f"http://{relation_state['prometheus_host']}:{relation_state['prometheus_port']}",
-            "OSMMON_VCA_HOST": config["vca_host"],
-            "OSMMON_VCA_USER": config["vca_user"],
-            "OSMMON_VCA_SECRET": config["vca_password"],
-            "OSMMON_VCA_CACERT": config["vca_cacert"],
-        }
-
-        pod_envconfig = pod_spec._make_pod_envconfig(config, relation_state)
-
-        self.assertDictEqual(expected_result, pod_envconfig)
-
-    def test_make_startup_probe(self) -> NoReturn:
-        """Testing make startup probe."""
-        expected_result = {
-            "exec": {"command": ["/usr/bin/pgrep python3"]},
-            "initialDelaySeconds": 60,
-            "timeoutSeconds": 5,
-        }
-
-        startup_probe = pod_spec._make_startup_probe()
-
-        self.assertDictEqual(expected_result, startup_probe)
-
-    def test_make_readiness_probe(self) -> NoReturn:
-        """Testing make readiness probe."""
-        port = 8000
-
-        expected_result = {
-            "tcpSocket": {
-                "port": port,
-            },
-            "periodSeconds": 10,
-            "timeoutSeconds": 5,
-            "successThreshold": 1,
-            "failureThreshold": 3,
-        }
-
-        readiness_probe = pod_spec._make_readiness_probe(port)
-
-        self.assertDictEqual(expected_result, readiness_probe)
-
-    def test_make_liveness_probe(self) -> NoReturn:
-        """Testing make liveness probe."""
-        port = 8000
-
-        expected_result = {
-            "tcpSocket": {
-                "port": port,
-            },
-            "initialDelaySeconds": 45,
-            "periodSeconds": 10,
-            "timeoutSeconds": 5,
-            "successThreshold": 1,
-            "failureThreshold": 3,
-        }
-
-        liveness_probe = pod_spec._make_liveness_probe(port)
-
-        self.assertDictEqual(expected_result, liveness_probe)
-
-    def test_make_pod_spec(self) -> NoReturn:
-        """Testing make pod spec."""
-        image_info = {"upstream-source": "opensourcemano/mon:8"}
-        config = {
-            "site_url": "",
-            "openstack_default_granularity": 300,
-            "global_request_timeout": 10,
-            "log_level": "INFO",
-            "database_commonkey": "osm",
-            "collector_interval": 30,
-            "evaluator_interval": 30,
-            "vca_host": "admin",
-            "vca_user": "admin",
-            "vca_password": "secret",
-            "vca_cacert": "",
-        }
-        relation_state = {
-            "message_host": "kafka",
-            "message_port": 9090,
-            "database_uri": "mongodb://mongo",
-            "prometheus_host": "prometheus",
-            "prometheus_port": 9082,
-        }
-        app_name = "mon"
-        port = 8000
-
-        expected_result = {
-            "version": 3,
-            "containers": [
-                {
-                    "name": app_name,
-                    "imageDetails": image_info,
-                    "imagePullPolicy": "Always",
-                    "ports": [
-                        {
-                            "name": app_name,
-                            "containerPort": port,
-                            "protocol": "TCP",
-                        }
-                    ],
-                    "envConfig": {
-                        "ALLOW_ANONYMOUS_LOGIN": "yes",
-                        "OSMMON_OPENSTACK_DEFAULT_GRANULARITY": config[
-                            "openstack_default_granularity"
-                        ],
-                        "OSMMON_GLOBAL_REQUEST_TIMEOUT": config[
-                            "global_request_timeout"
-                        ],
-                        "OSMMON_GLOBAL_LOGLEVEL": config["log_level"],
-                        "OSMMON_COLLECTOR_INTERVAL": config["collector_interval"],
-                        "OSMMON_EVALUATOR_INTERVAL": config["evaluator_interval"],
-                        "OSMMON_MESSAGE_DRIVER": "kafka",
-                        "OSMMON_MESSAGE_HOST": relation_state["message_host"],
-                        "OSMMON_MESSAGE_PORT": relation_state["message_port"],
-                        "OSMMON_DATABASE_DRIVER": "mongo",
-                        "OSMMON_DATABASE_URI": relation_state["database_uri"],
-                        "OSMMON_DATABASE_COMMONKEY": config["database_commonkey"],
-                        "OSMMON_PROMETHEUS_URL": (
-                            f"http://{relation_state['prometheus_host']}:{relation_state['prometheus_port']}"
-                        ),
-                        "OSMMON_VCA_HOST": config["vca_host"],
-                        "OSMMON_VCA_USER": config["vca_user"],
-                        "OSMMON_VCA_SECRET": config["vca_password"],
-                        "OSMMON_VCA_CACERT": config["vca_cacert"],
-                    },
-                }
-            ],
-            "kubernetesResources": {"ingressResources": []},
-        }
-
-        spec = pod_spec.make_pod_spec(
-            image_info, config, relation_state, app_name, port
-        )
-
-        self.assertDictEqual(expected_result, spec)
-
-    def test_make_pod_spec_without_image_info(self) -> NoReturn:
-        """Testing make pod spec without image_info."""
-        image_info = None
-        config = {
-            "site_url": "",
-            "openstack_default_granularity": 300,
-            "global_request_timeout": 10,
-            "log_level": "INFO",
-            "database_commonkey": "osm",
-            "collector_interval": 30,
-            "evaluator_interval": 30,
-            "vca_host": "admin",
-            "vca_user": "admin",
-            "vca_password": "secret",
-            "vca_cacert": "",
-        }
-        relation_state = {
-            "message_host": "kafka",
-            "message_port": 9090,
-            "database_uri": "mongodb://mongo",
-            "prometheus_host": "prometheus",
-            "prometheus_port": 9082,
-        }
-        app_name = "mon"
-        port = 8000
-
-        spec = pod_spec.make_pod_spec(
-            image_info, config, relation_state, app_name, port
-        )
-
-        self.assertIsNone(spec)
-
-    def test_make_pod_spec_without_config(self) -> NoReturn:
-        """Testing make pod spec without config."""
-        image_info = {"upstream-source": "opensourcemano/mon:8"}
-        config = {}
-        relation_state = {
-            "message_host": "kafka",
-            "message_port": 9090,
-            "database_uri": "mongodb://mongo",
-            "prometheus_host": "prometheus",
-            "prometheus_port": 9082,
-        }
-        app_name = "mon"
-        port = 8000
-
-        with self.assertRaises(ValueError):
-            pod_spec.make_pod_spec(image_info, config, relation_state, app_name, port)
-
-    def test_make_pod_spec_without_relation_state(self) -> NoReturn:
-        """Testing make pod spec without relation_state."""
-        image_info = {"upstream-source": "opensourcemano/mon:8"}
-        config = {
-            "site_url": "",
-            "openstack_default_granularity": 300,
-            "global_request_timeout": 10,
-            "log_level": "INFO",
-            "database_commonkey": "osm",
-            "collector_interval": 30,
-            "evaluator_interval": 30,
-            "vca_host": "admin",
-            "vca_user": "admin",
-            "vca_password": "secret",
-            "vca_cacert": "",
-        }
-        relation_state = {}
-        app_name = "mon"
-        port = 8000
-
-        with self.assertRaises(ValueError):
-            pod_spec.make_pod_spec(image_info, config, relation_state, app_name, port)
-
-
-if __name__ == "__main__":
-    unittest.main()
diff --git a/installers/charm/mon/tox.ini b/installers/charm/mon/tox.ini
deleted file mode 100644 (file)
index f3c9144..0000000
+++ /dev/null
@@ -1,128 +0,0 @@
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-#######################################################################################
-
-[tox]
-envlist = black, cover, flake8, pylint, yamllint, safety
-skipsdist = true
-
-[tox:jenkins]
-toxworkdir = /tmp/.tox
-
-[testenv]
-basepython = python3.8
-setenv =
-  VIRTUAL_ENV={envdir}
-  PYTHONPATH = {toxinidir}:{toxinidir}/lib:{toxinidir}/src
-  PYTHONDONTWRITEBYTECODE = 1
-deps =  -r{toxinidir}/requirements.txt
-
-
-#######################################################################################
-[testenv:black]
-deps = black
-commands =
-        black --check --diff src/ tests/
-
-
-#######################################################################################
-[testenv:cover]
-deps =  {[testenv]deps}
-        -r{toxinidir}/requirements-test.txt
-        coverage
-        nose2
-commands =
-        sh -c 'rm -f nosetests.xml'
-        coverage erase
-        nose2 -C --coverage src
-        coverage report --omit='*tests*'
-        coverage html -d ./cover --omit='*tests*'
-        coverage xml -o coverage.xml --omit=*tests*
-whitelist_externals = sh
-
-
-#######################################################################################
-[testenv:flake8]
-deps =  flake8
-        flake8-import-order
-commands =
-        flake8 src/ tests/
-
-
-#######################################################################################
-[testenv:pylint]
-deps =  {[testenv]deps}
-        -r{toxinidir}/requirements-test.txt
-        pylint==2.10.2
-commands =
-    pylint -E src/ tests/
-
-
-#######################################################################################
-[testenv:safety]
-setenv =
-        LC_ALL=C.UTF-8
-        LANG=C.UTF-8
-deps =  {[testenv]deps}
-        safety
-commands =
-        - safety check --full-report
-
-
-#######################################################################################
-[testenv:yamllint]
-deps =  {[testenv]deps}
-        -r{toxinidir}/requirements-test.txt
-        yamllint
-commands = yamllint .
-
-#######################################################################################
-[testenv:build]
-passenv=HTTP_PROXY HTTPS_PROXY NO_PROXY
-whitelist_externals =
-  charmcraft
-  sh
-commands =
-  charmcraft pack
-  sh -c 'ubuntu_version=20.04; \
-        architectures="amd64-aarch64-arm64"; \
-        charm_name=`cat metadata.yaml | grep -E "^name: " | cut -f 2 -d " "`; \
-        mv $charm_name"_ubuntu-"$ubuntu_version-$architectures.charm $charm_name.charm'
-
-#######################################################################################
-[flake8]
-ignore =
-        W291,
-        W293,
-        W503,
-        E123,
-        E125,
-        E226,
-        E241,
-exclude =
-        .git,
-        __pycache__,
-        .tox,
-max-line-length = 120
-show-source = True
-builtins = _
-max-complexity = 10
-import-order-style = google
diff --git a/installers/charm/nbi/.gitignore b/installers/charm/nbi/.gitignore
deleted file mode 100644 (file)
index 2885df2..0000000
+++ /dev/null
@@ -1,30 +0,0 @@
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-venv
-.vscode
-build
-*.charm
-.coverage
-coverage.xml
-.stestr
-cover
-release
\ No newline at end of file
diff --git a/installers/charm/nbi/.jujuignore b/installers/charm/nbi/.jujuignore
deleted file mode 100644 (file)
index 3ae3e7d..0000000
+++ /dev/null
@@ -1,34 +0,0 @@
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-venv
-.vscode
-build
-*.charm
-.coverage
-coverage.xml
-.gitignore
-.stestr
-cover
-release
-tests/
-requirements*
-tox.ini
diff --git a/installers/charm/nbi/.yamllint.yaml b/installers/charm/nbi/.yamllint.yaml
deleted file mode 100644 (file)
index d71fb69..0000000
+++ /dev/null
@@ -1,34 +0,0 @@
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
----
-extends: default
-
-yaml-files:
-  - "*.yaml"
-  - "*.yml"
-  - ".yamllint"
-ignore: |
-  .tox
-  cover/
-  build/
-  venv
-  release/
diff --git a/installers/charm/nbi/README.md b/installers/charm/nbi/README.md
deleted file mode 100644 (file)
index de0a4bf..0000000
+++ /dev/null
@@ -1,23 +0,0 @@
-<!-- Copyright 2020 Canonical Ltd.
-
-Licensed under the Apache License, Version 2.0 (the "License"); you may
-not use this file except in compliance with the License. You may obtain
-a copy of the License at
-
-        http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-License for the specific language governing permissions and limitations
-under the License.
-
-For those usages not covered by the Apache License, Version 2.0 please
-contact: legal@canonical.com
-
-To get in touch with the maintainers, please contact:
-osm-charmers@lists.launchpad.net -->
-
-# NBI operator Charm for Kubernetes
-
-## Requirements
\ No newline at end of file
diff --git a/installers/charm/nbi/charmcraft.yaml b/installers/charm/nbi/charmcraft.yaml
deleted file mode 100644 (file)
index 0a285a9..0000000
+++ /dev/null
@@ -1,37 +0,0 @@
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-type: charm
-bases:
-  - build-on:
-      - name: ubuntu
-        channel: "20.04"
-        architectures: ["amd64"]
-    run-on:
-      - name: ubuntu
-        channel: "20.04"
-        architectures:
-          - amd64
-          - aarch64
-          - arm64
-parts:
-  charm:
-    build-packages: [git]
diff --git a/installers/charm/nbi/config.yaml b/installers/charm/nbi/config.yaml
deleted file mode 100644 (file)
index f10304f..0000000
+++ /dev/null
@@ -1,109 +0,0 @@
-# Copyright 2020 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-options:
-  max_file_size:
-    type: int
-    description: |
-      The maximum file size, in megabytes. If there is a reverse proxy in front
-      of Keystone, it may need to be configured to handle the requested size.
-      Note: if set to 0, there is no limit.
-    default: 0
-  ingress_class:
-    type: string
-    description: |
-      Ingress class name. This is useful for selecting the ingress to be used
-      in case there are multiple ingresses in the underlying k8s clusters.
-  ingress_whitelist_source_range:
-    type: string
-    description: |
-      A comma-separated list of CIDRs to store in the
-      ingress.kubernetes.io/whitelist-source-range annotation.
-
-      This can be used to lock down access to
-      Keystone based on source IP address.
-    default: ""
-  tls_secret_name:
-    type: string
-    description: TLS Secret name
-    default: ""
-  site_url:
-    type: string
-    description: Ingress URL
-    default: ""
-  cluster_issuer:
-    type: string
-    description: Name of the cluster issuer for TLS certificates
-    default: ""
-  log_level:
-    description: "Log Level"
-    type: string
-    default: "INFO"
-  database_commonkey:
-    description: Database COMMON KEY
-    type: string
-    default: osm
-  auth_backend:
-    type: string
-    description: Authentication backend ('internal' or 'keystone')
-    default: internal
-  enable_test:
-    type: boolean
-    description: Enable test endpoints of NBI.
-    default: false
-  mongodb_uri:
-    type: string
-    description: MongoDB URI (external database)
-  image_pull_policy:
-    type: string
-    description: |
-      ImagePullPolicy configuration for the pod.
-      Possible values: always, ifnotpresent, never
-    default: always
-  debug_mode:
-    description: |
-      If true, debug mode is activated. It means that the service will not run,
-      and instead, the command for the container will be a `sleep infinity`.
-      Note: If enabled, security_context will be disabled.
-    type: boolean
-    default: false
-  debug_pubkey:
-    description: |
-      Public SSH key that will be injected to the application pod.
-    type: string
-  debug_nbi_local_path:
-    description: |
-      Local full path to the NBI project.
-
-      The path will be mounted to the docker image,
-      which means changes during the debugging will be saved in your local path.
-    type: string
-  debug_common_local_path:
-    description: |
-      Local full path to the COMMON project.
-
-      The path will be mounted to the docker image,
-      which means changes during the debugging will be saved in your local path.
-    type: string
-  security_context:
-    description: Enables the security context of the pods
-    type: boolean
-    default: false
diff --git a/installers/charm/nbi/lib/charms/kafka_k8s/v0/kafka.py b/installers/charm/nbi/lib/charms/kafka_k8s/v0/kafka.py
deleted file mode 100644 (file)
index 1baf9a8..0000000
+++ /dev/null
@@ -1,207 +0,0 @@
-# Copyright 2022 Canonical Ltd.
-# See LICENSE file for licensing details.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-# implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""Kafka library.
-
-This [library](https://juju.is/docs/sdk/libraries) implements both sides of the
-`kafka` [interface](https://juju.is/docs/sdk/relations).
-
-The *provider* side of this interface is implemented by the
-[kafka-k8s Charmed Operator](https://charmhub.io/kafka-k8s).
-
-Any Charmed Operator that *requires* Kafka for providing its
-service should implement the *requirer* side of this interface.
-
-In a nutshell using this library to implement a Charmed Operator *requiring*
-Kafka would look like
-
-```
-$ charmcraft fetch-lib charms.kafka_k8s.v0.kafka
-```
-
-`metadata.yaml`:
-
-```
-requires:
-  kafka:
-    interface: kafka
-    limit: 1
-```
-
-`src/charm.py`:
-
-```
-from charms.kafka_k8s.v0.kafka import KafkaEvents, KafkaRequires
-from ops.charm import CharmBase
-
-
-class MyCharm(CharmBase):
-
-    on = KafkaEvents()
-
-    def __init__(self, *args):
-        super().__init__(*args)
-        self.kafka = KafkaRequires(self)
-        self.framework.observe(
-            self.on.kafka_available,
-            self._on_kafka_available,
-        )
-        self.framework.observe(
-            self.on.kafka_broken,
-            self._on_kafka_broken,
-        )
-
-    def _on_kafka_available(self, event):
-        # Get Kafka host and port
-        host: str = self.kafka.host
-        port: int = self.kafka.port
-        # host => "kafka-k8s"
-        # port => 9092
-
-    def _on_kafka_broken(self, event):
-        # Stop service
-        # ...
-        self.unit.status = BlockedStatus("need kafka relation")
-```
-
-You can file bugs
-[here](https://github.com/charmed-osm/kafka-k8s-operator/issues)!
-"""
-
-from typing import Optional
-
-from ops.charm import CharmBase, CharmEvents
-from ops.framework import EventBase, EventSource, Object
-
-# The unique Charmhub library identifier, never change it
-from ops.model import Relation
-
-LIBID = "eacc8c85082347c9aae740e0220b8376"
-
-# Increment this major API version when introducing breaking changes
-LIBAPI = 0
-
-# Increment this PATCH version before using `charmcraft publish-lib` or reset
-# to 0 if you are raising the major API version
-LIBPATCH = 3
-
-
-KAFKA_HOST_APP_KEY = "host"
-KAFKA_PORT_APP_KEY = "port"
-
-
-class _KafkaAvailableEvent(EventBase):
-    """Event emitted when Kafka is available."""
-
-
-class _KafkaBrokenEvent(EventBase):
-    """Event emitted when Kafka relation is broken."""
-
-
-class KafkaEvents(CharmEvents):
-    """Kafka events.
-
-    This class defines the events that Kafka can emit.
-
-    Events:
-        kafka_available (_KafkaAvailableEvent)
-        kafka_broken (_KafkaBrokenEvent)
-    """
-
-    kafka_available = EventSource(_KafkaAvailableEvent)
-    kafka_broken = EventSource(_KafkaBrokenEvent)
-
-
-class KafkaRequires(Object):
-    """Requires-side of the Kafka relation."""
-
-    def __init__(self, charm: CharmBase, endpoint_name: str = "kafka") -> None:
-        super().__init__(charm, endpoint_name)
-        self.charm = charm
-        self._endpoint_name = endpoint_name
-
-        # Observe relation events
-        event_observe_mapping = {
-            charm.on[self._endpoint_name].relation_changed: self._on_relation_changed,
-            charm.on[self._endpoint_name].relation_broken: self._on_relation_broken,
-        }
-        for event, observer in event_observe_mapping.items():
-            self.framework.observe(event, observer)
-
-    def _on_relation_changed(self, event) -> None:
-        if event.relation.app and all(
-            key in event.relation.data[event.relation.app]
-            for key in (KAFKA_HOST_APP_KEY, KAFKA_PORT_APP_KEY)
-        ):
-            self.charm.on.kafka_available.emit()
-
-    def _on_relation_broken(self, _) -> None:
-        self.charm.on.kafka_broken.emit()
-
-    @property
-    def host(self) -> Optional[str]:
-        relation: Relation = self.model.get_relation(self._endpoint_name)
-        return (
-            relation.data[relation.app].get(KAFKA_HOST_APP_KEY)
-            if relation and relation.app
-            else None
-        )
-
-    @property
-    def port(self) -> Optional[int]:
-        relation: Relation = self.model.get_relation(self._endpoint_name)
-        return (
-            int(relation.data[relation.app].get(KAFKA_PORT_APP_KEY))
-            if relation and relation.app
-            else None
-        )
-
-
-class KafkaProvides(Object):
-    """Provides-side of the Kafka relation."""
-
-    def __init__(self, charm: CharmBase, endpoint_name: str = "kafka") -> None:
-        super().__init__(charm, endpoint_name)
-        self._endpoint_name = endpoint_name
-
-    def set_host_info(self, host: str, port: int, relation: Optional[Relation] = None) -> None:
-        """Set Kafka host and port.
-
-        This function writes to the application data of the relation;
-        therefore, only the unit leader can call it.
-
-        Args:
-            host (str): Kafka hostname or IP address.
-            port (int): Kafka port.
-            relation (Optional[Relation]): Relation to update.
-                                           If not specified, all relations will be updated.
-
-        Raises:
-            Exception: if a non-leader unit calls this function.
-        """
-        if not self.model.unit.is_leader():
-            raise Exception("only the leader can set host information.")
-
-        if relation:
-            self._update_relation_data(host, port, relation)
-            return
-
-        for relation in self.model.relations[self._endpoint_name]:
-            self._update_relation_data(host, port, relation)
-
-    def _update_relation_data(self, host: str, port: int, relation: Relation) -> None:
-        """Update data in relation if needed."""
-        relation.data[self.model.app][KAFKA_HOST_APP_KEY] = host
-        relation.data[self.model.app][KAFKA_PORT_APP_KEY] = str(port)
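The provider side above ultimately just writes two string keys into the relation's application databag. A simplified model of what `_update_relation_data` does, using a plain dict in place of an ops `Relation` databag (an assumption for illustration; the real code goes through `relation.data[self.model.app]`):

```python
# Keys mirrored from the library above.
KAFKA_HOST_APP_KEY = "host"
KAFKA_PORT_APP_KEY = "port"


def set_host_info(app_databag: dict, host: str, port: int) -> None:
    """Sketch of KafkaProvides._update_relation_data on a plain dict."""
    app_databag[KAFKA_HOST_APP_KEY] = host
    # Juju relation databags only hold strings, hence the str() cast.
    app_databag[KAFKA_PORT_APP_KEY] = str(port)


databag = {}
set_host_info(databag, "kafka-k8s", 9092)
# The requirer reads these keys back via KafkaRequires.host / .port,
# casting the port back to int.
```

This also shows why `KafkaRequires.port` wraps its read in `int(...)`: the value crosses the relation as a string.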
diff --git a/installers/charm/nbi/metadata.yaml b/installers/charm/nbi/metadata.yaml
deleted file mode 100644 (file)
index 381497b..0000000
+++ /dev/null
@@ -1,56 +0,0 @@
-# Copyright 2020 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-name: osm-nbi
-summary: OSM Northbound Interface (NBI)
-description: |
-  A CAAS charm to deploy OSM's Northbound Interface (NBI).
-series:
-  - kubernetes
-tags:
-  - kubernetes
-  - osm
-  - nbi
-min-juju-version: 2.8.0
-deployment:
-  type: stateless
-  service: cluster
-resources:
-  image:
-    type: oci-image
-    description: OSM docker image for NBI
-    upstream-source: "opensourcemano/nbi:latest"
-requires:
-  kafka:
-    interface: kafka
-    limit: 1
-  mongodb:
-    interface: mongodb
-    limit: 1
-  keystone:
-    interface: keystone
-    limit: 1
-  prometheus:
-    interface: prometheus
-    limit: 1
-provides:
-  nbi:
-    interface: http
diff --git a/installers/charm/nbi/requirements-test.txt b/installers/charm/nbi/requirements-test.txt
deleted file mode 100644 (file)
index 316f6d2..0000000
+++ /dev/null
@@ -1,21 +0,0 @@
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-
-mock==4.0.3
diff --git a/installers/charm/nbi/requirements.txt b/installers/charm/nbi/requirements.txt
deleted file mode 100644 (file)
index 8bb93ad..0000000
+++ /dev/null
@@ -1,22 +0,0 @@
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-git+https://github.com/charmed-osm/ops-lib-charmed-osm/@master
diff --git a/installers/charm/nbi/src/charm.py b/installers/charm/nbi/src/charm.py
deleted file mode 100755 (executable)
index cb47d1c..0000000
+++ /dev/null
@@ -1,384 +0,0 @@
-#!/usr/bin/env python3
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-# pylint: disable=E0213
-
-
-from ipaddress import ip_network
-import logging
-from typing import NoReturn, Optional
-from urllib.parse import urlparse
-
-
-from charms.kafka_k8s.v0.kafka import KafkaEvents, KafkaRequires
-from ops.main import main
-from opslib.osm.charm import CharmedOsmBase, RelationsMissing
-from opslib.osm.interfaces.http import HttpServer
-from opslib.osm.interfaces.keystone import KeystoneClient
-from opslib.osm.interfaces.mongo import MongoClient
-from opslib.osm.interfaces.prometheus import PrometheusClient
-from opslib.osm.pod import (
-    ContainerV3Builder,
-    IngressResourceV3Builder,
-    PodRestartPolicy,
-    PodSpecV3Builder,
-)
-from opslib.osm.validator import ModelValidator, validator
-
-
-logger = logging.getLogger(__name__)
-
-PORT = 9999
-
-
-class ConfigModel(ModelValidator):
-    enable_test: bool
-    auth_backend: str
-    database_commonkey: str
-    log_level: str
-    max_file_size: int
-    site_url: Optional[str]
-    cluster_issuer: Optional[str]
-    ingress_class: Optional[str]
-    ingress_whitelist_source_range: Optional[str]
-    tls_secret_name: Optional[str]
-    mongodb_uri: Optional[str]
-    image_pull_policy: str
-    debug_mode: bool
-    security_context: bool
-
-    @validator("auth_backend")
-    def validate_auth_backend(cls, v):
-        if v not in {"internal", "keystone"}:
-            raise ValueError("value must be 'internal' or 'keystone'")
-        return v
-
-    @validator("log_level")
-    def validate_log_level(cls, v):
-        if v not in {"INFO", "DEBUG"}:
-            raise ValueError("value must be INFO or DEBUG")
-        return v
-
-    @validator("max_file_size")
-    def validate_max_file_size(cls, v):
-        if v < 0:
-            raise ValueError("value must be equal to or greater than 0")
-        return v
-
-    @validator("site_url")
-    def validate_site_url(cls, v):
-        if v:
-            parsed = urlparse(v)
-            if not parsed.scheme.startswith("http"):
-                raise ValueError("value must start with http")
-        return v
-
-    @validator("ingress_whitelist_source_range")
-    def validate_ingress_whitelist_source_range(cls, v):
-        if v:
-            ip_network(v)
-        return v
-
-    @validator("mongodb_uri")
-    def validate_mongodb_uri(cls, v):
-        if v and not v.startswith("mongodb://"):
-            raise ValueError("mongodb_uri is not properly formed")
-        return v
-
-    @validator("image_pull_policy")
-    def validate_image_pull_policy(cls, v):
-        values = {
-            "always": "Always",
-            "ifnotpresent": "IfNotPresent",
-            "never": "Never",
-        }
-        v = v.lower()
-        if v not in values.keys():
-            raise ValueError("value must be always, ifnotpresent or never")
-        return values[v]
-
-
-class NbiCharm(CharmedOsmBase):
-    on = KafkaEvents()
-
-    def __init__(self, *args) -> None:
-        super().__init__(
-            *args,
-            oci_image="image",
-            vscode_workspace=VSCODE_WORKSPACE,
-        )
-        if self.config.get("debug_mode"):
-            self.enable_debug_mode(
-                pubkey=self.config.get("debug_pubkey"),
-                hostpaths={
-                    "NBI": {
-                        "hostpath": self.config.get("debug_nbi_local_path"),
-                        "container-path": "/usr/lib/python3/dist-packages/osm_nbi",
-                    },
-                    "osm_common": {
-                        "hostpath": self.config.get("debug_common_local_path"),
-                        "container-path": "/usr/lib/python3/dist-packages/osm_common",
-                    },
-                },
-            )
-
-        self.kafka = KafkaRequires(self)
-        self.framework.observe(self.on.kafka_available, self.configure_pod)
-        self.framework.observe(self.on.kafka_broken, self.configure_pod)
-
-        self.mongodb_client = MongoClient(self, "mongodb")
-        self.framework.observe(self.on["mongodb"].relation_changed, self.configure_pod)
-        self.framework.observe(self.on["mongodb"].relation_broken, self.configure_pod)
-
-        self.prometheus_client = PrometheusClient(self, "prometheus")
-        self.framework.observe(
-            self.on["prometheus"].relation_changed, self.configure_pod
-        )
-        self.framework.observe(
-            self.on["prometheus"].relation_broken, self.configure_pod
-        )
-
-        self.keystone_client = KeystoneClient(self, "keystone")
-        self.framework.observe(self.on["keystone"].relation_changed, self.configure_pod)
-        self.framework.observe(self.on["keystone"].relation_broken, self.configure_pod)
-
-        self.http_server = HttpServer(self, "nbi")
-        self.framework.observe(self.on["nbi"].relation_joined, self._publish_nbi_info)
-
-    def _publish_nbi_info(self, event):
-        """Publishes NBI information.
-
-        Args:
-            event (EventBase): RO relation event.
-        """
-        if self.unit.is_leader():
-            self.http_server.publish_info(self.app.name, PORT)
-
-    def _check_missing_dependencies(self, config: ConfigModel):
-        missing_relations = []
-
-        if not self.kafka.host or not self.kafka.port:
-            missing_relations.append("kafka")
-        if not config.mongodb_uri and self.mongodb_client.is_missing_data_in_unit():
-            missing_relations.append("mongodb")
-        if self.prometheus_client.is_missing_data_in_app():
-            missing_relations.append("prometheus")
-        if config.auth_backend == "keystone":
-            if self.keystone_client.is_missing_data_in_app():
-                missing_relations.append("keystone")
-
-        if missing_relations:
-            raise RelationsMissing(missing_relations)
-
-    def build_pod_spec(self, image_info):
-        # Validate config
-        config = ConfigModel(**dict(self.config))
-
-        if config.mongodb_uri and not self.mongodb_client.is_missing_data_in_unit():
-            raise Exception("Mongodb data cannot be provided via config and relation")
-
-        # Check relations
-        self._check_missing_dependencies(config)
-
-        security_context_enabled = (
-            config.security_context if not config.debug_mode else False
-        )
-
-        # Create Builder for the PodSpec
-        pod_spec_builder = PodSpecV3Builder(
-            enable_security_context=security_context_enabled
-        )
-
-        # Add secrets to the pod
-        mongodb_secret_name = f"{self.app.name}-mongodb-secret"
-        pod_spec_builder.add_secret(
-            mongodb_secret_name,
-            {
-                "uri": config.mongodb_uri or self.mongodb_client.connection_string,
-                "commonkey": config.database_commonkey,
-            },
-        )
-
-        # Build Init Container
-        pod_spec_builder.add_init_container(
-            {
-                "name": "init-check",
-                "image": "alpine:latest",
-                "command": [
-                    "sh",
-                    "-c",
-                    f"until (nc -zvw1 {self.kafka.host} {self.kafka.port} ); do sleep 3; done; exit 0",
-                ],
-            }
-        )
-
-        # Build Container
-        container_builder = ContainerV3Builder(
-            self.app.name,
-            image_info,
-            config.image_pull_policy,
-            run_as_non_root=security_context_enabled,
-        )
-        container_builder.add_port(name=self.app.name, port=PORT)
-        container_builder.add_tcpsocket_readiness_probe(
-            PORT,
-            initial_delay_seconds=5,
-            timeout_seconds=5,
-        )
-        container_builder.add_tcpsocket_liveness_probe(
-            PORT,
-            initial_delay_seconds=45,
-            timeout_seconds=10,
-        )
-        container_builder.add_envs(
-            {
-                # General configuration
-                "ALLOW_ANONYMOUS_LOGIN": "yes",
-                "OSMNBI_SERVER_ENABLE_TEST": config.enable_test,
-                "OSMNBI_STATIC_DIR": "/app/osm_nbi/html_public",
-                # Kafka configuration
-                "OSMNBI_MESSAGE_HOST": self.kafka.host,
-                "OSMNBI_MESSAGE_DRIVER": "kafka",
-                "OSMNBI_MESSAGE_PORT": self.kafka.port,
-                # Database configuration
-                "OSMNBI_DATABASE_DRIVER": "mongo",
-                # Storage configuration
-                "OSMNBI_STORAGE_DRIVER": "mongo",
-                "OSMNBI_STORAGE_PATH": "/app/storage",
-                "OSMNBI_STORAGE_COLLECTION": "files",
-                # Prometheus configuration
-                "OSMNBI_PROMETHEUS_HOST": self.prometheus_client.hostname,
-                "OSMNBI_PROMETHEUS_PORT": self.prometheus_client.port,
-                # Log configuration
-                "OSMNBI_LOG_LEVEL": config.log_level,
-            }
-        )
-        container_builder.add_secret_envs(
-            secret_name=mongodb_secret_name,
-            envs={
-                "OSMNBI_DATABASE_URI": "uri",
-                "OSMNBI_DATABASE_COMMONKEY": "commonkey",
-                "OSMNBI_STORAGE_URI": "uri",
-            },
-        )
-        if config.auth_backend == "internal":
-            container_builder.add_env("OSMNBI_AUTHENTICATION_BACKEND", "internal")
-        elif config.auth_backend == "keystone":
-            keystone_secret_name = f"{self.app.name}-keystone-secret"
-            pod_spec_builder.add_secret(
-                keystone_secret_name,
-                {
-                    "url": self.keystone_client.host,
-                    "port": self.keystone_client.port,
-                    "user_domain": self.keystone_client.user_domain_name,
-                    "project_domain": self.keystone_client.project_domain_name,
-                    "service_username": self.keystone_client.username,
-                    "service_password": self.keystone_client.password,
-                    "service_project": self.keystone_client.service,
-                },
-            )
-            container_builder.add_env("OSMNBI_AUTHENTICATION_BACKEND", "keystone")
-            container_builder.add_secret_envs(
-                secret_name=keystone_secret_name,
-                envs={
-                    "OSMNBI_AUTHENTICATION_AUTH_URL": "url",
-                    "OSMNBI_AUTHENTICATION_AUTH_PORT": "port",
-                    "OSMNBI_AUTHENTICATION_USER_DOMAIN_NAME": "user_domain",
-                    "OSMNBI_AUTHENTICATION_PROJECT_DOMAIN_NAME": "project_domain",
-                    "OSMNBI_AUTHENTICATION_SERVICE_USERNAME": "service_username",
-                    "OSMNBI_AUTHENTICATION_SERVICE_PASSWORD": "service_password",
-                    "OSMNBI_AUTHENTICATION_SERVICE_PROJECT": "service_project",
-                },
-            )
-        container = container_builder.build()
-
-        # Add container to pod spec
-        pod_spec_builder.add_container(container)
-
-        # Add ingress resources to pod spec if site url exists
-        if config.site_url:
-            parsed = urlparse(config.site_url)
-            annotations = {
-                "nginx.ingress.kubernetes.io/proxy-body-size": "{}".format(
-                    str(config.max_file_size) + "m"
-                    if config.max_file_size > 0
-                    else config.max_file_size
-                ),
-                "nginx.ingress.kubernetes.io/backend-protocol": "HTTPS",
-            }
-            if config.ingress_class:
-                annotations["kubernetes.io/ingress.class"] = config.ingress_class
-            ingress_resource_builder = IngressResourceV3Builder(
-                f"{self.app.name}-ingress", annotations
-            )
-
-            if config.ingress_whitelist_source_range:
-                annotations[
-                    "nginx.ingress.kubernetes.io/whitelist-source-range"
-                ] = config.ingress_whitelist_source_range
-
-            if config.cluster_issuer:
-                annotations["cert-manager.io/cluster-issuer"] = config.cluster_issuer
-
-            if parsed.scheme == "https":
-                ingress_resource_builder.add_tls(
-                    [parsed.hostname], config.tls_secret_name
-                )
-            else:
-                annotations["nginx.ingress.kubernetes.io/ssl-redirect"] = "false"
-
-            ingress_resource_builder.add_rule(parsed.hostname, self.app.name, PORT)
-            ingress_resource = ingress_resource_builder.build()
-            pod_spec_builder.add_ingress_resource(ingress_resource)
-
-        # Add restart policy
-        restart_policy = PodRestartPolicy()
-        restart_policy.add_secrets()
-        pod_spec_builder.set_restart_policy(restart_policy)
-
-        return pod_spec_builder.build()
-
-
-VSCODE_WORKSPACE = {
-    "folders": [
-        {"path": "/usr/lib/python3/dist-packages/osm_nbi"},
-        {"path": "/usr/lib/python3/dist-packages/osm_common"},
-        {"path": "/usr/lib/python3/dist-packages/osm_im"},
-    ],
-    "settings": {},
-    "launch": {
-        "version": "0.2.0",
-        "configurations": [
-            {
-                "name": "NBI",
-                "type": "python",
-                "request": "launch",
-                "module": "osm_nbi.nbi",
-                "justMyCode": False,
-            }
-        ],
-    },
-}
-
-
-if __name__ == "__main__":
-    main(NbiCharm)
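The `validate_image_pull_policy` validator in the charm above normalizes case-insensitive user input to the canonical Kubernetes values. The pattern in isolation, as a standalone sketch detached from `ModelValidator`:

```python
# Canonical Kubernetes imagePullPolicy values, keyed by lowercase input.
IMAGE_PULL_POLICIES = {
    "always": "Always",
    "ifnotpresent": "IfNotPresent",
    "never": "Never",
}


def normalize_pull_policy(value: str) -> str:
    """Map case-insensitive config input to the canonical value."""
    key = value.lower()
    if key not in IMAGE_PULL_POLICIES:
        raise ValueError("value must be always, ifnotpresent or never")
    return IMAGE_PULL_POLICIES[key]
```

Accepting `"ALWAYS"` or `"IfNotPresent"` from operators while emitting the exact casing Kubernetes expects keeps the charm config forgiving without producing an invalid pod spec.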
diff --git a/installers/charm/nbi/src/pod_spec.py b/installers/charm/nbi/src/pod_spec.py
deleted file mode 100644 (file)
index b8f5904..0000000
+++ /dev/null
@@ -1,419 +0,0 @@
-#!/usr/bin/env python3
-# Copyright 2020 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-from ipaddress import ip_network
-from typing import Any, Callable, Dict, List, NoReturn
-from urllib.parse import urlparse
-
-
-def _validate_max_file_size(max_file_size: int, site_url: str) -> bool:
-    """Validate max_file_size.
-
-    Args:
-        max_file_size (int): maximum file size allowed.
-        site_url (str): endpoint url.
-
-    Returns:
-        bool: True if valid, false otherwise.
-    """
-    if not site_url:
-        return True
-
-    parsed = urlparse(site_url)
-
-    if not parsed.scheme.startswith("http"):
-        return True
-
-    if max_file_size is None:
-        return False
-
-    return max_file_size >= 0
-
-
-def _validate_ip_network(network: str) -> bool:
-    """Validate IP network.
-
-    Args:
-        network (str): IP network range.
-
-    Returns:
-        bool: True if valid, false otherwise.
-    """
-    if not network:
-        return True
-
-    try:
-        ip_network(network)
-    except ValueError:
-        return False
-
-    return True
-
-
-def _validate_keystone_config(keystone: bool, value: Any, validator: Callable) -> bool:
-    """Validate keystone configurations.
-
-    Args:
-        keystone (bool): is keystone enabled, true if so, false otherwise.
-        value (Any): value to be validated.
-        validator (Callable): function to validate configuration.
-
-    Returns:
-        bool: true if valid, false otherwise.
-    """
-    if not keystone:
-        return True
-
-    return validator(value)
-
-
-def _validate_data(
-    config_data: Dict[str, Any], relation_data: Dict[str, Any], keystone: bool
-) -> None:
-    """Validate input data.
-
-    Args:
-        config_data (Dict[str, Any]): configuration data.
-        relation_data (Dict[str, Any]): relation data.
-        keystone (bool): is keystone to be used.
-    """
-    config_validators = {
-        "enable_test": lambda value, _: isinstance(value, bool),
-        "database_commonkey": lambda value, _: (
-            isinstance(value, str) and len(value) > 1
-        ),
-        "log_level": lambda value, _: (
-            isinstance(value, str) and value in ("INFO", "DEBUG")
-        ),
-        "auth_backend": lambda value, _: (
-            isinstance(value, str) and (value == "internal" or value == "keystone")
-        ),
-        "site_url": lambda value, _: isinstance(value, str)
-        if value is not None
-        else True,
-        "max_file_size": lambda value, values: _validate_max_file_size(
-            value, values.get("site_url")
-        ),
-        "ingress_whitelist_source_range": lambda value, _: _validate_ip_network(value),
-        "tls_secret_name": lambda value, _: isinstance(value, str)
-        if value is not None
-        else True,
-    }
-    relation_validators = {
-        "message_host": lambda value, _: isinstance(value, str),
-        "message_port": lambda value, _: isinstance(value, int) and value > 0,
-        "database_uri": lambda value, _: (
-            isinstance(value, str) and value.startswith("mongodb://")
-        ),
-        "prometheus_host": lambda value, _: isinstance(value, str),
-        "prometheus_port": lambda value, _: isinstance(value, int) and value > 0,
-        "keystone_host": lambda value, _: _validate_keystone_config(
-            keystone, value, lambda x: isinstance(x, str) and len(x) > 0
-        ),
-        "keystone_port": lambda value, _: _validate_keystone_config(
-            keystone, value, lambda x: isinstance(x, int) and x > 0
-        ),
-        "keystone_user_domain_name": lambda value, _: _validate_keystone_config(
-            keystone, value, lambda x: isinstance(x, str) and len(x) > 0
-        ),
-        "keystone_project_domain_name": lambda value, _: _validate_keystone_config(
-            keystone, value, lambda x: isinstance(x, str) and len(x) > 0
-        ),
-        "keystone_username": lambda value, _: _validate_keystone_config(
-            keystone, value, lambda x: isinstance(x, str) and len(x) > 0
-        ),
-        "keystone_password": lambda value, _: _validate_keystone_config(
-            keystone, value, lambda x: isinstance(x, str) and len(x) > 0
-        ),
-        "keystone_service": lambda value, _: _validate_keystone_config(
-            keystone, value, lambda x: isinstance(x, str) and len(x) > 0
-        ),
-    }
-    problems = []
-
-    for key, validator in config_validators.items():
-        valid = validator(config_data.get(key), config_data)
-
-        if not valid:
-            problems.append(key)
-
-    for key, validator in relation_validators.items():
-        valid = validator(relation_data.get(key), relation_data)
-
-        if not valid:
-            problems.append(key)
-
-    if len(problems) > 0:
-        raise ValueError("Errors found in: {}".format(", ".join(problems)))
-
-
-def _make_pod_ports(port: int) -> List[Dict[str, Any]]:
-    """Generate pod ports details.
-
-    Args:
-        port (int): port to expose.
-
-    Returns:
-        List[Dict[str, Any]]: pod port details.
-    """
-    return [{"name": "nbi", "containerPort": port, "protocol": "TCP"}]
-
-
-def _make_pod_envconfig(
-    config: Dict[str, Any], relation_state: Dict[str, Any]
-) -> Dict[str, Any]:
-    """Generate pod environment configuration.
-
-    Args:
-        config (Dict[str, Any]): configuration information.
-        relation_state (Dict[str, Any]): relation state information.
-
-    Returns:
-        Dict[str, Any]: pod environment configuration.
-    """
-    envconfig = {
-        # General configuration
-        "ALLOW_ANONYMOUS_LOGIN": "yes",
-        "OSMNBI_SERVER_ENABLE_TEST": config["enable_test"],
-        "OSMNBI_STATIC_DIR": "/app/osm_nbi/html_public",
-        # Kafka configuration
-        "OSMNBI_MESSAGE_HOST": relation_state["message_host"],
-        "OSMNBI_MESSAGE_DRIVER": "kafka",
-        "OSMNBI_MESSAGE_PORT": relation_state["message_port"],
-        # Database configuration
-        "OSMNBI_DATABASE_DRIVER": "mongo",
-        "OSMNBI_DATABASE_URI": relation_state["database_uri"],
-        "OSMNBI_DATABASE_COMMONKEY": config["database_commonkey"],
-        # Storage configuration
-        "OSMNBI_STORAGE_DRIVER": "mongo",
-        "OSMNBI_STORAGE_PATH": "/app/storage",
-        "OSMNBI_STORAGE_COLLECTION": "files",
-        "OSMNBI_STORAGE_URI": relation_state["database_uri"],
-        # Prometheus configuration
-        "OSMNBI_PROMETHEUS_HOST": relation_state["prometheus_host"],
-        "OSMNBI_PROMETHEUS_PORT": relation_state["prometheus_port"],
-        # Log configuration
-        "OSMNBI_LOG_LEVEL": config["log_level"],
-    }
-
-    if config["auth_backend"] == "internal":
-        envconfig["OSMNBI_AUTHENTICATION_BACKEND"] = "internal"
-    elif config["auth_backend"] == "keystone":
-        envconfig.update(
-            {
-                "OSMNBI_AUTHENTICATION_BACKEND": "keystone",
-                "OSMNBI_AUTHENTICATION_AUTH_URL": relation_state["keystone_host"],
-                "OSMNBI_AUTHENTICATION_AUTH_PORT": relation_state["keystone_port"],
-                "OSMNBI_AUTHENTICATION_USER_DOMAIN_NAME": relation_state[
-                    "keystone_user_domain_name"
-                ],
-                "OSMNBI_AUTHENTICATION_PROJECT_DOMAIN_NAME": relation_state[
-                    "keystone_project_domain_name"
-                ],
-                "OSMNBI_AUTHENTICATION_SERVICE_USERNAME": relation_state[
-                    "keystone_username"
-                ],
-                "OSMNBI_AUTHENTICATION_SERVICE_PASSWORD": relation_state[
-                    "keystone_password"
-                ],
-                "OSMNBI_AUTHENTICATION_SERVICE_PROJECT": relation_state[
-                    "keystone_service"
-                ],
-            }
-        )
-    else:
-        raise ValueError("auth_backend needs to be either internal or keystone")
-
-    return envconfig
-
-
-def _make_pod_ingress_resources(
-    config: Dict[str, Any], app_name: str, port: int
-) -> List[Dict[str, Any]]:
-    """Generate pod ingress resources.
-
-    Args:
-        config (Dict[str, Any]): configuration information.
-        app_name (str): application name.
-        port (int): port to expose.
-
-    Returns:
-        List[Dict[str, Any]]: pod ingress resources.
-    """
-    site_url = config.get("site_url")
-
-    if not site_url:
-        return
-
-    parsed = urlparse(site_url)
-
-    if not parsed.scheme.startswith("http"):
-        return
-
-    max_file_size = config["max_file_size"]
-    ingress_whitelist_source_range = config["ingress_whitelist_source_range"]
-
-    annotations = {
-        "nginx.ingress.kubernetes.io/proxy-body-size": "{}".format(
-            str(max_file_size) + "m" if max_file_size > 0 else max_file_size
-        ),
-        "nginx.ingress.kubernetes.io/backend-protocol": "HTTPS",
-    }
-
-    if ingress_whitelist_source_range:
-        annotations[
-            "nginx.ingress.kubernetes.io/whitelist-source-range"
-        ] = ingress_whitelist_source_range
-
-    ingress_spec_tls = None
-
-    if parsed.scheme == "https":
-        ingress_spec_tls = [{"hosts": [parsed.hostname]}]
-        tls_secret_name = config["tls_secret_name"]
-        if tls_secret_name:
-            ingress_spec_tls[0]["secretName"] = tls_secret_name
-    else:
-        annotations["nginx.ingress.kubernetes.io/ssl-redirect"] = "false"
-
-    ingress = {
-        "name": "{}-ingress".format(app_name),
-        "annotations": annotations,
-        "spec": {
-            "rules": [
-                {
-                    "host": parsed.hostname,
-                    "http": {
-                        "paths": [
-                            {
-                                "path": "/",
-                                "backend": {
-                                    "serviceName": app_name,
-                                    "servicePort": port,
-                                },
-                            }
-                        ]
-                    },
-                }
-            ]
-        },
-    }
-    if ingress_spec_tls:
-        ingress["spec"]["tls"] = ingress_spec_tls
-
-    return [ingress]
-
-
-def _make_startup_probe() -> Dict[str, Any]:
-    """Generate startup probe.
-
-    Returns:
-        Dict[str, Any]: startup probe.
-    """
-    return {
-        "exec": {"command": ["/usr/bin/pgrep python3"]},
-        "initialDelaySeconds": 60,
-        "timeoutSeconds": 5,
-    }
-
-
-def _make_readiness_probe(port: int) -> Dict[str, Any]:
-    """Generate readiness probe.
-
-    Args:
-        port (int): port for the readiness HTTP GET.
-
-    Returns:
-        Dict[str, Any]: readiness probe.
-    """
-    return {
-        "httpGet": {
-            "path": "/osm/",
-            "port": port,
-        },
-        "initialDelaySeconds": 45,
-        "timeoutSeconds": 5,
-    }
-
-
-def _make_liveness_probe(port: int) -> Dict[str, Any]:
-    """Generate liveness probe.
-
-    Args:
-        port (int): port for the liveness HTTP GET.
-
-    Returns:
-        Dict[str, Any]: liveness probe.
-    """
-    return {
-        "httpGet": {
-            "path": "/osm/",
-            "port": port,
-        },
-        "initialDelaySeconds": 45,
-        "timeoutSeconds": 5,
-    }
-
-
-def make_pod_spec(
-    image_info: Dict[str, str],
-    config: Dict[str, Any],
-    relation_state: Dict[str, Any],
-    app_name: str = "nbi",
-    port: int = 9999,
-) -> Dict[str, Any]:
-    """Generate the pod spec information.
-
-    Args:
-        image_info (Dict[str, str]): Object provided by
-                                     OCIImageResource("image").fetch().
-        config (Dict[str, Any]): Configuration information.
-        relation_state (Dict[str, Any]): Relation state information.
-        app_name (str, optional): Application name. Defaults to "nbi".
-        port (int, optional): Port for the container. Defaults to 9999.
-
-    Returns:
-        Dict[str, Any]: Pod spec dictionary for the charm.
-    """
-    if not image_info:
-        return None
-
-    _validate_data(config, relation_state, config.get("auth_backend") == "keystone")
-
-    ports = _make_pod_ports(port)
-    env_config = _make_pod_envconfig(config, relation_state)
-    ingress_resources = _make_pod_ingress_resources(config, app_name, port)
-
-    return {
-        "version": 3,
-        "containers": [
-            {
-                "name": app_name,
-                "imageDetails": image_info,
-                "imagePullPolicy": "Always",
-                "ports": ports,
-                "envConfig": env_config,
-            }
-        ],
-        "kubernetesResources": {
-            "ingressResources": ingress_resources or [],
-        },
-    }
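For reference, the `proxy-body-size` annotation logic removed above can be sketched in isolation (a minimal standalone sketch with a hypothetical helper name, not the charm's actual API): `max_file_size > 0` yields a megabyte-suffixed limit, while `0` passes through unsuffixed, which nginx treats as "no limit".

```python
from typing import Dict


def proxy_body_size_annotation(max_file_size: int) -> Dict[str, str]:
    """Build the nginx ingress body-size annotation the deleted
    _make_pod_ingress_resources produced: append "m" (megabytes)
    only when a positive limit is configured; 0 stays bare."""
    value = f"{max_file_size}m" if max_file_size > 0 else str(max_file_size)
    return {"nginx.ingress.kubernetes.io/proxy-body-size": value}


annotation = proxy_body_size_annotation(25)  # {"...proxy-body-size": "25m"}
```

The deleted tests exercised exactly these two branches (`max_file_size: 0` in every ingress case).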
diff --git a/installers/charm/nbi/tests/__init__.py b/installers/charm/nbi/tests/__init__.py
deleted file mode 100644 (file)
index 446d5ce..0000000
+++ /dev/null
@@ -1,40 +0,0 @@
-#!/usr/bin/env python3
-# Copyright 2020 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-"""Init mocking for unit tests."""
-
-import sys
-
-
-import mock
-
-
-class OCIImageResourceErrorMock(Exception):
-    pass
-
-
-sys.path.append("src")
-
-oci_image = mock.MagicMock()
-oci_image.OCIImageResourceError = OCIImageResourceErrorMock
-sys.modules["oci_image"] = oci_image
-sys.modules["oci_image"].OCIImageResource().fetch.return_value = {}
diff --git a/installers/charm/nbi/tests/test_charm.py b/installers/charm/nbi/tests/test_charm.py
deleted file mode 100644 (file)
index 92c2980..0000000
+++ /dev/null
@@ -1,295 +0,0 @@
-#!/usr/bin/env python3
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-import sys
-from typing import NoReturn
-import unittest
-
-
-from charm import NbiCharm
-from ops.model import ActiveStatus, BlockedStatus
-from ops.testing import Harness
-
-
-class TestCharm(unittest.TestCase):
-    """Prometheus Charm unit tests."""
-
-    def setUp(self) -> NoReturn:
-        """Test setup"""
-        self.image_info = sys.modules["oci_image"].OCIImageResource().fetch()
-        self.harness = Harness(NbiCharm)
-        self.harness.set_leader(is_leader=True)
-        self.harness.begin()
-        self.config = {
-            "enable_test": False,
-            "auth_backend": "internal",
-            "database_commonkey": "key",
-            "mongodb_uri": "",
-            "log_level": "INFO",
-            "max_file_size": 0,
-            "ingress_whitelist_source_range": "",
-            "tls_secret_name": "",
-            "site_url": "https://nbi.192.168.100.100.nip.io",
-            "cluster_issuer": "vault-issuer",
-        }
-        self.harness.update_config(self.config)
-
-    def test_config_changed_no_relations(
-        self,
-    ) -> NoReturn:
-        """Test ingress resources without HTTP."""
-
-        self.harness.charm.on.config_changed.emit()
-
-        # Assertions
-        self.assertIsInstance(self.harness.charm.unit.status, BlockedStatus)
-        self.assertTrue(
-            all(
-                relation in self.harness.charm.unit.status.message
-                for relation in ["mongodb", "kafka", "prometheus"]
-            )
-        )
-
-    def test_config_changed_non_leader(
-        self,
-    ) -> NoReturn:
-        """Test ingress resources without HTTP."""
-        self.harness.set_leader(is_leader=False)
-        self.harness.charm.on.config_changed.emit()
-
-        # Assertions
-        self.assertIsInstance(self.harness.charm.unit.status, ActiveStatus)
-
-    def test_with_relations_internal_and_mongodb_config(
-        self,
-    ) -> NoReturn:
-        "Test with relations and mongodb config (internal)"
-        self.initialize_kafka_relation()
-        self.initialize_mongo_config()
-        self.initialize_prometheus_relation()
-        # Verifying status
-        self.assertNotIsInstance(self.harness.charm.unit.status, BlockedStatus)
-
-    def test_with_relations_internal(
-        self,
-    ) -> NoReturn:
-        "Test with relations (internal)"
-        self.initialize_kafka_relation()
-        self.initialize_mongo_relation()
-        self.initialize_prometheus_relation()
-        # Verifying status
-        self.assertNotIsInstance(self.harness.charm.unit.status, BlockedStatus)
-
-    def test_with_relations_and_mongodb_config_with_keystone_missing(
-        self,
-    ) -> NoReturn:
-        "Test with relations and mongodb config (keystone)"
-        self.harness.update_config({"auth_backend": "keystone"})
-        self.initialize_kafka_relation()
-        self.initialize_mongo_config()
-        self.initialize_prometheus_relation()
-        # Verifying status
-        self.assertIsInstance(self.harness.charm.unit.status, BlockedStatus)
-        self.assertTrue("keystone" in self.harness.charm.unit.status.message)
-
-    def test_with_relations_keystone_missing(
-        self,
-    ) -> NoReturn:
-        "Test with relations (keystone)"
-        self.harness.update_config({"auth_backend": "keystone"})
-        self.initialize_kafka_relation()
-        self.initialize_mongo_relation()
-        self.initialize_prometheus_relation()
-        # Verifying status
-        self.assertIsInstance(self.harness.charm.unit.status, BlockedStatus)
-        self.assertTrue("keystone" in self.harness.charm.unit.status.message)
-
-    def test_with_relations_and_mongodb_config_with_keystone(
-        self,
-    ) -> NoReturn:
-        "Test with relations (keystone)"
-        self.harness.update_config({"auth_backend": "keystone"})
-        self.initialize_kafka_relation()
-        self.initialize_mongo_config()
-        self.initialize_prometheus_relation()
-        self.initialize_keystone_relation()
-        # Verifying status
-        self.assertNotIsInstance(self.harness.charm.unit.status, BlockedStatus)
-
-    def test_with_relations_keystone(
-        self,
-    ) -> NoReturn:
-        "Test with relations (keystone)"
-        self.harness.update_config({"auth_backend": "keystone"})
-        self.initialize_kafka_relation()
-        self.initialize_mongo_relation()
-        self.initialize_prometheus_relation()
-        self.initialize_keystone_relation()
-        # Verifying status
-        self.assertNotIsInstance(self.harness.charm.unit.status, BlockedStatus)
-
-    def test_mongodb_exception_relation_and_config(
-        self,
-    ) -> NoReturn:
-        self.initialize_mongo_config()
-        self.initialize_mongo_relation()
-        # Verifying status
-        self.assertIsInstance(self.harness.charm.unit.status, BlockedStatus)
-
-    def initialize_kafka_relation(self):
-        kafka_relation_id = self.harness.add_relation("kafka", "kafka")
-        self.harness.add_relation_unit(kafka_relation_id, "kafka/0")
-        self.harness.update_relation_data(
-            kafka_relation_id, "kafka", {"host": "kafka", "port": 9092}
-        )
-
-    def initialize_mongo_config(self):
-        self.harness.update_config({"mongodb_uri": "mongodb://mongo:27017"})
-
-    def initialize_mongo_relation(self):
-        mongodb_relation_id = self.harness.add_relation("mongodb", "mongodb")
-        self.harness.add_relation_unit(mongodb_relation_id, "mongodb/0")
-        self.harness.update_relation_data(
-            mongodb_relation_id,
-            "mongodb/0",
-            {"connection_string": "mongodb://mongo:27017"},
-        )
-
-    def initialize_keystone_relation(self):
-        keystone_relation_id = self.harness.add_relation("keystone", "keystone")
-        self.harness.add_relation_unit(keystone_relation_id, "keystone/0")
-        self.harness.update_relation_data(
-            keystone_relation_id,
-            "keystone",
-            {
-                "host": "host",
-                "port": 5000,
-                "user_domain_name": "ud",
-                "project_domain_name": "pd",
-                "username": "u",
-                "password": "p",
-                "service": "s",
-                "keystone_db_password": "something",
-                "region_id": "something",
-                "admin_username": "something",
-                "admin_password": "something",
-                "admin_project_name": "something",
-            },
-        )
-
-    def initialize_prometheus_relation(self):
-        prometheus_relation_id = self.harness.add_relation("prometheus", "prometheus")
-        self.harness.add_relation_unit(prometheus_relation_id, "prometheus/0")
-        self.harness.update_relation_data(
-            prometheus_relation_id,
-            "prometheus",
-            {"hostname": "prometheus", "port": 9090},
-        )
-
-
-if __name__ == "__main__":
-    unittest.main()
-
-
-# class TestCharm(unittest.TestCase):
-#     """Prometheus Charm unit tests."""
-
-#     def setUp(self) -> NoReturn:
-#         """Test setup"""
-#         self.image_info = sys.modules["oci_image"].OCIImageResource().fetch()
-#         self.harness = Harness(NbiCharm)
-#         self.harness.set_leader(is_leader=True)
-#         self.harness.begin()
-#         self.config = {
-#             "enable_ng_ro": True,
-#             "database_commonkey": "commonkey",
-#             "log_level": "INFO",
-#             "vim_database": "db_name",
-#             "ro_database": "ro_db_name",
-#             "openmano_tenant": "mano",
-#         }
-
-#     def test_config_changed_no_relations(
-#         self,
-#     ) -> NoReturn:
-#         """Test ingress resources without HTTP."""
-
-#         self.harness.charm.on.config_changed.emit()
-
-#         # Assertions
-#         self.assertIsInstance(self.harness.charm.unit.status, BlockedStatus)
-#         self.assertTrue(
-#             all(
-#                 relation in self.harness.charm.unit.status.message
-#                 for relation in ["mongodb", "kafka"]
-#             )
-#         )
-
-#         # Disable ng-ro
-#         self.harness.update_config({"enable_ng_ro": False})
-#         self.assertIsInstance(self.harness.charm.unit.status, BlockedStatus)
-#         self.assertTrue(
-#             all(
-#                 relation in self.harness.charm.unit.status.message
-#                 for relation in ["mysql"]
-#             )
-#         )
-
-#     def test_config_changed_non_leader(
-#         self,
-#     ) -> NoReturn:
-#         """Test ingress resources without HTTP."""
-#         self.harness.set_leader(is_leader=False)
-#         self.harness.charm.on.config_changed.emit()
-
-#         # Assertions
-#         self.assertIsInstance(self.harness.charm.unit.status, ActiveStatus)
-
-#     def test_with_relations_ng(
-#         self,
-#     ) -> NoReturn:
-#         "Test with relations (ng-ro)"
-
-#         # Initializing the kafka relation
-#         kafka_relation_id = self.harness.add_relation("kafka", "kafka")
-#         self.harness.add_relation_unit(kafka_relation_id, "kafka/0")
-#         self.harness.update_relation_data(
-#             kafka_relation_id, "kafka/0", {"host": "kafka", "port": 9092}
-#         )
-
-#         # Initializing the mongo relation
-#         mongodb_relation_id = self.harness.add_relation("mongodb", "mongodb")
-#         self.harness.add_relation_unit(mongodb_relation_id, "mongodb/0")
-#         self.harness.update_relation_data(
-#             mongodb_relation_id,
-#             "mongodb/0",
-#             {"connection_string": "mongodb://mongo:27017"},
-#         )
-
-#         self.harness.charm.on.config_changed.emit()
-
-#         # Verifying status
-#         self.assertNotIsInstance(self.harness.charm.unit.status, BlockedStatus)
-
-
-# if __name__ == "__main__":
-#     unittest.main()
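The keystone branch of the deleted `_make_pod_envconfig` mapped keystone relation data onto `OSMNBI_AUTHENTICATION_*` environment variables; that mapping can be sketched standalone (hypothetical helper name, same keys as the deleted tests below):

```python
from typing import Any, Dict


def keystone_env(relation_state: Dict[str, Any]) -> Dict[str, Any]:
    """Translate keystone relation data into the NBI auth environment
    (sketch of the mapping the deleted pod_spec helper performed)."""
    return {
        "OSMNBI_AUTHENTICATION_AUTH_URL": relation_state["keystone_host"],
        "OSMNBI_AUTHENTICATION_AUTH_PORT": relation_state["keystone_port"],
        "OSMNBI_AUTHENTICATION_USER_DOMAIN_NAME": relation_state["keystone_user_domain_name"],
        "OSMNBI_AUTHENTICATION_PROJECT_DOMAIN_NAME": relation_state["keystone_project_domain_name"],
        "OSMNBI_AUTHENTICATION_SERVICE_USERNAME": relation_state["keystone_username"],
        "OSMNBI_AUTHENTICATION_SERVICE_PASSWORD": relation_state["keystone_password"],
        "OSMNBI_AUTHENTICATION_SERVICE_PROJECT": relation_state["keystone_service"],
    }
```

A missing key raises `KeyError`, which is why the charm validated keystone data before building the spec and reported `BlockedStatus` mentioning "keystone" when the relation was absent.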
diff --git a/installers/charm/nbi/tests/test_pod_spec.py b/installers/charm/nbi/tests/test_pod_spec.py
deleted file mode 100644 (file)
index 360895f..0000000
+++ /dev/null
@@ -1,647 +0,0 @@
-#!/usr/bin/env python3
-# Copyright 2020 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-from typing import NoReturn
-import unittest
-
-import pod_spec
-
-
-class TestPodSpec(unittest.TestCase):
-    """Pod spec unit tests."""
-
-    def test_make_pod_ports(self) -> NoReturn:
-        """Testing make pod ports."""
-        port = 9999
-
-        expected_result = [
-            {
-                "name": "nbi",
-                "containerPort": port,
-                "protocol": "TCP",
-            }
-        ]
-
-        pod_ports = pod_spec._make_pod_ports(port)
-
-        self.assertListEqual(expected_result, pod_ports)
-
-    def test_make_pod_envconfig_without_keystone(self) -> NoReturn:
-        """Teting make pod envconfig without Keystone."""
-        config = {
-            "enable_test": False,
-            "database_commonkey": "commonkey",
-            "log_level": "DEBUG",
-            "auth_backend": "internal",
-        }
-        relation_state = {
-            "message_host": "kafka",
-            "message_port": 9090,
-            "database_uri": "mongodb://mongo",
-            "prometheus_host": "prometheus",
-            "prometheus_port": 9082,
-        }
-
-        expected_result = {
-            "ALLOW_ANONYMOUS_LOGIN": "yes",
-            "OSMNBI_SERVER_ENABLE_TEST": config["enable_test"],
-            "OSMNBI_STATIC_DIR": "/app/osm_nbi/html_public",
-            "OSMNBI_MESSAGE_HOST": relation_state["message_host"],
-            "OSMNBI_MESSAGE_DRIVER": "kafka",
-            "OSMNBI_MESSAGE_PORT": relation_state["message_port"],
-            "OSMNBI_DATABASE_DRIVER": "mongo",
-            "OSMNBI_DATABASE_URI": relation_state["database_uri"],
-            "OSMNBI_DATABASE_COMMONKEY": config["database_commonkey"],
-            "OSMNBI_STORAGE_DRIVER": "mongo",
-            "OSMNBI_STORAGE_PATH": "/app/storage",
-            "OSMNBI_STORAGE_COLLECTION": "files",
-            "OSMNBI_STORAGE_URI": relation_state["database_uri"],
-            "OSMNBI_PROMETHEUS_HOST": relation_state["prometheus_host"],
-            "OSMNBI_PROMETHEUS_PORT": relation_state["prometheus_port"],
-            "OSMNBI_LOG_LEVEL": config["log_level"],
-            "OSMNBI_AUTHENTICATION_BACKEND": config["auth_backend"],
-        }
-
-        pod_envconfig = pod_spec._make_pod_envconfig(config, relation_state)
-
-        self.assertDictEqual(expected_result, pod_envconfig)
-
-    def test_make_pod_envconfig_with_keystone(self) -> NoReturn:
-        """Teting make pod envconfig with Keystone."""
-        config = {
-            "enable_test": False,
-            "database_commonkey": "commonkey",
-            "log_level": "DEBUG",
-            "auth_backend": "keystone",
-        }
-        relation_state = {
-            "message_host": "kafka",
-            "message_port": 9090,
-            "database_uri": "mongodb://mongo",
-            "prometheus_host": "prometheus",
-            "prometheus_port": 9082,
-            "keystone_host": "keystone",
-            "keystone_port": 5000,
-            "keystone_user_domain_name": "user_domain",
-            "keystone_project_domain_name": "project_domain",
-            "keystone_username": "username",
-            "keystone_password": "password",
-            "keystone_service": "service",
-        }
-
-        expected_result = {
-            "ALLOW_ANONYMOUS_LOGIN": "yes",
-            "OSMNBI_SERVER_ENABLE_TEST": config["enable_test"],
-            "OSMNBI_STATIC_DIR": "/app/osm_nbi/html_public",
-            "OSMNBI_MESSAGE_HOST": relation_state["message_host"],
-            "OSMNBI_MESSAGE_DRIVER": "kafka",
-            "OSMNBI_MESSAGE_PORT": relation_state["message_port"],
-            "OSMNBI_DATABASE_DRIVER": "mongo",
-            "OSMNBI_DATABASE_URI": relation_state["database_uri"],
-            "OSMNBI_DATABASE_COMMONKEY": config["database_commonkey"],
-            "OSMNBI_STORAGE_DRIVER": "mongo",
-            "OSMNBI_STORAGE_PATH": "/app/storage",
-            "OSMNBI_STORAGE_COLLECTION": "files",
-            "OSMNBI_STORAGE_URI": relation_state["database_uri"],
-            "OSMNBI_PROMETHEUS_HOST": relation_state["prometheus_host"],
-            "OSMNBI_PROMETHEUS_PORT": relation_state["prometheus_port"],
-            "OSMNBI_LOG_LEVEL": config["log_level"],
-            "OSMNBI_AUTHENTICATION_BACKEND": config["auth_backend"],
-            "OSMNBI_AUTHENTICATION_AUTH_URL": relation_state["keystone_host"],
-            "OSMNBI_AUTHENTICATION_AUTH_PORT": relation_state["keystone_port"],
-            "OSMNBI_AUTHENTICATION_USER_DOMAIN_NAME": relation_state[
-                "keystone_user_domain_name"
-            ],
-            "OSMNBI_AUTHENTICATION_PROJECT_DOMAIN_NAME": relation_state[
-                "keystone_project_domain_name"
-            ],
-            "OSMNBI_AUTHENTICATION_SERVICE_USERNAME": relation_state[
-                "keystone_username"
-            ],
-            "OSMNBI_AUTHENTICATION_SERVICE_PASSWORD": relation_state[
-                "keystone_password"
-            ],
-            "OSMNBI_AUTHENTICATION_SERVICE_PROJECT": relation_state["keystone_service"],
-        }
-
-        pod_envconfig = pod_spec._make_pod_envconfig(config, relation_state)
-
-        self.assertDictEqual(expected_result, pod_envconfig)
-
-    def test_make_pod_envconfig_wrong_auth_backend(self) -> NoReturn:
-        """Teting make pod envconfig with wrong auth_backend."""
-        config = {
-            "enable_test": False,
-            "database_commonkey": "commonkey",
-            "log_level": "DEBUG",
-            "auth_backend": "kerberos",
-        }
-        relation_state = {
-            "message_host": "kafka",
-            "message_port": 9090,
-            "database_uri": "mongodb://mongo",
-            "prometheus_host": "prometheus",
-            "prometheus_port": 9082,
-            "keystone_host": "keystone",
-            "keystone_port": 5000,
-            "keystone_user_domain_name": "user_domain",
-            "keystone_project_domain_name": "project_domain",
-            "keystone_username": "username",
-            "keystone_password": "password",
-            "keystone_service": "service",
-        }
-
-        with self.assertRaises(ValueError) as exc:
-            pod_spec._make_pod_envconfig(config, relation_state)
-
-        self.assertTrue(
-            "auth_backend needs to be either internal or keystone" in str(exc.exception)
-        )
-
-    def test_make_pod_ingress_resources_without_site_url(self) -> NoReturn:
-        """Testing make pod ingress resources without site_url."""
-        config = {"site_url": ""}
-        app_name = "nbi"
-        port = 9999
-
-        pod_ingress_resources = pod_spec._make_pod_ingress_resources(
-            config, app_name, port
-        )
-
-        self.assertIsNone(pod_ingress_resources)
-
-    def test_make_pod_ingress_resources(self) -> NoReturn:
-        """Testing make pod ingress resources."""
-        config = {
-            "site_url": "http://nbi",
-            "max_file_size": 0,
-            "ingress_whitelist_source_range": "",
-        }
-        app_name = "nbi"
-        port = 9999
-
-        expected_result = [
-            {
-                "name": f"{app_name}-ingress",
-                "annotations": {
-                    "nginx.ingress.kubernetes.io/proxy-body-size": f"{config['max_file_size']}",
-                    "nginx.ingress.kubernetes.io/backend-protocol": "HTTPS",
-                    "nginx.ingress.kubernetes.io/ssl-redirect": "false",
-                },
-                "spec": {
-                    "rules": [
-                        {
-                            "host": app_name,
-                            "http": {
-                                "paths": [
-                                    {
-                                        "path": "/",
-                                        "backend": {
-                                            "serviceName": app_name,
-                                            "servicePort": port,
-                                        },
-                                    }
-                                ]
-                            },
-                        }
-                    ]
-                },
-            }
-        ]
-
-        pod_ingress_resources = pod_spec._make_pod_ingress_resources(
-            config, app_name, port
-        )
-
-        self.assertListEqual(expected_result, pod_ingress_resources)
-
-    def test_make_pod_ingress_resources_with_whitelist_source_range(self) -> NoReturn:
-        """Testing make pod ingress resources with whitelist_source_range."""
-        config = {
-            "site_url": "http://nbi",
-            "max_file_size": 0,
-            "ingress_whitelist_source_range": "0.0.0.0/0",
-        }
-        app_name = "nbi"
-        port = 9999
-
-        expected_result = [
-            {
-                "name": f"{app_name}-ingress",
-                "annotations": {
-                    "nginx.ingress.kubernetes.io/proxy-body-size": f"{config['max_file_size']}",
-                    "nginx.ingress.kubernetes.io/backend-protocol": "HTTPS",
-                    "nginx.ingress.kubernetes.io/ssl-redirect": "false",
-                    "nginx.ingress.kubernetes.io/whitelist-source-range": config[
-                        "ingress_whitelist_source_range"
-                    ],
-                },
-                "spec": {
-                    "rules": [
-                        {
-                            "host": app_name,
-                            "http": {
-                                "paths": [
-                                    {
-                                        "path": "/",
-                                        "backend": {
-                                            "serviceName": app_name,
-                                            "servicePort": port,
-                                        },
-                                    }
-                                ]
-                            },
-                        }
-                    ]
-                },
-            }
-        ]
-
-        pod_ingress_resources = pod_spec._make_pod_ingress_resources(
-            config, app_name, port
-        )
-
-        self.assertListEqual(expected_result, pod_ingress_resources)
-
-    def test_make_pod_ingress_resources_with_https(self) -> NoReturn:
-        """Testing make pod ingress resources with HTTPs."""
-        config = {
-            "site_url": "https://nbi",
-            "max_file_size": 0,
-            "ingress_whitelist_source_range": "",
-            "tls_secret_name": "",
-        }
-        app_name = "nbi"
-        port = 9999
-
-        expected_result = [
-            {
-                "name": f"{app_name}-ingress",
-                "annotations": {
-                    "nginx.ingress.kubernetes.io/proxy-body-size": f"{config['max_file_size']}",
-                    "nginx.ingress.kubernetes.io/backend-protocol": "HTTPS",
-                },
-                "spec": {
-                    "rules": [
-                        {
-                            "host": app_name,
-                            "http": {
-                                "paths": [
-                                    {
-                                        "path": "/",
-                                        "backend": {
-                                            "serviceName": app_name,
-                                            "servicePort": port,
-                                        },
-                                    }
-                                ]
-                            },
-                        }
-                    ],
-                    "tls": [{"hosts": [app_name]}],
-                },
-            }
-        ]
-
-        pod_ingress_resources = pod_spec._make_pod_ingress_resources(
-            config, app_name, port
-        )
-
-        self.assertListEqual(expected_result, pod_ingress_resources)
-
-    def test_make_pod_ingress_resources_with_https_tls_secret_name(self) -> NoReturn:
-        """Testing make pod ingress resources with HTTPs and TLS secret name."""
-        config = {
-            "site_url": "https://nbi",
-            "max_file_size": 0,
-            "ingress_whitelist_source_range": "",
-            "tls_secret_name": "secret_name",
-        }
-        app_name = "nbi"
-        port = 9999
-
-        expected_result = [
-            {
-                "name": f"{app_name}-ingress",
-                "annotations": {
-                    "nginx.ingress.kubernetes.io/proxy-body-size": f"{config['max_file_size']}",
-                    "nginx.ingress.kubernetes.io/backend-protocol": "HTTPS",
-                },
-                "spec": {
-                    "rules": [
-                        {
-                            "host": app_name,
-                            "http": {
-                                "paths": [
-                                    {
-                                        "path": "/",
-                                        "backend": {
-                                            "serviceName": app_name,
-                                            "servicePort": port,
-                                        },
-                                    }
-                                ]
-                            },
-                        }
-                    ],
-                    "tls": [
-                        {"hosts": [app_name], "secretName": config["tls_secret_name"]}
-                    ],
-                },
-            }
-        ]
-
-        pod_ingress_resources = pod_spec._make_pod_ingress_resources(
-            config, app_name, port
-        )
-
-        self.assertListEqual(expected_result, pod_ingress_resources)
-
-    def test_make_startup_probe(self) -> NoReturn:
-        """Testing make startup probe."""
-        expected_result = {
-            "exec": {"command": ["/usr/bin/pgrep python3"]},
-            "initialDelaySeconds": 60,
-            "timeoutSeconds": 5,
-        }
-
-        startup_probe = pod_spec._make_startup_probe()
-
-        self.assertDictEqual(expected_result, startup_probe)
-
-    def test_make_readiness_probe(self) -> NoReturn:
-        """Testing make readiness probe."""
-        port = 9999
-
-        expected_result = {
-            "httpGet": {
-                "path": "/osm/",
-                "port": port,
-            },
-            "initialDelaySeconds": 45,
-            "timeoutSeconds": 5,
-        }
-
-        readiness_probe = pod_spec._make_readiness_probe(port)
-
-        self.assertDictEqual(expected_result, readiness_probe)
-
-    def test_make_liveness_probe(self) -> NoReturn:
-        """Testing make liveness probe."""
-        port = 9999
-
-        expected_result = {
-            "httpGet": {
-                "path": "/osm/",
-                "port": port,
-            },
-            "initialDelaySeconds": 45,
-            "timeoutSeconds": 5,
-        }
-
-        liveness_probe = pod_spec._make_liveness_probe(port)
-
-        self.assertDictEqual(expected_result, liveness_probe)
-
-    def test_make_pod_spec_without_image_info(self) -> NoReturn:
-        """Testing make pod spec without image_info."""
-        image_info = None
-        config = {
-            "enable_test": False,
-            "database_commonkey": "commonkey",
-            "log_level": "DEBUG",
-            "auth_backend": "internal",
-            "site_url": "",
-        }
-        relation_state = {
-            "message_host": "kafka",
-            "message_port": 9090,
-            "database_uri": "mongodb://mongo",
-            "prometheus_host": "prometheus",
-            "prometheus_port": 9082,
-        }
-        app_name = "nbi"
-        port = 9999
-
-        spec = pod_spec.make_pod_spec(
-            image_info, config, relation_state, app_name, port
-        )
-
-        self.assertIsNone(spec)
-
-    def test_make_pod_spec_without_config(self) -> NoReturn:
-        """Testing make pod spec without config."""
-        image_info = {"upstream-source": "opensourcemano/nbi:8"}
-        config = {}
-        relation_state = {
-            "message_host": "kafka",
-            "message_port": 9090,
-            "database_uri": "mongodb://mongo",
-            "prometheus_host": "prometheus",
-            "prometheus_port": 9082,
-        }
-        app_name = "nbi"
-        port = 9999
-
-        with self.assertRaises(ValueError):
-            pod_spec.make_pod_spec(image_info, config, relation_state, app_name, port)
-
-    def test_make_pod_spec_without_relation_state(self) -> NoReturn:
-        """Testing make pod spec without relation_state."""
-        image_info = {"upstream-source": "opensourcemano/nbi:8"}
-        config = {
-            "enable_test": False,
-            "database_commonkey": "commonkey",
-            "log_level": "DEBUG",
-            "auth_backend": "internal",
-            "site_url": "",
-        }
-        relation_state = {}
-        app_name = "nbi"
-        port = 9999
-
-        with self.assertRaises(ValueError):
-            pod_spec.make_pod_spec(image_info, config, relation_state, app_name, port)
-
-    def test_make_pod_spec(self) -> NoReturn:
-        """Testing make pod spec."""
-        image_info = {"upstream-source": "opensourcemano/nbi:8"}
-        config = {
-            "enable_test": False,
-            "database_commonkey": "commonkey",
-            "log_level": "DEBUG",
-            "auth_backend": "internal",
-            "site_url": "",
-        }
-        relation_state = {
-            "message_host": "kafka",
-            "message_port": 9090,
-            "database_uri": "mongodb://mongo",
-            "prometheus_host": "prometheus",
-            "prometheus_port": 9082,
-        }
-        app_name = "nbi"
-        port = 9999
-
-        expected_result = {
-            "version": 3,
-            "containers": [
-                {
-                    "name": app_name,
-                    "imageDetails": image_info,
-                    "imagePullPolicy": "Always",
-                    "ports": [
-                        {
-                            "name": "nbi",
-                            "containerPort": port,
-                            "protocol": "TCP",
-                        }
-                    ],
-                    "envConfig": {
-                        "ALLOW_ANONYMOUS_LOGIN": "yes",
-                        "OSMNBI_SERVER_ENABLE_TEST": config["enable_test"],
-                        "OSMNBI_STATIC_DIR": "/app/osm_nbi/html_public",
-                        "OSMNBI_MESSAGE_HOST": relation_state["message_host"],
-                        "OSMNBI_MESSAGE_DRIVER": "kafka",
-                        "OSMNBI_MESSAGE_PORT": relation_state["message_port"],
-                        "OSMNBI_DATABASE_DRIVER": "mongo",
-                        "OSMNBI_DATABASE_URI": relation_state["database_uri"],
-                        "OSMNBI_DATABASE_COMMONKEY": config["database_commonkey"],
-                        "OSMNBI_STORAGE_DRIVER": "mongo",
-                        "OSMNBI_STORAGE_PATH": "/app/storage",
-                        "OSMNBI_STORAGE_COLLECTION": "files",
-                        "OSMNBI_STORAGE_URI": relation_state["database_uri"],
-                        "OSMNBI_PROMETHEUS_HOST": relation_state["prometheus_host"],
-                        "OSMNBI_PROMETHEUS_PORT": relation_state["prometheus_port"],
-                        "OSMNBI_LOG_LEVEL": config["log_level"],
-                        "OSMNBI_AUTHENTICATION_BACKEND": config["auth_backend"],
-                    },
-                }
-            ],
-            "kubernetesResources": {
-                "ingressResources": [],
-            },
-        }
-
-        spec = pod_spec.make_pod_spec(
-            image_info, config, relation_state, app_name, port
-        )
-
-        self.assertDictEqual(expected_result, spec)
-
-    def test_make_pod_spec_with_keystone(self) -> NoReturn:
-        """Testing make pod spec with keystone."""
-        image_info = {"upstream-source": "opensourcemano/nbi:8"}
-        config = {
-            "enable_test": False,
-            "database_commonkey": "commonkey",
-            "log_level": "DEBUG",
-            "auth_backend": "keystone",
-            "site_url": "",
-        }
-        relation_state = {
-            "message_host": "kafka",
-            "message_port": 9090,
-            "database_uri": "mongodb://mongo",
-            "prometheus_host": "prometheus",
-            "prometheus_port": 9082,
-            "keystone_host": "keystone",
-            "keystone_port": 5000,
-            "keystone_user_domain_name": "user_domain",
-            "keystone_project_domain_name": "project_domain",
-            "keystone_username": "username",
-            "keystone_password": "password",
-            "keystone_service": "service",
-        }
-        app_name = "nbi"
-        port = 9999
-
-        expected_result = {
-            "version": 3,
-            "containers": [
-                {
-                    "name": app_name,
-                    "imageDetails": image_info,
-                    "imagePullPolicy": "Always",
-                    "ports": [
-                        {
-                            "name": "nbi",
-                            "containerPort": port,
-                            "protocol": "TCP",
-                        }
-                    ],
-                    "envConfig": {
-                        "ALLOW_ANONYMOUS_LOGIN": "yes",
-                        "OSMNBI_SERVER_ENABLE_TEST": config["enable_test"],
-                        "OSMNBI_STATIC_DIR": "/app/osm_nbi/html_public",
-                        "OSMNBI_MESSAGE_HOST": relation_state["message_host"],
-                        "OSMNBI_MESSAGE_DRIVER": "kafka",
-                        "OSMNBI_MESSAGE_PORT": relation_state["message_port"],
-                        "OSMNBI_DATABASE_DRIVER": "mongo",
-                        "OSMNBI_DATABASE_URI": relation_state["database_uri"],
-                        "OSMNBI_DATABASE_COMMONKEY": config["database_commonkey"],
-                        "OSMNBI_STORAGE_DRIVER": "mongo",
-                        "OSMNBI_STORAGE_PATH": "/app/storage",
-                        "OSMNBI_STORAGE_COLLECTION": "files",
-                        "OSMNBI_STORAGE_URI": relation_state["database_uri"],
-                        "OSMNBI_PROMETHEUS_HOST": relation_state["prometheus_host"],
-                        "OSMNBI_PROMETHEUS_PORT": relation_state["prometheus_port"],
-                        "OSMNBI_LOG_LEVEL": config["log_level"],
-                        "OSMNBI_AUTHENTICATION_BACKEND": config["auth_backend"],
-                        "OSMNBI_AUTHENTICATION_AUTH_URL": relation_state[
-                            "keystone_host"
-                        ],
-                        "OSMNBI_AUTHENTICATION_AUTH_PORT": relation_state[
-                            "keystone_port"
-                        ],
-                        "OSMNBI_AUTHENTICATION_USER_DOMAIN_NAME": relation_state[
-                            "keystone_user_domain_name"
-                        ],
-                        "OSMNBI_AUTHENTICATION_PROJECT_DOMAIN_NAME": relation_state[
-                            "keystone_project_domain_name"
-                        ],
-                        "OSMNBI_AUTHENTICATION_SERVICE_USERNAME": relation_state[
-                            "keystone_username"
-                        ],
-                        "OSMNBI_AUTHENTICATION_SERVICE_PASSWORD": relation_state[
-                            "keystone_password"
-                        ],
-                        "OSMNBI_AUTHENTICATION_SERVICE_PROJECT": relation_state[
-                            "keystone_service"
-                        ],
-                    },
-                }
-            ],
-            "kubernetesResources": {
-                "ingressResources": [],
-            },
-        }
-
-        spec = pod_spec.make_pod_spec(
-            image_info, config, relation_state, app_name, port
-        )
-
-        self.assertDictEqual(expected_result, spec)
-
-
-if __name__ == "__main__":
-    unittest.main()
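The deleted tests above assert equality between expected dicts and the plain-dict probes and pod specs returned by the `pod_spec` helpers. A minimal self-contained sketch of that pattern (using a hypothetical `make_readiness_probe` stand-in, not the removed module itself) looks like:

```python
# Sketch of the dict-based pod-spec test pattern removed above.
# `make_readiness_probe` is a hypothetical stand-in for the pod_spec helper.

def make_readiness_probe(port: int) -> dict:
    """Build a Kubernetes-style httpGet readiness probe dict."""
    return {
        "httpGet": {"path": "/osm/", "port": port},
        "initialDelaySeconds": 45,
        "timeoutSeconds": 5,
    }

# The tests compare the whole structure, so a change to any key fails fast.
probe = make_readiness_probe(9999)
assert probe["httpGet"]["port"] == 9999
assert probe["initialDelaySeconds"] == 45
```

Because the helpers return plain dicts, the tests need no Kubernetes cluster; `assertDictEqual` diffs the full structure on failure.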
diff --git a/installers/charm/nbi/tox.ini b/installers/charm/nbi/tox.ini
deleted file mode 100644 (file)
index f3c9144..0000000
+++ /dev/null
@@ -1,128 +0,0 @@
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-#######################################################################################
-
-[tox]
-envlist = black, cover, flake8, pylint, yamllint, safety
-skipsdist = true
-
-[tox:jenkins]
-toxworkdir = /tmp/.tox
-
-[testenv]
-basepython = python3.8
-setenv =
-  VIRTUAL_ENV={envdir}
-  PYTHONPATH = {toxinidir}:{toxinidir}/lib:{toxinidir}/src
-  PYTHONDONTWRITEBYTECODE = 1
-deps =  -r{toxinidir}/requirements.txt
-
-
-#######################################################################################
-[testenv:black]
-deps = black
-commands =
-        black --check --diff src/ tests/
-
-
-#######################################################################################
-[testenv:cover]
-deps =  {[testenv]deps}
-        -r{toxinidir}/requirements-test.txt
-        coverage
-        nose2
-commands =
-        sh -c 'rm -f nosetests.xml'
-        coverage erase
-        nose2 -C --coverage src
-        coverage report --omit='*tests*'
-        coverage html -d ./cover --omit='*tests*'
-        coverage xml -o coverage.xml --omit=*tests*
-whitelist_externals = sh
-
-
-#######################################################################################
-[testenv:flake8]
-deps =  flake8
-        flake8-import-order
-commands =
-        flake8 src/ tests/
-
-
-#######################################################################################
-[testenv:pylint]
-deps =  {[testenv]deps}
-        -r{toxinidir}/requirements-test.txt
-        pylint==2.10.2
-commands =
-    pylint -E src/ tests/
-
-
-#######################################################################################
-[testenv:safety]
-setenv =
-        LC_ALL=C.UTF-8
-        LANG=C.UTF-8
-deps =  {[testenv]deps}
-        safety
-commands =
-        - safety check --full-report
-
-
-#######################################################################################
-[testenv:yamllint]
-deps =  {[testenv]deps}
-        -r{toxinidir}/requirements-test.txt
-        yamllint
-commands = yamllint .
-
-#######################################################################################
-[testenv:build]
-passenv=HTTP_PROXY HTTPS_PROXY NO_PROXY
-whitelist_externals =
-  charmcraft
-  sh
-commands =
-  charmcraft pack
-  sh -c 'ubuntu_version=20.04; \
-        architectures="amd64-aarch64-arm64"; \
-        charm_name=`cat metadata.yaml | grep -E "^name: " | cut -f 2 -d " "`; \
-        mv $charm_name"_ubuntu-"$ubuntu_version-$architectures.charm $charm_name.charm'
-
-#######################################################################################
-[flake8]
-ignore =
-        W291,
-        W293,
-        W503,
-        E123,
-        E125,
-        E226,
-        E241,
-exclude =
-        .git,
-        __pycache__,
-        .tox,
-max-line-length = 120
-show-source = True
-builtins = _
-max-complexity = 10
-import-order-style = google
diff --git a/installers/charm/ng-ui/.gitignore b/installers/charm/ng-ui/.gitignore
deleted file mode 100644 (file)
index 493739e..0000000
+++ /dev/null
@@ -1,30 +0,0 @@
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-venv
-.vscode
-build
-*.charm
-.coverage
-coverage.xml
-.stestr
-cover
-release
diff --git a/installers/charm/ng-ui/.jujuignore b/installers/charm/ng-ui/.jujuignore
deleted file mode 100644 (file)
index 3ae3e7d..0000000
+++ /dev/null
@@ -1,34 +0,0 @@
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-venv
-.vscode
-build
-*.charm
-.coverage
-coverage.xml
-.gitignore
-.stestr
-cover
-release
-tests/
-requirements*
-tox.ini
diff --git a/installers/charm/ng-ui/.yamllint.yaml b/installers/charm/ng-ui/.yamllint.yaml
deleted file mode 100644 (file)
index d71fb69..0000000
+++ /dev/null
@@ -1,34 +0,0 @@
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
----
-extends: default
-
-yaml-files:
-  - "*.yaml"
-  - "*.yml"
-  - ".yamllint"
-ignore: |
-  .tox
-  cover/
-  build/
-  venv
-  release/
diff --git a/installers/charm/ng-ui/README.md b/installers/charm/ng-ui/README.md
deleted file mode 100644 (file)
index 9b77b5d..0000000
+++ /dev/null
@@ -1,47 +0,0 @@
-<!-- #   Copyright 2020 Canonical Ltd.
-#
-#   Licensed under the Apache License, Version 2.0 (the "License");
-#   you may not use this file except in compliance with the License.
-#   You may obtain a copy of the License at
-#
-#       http://www.apache.org/licenses/LICENSE-2.0
-#
-#   Unless required by applicable law or agreed to in writing, software
-#   distributed under the License is distributed on an "AS IS" BASIS,
-#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-#   See the License for the specific language governing permissions and
-#   limitations under the License. -->
-
-# NG-UI Charm
-
-## How to deploy
-
-```bash
-juju deploy . # cs:~charmed-osm/ng-ui --channel edge
-juju relate ng-ui nbi
-```
-
-## How to expose the NG-UI through ingress
-
-```bash
-juju config ng-ui site_url=ng.<k8s_worker_ip>.xip.io
-juju expose ng-ui
-```
-
-> Note: The <k8s_worker_ip> is the IP of the K8s worker node. With microk8s, you can see the IP with `microk8s.config`. It is usually the IP of your host machine.
-
-## How to scale
-
-```bash
-juju scale-application ng-ui 3
-```
-
-
-## Config Examples
-
-```bash
-juju config ng-ui image=opensourcemano/ng-ui:<tag>
-juju config ng-ui port=80
-juju config ng-ui server_name=<name>
-juju config ng-ui max_file_size=25
-```
diff --git a/installers/charm/ng-ui/charmcraft.yaml b/installers/charm/ng-ui/charmcraft.yaml
deleted file mode 100644 (file)
index 0a285a9..0000000
+++ /dev/null
@@ -1,37 +0,0 @@
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-type: charm
-bases:
-  - build-on:
-      - name: ubuntu
-        channel: "20.04"
-        architectures: ["amd64"]
-    run-on:
-      - name: ubuntu
-        channel: "20.04"
-        architectures:
-          - amd64
-          - aarch64
-          - arm64
-parts:
-  charm:
-    build-packages: [git]
diff --git a/installers/charm/ng-ui/config.yaml b/installers/charm/ng-ui/config.yaml
deleted file mode 100644 (file)
index c5f447b..0000000
+++ /dev/null
@@ -1,66 +0,0 @@
-# -*- coding: utf-8 -*-
-
-# Copyright 2020 Arctos Labs Scandinavia AB
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-# implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-options:
-  server_name:
-    description: Server name
-    type: string
-    default: localhost
-  port:
-    description: Port to expose
-    type: int
-    default: 80
-  max_file_size:
-    type: int
-    description: |
-      The maximum file size, in megabytes. If there is a reverse proxy in front
-      of the NG-UI, it may need to be configured to handle the requested size.
-      Note: if set to 0, there is no limit.
-    default: 0
-  ingress_class:
-    type: string
-    description: |
-      Ingress class name. This is useful for selecting the ingress to be used
-      in case there are multiple ingresses in the underlying k8s clusters.
-  ingress_whitelist_source_range:
-    type: string
-    description: |
-      A comma-separated list of CIDRs to store in the
-      ingress.kubernetes.io/whitelist-source-range annotation.
-    default: ""
-  tls_secret_name:
-    type: string
-    description: TLS Secret name
-    default: ""
-  site_url:
-    type: string
-    description: Ingress URL
-    default: ""
-  cluster_issuer:
-    type: string
-    description: Name of the cluster issuer for TLS certificates
-    default: ""
-  image_pull_policy:
-    type: string
-    description: |
-      ImagePullPolicy configuration for the pod.
-      Possible values: always, ifnotpresent, never
-    default: always
-  security_context:
-    description: Enables the security context of the pods
-    type: boolean
-    default: false
diff --git a/installers/charm/ng-ui/metadata.yaml b/installers/charm/ng-ui/metadata.yaml
deleted file mode 100644 (file)
index 60643b5..0000000
+++ /dev/null
@@ -1,32 +0,0 @@
-#   Copyright 2020 Canonical Ltd.
-#
-#   Licensed under the Apache License, Version 2.0 (the "License");
-#   you may not use this file except in compliance with the License.
-#   You may obtain a copy of the License at
-#
-#       http://www.apache.org/licenses/LICENSE-2.0
-#
-#   Unless required by applicable law or agreed to in writing, software
-#   distributed under the License is distributed on an "AS IS" BASIS,
-#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-#   See the License for the specific language governing permissions and
-#   limitations under the License.
-
-name: osm-ng-ui
-summary: A Next Generation UI charm for Open Source MANO
-description: |
-  New UI for OSM
-series:
-  - kubernetes
-min-juju-version: 2.7.0
-deployment:
-  type: stateless
-  service: cluster
-requires:
-  nbi:
-    interface: http
-resources:
-  image:
-    type: oci-image
-    description: OSM docker image for NG-UI
-    upstream-source: "opensourcemano/ng-ui:latest"
diff --git a/installers/charm/ng-ui/requirements-test.txt b/installers/charm/ng-ui/requirements-test.txt
deleted file mode 100644 (file)
index cf61dd4..0000000
+++ /dev/null
@@ -1,20 +0,0 @@
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-mock==4.0.3
diff --git a/installers/charm/ng-ui/requirements.txt b/installers/charm/ng-ui/requirements.txt
deleted file mode 100644 (file)
index 10ade5d..0000000
+++ /dev/null
@@ -1,23 +0,0 @@
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-pydantic  # TODO: remove it
-git+https://github.com/charmed-osm/ops-lib-charmed-osm/@master
\ No newline at end of file
diff --git a/installers/charm/ng-ui/src/charm.py b/installers/charm/ng-ui/src/charm.py
deleted file mode 100755 (executable)
index 39675d0..0000000
+++ /dev/null
@@ -1,205 +0,0 @@
-#!/usr/bin/env python3
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-# pylint: disable=E0213
-
-
-from ipaddress import ip_network
-import logging
-from pathlib import Path
-from string import Template
-from typing import NoReturn, Optional
-from urllib.parse import urlparse
-
-from ops.main import main
-from opslib.osm.charm import CharmedOsmBase, RelationsMissing
-from opslib.osm.interfaces.http import HttpClient
-from opslib.osm.pod import (
-    ContainerV3Builder,
-    FilesV3Builder,
-    IngressResourceV3Builder,
-    PodSpecV3Builder,
-)
-from opslib.osm.validator import ModelValidator, validator
-
-
-logger = logging.getLogger(__name__)
-
-
-class ConfigModel(ModelValidator):
-    port: int
-    server_name: str
-    max_file_size: int
-    site_url: Optional[str]
-    cluster_issuer: Optional[str]
-    ingress_class: Optional[str]
-    ingress_whitelist_source_range: Optional[str]
-    tls_secret_name: Optional[str]
-    image_pull_policy: str
-    security_context: bool
-
-    @validator("port")
-    def validate_port(cls, v):
-        if v <= 0:
-            raise ValueError("value must be greater than 0")
-        return v
-
-    @validator("max_file_size")
-    def validate_max_file_size(cls, v):
-        if v < 0:
-            raise ValueError("value must be equal or greater than 0")
-        return v
-
-    @validator("site_url")
-    def validate_site_url(cls, v):
-        if v:
-            parsed = urlparse(v)
-            if not parsed.scheme.startswith("http"):
-                raise ValueError("value must start with http")
-        return v
-
-    @validator("ingress_whitelist_source_range")
-    def validate_ingress_whitelist_source_range(cls, v):
-        if v:
-            ip_network(v)
-        return v
-
-    @validator("image_pull_policy")
-    def validate_image_pull_policy(cls, v):
-        values = {
-            "always": "Always",
-            "ifnotpresent": "IfNotPresent",
-            "never": "Never",
-        }
-        v = v.lower()
-        if v not in values.keys():
-            raise ValueError("value must be always, ifnotpresent or never")
-        return values[v]
-
-
-class NgUiCharm(CharmedOsmBase):
-    def __init__(self, *args) -> NoReturn:
-        super().__init__(*args, oci_image="image")
-
-        self.nbi_client = HttpClient(self, "nbi")
-        self.framework.observe(self.on["nbi"].relation_changed, self.configure_pod)
-        self.framework.observe(self.on["nbi"].relation_broken, self.configure_pod)
-
-    def _check_missing_dependencies(self, config: ConfigModel):
-        missing_relations = []
-
-        if self.nbi_client.is_missing_data_in_app():
-            missing_relations.append("nbi")
-
-        if missing_relations:
-            raise RelationsMissing(missing_relations)
-
-    def _build_files(self, config: ConfigModel):
-        files_builder = FilesV3Builder()
-        files_builder.add_file(
-            "default",
-            Template(Path("templates/default.template").read_text()).substitute(
-                port=config.port,
-                server_name=config.server_name,
-                max_file_size=config.max_file_size,
-                nbi_host=self.nbi_client.host,
-                nbi_port=self.nbi_client.port,
-            ),
-        )
-        return files_builder.build()
-
-    def build_pod_spec(self, image_info):
-        # Validate config
-        config = ConfigModel(**dict(self.config))
-        # Check relations
-        self._check_missing_dependencies(config)
-        # Create Builder for the PodSpec
-        pod_spec_builder = PodSpecV3Builder(
-            enable_security_context=config.security_context
-        )
-        # Build Container
-        container_builder = ContainerV3Builder(
-            self.app.name,
-            image_info,
-            config.image_pull_policy,
-            run_as_non_root=config.security_context,
-        )
-        container_builder.add_port(name=self.app.name, port=config.port)
-        container_builder.add_tcpsocket_readiness_probe(
-            config.port,
-            initial_delay_seconds=45,
-            timeout_seconds=5,
-        )
-        container_builder.add_tcpsocket_liveness_probe(
-            config.port,
-            initial_delay_seconds=45,
-            timeout_seconds=15,
-        )
-        container_builder.add_volume_config(
-            "configuration",
-            "/etc/nginx/sites-available/",
-            self._build_files(config),
-        )
-        container = container_builder.build()
-        # Add container to pod spec
-        pod_spec_builder.add_container(container)
-        # Add ingress resources to pod spec if site url exists
-        if config.site_url:
-            parsed = urlparse(config.site_url)
-            annotations = {
-                "nginx.ingress.kubernetes.io/proxy-body-size": "{}".format(
-                    str(config.max_file_size) + "m"
-                    if config.max_file_size > 0
-                    else config.max_file_size
-                )
-            }
-            if config.ingress_class:
-                annotations["kubernetes.io/ingress.class"] = config.ingress_class
-
-            if config.ingress_whitelist_source_range:
-                annotations[
-                    "nginx.ingress.kubernetes.io/whitelist-source-range"
-                ] = config.ingress_whitelist_source_range
-
-            if config.cluster_issuer:
-                annotations["cert-manager.io/cluster-issuer"] = config.cluster_issuer
-
-            ingress_resource_builder = IngressResourceV3Builder(
-                f"{self.app.name}-ingress", annotations
-            )
-            if parsed.scheme == "https":
-                ingress_resource_builder.add_tls(
-                    [parsed.hostname], config.tls_secret_name
-                )
-            else:
-                annotations["nginx.ingress.kubernetes.io/ssl-redirect"] = "false"
-
-            ingress_resource_builder.add_rule(
-                parsed.hostname, self.app.name, config.port
-            )
-            ingress_resource = ingress_resource_builder.build()
-            pod_spec_builder.add_ingress_resource(ingress_resource)
-        return pod_spec_builder.build()
-
-
-if __name__ == "__main__":
-    main(NgUiCharm)
diff --git a/installers/charm/ng-ui/src/pod_spec.py b/installers/charm/ng-ui/src/pod_spec.py
deleted file mode 100644 (file)
index 95d5f72..0000000
+++ /dev/null
@@ -1,299 +0,0 @@
-#!/usr/bin/env python3
-# Copyright 2020 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-# pylint: disable=E0213,E0611
-
-
-import logging
-from pydantic import (
-    BaseModel,
-    conint,
-    IPvAnyNetwork,
-    PositiveInt,
-    validator,
-)
-from typing import Any, Dict, List, Optional
-from urllib.parse import urlparse
-from pathlib import Path
-from string import Template
-
-logger = logging.getLogger(__name__)
-
-
-class ConfigData(BaseModel):
-    """Configuration data model."""
-
-    port: PositiveInt
-    site_url: Optional[str]
-    max_file_size: Optional[conint(ge=0)]
-    ingress_whitelist_source_range: Optional[IPvAnyNetwork]
-    tls_secret_name: Optional[str]
-
-    @validator("max_file_size", pre=True, always=True)
-    def validate_max_file_size(cls, value, values, **kwargs):
-        site_url = values.get("site_url")
-
-        if not site_url:
-            return value
-
-        parsed = urlparse(site_url)
-
-        if not parsed.scheme.startswith("http"):
-            return value
-
-        if value is None:
-            raise ValueError("max_file_size needs to be defined if site_url is defined")
-
-        return value
-
-    @validator("ingress_whitelist_source_range", pre=True, always=True)
-    def validate_ingress_whitelist_source_range(cls, value, values, **kwargs):
-        if not value:
-            return None
-
-        return value
-
-
-class RelationData(BaseModel):
-    """Relation data model."""
-
-    nbi_host: str
-    nbi_port: PositiveInt
-
-
-def _make_pod_ports(port: int) -> List[Dict[str, Any]]:
-    """Generate pod ports details.
-
-    Args:
-        port (int): Port to expose.
-
-    Returns:
-        List[Dict[str, Any]]: pod port details.
-    """
-    return [
-        {"name": "http", "containerPort": port, "protocol": "TCP"},
-    ]
-
-
-def _make_pod_ingress_resources(
-    config: Dict[str, Any], app_name: str, port: int
-) -> List[Dict[str, Any]]:
-    """Generate pod ingress resources.
-
-    Args:
-        config (Dict[str, Any]): configuration information.
-        app_name (str): application name.
-        port (int): port to expose.
-
-    Returns:
-        List[Dict[str, Any]]: pod ingress resources.
-    """
-    site_url = config.get("site_url")
-
-    if not site_url:
-        return
-
-    parsed = urlparse(site_url)
-
-    if not parsed.scheme.startswith("http"):
-        return
-
-    max_file_size = config["max_file_size"]
-    ingress_whitelist_source_range = config["ingress_whitelist_source_range"]
-
-    annotations = {
-        "nginx.ingress.kubernetes.io/proxy-body-size": "{}".format(
-            str(max_file_size) + "m" if max_file_size > 0 else max_file_size
-        ),
-    }
-
-    if ingress_whitelist_source_range:
-        annotations[
-            "nginx.ingress.kubernetes.io/whitelist-source-range"
-        ] = ingress_whitelist_source_range
-
-    ingress_spec_tls = None
-
-    if parsed.scheme == "https":
-        ingress_spec_tls = [{"hosts": [parsed.hostname]}]
-        tls_secret_name = config["tls_secret_name"]
-        if tls_secret_name:
-            ingress_spec_tls[0]["secretName"] = tls_secret_name
-    else:
-        annotations["nginx.ingress.kubernetes.io/ssl-redirect"] = "false"
-
-    ingress = {
-        "name": "{}-ingress".format(app_name),
-        "annotations": annotations,
-        "spec": {
-            "rules": [
-                {
-                    "host": parsed.hostname,
-                    "http": {
-                        "paths": [
-                            {
-                                "path": "/",
-                                "backend": {
-                                    "serviceName": app_name,
-                                    "servicePort": port,
-                                },
-                            }
-                        ]
-                    },
-                }
-            ]
-        },
-    }
-    if ingress_spec_tls:
-        ingress["spec"]["tls"] = ingress_spec_tls
-
-    return [ingress]
-
-
-def _make_startup_probe() -> Dict[str, Any]:
-    """Generate startup probe.
-
-    Returns:
-        Dict[str, Any]: startup probe.
-    """
-    return {
-        "exec": {"command": ["/usr/bin/pgrep python3"]},
-        "initialDelaySeconds": 60,
-        "timeoutSeconds": 5,
-    }
-
-
-def _make_readiness_probe(port: int) -> Dict[str, Any]:
-    """Generate readiness probe.
-
-    Args:
-        port (int): [description]
-
-    Returns:
-        Dict[str, Any]: readiness probe.
-    """
-    return {
-        "tcpSocket": {
-            "port": port,
-        },
-        "initialDelaySeconds": 45,
-        "timeoutSeconds": 5,
-    }
-
-
-def _make_liveness_probe(port: int) -> Dict[str, Any]:
-    """Generate liveness probe.
-
-    Args:
-        port (int): [description]
-
-    Returns:
-        Dict[str, Any]: liveness probe.
-    """
-    return {
-        "tcpSocket": {
-            "port": port,
-        },
-        "initialDelaySeconds": 45,
-        "timeoutSeconds": 5,
-    }
-
-
-def _make_pod_volume_config(
-    config: Dict[str, Any],
-    relation_state: Dict[str, Any],
-) -> List[Dict[str, Any]]:
-    """Generate volume config with files.
-
-    Args:
-        config (Dict[str, Any]): configuration information.
-
-    Returns:
-        Dict[str, Any]: volume config.
-    """
-    template_data = {**config, **relation_state}
-    template_data["max_file_size"] = f'{template_data["max_file_size"]}M'
-    return [
-        {
-            "name": "configuration",
-            "mountPath": "/etc/nginx/sites-available/",
-            "files": [
-                {
-                    "path": "default",
-                    "content": Template(Path("files/default").read_text()).substitute(
-                        template_data
-                    ),
-                }
-            ],
-        }
-    ]
-
-
-def make_pod_spec(
-    image_info: Dict[str, str],
-    config: Dict[str, Any],
-    relation_state: Dict[str, Any],
-    app_name: str = "ng-ui",
-) -> Dict[str, Any]:
-    """Generate the pod spec information.
-
-    Args:
-        image_info (Dict[str, str]): Object provided by
-                                     OCIImageResource("image").fetch().
-        config (Dict[str, Any]): Configuration information.
-        relation_state (Dict[str, Any]): Relation state information.
-        app_name (str, optional): Application name. Defaults to "ng-ui".
-        port (int, optional): Port for the container. Defaults to 80.
-
-    Returns:
-        Dict[str, Any]: Pod spec dictionary for the charm.
-    """
-    if not image_info:
-        return None
-
-    ConfigData(**(config))
-    RelationData(**(relation_state))
-
-    ports = _make_pod_ports(config["port"])
-    ingress_resources = _make_pod_ingress_resources(config, app_name, config["port"])
-    kubernetes = {
-        # "startupProbe": _make_startup_probe(),
-        "readinessProbe": _make_readiness_probe(config["port"]),
-        "livenessProbe": _make_liveness_probe(config["port"]),
-    }
-    volume_config = _make_pod_volume_config(config, relation_state)
-    return {
-        "version": 3,
-        "containers": [
-            {
-                "name": app_name,
-                "imageDetails": image_info,
-                "imagePullPolicy": "Always",
-                "ports": ports,
-                "kubernetes": kubernetes,
-                "volumeConfig": volume_config,
-            }
-        ],
-        "kubernetesResources": {
-            "ingressResources": ingress_resources or [],
-        },
-    }
diff --git a/installers/charm/ng-ui/templates/default.template b/installers/charm/ng-ui/templates/default.template
deleted file mode 100644 (file)
index f946263..0000000
+++ /dev/null
@@ -1,33 +0,0 @@
-#   Copyright 2020 Canonical Ltd.
-#
-#   Licensed under the Apache License, Version 2.0 (the "License");
-#   you may not use this file except in compliance with the License.
-#   You may obtain a copy of the License at
-#
-#       http://www.apache.org/licenses/LICENSE-2.0
-#
-#   Unless required by applicable law or agreed to in writing, software
-#   distributed under the License is distributed on an "AS IS" BASIS,
-#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-#   See the License for the specific language governing permissions and
-#   limitations under the License.
-
-
-
-server {
-    listen       $port;
-    server_name  $server_name;
-    root   /usr/share/nginx/html;
-    index  index.html index.htm;
-    client_max_body_size $max_file_size;
-
-    location /osm {
-        proxy_pass https://$nbi_host:$nbi_port;
-        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
-        proxy_set_header Accept-Encoding "";
-    }
-
-    location / {
-        try_files $$uri $$uri/ /index.html;
-    }
-}
diff --git a/installers/charm/ng-ui/tests/__init__.py b/installers/charm/ng-ui/tests/__init__.py
deleted file mode 100644 (file)
index 446d5ce..0000000
+++ /dev/null
@@ -1,40 +0,0 @@
-#!/usr/bin/env python3
-# Copyright 2020 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-"""Init mocking for unit tests."""
-
-import sys
-
-
-import mock
-
-
-class OCIImageResourceErrorMock(Exception):
-    pass
-
-
-sys.path.append("src")
-
-oci_image = mock.MagicMock()
-oci_image.OCIImageResourceError = OCIImageResourceErrorMock
-sys.modules["oci_image"] = oci_image
-sys.modules["oci_image"].OCIImageResource().fetch.return_value = {}
diff --git a/installers/charm/ng-ui/tests/test_charm.py b/installers/charm/ng-ui/tests/test_charm.py
deleted file mode 100644 (file)
index 2765e81..0000000
+++ /dev/null
@@ -1,97 +0,0 @@
-#!/usr/bin/env python3
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-import sys
-from typing import NoReturn
-import unittest
-
-from charm import NgUiCharm
-from ops.model import ActiveStatus, BlockedStatus
-from ops.testing import Harness
-
-
-class TestCharm(unittest.TestCase):
-    """Prometheus Charm unit tests."""
-
-    def setUp(self) -> NoReturn:
-        """Test setup"""
-        self.image_info = sys.modules["oci_image"].OCIImageResource().fetch()
-        self.harness = Harness(NgUiCharm)
-        self.harness.set_leader(is_leader=True)
-        self.harness.begin()
-        self.config = {
-            "server_name": "localhost",
-            "port": 80,
-            "max_file_size": 0,
-            "ingress_whitelist_source_range": "",
-            "tls_secret_name": "",
-            "site_url": "https://ui.192.168.100.100.nip.io",
-            "cluster_issuer": "vault-issuer",
-        }
-        self.harness.update_config(self.config)
-
-    def test_config_changed_no_relations(
-        self,
-    ) -> NoReturn:
-        """Test ingress resources without HTTP."""
-
-        self.harness.charm.on.config_changed.emit()
-
-        # Assertions
-        self.assertIsInstance(self.harness.charm.unit.status, BlockedStatus)
-        self.assertTrue(
-            all(
-                relation in self.harness.charm.unit.status.message
-                for relation in ["nbi"]
-            )
-        )
-
-    def test_config_changed_non_leader(
-        self,
-    ) -> NoReturn:
-        """Test ingress resources without HTTP."""
-        self.harness.set_leader(is_leader=False)
-        self.harness.charm.on.config_changed.emit()
-
-        # Assertions
-        self.assertIsInstance(self.harness.charm.unit.status, ActiveStatus)
-
-    def test_with_relations(
-        self,
-    ) -> NoReturn:
-        "Test with relations (internal)"
-        self.initialize_nbi_relation()
-        # Verifying status
-        self.assertNotIsInstance(self.harness.charm.unit.status, BlockedStatus)
-
-    def initialize_nbi_relation(self):
-        http_relation_id = self.harness.add_relation("nbi", "nbi")
-        self.harness.add_relation_unit(http_relation_id, "nbi")
-        self.harness.update_relation_data(
-            http_relation_id,
-            "nbi",
-            {"host": "nbi", "port": 9999},
-        )
-
-
-if __name__ == "__main__":
-    unittest.main()
diff --git a/installers/charm/ng-ui/tox.ini b/installers/charm/ng-ui/tox.ini
deleted file mode 100644 (file)
index 58e13a6..0000000
+++ /dev/null
@@ -1,126 +0,0 @@
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-#######################################################################################
-
-[tox]
-envlist = black, cover, flake8, pylint, yamllint, safety
-skipsdist = true
-
-[tox:jenkins]
-toxworkdir = /tmp/.tox
-
-[testenv]
-basepython = python3.8
-setenv = VIRTUAL_ENV={envdir}
-         PYTHONDONTWRITEBYTECODE = 1
-deps =  -r{toxinidir}/requirements.txt
-
-
-#######################################################################################
-[testenv:black]
-deps = black
-commands =
-        black --check --diff src/ tests/
-
-
-#######################################################################################
-[testenv:cover]
-deps =  {[testenv]deps}
-        -r{toxinidir}/requirements-test.txt
-        coverage
-        nose2
-commands =
-        sh -c 'rm -f nosetests.xml'
-        coverage erase
-        nose2 -C --coverage src
-        coverage report --omit='*tests*'
-        coverage html -d ./cover --omit='*tests*'
-        coverage xml -o coverage.xml --omit=*tests*
-whitelist_externals = sh
-
-
-#######################################################################################
-[testenv:flake8]
-deps =  flake8
-        flake8-import-order
-commands =
-        flake8 src/ tests/ --exclude=*pod_spec*
-
-
-#######################################################################################
-[testenv:pylint]
-deps =  {[testenv]deps}
-        -r{toxinidir}/requirements-test.txt
-        pylint==2.10.2
-commands =
-    pylint -E src/ tests/
-
-
-#######################################################################################
-[testenv:safety]
-setenv =
-        LC_ALL=C.UTF-8
-        LANG=C.UTF-8
-deps =  {[testenv]deps}
-        safety
-commands =
-        - safety check --full-report
-
-
-#######################################################################################
-[testenv:yamllint]
-deps =  {[testenv]deps}
-        -r{toxinidir}/requirements-test.txt
-        yamllint
-commands = yamllint .
-
-#######################################################################################
-[testenv:build]
-passenv=HTTP_PROXY HTTPS_PROXY NO_PROXY
-whitelist_externals =
-  charmcraft
-  sh
-commands =
-  charmcraft pack
-  sh -c 'ubuntu_version=20.04; \
-        architectures="amd64-aarch64-arm64"; \
-        charm_name=`cat metadata.yaml | grep -E "^name: " | cut -f 2 -d " "`; \
-        mv $charm_name"_ubuntu-"$ubuntu_version-$architectures.charm $charm_name.charm'
-
-#######################################################################################
-[flake8]
-ignore =
-        W291,
-        W293,
-        W503,
-        E123,
-        E125,
-        E226,
-        E241,
-exclude =
-        .git,
-        __pycache__,
-        .tox,
-max-line-length = 120
-show-source = True
-builtins = _
-max-complexity = 10
-import-order-style = google
index ac15a0e..e539f7b 100644 (file)
@@ -54,14 +54,14 @@ options:
     type: boolean
     description: |
       Great for OSM Developers! (Not recommended for production deployments)
-        
+
       This action activates the Debug Mode, which sets up the container to be ready for debugging.
       As part of the setup, SSH is enabled and a VSCode workspace file is automatically populated.
 
       After enabling the debug-mode, execute the following command to get the information you need
       to start debugging:
-        `juju run-action get-debug-mode-information <unit name> --wait`
-      
+        `juju run-action <unit name> get-debug-mode-information --wait`
+
       The previous command returns the command you need to execute, and the SSH password that was set.
 
       See also:
@@ -79,7 +79,7 @@ options:
         $ git clone "https://osm.etsi.org/gerrit/osm/LCM" /home/ubuntu/LCM
         $ juju config lcm lcm-hostpath=/home/ubuntu/LCM
 
-      This configuration only applies if option `debug-mode` is set to true. 
+      This configuration only applies if option `debug-mode` is set to true.
   n2vc-hostpath:
     type: string
     description: |
@@ -101,4 +101,4 @@ options:
         $ git clone "https://osm.etsi.org/gerrit/osm/common" /home/ubuntu/common
         $ juju config lcm common-hostpath=/home/ubuntu/common
 
-      This configuration only applies if option `debug-mode` is set to true. 
+      This configuration only applies if option `debug-mode` is set to true.
diff --git a/installers/charm/osm-lcm/lib/charms/data_platform_libs/v0/data_interfaces.py b/installers/charm/osm-lcm/lib/charms/data_platform_libs/v0/data_interfaces.py
new file mode 100644 (file)
index 0000000..b3da5aa
--- /dev/null
@@ -0,0 +1,1130 @@
+# Copyright 2023 Canonical Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Library to manage the relation for the data-platform products.
+
+This library contains the Requires and Provides classes for handling the relation
+between an application and the managed applications supported by the data team:
+MySQL, PostgreSQL, MongoDB, Redis, and Kafka.
+
+### Database (MySQL, Postgresql, MongoDB, and Redis)
+
+#### Requires Charm
+This library is a uniform interface to a selection of common database
+metadata, with added custom events that add convenience to database management,
+and methods to consume the application-related data.
+
+
+The following is an example of using the DatabaseCreatedEvent in the context of
+the application charm code:
+
+```python
+
+from charms.data_platform_libs.v0.data_interfaces import (
+    DatabaseCreatedEvent,
+    DatabaseRequires,
+)
+
+class ApplicationCharm(CharmBase):
+    # Application charm that connects to database charms.
+
+    def __init__(self, *args):
+        super().__init__(*args)
+
+        # Charm events defined in the database requires charm library.
+        self.database = DatabaseRequires(self, relation_name="database", database_name="database")
+        self.framework.observe(self.database.on.database_created, self._on_database_created)
+
+    def _on_database_created(self, event: DatabaseCreatedEvent) -> None:
+        # Handle the created database
+
+        # Create configuration file for app
+        config_file = self._render_app_config_file(
+            event.username,
+            event.password,
+            event.endpoints,
+        )
+
+        # Start application with rendered configuration
+        self._start_application(config_file)
+
+        # Set active status
+        self.unit.status = ActiveStatus("received database credentials")
+```
+
+As shown above, the library provides some custom events to handle specific situations,
+which are listed below:
+
+-  database_created: event emitted when the requested database is created.
+-  endpoints_changed: event emitted when the read/write endpoints of the database have changed.
+-  read_only_endpoints_changed: event emitted when the read-only endpoints of the database
+  have changed. Event is not triggered if read/write endpoints changed too.
+
+If multiple database clusters need to be connected to the same relation endpoint,
+the application charm can implement the same code as if it were connecting to a
+single database cluster (as in the example above).
+
+To differentiate between multiple clusters connected to the same relation endpoint,
+the application charm can use the name of the remote application:
+
+```python
+
+def _on_database_created(self, event: DatabaseCreatedEvent) -> None:
+    # Get the remote app name of the cluster that triggered this event
+    cluster = event.relation.app.name
+```
+
+It is also possible to provide an alias for each different database cluster/relation.
+
+So, it is possible to differentiate the clusters in two ways.
+The first is to use the remote application name, i.e., `event.relation.app.name`, as above.
+
+The second way is to use a different event handler for each cluster's events.
+The implementation would look something like the following:
+
+```python
+
+from charms.data_platform_libs.v0.data_interfaces import (
+    DatabaseCreatedEvent,
+    DatabaseRequires,
+)
+
+class ApplicationCharm(CharmBase):
+    # Application charm that connects to database charms.
+
+    def __init__(self, *args):
+        super().__init__(*args)
+
+        # Define the cluster aliases and one handler for each cluster database created event.
+        self.database = DatabaseRequires(
+            self,
+            relation_name="database",
+            database_name="database",
+            relations_aliases = ["cluster1", "cluster2"],
+        )
+        self.framework.observe(
+            self.database.on.cluster1_database_created, self._on_cluster1_database_created
+        )
+        self.framework.observe(
+            self.database.on.cluster2_database_created, self._on_cluster2_database_created
+        )
+
+    def _on_cluster1_database_created(self, event: DatabaseCreatedEvent) -> None:
+        # Handle the created database on the cluster named cluster1
+
+        # Create configuration file for app
+        config_file = self._render_app_config_file(
+            event.username,
+            event.password,
+            event.endpoints,
+        )
+        ...
+
+    def _on_cluster2_database_created(self, event: DatabaseCreatedEvent) -> None:
+        # Handle the created database on the cluster named cluster2
+
+        # Create configuration file for app
+        config_file = self._render_app_config_file(
+            event.username,
+            event.password,
+            event.endpoints,
+        )
+        ...
+
+```
+
+#### Provider Charm
+
+The following is an example of using the DatabaseRequestedEvent in the context of
+the database charm code:
+
+```python
+from charms.data_platform_libs.v0.data_interfaces import DatabaseProvides
+
+class SampleCharm(CharmBase):
+
+    def __init__(self, *args):
+        super().__init__(*args)
+        # Charm events defined in the database provides charm library.
+        self.provided_database = DatabaseProvides(self, relation_name="database")
+        self.framework.observe(self.provided_database.on.database_requested,
+            self._on_database_requested)
+        # Database generic helper
+        self.database = DatabaseHelper()
+
+    def _on_database_requested(self, event: DatabaseRequestedEvent) -> None:
+        # Handle the event triggered by a new database requested in the relation
+        # Retrieve the database name using the charm library.
+        db_name = event.database
+        # generate a new user credential
+        username = self.database.generate_user()
+        password = self.database.generate_password()
+        # set the credentials for the relation
+        self.provided_database.set_credentials(event.relation.id, username, password)
+        # set other variables for the relation event.set_tls("False")
+```
+As shown above, the library provides a custom event (database_requested) to handle
+the situation when an application charm requests a new database to be created.
+It is preferable to subscribe to this event instead of the relation changed event,
+to avoid creating a new database when information other than the database name is
+exchanged in the relation databag.
+
+### Kafka
+
+This library is the interface for using and interacting with the Kafka charm. It
+contains custom events that add convenience for managing Kafka, and provides
+methods to consume the application-related data.
+
+#### Requirer Charm
+
+```python
+
+from charms.data_platform_libs.v0.data_interfaces import (
+    BootstrapServerChangedEvent,
+    KafkaRequires,
+    TopicCreatedEvent,
+)
+
+class ApplicationCharm(CharmBase):
+
+    def __init__(self, *args):
+        super().__init__(*args)
+        self.kafka = KafkaRequires(self, "kafka_client", "test-topic")
+        self.framework.observe(
+            self.kafka.on.bootstrap_server_changed, self._on_kafka_bootstrap_server_changed
+        )
+        self.framework.observe(
+            self.kafka.on.topic_created, self._on_kafka_topic_created
+        )
+
+    def _on_kafka_bootstrap_server_changed(self, event: BootstrapServerChangedEvent):
+        # Event triggered when a bootstrap server was changed for this application
+
+        new_bootstrap_server = event.bootstrap_server
+        ...
+
+    def _on_kafka_topic_created(self, event: TopicCreatedEvent):
+        # Event triggered when a topic was created for this application
+        username = event.username
+        password = event.password
+        tls = event.tls
+        tls_ca = event.tls_ca
+        bootstrap_server = event.bootstrap_server
+        consumer_group_prefix = event.consumer_group_prefix
+        zookeeper_uris = event.zookeeper_uris
+        ...
+
+```
+
+As shown above, the library provides some custom events to handle specific situations,
+which are listed below:
+
+- topic_created: event emitted when the requested topic is created.
+- bootstrap_server_changed: event emitted when the bootstrap server has changed.
+- credential_changed: event emitted when the Kafka credentials have changed.
+
+#### Provider Charm
+
+Continuing the previous example, this is the corresponding provider charm:
+
+```python
+from charms.data_platform_libs.v0.data_interfaces import (
+    KafkaProvides,
+    TopicRequestedEvent,
+)
+
+class SampleCharm(CharmBase):
+
+    def __init__(self, *args):
+        super().__init__(*args)
+
+        # Default charm events.
+        self.framework.observe(self.on.start, self._on_start)
+
+        # Charm events defined in the Kafka Provides charm library.
+        self.kafka_provider = KafkaProvides(self, relation_name="kafka_client")
+        self.framework.observe(self.kafka_provider.on.topic_requested, self._on_topic_requested)
+        # Kafka generic helper
+        self.kafka = KafkaHelper()
+
+    def _on_topic_requested(self, event: TopicRequestedEvent):
+        # Handle the on_topic_requested event.
+
+        topic = event.topic
+        relation_id = event.relation.id
+        # set connection info in the databag relation
+        self.kafka_provider.set_bootstrap_server(relation_id, self.kafka.get_bootstrap_server())
+        self.kafka_provider.set_credentials(relation_id, username=username, password=password)
+        self.kafka_provider.set_consumer_group_prefix(relation_id, ...)
+        self.kafka_provider.set_tls(relation_id, "False")
+        self.kafka_provider.set_zookeeper_uris(relation_id, ...)
+
+```
+As shown above, the library provides a custom event (topic_requested) to handle
+the situation when an application charm requests a new topic to be created.
+It is preferable to subscribe to this event instead of the relation changed event,
+to avoid creating a new topic when information other than the topic name is
+exchanged in the relation databag.
+"""
+
+import json
+import logging
+from abc import ABC, abstractmethod
+from collections import namedtuple
+from datetime import datetime
+from typing import List, Optional
+
+from ops.charm import (
+    CharmBase,
+    CharmEvents,
+    RelationChangedEvent,
+    RelationEvent,
+    RelationJoinedEvent,
+)
+from ops.framework import EventSource, Object
+from ops.model import Relation
+
+# The unique Charmhub library identifier, never change it
+LIBID = "6c3e6b6680d64e9c89e611d1a15f65be"
+
+# Increment this major API version when introducing breaking changes
+LIBAPI = 0
+
+# Increment this PATCH version before using `charmcraft publish-lib` or reset
+# to 0 if you are raising the major API version
+LIBPATCH = 7
+
+PYDEPS = ["ops>=2.0.0"]
+
+logger = logging.getLogger(__name__)
+
+Diff = namedtuple("Diff", "added changed deleted")
+Diff.__doc__ = """
+A tuple for storing the diff between two data mappings.
+
+added - keys that were added
+changed - keys that still exist but have new values
+deleted - keys that were deleted"""
+
+
+def diff(event: RelationChangedEvent, bucket: str) -> Diff:
+    """Retrieves the diff of the data in the relation changed databag.
+
+    Args:
+        event: relation changed event.
+        bucket: bucket of the databag (app or unit)
+
+    Returns:
+        a Diff instance containing the added, deleted and changed
+            keys from the event relation databag.
+    """
+    # Retrieve the old data from the data key in the application relation databag.
+    old_data = json.loads(event.relation.data[bucket].get("data", "{}"))
+    # Retrieve the new data from the event relation databag.
+    new_data = {
+        key: value for key, value in event.relation.data[event.app].items() if key != "data"
+    }
+
+    # These are the keys that were added to the databag and triggered this event.
+    added = new_data.keys() - old_data.keys()
+    # These are the keys that were removed from the databag and triggered this event.
+    deleted = old_data.keys() - new_data.keys()
+    # These are the keys that already existed in the databag,
+    # but had their values changed.
+    changed = {key for key in old_data.keys() & new_data.keys() if old_data[key] != new_data[key]}
+    # Convert the new_data to a serializable format and save it for the next diff check.
+    event.relation.data[bucket].update({"data": json.dumps(new_data)})
+
+    # Return the diff with all possible changes.
+    return Diff(added, changed, deleted)
+
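The semantics of `diff()` can be illustrated with plain dictionaries standing in for the relation databags (a simplified sketch; the real function operates on ops relation data, and the helper name here is illustrative):

```python
import json


def databag_diff(bucket: dict, remote: dict):
    """Mimic diff(): compare the cached "data" snapshot against the remote databag."""
    old_data = json.loads(bucket.get("data", "{}"))
    new_data = {k: v for k, v in remote.items() if k != "data"}
    added = new_data.keys() - old_data.keys()
    deleted = old_data.keys() - new_data.keys()
    changed = {k for k in old_data.keys() & new_data.keys() if old_data[k] != new_data[k]}
    # Persist the new snapshot so the next event diffs against it.
    bucket["data"] = json.dumps(new_data)
    return added, changed, deleted


bucket = {"data": json.dumps({"database": "osm", "tls": "False"})}
remote = {"database": "osm", "tls": "True", "username": "admin"}
added, changed, deleted = databag_diff(bucket, remote)
```

Here `username` lands in `added`, `tls` in `changed`, and nothing in `deleted`; a second call with the same `remote` would report an empty diff because the snapshot was updated.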
+
+# Base DataProvides and DataRequires
+
+
+class DataProvides(Object, ABC):
+    """Base provides-side of the data products relation."""
+
+    def __init__(self, charm: CharmBase, relation_name: str) -> None:
+        super().__init__(charm, relation_name)
+        self.charm = charm
+        self.local_app = self.charm.model.app
+        self.local_unit = self.charm.unit
+        self.relation_name = relation_name
+        self.framework.observe(
+            charm.on[relation_name].relation_changed,
+            self._on_relation_changed,
+        )
+
+    def _diff(self, event: RelationChangedEvent) -> Diff:
+        """Retrieves the diff of the data in the relation changed databag.
+
+        Args:
+            event: relation changed event.
+
+        Returns:
+            a Diff instance containing the added, deleted and changed
+                keys from the event relation databag.
+        """
+        return diff(event, self.local_app)
+
+    @abstractmethod
+    def _on_relation_changed(self, event: RelationChangedEvent) -> None:
+        """Event emitted when the relation data has changed."""
+        raise NotImplementedError
+
+    def fetch_relation_data(self) -> dict:
+        """Retrieves data from relation.
+
+        This function can be used to retrieve data from a relation
+        in the charm code when outside an event callback.
+
+        Returns:
+            a dict of the values stored in the relation data bag
+                for all relation instances (indexed by the relation id).
+        """
+        data = {}
+        for relation in self.relations:
+            data[relation.id] = {
+                key: value for key, value in relation.data[relation.app].items() if key != "data"
+            }
+        return data
+
+    def _update_relation_data(self, relation_id: int, data: dict) -> None:
+        """Updates a set of key-value pairs in the relation.
+
+        This function writes in the application data bag, therefore,
+        only the leader unit can call it.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            data: dict containing the key-value pairs
+                that should be updated in the relation.
+        """
+        if self.local_unit.is_leader():
+            relation = self.charm.model.get_relation(self.relation_name, relation_id)
+            relation.data[self.local_app].update(data)
+
+    @property
+    def relations(self) -> List[Relation]:
+        """The list of Relation instances associated with this relation_name."""
+        return list(self.charm.model.relations[self.relation_name])
+
+    def set_credentials(self, relation_id: int, username: str, password: str) -> None:
+        """Set credentials.
+
+        This function writes in the application data bag, therefore,
+        only the leader unit can call it.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            username: user that was created.
+            password: password of the created user.
+        """
+        self._update_relation_data(
+            relation_id,
+            {
+                "username": username,
+                "password": password,
+            },
+        )
+
+    def set_tls(self, relation_id: int, tls: str) -> None:
+        """Set whether TLS is enabled.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            tls: whether tls is enabled (True or False).
+        """
+        self._update_relation_data(relation_id, {"tls": tls})
+
+    def set_tls_ca(self, relation_id: int, tls_ca: str) -> None:
+        """Set the TLS CA in the application relation databag.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            tls_ca: TLS certification authority.
+        """
+        self._update_relation_data(relation_id, {"tls_ca": tls_ca})
+
+
+class DataRequires(Object, ABC):
+    """Requires-side of the relation."""
+
+    def __init__(
+        self,
+        charm,
+        relation_name: str,
+        extra_user_roles: str = None,
+    ):
+        """Manager of base client relations."""
+        super().__init__(charm, relation_name)
+        self.charm = charm
+        self.extra_user_roles = extra_user_roles
+        self.local_app = self.charm.model.app
+        self.local_unit = self.charm.unit
+        self.relation_name = relation_name
+        self.framework.observe(
+            self.charm.on[relation_name].relation_joined, self._on_relation_joined_event
+        )
+        self.framework.observe(
+            self.charm.on[relation_name].relation_changed, self._on_relation_changed_event
+        )
+
+    @abstractmethod
+    def _on_relation_joined_event(self, event: RelationJoinedEvent) -> None:
+        """Event emitted when the application joins the relation."""
+        raise NotImplementedError
+
+    @abstractmethod
+    def _on_relation_changed_event(self, event: RelationChangedEvent) -> None:
+        raise NotImplementedError
+
+    def fetch_relation_data(self) -> dict:
+        """Retrieves data from relation.
+
+        This function can be used to retrieve data from a relation
+        in the charm code when outside an event callback.
+        Function cannot be used in `*-relation-broken` events and will raise an exception.
+
+        Returns:
+            a dict of the values stored in the relation data bag
+                for all relation instances (indexed by the relation ID).
+        """
+        data = {}
+        for relation in self.relations:
+            data[relation.id] = {
+                key: value for key, value in relation.data[relation.app].items() if key != "data"
+            }
+        return data
+
+    def _update_relation_data(self, relation_id: int, data: dict) -> None:
+        """Updates a set of key-value pairs in the relation.
+
+        This function writes in the application data bag, therefore,
+        only the leader unit can call it.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            data: dict containing the key-value pairs
+                that should be updated in the relation.
+        """
+        if self.local_unit.is_leader():
+            relation = self.charm.model.get_relation(self.relation_name, relation_id)
+            relation.data[self.local_app].update(data)
+
+    def _diff(self, event: RelationChangedEvent) -> Diff:
+        """Retrieves the diff of the data in the relation changed databag.
+
+        Args:
+            event: relation changed event.
+
+        Returns:
+            a Diff instance containing the added, deleted and changed
+                keys from the event relation databag.
+        """
+        return diff(event, self.local_unit)
+
+    @property
+    def relations(self) -> List[Relation]:
+        """The list of Relation instances associated with this relation_name."""
+        return [
+            relation
+            for relation in self.charm.model.relations[self.relation_name]
+            if self._is_relation_active(relation)
+        ]
+
+    @staticmethod
+    def _is_relation_active(relation: Relation):
+        try:
+            _ = repr(relation.data)
+            return True
+        except RuntimeError:
+            return False
+
+    @staticmethod
+    def _is_resource_created_for_relation(relation: Relation):
+        return (
+            "username" in relation.data[relation.app] and "password" in relation.data[relation.app]
+        )
+
+    def is_resource_created(self, relation_id: Optional[int] = None) -> bool:
+        """Check if the resource has been created.
+
+        This function can be used to check if the Provider answered with data in the charm code
+        when outside an event callback.
+
+        Args:
+            relation_id (int, optional): when provided, the check is done only for
+                the given relation; otherwise, all relations are checked.
+
+        Returns:
+            True if the resource was created, False otherwise.
+
+        Raises:
+            IndexError: If relation_id is provided but that relation does not exist
+        """
+        if relation_id is not None:
+            try:
+                relation = [relation for relation in self.relations if relation.id == relation_id][
+                    0
+                ]
+                return self._is_resource_created_for_relation(relation)
+            except IndexError:
+                raise IndexError(f"relation id {relation_id} cannot be accessed")
+        else:
+            return (
+                all(
+                    [
+                        self._is_resource_created_for_relation(relation)
+                        for relation in self.relations
+                    ]
+                )
+                if self.relations
+                else False
+            )
+
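The readiness check in `is_resource_created()` reduces to: no relations means not ready, and otherwise every relation's application databag must carry both credentials. A minimal sketch with dicts in place of relation databags (the function name is illustrative):

```python
def resource_created(databags: list) -> bool:
    """Sketch of is_resource_created() over all relations:
    every relation must have received both username and password."""
    if not databags:
        # No relations at all: the resource cannot have been created.
        return False
    return all("username" in d and "password" in d for d in databags)


no_relations = resource_created([])
ready = resource_created([{"username": "u", "password": "p"}])
partial = resource_created([{"username": "u", "password": "p"}, {"username": "v"}])
```

`no_relations` and `partial` are False (the second relation is still missing its password), while `ready` is True.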
+
+# General events
+
+
+class ExtraRoleEvent(RelationEvent):
+    """Base class for data events."""
+
+    @property
+    def extra_user_roles(self) -> Optional[str]:
+        """Returns the extra user roles that were requested."""
+        return self.relation.data[self.relation.app].get("extra-user-roles")
+
+
+class AuthenticationEvent(RelationEvent):
+    """Base class for authentication fields for events."""
+
+    @property
+    def username(self) -> Optional[str]:
+        """Returns the created username."""
+        return self.relation.data[self.relation.app].get("username")
+
+    @property
+    def password(self) -> Optional[str]:
+        """Returns the password for the created user."""
+        return self.relation.data[self.relation.app].get("password")
+
+    @property
+    def tls(self) -> Optional[str]:
+        """Returns whether TLS is configured."""
+        return self.relation.data[self.relation.app].get("tls")
+
+    @property
+    def tls_ca(self) -> Optional[str]:
+        """Returns TLS CA."""
+        return self.relation.data[self.relation.app].get("tls-ca")
+
+
+# Database related events and fields
+
+
+class DatabaseProvidesEvent(RelationEvent):
+    """Base class for database events."""
+
+    @property
+    def database(self) -> Optional[str]:
+        """Returns the database that was requested."""
+        return self.relation.data[self.relation.app].get("database")
+
+
+class DatabaseRequestedEvent(DatabaseProvidesEvent, ExtraRoleEvent):
+    """Event emitted when a new database is requested for use on this relation."""
+
+
+class DatabaseProvidesEvents(CharmEvents):
+    """Database events.
+
+    This class defines the events that the database can emit.
+    """
+
+    database_requested = EventSource(DatabaseRequestedEvent)
+
+
+class DatabaseRequiresEvent(RelationEvent):
+    """Base class for database events."""
+
+    @property
+    def endpoints(self) -> Optional[str]:
+        """Returns a comma separated list of read/write endpoints."""
+        return self.relation.data[self.relation.app].get("endpoints")
+
+    @property
+    def read_only_endpoints(self) -> Optional[str]:
+        """Returns a comma separated list of read only endpoints."""
+        return self.relation.data[self.relation.app].get("read-only-endpoints")
+
+    @property
+    def replset(self) -> Optional[str]:
+        """Returns the replicaset name.
+
+        MongoDB only.
+        """
+        return self.relation.data[self.relation.app].get("replset")
+
+    @property
+    def uris(self) -> Optional[str]:
+        """Returns the connection URIs.
+
+        MongoDB, Redis, OpenSearch.
+        """
+        return self.relation.data[self.relation.app].get("uris")
+
+    @property
+    def version(self) -> Optional[str]:
+        """Returns the version of the database.
+
+        Version as informed by the database daemon.
+        """
+        return self.relation.data[self.relation.app].get("version")
+
+
+class DatabaseCreatedEvent(AuthenticationEvent, DatabaseRequiresEvent):
+    """Event emitted when a new database is created for use on this relation."""
+
+
+class DatabaseEndpointsChangedEvent(AuthenticationEvent, DatabaseRequiresEvent):
+    """Event emitted when the read/write endpoints are changed."""
+
+
+class DatabaseReadOnlyEndpointsChangedEvent(AuthenticationEvent, DatabaseRequiresEvent):
+    """Event emitted when the read only endpoints are changed."""
+
+
+class DatabaseRequiresEvents(CharmEvents):
+    """Database events.
+
+    This class defines the events that the database can emit.
+    """
+
+    database_created = EventSource(DatabaseCreatedEvent)
+    endpoints_changed = EventSource(DatabaseEndpointsChangedEvent)
+    read_only_endpoints_changed = EventSource(DatabaseReadOnlyEndpointsChangedEvent)
+
+
+# Database Provider and Requires
+
+
+class DatabaseProvides(DataProvides):
+    """Provider-side of the database relations."""
+
+    on = DatabaseProvidesEvents()
+
+    def __init__(self, charm: CharmBase, relation_name: str) -> None:
+        super().__init__(charm, relation_name)
+
+    def _on_relation_changed(self, event: RelationChangedEvent) -> None:
+        """Event emitted when the relation has changed."""
+        # Only the leader should handle this event.
+        if not self.local_unit.is_leader():
+            return
+
+        # Check which data has changed to emit custom events.
+        diff = self._diff(event)
+
+        # Emit a database requested event if the setup key (database name and optional
+        # extra user roles) was added to the relation databag by the application.
+        if "database" in diff.added:
+            self.on.database_requested.emit(event.relation, app=event.app, unit=event.unit)
+
+    def set_endpoints(self, relation_id: int, connection_strings: str) -> None:
+        """Set database primary connections.
+
+        This function writes in the application data bag, therefore,
+        only the leader unit can call it.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            connection_strings: database hosts and ports comma separated list.
+        """
+        self._update_relation_data(relation_id, {"endpoints": connection_strings})
+
+    def set_read_only_endpoints(self, relation_id: int, connection_strings: str) -> None:
+        """Set database replicas connection strings.
+
+        This function writes in the application data bag, therefore,
+        only the leader unit can call it.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            connection_strings: database hosts and ports comma separated list.
+        """
+        self._update_relation_data(relation_id, {"read-only-endpoints": connection_strings})
+
+    def set_replset(self, relation_id: int, replset: str) -> None:
+        """Set replica set name in the application relation databag.
+
+        MongoDB only.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            replset: replica set name.
+        """
+        self._update_relation_data(relation_id, {"replset": replset})
+
+    def set_uris(self, relation_id: int, uris: str) -> None:
+        """Set the database connection URIs in the application relation databag.
+
+        MongoDB, Redis, and OpenSearch only.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            uris: connection URIs.
+        """
+        self._update_relation_data(relation_id, {"uris": uris})
+
+    def set_version(self, relation_id: int, version: str) -> None:
+        """Set the database version in the application relation databag.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            version: database version.
+        """
+        self._update_relation_data(relation_id, {"version": version})
+
+
+class DatabaseRequires(DataRequires):
+    """Requires-side of the database relation."""
+
+    on = DatabaseRequiresEvents()
+
+    def __init__(
+        self,
+        charm,
+        relation_name: str,
+        database_name: str,
+        extra_user_roles: str = None,
+        relations_aliases: List[str] = None,
+    ):
+        """Manager of database client relations."""
+        super().__init__(charm, relation_name, extra_user_roles)
+        self.database = database_name
+        self.relations_aliases = relations_aliases
+
+        # Define custom event names for each alias.
+        if relations_aliases:
+            # Ensure the number of aliases does not exceed the maximum
+            # of connections allowed in the specific relation.
+            relation_connection_limit = self.charm.meta.requires[relation_name].limit
+            if len(relations_aliases) != relation_connection_limit:
+                raise ValueError(
+                    f"The number of aliases must match the maximum number of connections allowed in the relation. "
+                    f"Expected {relation_connection_limit}, got {len(relations_aliases)}"
+                )
+
+            for relation_alias in relations_aliases:
+                self.on.define_event(f"{relation_alias}_database_created", DatabaseCreatedEvent)
+                self.on.define_event(
+                    f"{relation_alias}_endpoints_changed", DatabaseEndpointsChangedEvent
+                )
+                self.on.define_event(
+                    f"{relation_alias}_read_only_endpoints_changed",
+                    DatabaseReadOnlyEndpointsChangedEvent,
+                )
+
+    def _assign_relation_alias(self, relation_id: int) -> None:
+        """Assigns an alias to a relation.
+
+        This function writes in the unit data bag.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+        """
+        # If no aliases were provided, return immediately.
+        if not self.relations_aliases:
+            return
+
+        # Return if an alias was already assigned to this relation
+        # (e.g. when more than one unit joins the relation).
+        if (
+            self.charm.model.get_relation(self.relation_name, relation_id)
+            .data[self.local_unit]
+            .get("alias")
+        ):
+            return
+
+        # Retrieve the available aliases (the ones that weren't assigned to any relation).
+        available_aliases = self.relations_aliases[:]
+        for relation in self.charm.model.relations[self.relation_name]:
+            alias = relation.data[self.local_unit].get("alias")
+            if alias:
+                logger.debug("Alias %s was already assigned to relation %d", alias, relation.id)
+                available_aliases.remove(alias)
+
+        # Set the alias in the unit relation databag of the specific relation.
+        relation = self.charm.model.get_relation(self.relation_name, relation_id)
+        relation.data[self.local_unit].update({"alias": available_aliases[0]})
+
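The alias selection above boils down to: drop every alias already claimed by some relation, then hand out the first one still available. A hedged stand-alone sketch (the helper name is illustrative, not part of the library):

```python
def pick_alias(configured: list, assigned: list):
    """Sketch of the selection in _assign_relation_alias():
    take the first configured alias not yet claimed by another relation."""
    available = configured[:]
    for alias in assigned:
        if alias in available:
            available.remove(alias)
    return available[0] if available else None


first = pick_alias(["primary", "replica"], [])
second = pick_alias(["primary", "replica"], ["primary"])
exhausted = pick_alias(["primary"], ["primary"])
```

With no aliases assigned yet the first configured one ("primary") is picked; once it is taken the next relation gets "replica"; when all are claimed, nothing is returned.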
+    def _emit_aliased_event(self, event: RelationChangedEvent, event_name: str) -> None:
+        """Emit an aliased event to a particular relation if it has an alias.
+
+        Args:
+            event: the relation changed event that was received.
+            event_name: the name of the event to emit.
+        """
+        alias = self._get_relation_alias(event.relation.id)
+        if alias:
+            getattr(self.on, f"{alias}_{event_name}").emit(
+                event.relation, app=event.app, unit=event.unit
+            )
+
+    def _get_relation_alias(self, relation_id: int) -> Optional[str]:
+        """Returns the relation alias.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+
+        Returns:
+            the relation alias or None if the relation was not found.
+        """
+        for relation in self.charm.model.relations[self.relation_name]:
+            if relation.id == relation_id:
+                return relation.data[self.local_unit].get("alias")
+        return None
+
+    def _on_relation_joined_event(self, event: RelationJoinedEvent) -> None:
+        """Event emitted when the application joins the database relation."""
+        # If relations aliases were provided, assign one to the relation.
+        self._assign_relation_alias(event.relation.id)
+
+        # Sets both database and extra user roles in the relation
+        # if the roles are provided. Otherwise, sets only the database.
+        if self.extra_user_roles:
+            self._update_relation_data(
+                event.relation.id,
+                {
+                    "database": self.database,
+                    "extra-user-roles": self.extra_user_roles,
+                },
+            )
+        else:
+            self._update_relation_data(event.relation.id, {"database": self.database})
+
+    def _on_relation_changed_event(self, event: RelationChangedEvent) -> None:
+        """Event emitted when the database relation has changed."""
+        # Check which data has changed to emit custom events.
+        diff = self._diff(event)
+
+        # Check if the database is created
+        # (the database charm shared the credentials).
+        if "username" in diff.added and "password" in diff.added:
+            # Emit the default event (the one without an alias).
+            logger.info("database created at %s", datetime.now())
+            self.on.database_created.emit(event.relation, app=event.app, unit=event.unit)
+
+            # Emit the aliased event (if any).
+            self._emit_aliased_event(event, "database_created")
+
+            # To avoid unnecessary application restarts do not trigger
+            # "endpoints_changed" event if "database_created" is triggered.
+            return
+
+        # Emit an endpoints changed event if the database
+        # added or changed this info in the relation databag.
+        if "endpoints" in diff.added or "endpoints" in diff.changed:
+            # Emit the default event (the one without an alias).
+            logger.info("endpoints changed on %s", datetime.now())
+            self.on.endpoints_changed.emit(event.relation, app=event.app, unit=event.unit)
+
+            # Emit the aliased event (if any).
+            self._emit_aliased_event(event, "endpoints_changed")
+
+            # To avoid unnecessary application restarts do not trigger
+            # "read_only_endpoints_changed" event if "endpoints_changed" is triggered.
+            return
+
+        # Emit a read only endpoints changed event if the database
+        # added or changed this info in the relation databag.
+        if "read-only-endpoints" in diff.added or "read-only-endpoints" in diff.changed:
+            # Emit the default event (the one without an alias).
+            logger.info("read-only-endpoints changed on %s", datetime.now())
+            self.on.read_only_endpoints_changed.emit(
+                event.relation, app=event.app, unit=event.unit
+            )
+
+            # Emit the aliased event (if any).
+            self._emit_aliased_event(event, "read_only_endpoints_changed")
+
+
+# Kafka related events
+
+
+class KafkaProvidesEvent(RelationEvent):
+    """Base class for Kafka events."""
+
+    @property
+    def topic(self) -> Optional[str]:
+        """Returns the topic that was requested."""
+        return self.relation.data[self.relation.app].get("topic")
+
+
+class TopicRequestedEvent(KafkaProvidesEvent, ExtraRoleEvent):
+    """Event emitted when a new topic is requested for use on this relation."""
+
+
+class KafkaProvidesEvents(CharmEvents):
+    """Kafka events.
+
+    This class defines the events that Kafka can emit.
+    """
+
+    topic_requested = EventSource(TopicRequestedEvent)
+
+
+class KafkaRequiresEvent(RelationEvent):
+    """Base class for Kafka events."""
+
+    @property
+    def bootstrap_server(self) -> Optional[str]:
+        """Returns a a comma-seperated list of broker uris."""
+        return self.relation.data[self.relation.app].get("endpoints")
+
+    @property
+    def consumer_group_prefix(self) -> Optional[str]:
+        """Returns the consumer-group-prefix."""
+        return self.relation.data[self.relation.app].get("consumer-group-prefix")
+
+    @property
+    def zookeeper_uris(self) -> Optional[str]:
+        """Returns a comma separated list of Zookeeper uris."""
+        return self.relation.data[self.relation.app].get("zookeeper-uris")
+
+
+class TopicCreatedEvent(AuthenticationEvent, KafkaRequiresEvent):
+    """Event emitted when a new topic is created for use on this relation."""
+
+
+class BootstrapServerChangedEvent(AuthenticationEvent, KafkaRequiresEvent):
+    """Event emitted when the bootstrap server is changed."""
+
+
+class KafkaRequiresEvents(CharmEvents):
+    """Kafka events.
+
+    This class defines the events that Kafka can emit.
+    """
+
+    topic_created = EventSource(TopicCreatedEvent)
+    bootstrap_server_changed = EventSource(BootstrapServerChangedEvent)
+
+
+# Kafka Provides and Requires
+
+
+class KafkaProvides(DataProvides):
+    """Provider-side of the Kafka relation."""
+
+    on = KafkaProvidesEvents()
+
+    def __init__(self, charm: CharmBase, relation_name: str) -> None:
+        super().__init__(charm, relation_name)
+
+    def _on_relation_changed(self, event: RelationChangedEvent) -> None:
+        """Event emitted when the relation has changed."""
+        # Only the leader should handle this event.
+        if not self.local_unit.is_leader():
+            return
+
+        # Check which data has changed to emit custom events.
+        diff = self._diff(event)
+
+        # Emit a topic requested event if the setup key (topic name and optional
+        # extra user roles) was added to the relation databag by the application.
+        if "topic" in diff.added:
+            self.on.topic_requested.emit(event.relation, app=event.app, unit=event.unit)
+
+    def set_bootstrap_server(self, relation_id: int, bootstrap_server: str) -> None:
+        """Set the bootstrap server in the application relation databag.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            bootstrap_server: the bootstrap server address.
+        """
+        self._update_relation_data(relation_id, {"endpoints": bootstrap_server})
+
+    def set_consumer_group_prefix(self, relation_id: int, consumer_group_prefix: str) -> None:
+        """Set the consumer group prefix in the application relation databag.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            consumer_group_prefix: the consumer group prefix string.
+        """
+        self._update_relation_data(relation_id, {"consumer-group-prefix": consumer_group_prefix})
+
+    def set_zookeeper_uris(self, relation_id: int, zookeeper_uris: str) -> None:
+        """Set the zookeeper uris in the application relation databag.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            zookeeper_uris: comma-separated list of ZooKeeper server URIs.
+        """
+        self._update_relation_data(relation_id, {"zookeeper-uris": zookeeper_uris})
+
+
+class KafkaRequires(DataRequires):
+    """Requires-side of the Kafka relation."""
+
+    on = KafkaRequiresEvents()
+
+    def __init__(self, charm, relation_name: str, topic: str, extra_user_roles: str = None):
+        """Manager of Kafka client relations."""
+        super().__init__(charm, relation_name, extra_user_roles)
+        self.charm = charm
+        self.topic = topic
+
+    def _on_relation_joined_event(self, event: RelationJoinedEvent) -> None:
+        """Event emitted when the application joins the Kafka relation."""
+        # Sets both topic and extra user roles in the relation
+        # if the roles are provided. Otherwise, sets only the topic.
+        self._update_relation_data(
+            event.relation.id,
+            {
+                "topic": self.topic,
+                "extra-user-roles": self.extra_user_roles,
+            }
+            if self.extra_user_roles is not None
+            else {"topic": self.topic},
+        )
+
+    def _on_relation_changed_event(self, event: RelationChangedEvent) -> None:
+        """Event emitted when the Kafka relation has changed."""
+        # Check which data has changed to emit custom events.
+        diff = self._diff(event)
+
+        # Check if the topic is created
+        # (the Kafka charm shared the credentials).
+        if "username" in diff.added and "password" in diff.added:
+            # Emit the default event (the one without an alias).
+            logger.info("topic created at %s", datetime.now())
+            self.on.topic_created.emit(event.relation, app=event.app, unit=event.unit)
+
+            # To avoid unnecessary application restarts do not trigger
+            # "endpoints_changed" event if "topic_created" is triggered.
+            return
+
+        # Emit a bootstrap-server changed event if Kafka
+        # added or changed the endpoints info in the relation databag.
+        if "endpoints" in diff.added or "endpoints" in diff.changed:
+            # Emit the default event (the one without an alias).
+            logger.info("endpoints changed on %s", datetime.now())
+            self.on.bootstrap_server_changed.emit(
+                event.relation, app=event.app, unit=event.unit
+            )
+            return
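The changed handler above enforces a precedence: newly shared credentials fire `topic_created` and suppress `bootstrap_server_changed` for the same event, which only fires when the endpoints alone change. A hedged sketch of that dispatch as a pure function over the diff sets (the function name is illustrative):

```python
def kafka_event_for(added: set, changed: set):
    """Sketch of the precedence in _on_relation_changed_event():
    topic_created suppresses bootstrap_server_changed for the same event."""
    if "username" in added and "password" in added:
        return "topic_created"
    if "endpoints" in added or "endpoints" in changed:
        return "bootstrap_server_changed"
    return None


creds = kafka_event_for({"username", "password", "endpoints"}, set())
endpoints_only = kafka_event_for(set(), {"endpoints"})
unrelated = kafka_event_for({"tls"}, set())
```

Even though `endpoints` arrived alongside the credentials in the first case, only `topic_created` is emitted; the second case emits `bootstrap_server_changed`; unrelated keys emit nothing.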
index d739ba6..02d46db 100644 (file)
@@ -235,12 +235,14 @@ wait
 @dataclass
 class SubModule:
     """Represent RO Submodules."""
+
     sub_module_path: str
     container_path: str
 
 
 class HostPath:
     """Represents a hostpath."""
+
     def __init__(self, config: str, container_path: str, submodules: dict = None) -> None:
         mount_path_items = config.split("-")
         mount_path_items.reverse()
@@ -250,13 +252,18 @@ class HostPath:
         if submodules:
             for submodule in submodules.keys():
                 self.sub_module_dict[submodule] = SubModule(
-                    sub_module_path=self.mount_path + "/" + submodule + "/" + submodules[submodule].split("/")[-1],
+                    sub_module_path=self.mount_path
+                    + "/"
+                    + submodule
+                    + "/"
+                    + submodules[submodule].split("/")[-1],
                     container_path=submodules[submodule],
                 )
         else:
             self.container_path = container_path
             self.module_name = container_path.split("/")[-1]
 
+
 class DebugMode(Object):
     """Class to handle the debug-mode."""
 
@@ -432,7 +439,9 @@ class DebugMode(Object):
             logger.debug(f"adding symlink for {hostpath.config}")
             if len(hostpath.sub_module_dict) > 0:
                 for sub_module in hostpath.sub_module_dict.keys():
-                    self.container.exec(["rm", "-rf", hostpath.sub_module_dict[sub_module].container_path]).wait_output()
+                    self.container.exec(
+                        ["rm", "-rf", hostpath.sub_module_dict[sub_module].container_path]
+                    ).wait_output()
                     self.container.exec(
                         [
                             "ln",
@@ -506,7 +515,6 @@ class DebugMode(Object):
     def _delete_hostpath_from_statefulset(self, hostpath: HostPath, statefulset: StatefulSet):
         hostpath_unmounted = False
         for volume in statefulset.spec.template.spec.volumes:
-
             if hostpath.config != volume.name:
                 continue
 
index 79bee5e..d669b65 100644 (file)
@@ -124,9 +124,7 @@ class RoRequires(Object):  # pragma: no cover
         """Get ro hostname."""
         relation: Relation = self.model.get_relation(self._endpoint_name)
         return (
-            relation.data[relation.app].get(RO_HOST_APP_KEY)
-            if relation and relation.app
-            else None
+            relation.data[relation.app].get(RO_HOST_APP_KEY) if relation and relation.app else None
         )
 
     @property
index bd54541..e38e2b5 100644 (file)
@@ -57,7 +57,7 @@ requires:
     interface: kafka
     limit: 1
   mongodb:
-    interface: mongodb
+    interface: mongodb_client
     limit: 1
   ro:
     interface: ro
index d0d4a5b..16cf0f4 100644 (file)
@@ -50,7 +50,3 @@ ignore = ["W503", "E501", "D107"]
 # D100, D101, D102, D103: Ignore missing docstrings in tests
 per-file-ignores = ["tests/*:D100,D101,D102,D103,D104"]
 docstring-convention = "google"
-# Check for properly formatted copyright header in each file
-copyright-check = "True"
-copyright-author = "Canonical Ltd."
-copyright-regexp = "Copyright\\s\\d{4}([-,]\\d{4})*\\s+%(author)s"
index cb303a3..398d4ad 100644 (file)
@@ -17,7 +17,7 @@
 #
 # To get in touch with the maintainers, please contact:
 # osm-charmers@lists.launchpad.net
-ops >= 1.2.0
+ops < 2.2
 lightkube
 lightkube-models
 # git+https://github.com/charmed-osm/config-validator/
index 4a362a6..2ea9086 100755 (executable)
@@ -30,6 +30,7 @@ See more: https://charmhub.io/osm
 import logging
 from typing import Any, Dict
 
+from charms.data_platform_libs.v0.data_interfaces import DatabaseRequires
 from charms.kafka_k8s.v0.kafka import KafkaRequires, _KafkaAvailableEvent
 from charms.osm_libs.v0.utils import (
     CharmError,
@@ -45,8 +46,6 @@ from ops.framework import EventSource, StoredState
 from ops.main import main
 from ops.model import ActiveStatus, Container
 
-from legacy_interfaces import MongoClient
-
 HOSTPATHS = [
     HostPath(
         config="lcm-hostpath",
@@ -84,7 +83,9 @@ class OsmLcmCharm(CharmBase):
         super().__init__(*args)
         self.vca = VcaRequires(self)
         self.kafka = KafkaRequires(self)
-        self.mongodb_client = MongoClient(self, "mongodb")
+        self.mongodb_client = DatabaseRequires(
+            self, "mongodb", database_name="osm", extra_user_roles="admin"
+        )
         self._observe_charm_events()
         self.ro = RoRequires(self)
         self.container: Container = self.unit.get_container(self.container_name)
@@ -176,7 +177,7 @@ class OsmLcmCharm(CharmBase):
             # Relation events
             self.on.kafka_available: self._on_config_changed,
             self.on["kafka"].relation_broken: self._on_required_relation_broken,
-            self.on["mongodb"].relation_changed: self._on_config_changed,
+            self.mongodb_client.on.database_created: self._on_config_changed,
             self.on["mongodb"].relation_broken: self._on_required_relation_broken,
             self.on["ro"].relation_changed: self._on_config_changed,
             self.on["ro"].relation_broken: self._on_required_relation_broken,
@@ -199,7 +200,7 @@ class OsmLcmCharm(CharmBase):
 
         if not self.kafka.host or not self.kafka.port:
             missing_relations.append("kafka")
-        if self.mongodb_client.is_missing_data_in_unit():
+        if not self._is_database_available():
             missing_relations.append("mongodb")
         if not self.ro.host or not self.ro.port:
             missing_relations.append("ro")
@@ -211,6 +212,12 @@ class OsmLcmCharm(CharmBase):
             logger.warning(error_msg)
             raise CharmError(error_msg)
 
+    def _is_database_available(self) -> bool:
+        try:
+            return self.mongodb_client.is_resource_created()
+        except KeyError:
+            return False
+
     def _configure_service(self, container: Container) -> None:
         """Add Pebble layer with the lcm service."""
         logger.debug(f"configuring {self.app.name} service")
@@ -232,13 +239,13 @@ class OsmLcmCharm(CharmBase):
             "OSMLCM_RO_TENANT": "osm",
             # Database configuration
             "OSMLCM_DATABASE_DRIVER": "mongo",
-            "OSMLCM_DATABASE_URI": self.mongodb_client.connection_string,
+            "OSMLCM_DATABASE_URI": self._get_mongodb_uri(),
             "OSMLCM_DATABASE_COMMONKEY": self.config["database-commonkey"],
             # Storage configuration
             "OSMLCM_STORAGE_DRIVER": "mongo",
             "OSMLCM_STORAGE_PATH": "/app/storage",
             "OSMLCM_STORAGE_COLLECTION": "files",
-            "OSMLCM_STORAGE_URI": self.mongodb_client.connection_string,
+            "OSMLCM_STORAGE_URI": self._get_mongodb_uri(),
             "OSMLCM_VCA_HELM_CA_CERTS": self.config["helm-ca-certs"],
             "OSMLCM_VCA_STABLEREPOURL": self.config["helm-stable-repo-url"],
         }
@@ -275,6 +282,9 @@ class OsmLcmCharm(CharmBase):
         }
         return layer_config
 
+    def _get_mongodb_uri(self):
+        return list(self.mongodb_client.fetch_relation_data().values())[0]["uris"]
+
 
 if __name__ == "__main__":  # pragma: no cover
     main(OsmLcmCharm)
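The new `_get_mongodb_uri` helper takes the first entry of `fetch_relation_data()` (a dict keyed by relation ID) and reads its `uris` key. A minimal stand-in with plain dicts, assuming the same data shape (the example values are illustrative):

```python
# fetch_relation_data() returns {relation_id: {key: value, ...}}; with
# `limit: 1` on the mongodb endpoint there is at most one entry.
def get_mongodb_uri(relation_data: dict) -> str:
    return list(relation_data.values())[0]["uris"]

# Example databag as published by the MongoDB charm (illustrative values).
data = {7: {"uris": "mongodb://user:pass@mongodb-k8s:27017/osm", "username": "user"}}
uri = get_mongodb_uri(data)  # "mongodb://user:pass@mongodb-k8s:27017/osm"
```

Before the `database_created` event has fired these keys are absent, which is why the charm guards the check in `_is_database_available`.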
index 889e287..a991339 100644 (file)
@@ -23,6 +23,7 @@
 
 import asyncio
 import logging
+import shlex
 from pathlib import Path
 
 import pytest
@@ -50,14 +51,18 @@ APPS = [KAFKA_APP, MONGO_DB_APP, ZOOKEEPER_APP, RO_APP, LCM_APP]
 async def test_lcm_is_deployed(ops_test: OpsTest):
     charm = await ops_test.build_charm(".")
     resources = {"lcm-image": METADATA["resources"]["lcm-image"]["upstream-source"]}
+    ro_deploy_cmd = f"juju deploy {RO_CHARM} {RO_APP} --resource ro-image=opensourcemano/ro:testing-daily --channel=latest/beta --series=focal"
 
     await asyncio.gather(
         ops_test.model.deploy(
             charm, resources=resources, application_name=LCM_APP, series="focal"
         ),
-        ops_test.model.deploy(RO_CHARM, application_name=RO_APP, channel="beta"),
+        # The RO charm has to be deployed via the Juju CLI because of
+        # https://github.com/juju/python-libjuju/issues/822, which makes
+        # python-libjuju deploy a different charm than the CLI does.
+        ops_test.run(*shlex.split(ro_deploy_cmd), check=True),
         ops_test.model.deploy(KAFKA_CHARM, application_name=KAFKA_APP, channel="stable"),
-        ops_test.model.deploy(MONGO_DB_CHARM, application_name=MONGO_DB_APP, channel="stable"),
+        ops_test.model.deploy(MONGO_DB_CHARM, application_name=MONGO_DB_APP, channel="edge"),
         ops_test.model.deploy(ZOOKEEPER_CHARM, application_name=ZOOKEEPER_APP, channel="stable"),
     )
 
index 8233d32..41cfb00 100644 (file)
@@ -36,6 +36,7 @@ service_name = "lcm"
 def harness(mocker: MockerFixture):
     harness = Harness(OsmLcmCharm)
     harness.begin()
+    harness.container_pebble_ready(container_name)
     yield harness
     harness.cleanup()
 
@@ -69,7 +70,9 @@ def _add_relations(harness: Harness):
     relation_id = harness.add_relation("mongodb", "mongodb")
     harness.add_relation_unit(relation_id, "mongodb/0")
     harness.update_relation_data(
-        relation_id, "mongodb/0", {"connection_string": "mongodb://:1234"}
+        relation_id,
+        "mongodb",
+        {"uris": "mongodb://:1234", "username": "user", "password": "password"},
     )
     relation_ids.append(relation_id)
     # Add kafka relation
index 71cf2a6..2d95eca 100644 (file)
@@ -29,6 +29,7 @@ tst_path = {toxinidir}/tests/
 all_path = {[vars]src_path} {[vars]tst_path} 
 
 [testenv]
+basepython = python3.8
 setenv =
   PYTHONPATH = {toxinidir}:{toxinidir}/lib:{[vars]src_path}
   PYTHONBREAKPOINT=ipdb.set_trace
@@ -53,14 +54,13 @@ deps =
     black
     flake8
     flake8-docstrings
-    flake8-copyright
     flake8-builtins
     pyproject-flake8
     pep8-naming
     isort
     codespell
 commands =
-    codespell {toxinidir}/. --skip {toxinidir}/.git --skip {toxinidir}/.tox \
+    codespell {toxinidir} --skip {toxinidir}/.git --skip {toxinidir}/.tox \
       --skip {toxinidir}/build --skip {toxinidir}/lib --skip {toxinidir}/venv \
       --skip {toxinidir}/.mypy_cache --skip {toxinidir}/icon.svg
     # pflake8 wrapper supports config from pyproject.toml
@@ -85,7 +85,7 @@ commands =
 description = Run integration tests
 deps =
     pytest
-    juju
+    juju<3
     pytest-operator
     -r{toxinidir}/requirements.txt
 commands =
index 0163151..cb2eb99 100644 (file)
@@ -96,7 +96,7 @@ options:
 
       After enabling the debug-mode, execute the following command to get the information you need
       to start debugging:
-        `juju run-action get-debug-mode-information <unit name> --wait`
+        `juju run-action <unit name> get-debug-mode-information --wait`
 
       The previous command returns the command you need to execute, and the SSH password that was set.
 
diff --git a/installers/charm/osm-mon/lib/charms/data_platform_libs/v0/data_interfaces.py b/installers/charm/osm-mon/lib/charms/data_platform_libs/v0/data_interfaces.py
new file mode 100644 (file)
index 0000000..b3da5aa
--- /dev/null
@@ -0,0 +1,1130 @@
+# Copyright 2023 Canonical Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Library to manage the relation for the data-platform products.
+
+This library contains the Requires and Provides classes for handling the relation
+between an application and the multiple managed applications supported by the data team:
+MySQL, Postgresql, MongoDB, Redis, and Kafka.
+
+### Database (MySQL, Postgresql, MongoDB, and Redis)
+
+#### Requires Charm
+This library is a uniform interface to a selection of common database
+metadata, with added custom events that add convenience to database management,
+and methods to consume the application-related data.
+
+
+The following is an example of using the DatabaseCreatedEvent, in the context of the
+application charm code:
+
+```python
+
+from charms.data_platform_libs.v0.data_interfaces import (
+    DatabaseCreatedEvent,
+    DatabaseRequires,
+)
+
+class ApplicationCharm(CharmBase):
+    # Application charm that connects to database charms.
+
+    def __init__(self, *args):
+        super().__init__(*args)
+
+        # Charm events defined in the database requires charm library.
+        self.database = DatabaseRequires(self, relation_name="database", database_name="database")
+        self.framework.observe(self.database.on.database_created, self._on_database_created)
+
+    def _on_database_created(self, event: DatabaseCreatedEvent) -> None:
+        # Handle the created database
+
+        # Create configuration file for app
+        config_file = self._render_app_config_file(
+            event.username,
+            event.password,
+            event.endpoints,
+        )
+
+        # Start application with rendered configuration
+        self._start_application(config_file)
+
+        # Set active status
+        self.unit.status = ActiveStatus("received database credentials")
+```
+
+As shown above, the library provides some custom events to handle specific situations,
+which are listed below:
+
+-  database_created: event emitted when the requested database is created.
+-  endpoints_changed: event emitted when the read/write endpoints of the database have changed.
+-  read_only_endpoints_changed: event emitted when the read-only endpoints of the database
+  have changed. Event is not triggered if read/write endpoints changed too.
+
+If the application charm needs to connect multiple database clusters to the same
+relation endpoint, it can use the same code as if it connected to only one
+database cluster (as in the code example above).
+
+To differentiate multiple clusters connected to the same relation endpoint
+the application charm can use the name of the remote application:
+
+```python
+
+def _on_database_created(self, event: DatabaseCreatedEvent) -> None:
+    # Get the remote app name of the cluster that triggered this event
+    cluster = event.relation.app.name
+```
+
+It is also possible to provide an alias for each different database cluster/relation.
+
+So, it is possible to differentiate the clusters in two ways.
+The first is to use the remote application name, i.e., `event.relation.app.name`, as above.
+
+The second way is to use different event handlers to handle each cluster events.
+The implementation would be something like the following code:
+
+```python
+
+from charms.data_platform_libs.v0.data_interfaces import (
+    DatabaseCreatedEvent,
+    DatabaseRequires,
+)
+
+class ApplicationCharm(CharmBase):
+    # Application charm that connects to database charms.
+
+    def __init__(self, *args):
+        super().__init__(*args)
+
+        # Define the cluster aliases and one handler for each cluster database created event.
+        self.database = DatabaseRequires(
+            self,
+            relation_name="database",
+            database_name="database",
+            relations_aliases=["cluster1", "cluster2"],
+        )
+        self.framework.observe(
+            self.database.on.cluster1_database_created, self._on_cluster1_database_created
+        )
+        self.framework.observe(
+            self.database.on.cluster2_database_created, self._on_cluster2_database_created
+        )
+
+    def _on_cluster1_database_created(self, event: DatabaseCreatedEvent) -> None:
+        # Handle the created database on the cluster named cluster1
+
+        # Create configuration file for app
+        config_file = self._render_app_config_file(
+            event.username,
+            event.password,
+            event.endpoints,
+        )
+        ...
+
+    def _on_cluster2_database_created(self, event: DatabaseCreatedEvent) -> None:
+        # Handle the created database on the cluster named cluster2
+
+        # Create configuration file for app
+        config_file = self._render_app_config_file(
+            event.username,
+            event.password,
+            event.endpoints,
+        )
+        ...
+
+```
+
+### Provider Charm
+
+The following is an example of using the DatabaseRequestedEvent, in the context of the
+database charm code:
+
+```python
+from charms.data_platform_libs.v0.data_interfaces import DatabaseProvides
+
+class SampleCharm(CharmBase):
+
+    def __init__(self, *args):
+        super().__init__(*args)
+        # Charm events defined in the database provides charm library.
+        self.provided_database = DatabaseProvides(self, relation_name="database")
+        self.framework.observe(self.provided_database.on.database_requested,
+            self._on_database_requested)
+        # Database generic helper
+        self.database = DatabaseHelper()
+
+    def _on_database_requested(self, event: DatabaseRequestedEvent) -> None:
+        # Handle the event triggered by a new database requested in the relation
+        # Retrieve the database name using the charm library.
+        db_name = event.database
+        # generate a new user credential
+        username = self.database.generate_user()
+        password = self.database.generate_password()
+        # set the credentials for the relation
+        self.provided_database.set_credentials(event.relation.id, username, password)
+        # set other variables for the relation event.set_tls("False")
+```
+As shown above, the library provides a custom event (database_requested) to handle
+the situation when an application charm requests a new database to be created.
+It is preferred to subscribe to this event rather than to the relation-changed event,
+to avoid creating a new database when information other than the database name is
+exchanged in the relation databag.
+
+### Kafka
+
+This library is the interface to use and interact with the Kafka charm. This library contains
+custom events that add convenience to managing Kafka, and provides methods to consume
+the application-related data.
+
+#### Requirer Charm
+
+```python
+
+from charms.data_platform_libs.v0.data_interfaces import (
+    BootstrapServerChangedEvent,
+    KafkaRequires,
+    TopicCreatedEvent,
+)
+
+class ApplicationCharm(CharmBase):
+
+    def __init__(self, *args):
+        super().__init__(*args)
+        self.kafka = KafkaRequires(self, "kafka_client", "test-topic")
+        self.framework.observe(
+            self.kafka.on.bootstrap_server_changed, self._on_kafka_bootstrap_server_changed
+        )
+        self.framework.observe(
+            self.kafka.on.topic_created, self._on_kafka_topic_created
+        )
+
+    def _on_kafka_bootstrap_server_changed(self, event: BootstrapServerChangedEvent):
+        # Event triggered when a bootstrap server was changed for this application
+
+        new_bootstrap_server = event.bootstrap_server
+        ...
+
+    def _on_kafka_topic_created(self, event: TopicCreatedEvent):
+        # Event triggered when a topic was created for this application
+        username = event.username
+        password = event.password
+        tls = event.tls
+        tls_ca= event.tls_ca
+        bootstrap_server = event.bootstrap_server
+        consumer_group_prefix = event.consumer_group_prefix
+        zookeeper_uris = event.zookeeper_uris
+        ...
+
+```
+
+As shown above, the library provides some custom events to handle specific situations,
+which are listed below:
+
+- topic_created: event emitted when the requested topic is created.
+- bootstrap_server_changed: event emitted when the bootstrap server has changed.
+- credential_changed: event emitted when the credentials of Kafka changed.
+
+### Provider Charm
+
+Following the previous example, this is an example of the provider charm.
+
+```python
+from charms.data_platform_libs.v0.data_interfaces import (
+    KafkaProvides,
+    TopicRequestedEvent,
+)
+
+class SampleCharm(CharmBase):
+
+    def __init__(self, *args):
+        super().__init__(*args)
+
+        # Default charm events.
+        self.framework.observe(self.on.start, self._on_start)
+
+        # Charm events defined in the Kafka Provides charm library.
+        self.kafka_provider = KafkaProvides(self, relation_name="kafka_client")
+        self.framework.observe(self.kafka_provider.on.topic_requested, self._on_topic_requested)
+        # Kafka generic helper
+        self.kafka = KafkaHelper()
+
+    def _on_topic_requested(self, event: TopicRequestedEvent):
+        # Handle the on_topic_requested event.
+
+        topic = event.topic
+        relation_id = event.relation.id
+        # set connection info in the databag relation
+        self.kafka_provider.set_bootstrap_server(relation_id, self.kafka.get_bootstrap_server())
+        self.kafka_provider.set_credentials(relation_id, username=username, password=password)
+        self.kafka_provider.set_consumer_group_prefix(relation_id, ...)
+        self.kafka_provider.set_tls(relation_id, "False")
+        self.kafka_provider.set_zookeeper_uris(relation_id, ...)
+
+```
+As shown above, the library provides a custom event (topic_requested) to handle
+the situation when an application charm requests a new topic to be created.
+It is preferred to subscribe to this event rather than to the relation-changed event,
+to avoid creating a new topic when information other than the topic name is
+exchanged in the relation databag.
+"""
+
+import json
+import logging
+from abc import ABC, abstractmethod
+from collections import namedtuple
+from datetime import datetime
+from typing import List, Optional
+
+from ops.charm import (
+    CharmBase,
+    CharmEvents,
+    RelationChangedEvent,
+    RelationEvent,
+    RelationJoinedEvent,
+)
+from ops.framework import EventSource, Object
+from ops.model import Relation
+
+# The unique Charmhub library identifier, never change it
+LIBID = "6c3e6b6680d64e9c89e611d1a15f65be"
+
+# Increment this major API version when introducing breaking changes
+LIBAPI = 0
+
+# Increment this PATCH version before using `charmcraft publish-lib` or reset
+# to 0 if you are raising the major API version
+LIBPATCH = 7
+
+PYDEPS = ["ops>=2.0.0"]
+
+logger = logging.getLogger(__name__)
+
+Diff = namedtuple("Diff", "added changed deleted")
+Diff.__doc__ = """
+A tuple for storing the diff between two data mappings.
+
+added - keys that were added
+changed - keys that still exist but have new values
+deleted - keys that were deleted"""
+
+
+def diff(event: RelationChangedEvent, bucket: str) -> Diff:
+    """Retrieves the diff of the data in the relation changed databag.
+
+    Args:
+        event: relation changed event.
+        bucket: bucket of the databag (app or unit)
+
+    Returns:
+        a Diff instance containing the added, deleted and changed
+            keys from the event relation databag.
+    """
+    # Retrieve the old data from the data key in the application relation databag.
+    old_data = json.loads(event.relation.data[bucket].get("data", "{}"))
+    # Retrieve the new data from the event relation databag.
+    new_data = {
+        key: value for key, value in event.relation.data[event.app].items() if key != "data"
+    }
+
+    # These are the keys that were added to the databag and triggered this event.
+    added = new_data.keys() - old_data.keys()
+    # These are the keys that were removed from the databag and triggered this event.
+    deleted = old_data.keys() - new_data.keys()
+    # These are the keys that already existed in the databag,
+    # but had their values changed.
+    changed = {key for key in old_data.keys() & new_data.keys() if old_data[key] != new_data[key]}
+    # Convert the new_data to a serializable format and save it for a next diff check.
+    event.relation.data[bucket].update({"data": json.dumps(new_data)})
+
+    # Return the diff with all possible changes.
+    return Diff(added, changed, deleted)
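A standalone usage example of the same computation, with plain dicts standing in for the old and new databag contents (only the set logic is reproduced; persisting the serialized "data" key back to the databag is omitted):

```python
from collections import namedtuple

Diff = namedtuple("Diff", "added changed deleted")

def compute_diff(old: dict, new: dict) -> Diff:
    added = new.keys() - old.keys()    # keys that appeared
    deleted = old.keys() - new.keys()  # keys that disappeared
    changed = {k for k in old.keys() & new.keys() if old[k] != new[k]}
    return Diff(added, changed, deleted)

old = {"endpoints": "host:9092", "username": "u1"}
new = {"endpoints": "host:9093", "username": "u1", "password": "s3cret"}
d = compute_diff(old, new)
# d.added == {"password"}, d.changed == {"endpoints"}, d.deleted == set()
```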
+
+
+# Base DataProvides and DataRequires
+
+
+class DataProvides(Object, ABC):
+    """Base provides-side of the data products relation."""
+
+    def __init__(self, charm: CharmBase, relation_name: str) -> None:
+        super().__init__(charm, relation_name)
+        self.charm = charm
+        self.local_app = self.charm.model.app
+        self.local_unit = self.charm.unit
+        self.relation_name = relation_name
+        self.framework.observe(
+            charm.on[relation_name].relation_changed,
+            self._on_relation_changed,
+        )
+
+    def _diff(self, event: RelationChangedEvent) -> Diff:
+        """Retrieves the diff of the data in the relation changed databag.
+
+        Args:
+            event: relation changed event.
+
+        Returns:
+            a Diff instance containing the added, deleted and changed
+                keys from the event relation databag.
+        """
+        return diff(event, self.local_app)
+
+    @abstractmethod
+    def _on_relation_changed(self, event: RelationChangedEvent) -> None:
+        """Event emitted when the relation data has changed."""
+        raise NotImplementedError
+
+    def fetch_relation_data(self) -> dict:
+        """Retrieves data from relation.
+
+        This function can be used to retrieve data from a relation
+        in the charm code when outside an event callback.
+
+        Returns:
+            a dict of the values stored in the relation data bag
+                for all relation instances (indexed by the relation id).
+        """
+        data = {}
+        for relation in self.relations:
+            data[relation.id] = {
+                key: value for key, value in relation.data[relation.app].items() if key != "data"
+            }
+        return data
+
+    def _update_relation_data(self, relation_id: int, data: dict) -> None:
+        """Updates a set of key-value pairs in the relation.
+
+        This function writes in the application data bag, therefore,
+        only the leader unit can call it.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            data: dict containing the key-value pairs
+                that should be updated in the relation.
+        """
+        if self.local_unit.is_leader():
+            relation = self.charm.model.get_relation(self.relation_name, relation_id)
+            relation.data[self.local_app].update(data)
+
+    @property
+    def relations(self) -> List[Relation]:
+        """The list of Relation instances associated with this relation_name."""
+        return list(self.charm.model.relations[self.relation_name])
+
+    def set_credentials(self, relation_id: int, username: str, password: str) -> None:
+        """Set credentials.
+
+        This function writes in the application data bag, therefore,
+        only the leader unit can call it.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            username: user that was created.
+            password: password of the created user.
+        """
+        self._update_relation_data(
+            relation_id,
+            {
+                "username": username,
+                "password": password,
+            },
+        )
+
+    def set_tls(self, relation_id: int, tls: str) -> None:
+        """Set whether TLS is enabled.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            tls: whether tls is enabled (True or False).
+        """
+        self._update_relation_data(relation_id, {"tls": tls})
+
+    def set_tls_ca(self, relation_id: int, tls_ca: str) -> None:
+        """Set the TLS CA in the application relation databag.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            tls_ca: TLS certification authority.
+        """
+        self._update_relation_data(relation_id, {"tls_ca": tls_ca})
+
+
+class DataRequires(Object, ABC):
+    """Requires-side of the relation."""
+
+    def __init__(
+        self,
+        charm,
+        relation_name: str,
+        extra_user_roles: str = None,
+    ):
+        """Manager of base client relations."""
+        super().__init__(charm, relation_name)
+        self.charm = charm
+        self.extra_user_roles = extra_user_roles
+        self.local_app = self.charm.model.app
+        self.local_unit = self.charm.unit
+        self.relation_name = relation_name
+        self.framework.observe(
+            self.charm.on[relation_name].relation_joined, self._on_relation_joined_event
+        )
+        self.framework.observe(
+            self.charm.on[relation_name].relation_changed, self._on_relation_changed_event
+        )
+
+    @abstractmethod
+    def _on_relation_joined_event(self, event: RelationJoinedEvent) -> None:
+        """Event emitted when the application joins the relation."""
+        raise NotImplementedError
+
+    @abstractmethod
+    def _on_relation_changed_event(self, event: RelationChangedEvent) -> None:
+        raise NotImplementedError
+
+    def fetch_relation_data(self) -> dict:
+        """Retrieves data from relation.
+
+        This function can be used to retrieve data from a relation
+        in the charm code when outside an event callback.
+        Function cannot be used in `*-relation-broken` events and will raise an exception.
+
+        Returns:
+            a dict of the values stored in the relation data bag
+                for all relation instances (indexed by the relation ID).
+        """
+        data = {}
+        for relation in self.relations:
+            data[relation.id] = {
+                key: value for key, value in relation.data[relation.app].items() if key != "data"
+            }
+        return data
+
+    def _update_relation_data(self, relation_id: int, data: dict) -> None:
+        """Updates a set of key-value pairs in the relation.
+
+        This function writes in the application data bag, therefore,
+        only the leader unit can call it.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            data: dict containing the key-value pairs
+                that should be updated in the relation.
+        """
+        if self.local_unit.is_leader():
+            relation = self.charm.model.get_relation(self.relation_name, relation_id)
+            relation.data[self.local_app].update(data)
+
+    def _diff(self, event: RelationChangedEvent) -> Diff:
+        """Retrieves the diff of the data in the relation changed databag.
+
+        Args:
+            event: relation changed event.
+
+        Returns:
+            a Diff instance containing the added, deleted and changed
+                keys from the event relation databag.
+        """
+        return diff(event, self.local_unit)
+
+    @property
+    def relations(self) -> List[Relation]:
+        """The list of Relation instances associated with this relation_name."""
+        return [
+            relation
+            for relation in self.charm.model.relations[self.relation_name]
+            if self._is_relation_active(relation)
+        ]
+
+    @staticmethod
+    def _is_relation_active(relation: Relation):
+        try:
+            _ = repr(relation.data)
+            return True
+        except RuntimeError:
+            return False
+
+    @staticmethod
+    def _is_resource_created_for_relation(relation: Relation):
+        return (
+            "username" in relation.data[relation.app] and "password" in relation.data[relation.app]
+        )
+
+    def is_resource_created(self, relation_id: Optional[int] = None) -> bool:
+        """Check if the resource has been created.
+
+        This function can be used to check if the Provider answered with data in the charm code
+        when outside an event callback.
+
+        Args:
+            relation_id (int, optional): When provided, the check is done only for the
+                relation with that ID; otherwise the check is done for all relations.
+
+        Returns:
+            True or False
+
+        Raises:
+            IndexError: If relation_id is provided but that relation does not exist
+        """
+        if relation_id is not None:
+            try:
+                relation = [relation for relation in self.relations if relation.id == relation_id][
+                    0
+                ]
+                return self._is_resource_created_for_relation(relation)
+            except IndexError:
+                raise IndexError(f"relation id {relation_id} cannot be accessed")
+        else:
+            return (
+                all(
+                    [
+                        self._is_resource_created_for_relation(relation)
+                        for relation in self.relations
+                    ]
+                )
+                if self.relations
+                else False
+            )
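The all-relations branch above reduces to: no relations means False, otherwise every relation must have both credentials. A compact stand-in with plain dicts for the application databags (the function name is illustrative):

```python
# Plain-dict sketch of the is_resource_created logic; each dict stands in
# for relation.data[relation.app].
def resource_created(databags: list) -> bool:
    if not databags:
        return False  # no relation yet, so nothing was created
    return all("username" in d and "password" in d for d in databags)

resource_created([{"username": "u", "password": "p"}])  # True
resource_created([])                                    # False: no relations
resource_created([{"username": "u"}])                   # False: password not set
```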
+
+
+# General events
+
+
+class ExtraRoleEvent(RelationEvent):
+    """Base class for data events."""
+
+    @property
+    def extra_user_roles(self) -> Optional[str]:
+        """Returns the extra user roles that were requested."""
+        return self.relation.data[self.relation.app].get("extra-user-roles")
+
+
+class AuthenticationEvent(RelationEvent):
+    """Base class for authentication fields for events."""
+
+    @property
+    def username(self) -> Optional[str]:
+        """Returns the created username."""
+        return self.relation.data[self.relation.app].get("username")
+
+    @property
+    def password(self) -> Optional[str]:
+        """Returns the password for the created user."""
+        return self.relation.data[self.relation.app].get("password")
+
+    @property
+    def tls(self) -> Optional[str]:
+        """Returns whether TLS is configured."""
+        return self.relation.data[self.relation.app].get("tls")
+
+    @property
+    def tls_ca(self) -> Optional[str]:
+        """Returns TLS CA."""
+        return self.relation.data[self.relation.app].get("tls-ca")
+
+
+# Database related events and fields
+
+
+class DatabaseProvidesEvent(RelationEvent):
+    """Base class for database events."""
+
+    @property
+    def database(self) -> Optional[str]:
+        """Returns the database that was requested."""
+        return self.relation.data[self.relation.app].get("database")
+
+
+class DatabaseRequestedEvent(DatabaseProvidesEvent, ExtraRoleEvent):
+    """Event emitted when a new database is requested for use on this relation."""
+
+
+class DatabaseProvidesEvents(CharmEvents):
+    """Database events.
+
+    This class defines the events that the database can emit.
+    """
+
+    database_requested = EventSource(DatabaseRequestedEvent)
+
+
+class DatabaseRequiresEvent(RelationEvent):
+    """Base class for database events."""
+
+    @property
+    def endpoints(self) -> Optional[str]:
+        """Returns a comma-separated list of read/write endpoints."""
+        return self.relation.data[self.relation.app].get("endpoints")
+
+    @property
+    def read_only_endpoints(self) -> Optional[str]:
+        """Returns a comma-separated list of read-only endpoints."""
+        return self.relation.data[self.relation.app].get("read-only-endpoints")
+
+    @property
+    def replset(self) -> Optional[str]:
+        """Returns the replicaset name.
+
+        MongoDB only.
+        """
+        return self.relation.data[self.relation.app].get("replset")
+
+    @property
+    def uris(self) -> Optional[str]:
+        """Returns the connection URIs.
+
+        MongoDB, Redis, OpenSearch.
+        """
+        return self.relation.data[self.relation.app].get("uris")
+
+    @property
+    def version(self) -> Optional[str]:
+        """Returns the version of the database.
+
+        Version as reported by the database daemon.
+        """
+        return self.relation.data[self.relation.app].get("version")
+
+
+class DatabaseCreatedEvent(AuthenticationEvent, DatabaseRequiresEvent):
+    """Event emitted when a new database is created for use on this relation."""
+
+
+class DatabaseEndpointsChangedEvent(AuthenticationEvent, DatabaseRequiresEvent):
+    """Event emitted when the read/write endpoints are changed."""
+
+
+class DatabaseReadOnlyEndpointsChangedEvent(AuthenticationEvent, DatabaseRequiresEvent):
+    """Event emitted when the read only endpoints are changed."""
+
+
+class DatabaseRequiresEvents(CharmEvents):
+    """Database events.
+
+    This class defines the events that the database can emit.
+    """
+
+    database_created = EventSource(DatabaseCreatedEvent)
+    endpoints_changed = EventSource(DatabaseEndpointsChangedEvent)
+    read_only_endpoints_changed = EventSource(DatabaseReadOnlyEndpointsChangedEvent)
+
+
+# Database Provides and Requires
+
+
+class DatabaseProvides(DataProvides):
+    """Provider-side of the database relations."""
+
+    on = DatabaseProvidesEvents()
+
+    def __init__(self, charm: CharmBase, relation_name: str) -> None:
+        super().__init__(charm, relation_name)
+
+    def _on_relation_changed(self, event: RelationChangedEvent) -> None:
+        """Event emitted when the relation has changed."""
+        # Only the leader should handle this event.
+        if not self.local_unit.is_leader():
+            return
+
+        # Check which data has changed to emit custom events.
+        diff = self._diff(event)
+
+        # Emit a database requested event if the setup key (database name and optional
+        # extra user roles) was added to the relation databag by the application.
+        if "database" in diff.added:
+            self.on.database_requested.emit(event.relation, app=event.app, unit=event.unit)
+
+    def set_endpoints(self, relation_id: int, connection_strings: str) -> None:
+        """Set database primary connections.
+
+        This function writes in the application data bag, therefore,
+        only the leader unit can call it.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            connection_strings: database hosts and ports comma separated list.
+        """
+        self._update_relation_data(relation_id, {"endpoints": connection_strings})
+
+    def set_read_only_endpoints(self, relation_id: int, connection_strings: str) -> None:
+        """Set database replicas connection strings.
+
+        This function writes in the application data bag, therefore,
+        only the leader unit can call it.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            connection_strings: database hosts and ports comma separated list.
+        """
+        self._update_relation_data(relation_id, {"read-only-endpoints": connection_strings})
+
+    def set_replset(self, relation_id: int, replset: str) -> None:
+        """Set replica set name in the application relation databag.
+
+        MongoDB only.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            replset: replica set name.
+        """
+        self._update_relation_data(relation_id, {"replset": replset})
+
+    def set_uris(self, relation_id: int, uris: str) -> None:
+        """Set the database connection URIs in the application relation databag.
+
+        MongoDB, Redis, and OpenSearch only.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            uris: connection URIs.
+        """
+        self._update_relation_data(relation_id, {"uris": uris})
+
+    def set_version(self, relation_id: int, version: str) -> None:
+        """Set the database version in the application relation databag.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            version: database version.
+        """
+        self._update_relation_data(relation_id, {"version": version})
+
+
+class DatabaseRequires(DataRequires):
+    """Requires-side of the database relation."""
+
+    on = DatabaseRequiresEvents()
+
+    def __init__(
+        self,
+        charm,
+        relation_name: str,
+        database_name: str,
+        extra_user_roles: Optional[str] = None,
+        relations_aliases: Optional[List[str]] = None,
+    ):
+        """Manager of database client relations."""
+        super().__init__(charm, relation_name, extra_user_roles)
+        self.database = database_name
+        self.relations_aliases = relations_aliases
+
+        # Define custom event names for each alias.
+        if relations_aliases:
+            # Ensure the number of aliases does not exceed the maximum
+            # of connections allowed in the specific relation.
+            relation_connection_limit = self.charm.meta.requires[relation_name].limit
+            if len(relations_aliases) != relation_connection_limit:
+                raise ValueError(
+                    f"The number of aliases must match the maximum number of connections allowed in the relation. "
+                    f"Expected {relation_connection_limit}, got {len(relations_aliases)}"
+                )
+
+            for relation_alias in relations_aliases:
+                self.on.define_event(f"{relation_alias}_database_created", DatabaseCreatedEvent)
+                self.on.define_event(
+                    f"{relation_alias}_endpoints_changed", DatabaseEndpointsChangedEvent
+                )
+                self.on.define_event(
+                    f"{relation_alias}_read_only_endpoints_changed",
+                    DatabaseReadOnlyEndpointsChangedEvent,
+                )
+
+    def _assign_relation_alias(self, relation_id: int) -> None:
+        """Assign an alias to a relation.
+
+        This function writes in the unit data bag.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+        """
+        # If no aliases were provided, return immediately.
+        if not self.relations_aliases:
+            return
+
+        # Return if an alias was already assigned to this relation
+        # (like when there are more than one unit joining the relation).
+        if (
+            self.charm.model.get_relation(self.relation_name, relation_id)
+            .data[self.local_unit]
+            .get("alias")
+        ):
+            return
+
+        # Retrieve the available aliases (the ones that weren't assigned to any relation).
+        available_aliases = self.relations_aliases[:]
+        for relation in self.charm.model.relations[self.relation_name]:
+            alias = relation.data[self.local_unit].get("alias")
+            if alias:
+                logger.debug("Alias %s was already assigned to relation %d", alias, relation.id)
+                available_aliases.remove(alias)
+
+        # Set the alias in the unit relation databag of the specific relation.
+        relation = self.charm.model.get_relation(self.relation_name, relation_id)
+        relation.data[self.local_unit].update({"alias": available_aliases[0]})
+
+    def _emit_aliased_event(self, event: RelationChangedEvent, event_name: str) -> None:
+        """Emit an aliased event to a particular relation if it has an alias.
+
+        Args:
+            event: the relation changed event that was received.
+            event_name: the name of the event to emit.
+        """
+        alias = self._get_relation_alias(event.relation.id)
+        if alias:
+            getattr(self.on, f"{alias}_{event_name}").emit(
+                event.relation, app=event.app, unit=event.unit
+            )
+
+    def _get_relation_alias(self, relation_id: int) -> Optional[str]:
+        """Returns the relation alias.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+
+        Returns:
+            the relation alias or None if the relation was not found.
+        """
+        for relation in self.charm.model.relations[self.relation_name]:
+            if relation.id == relation_id:
+                return relation.data[self.local_unit].get("alias")
+        return None
+
+    def _on_relation_joined_event(self, event: RelationJoinedEvent) -> None:
+        """Event emitted when the application joins the database relation."""
+        # If relations aliases were provided, assign one to the relation.
+        self._assign_relation_alias(event.relation.id)
+
+        # Sets both database and extra user roles in the relation
+        # if the roles are provided. Otherwise, sets only the database.
+        if self.extra_user_roles:
+            self._update_relation_data(
+                event.relation.id,
+                {
+                    "database": self.database,
+                    "extra-user-roles": self.extra_user_roles,
+                },
+            )
+        else:
+            self._update_relation_data(event.relation.id, {"database": self.database})
+
+    def _on_relation_changed_event(self, event: RelationChangedEvent) -> None:
+        """Event emitted when the database relation has changed."""
+        # Check which data has changed to emit custom events.
+        diff = self._diff(event)
+
+        # Check if the database is created
+        # (the database charm shared the credentials).
+        if "username" in diff.added and "password" in diff.added:
+            # Emit the default event (the one without an alias).
+            logger.info("database created at %s", datetime.now())
+            self.on.database_created.emit(event.relation, app=event.app, unit=event.unit)
+
+            # Emit the aliased event (if any).
+            self._emit_aliased_event(event, "database_created")
+
+            # To avoid unnecessary application restarts do not trigger
+            # "endpoints_changed" event if "database_created" is triggered.
+            return
+
+        # Emit an endpoints changed event if the database
+        # added or changed this info in the relation databag.
+        if "endpoints" in diff.added or "endpoints" in diff.changed:
+            # Emit the default event (the one without an alias).
+            logger.info("endpoints changed on %s", datetime.now())
+            self.on.endpoints_changed.emit(event.relation, app=event.app, unit=event.unit)
+
+            # Emit the aliased event (if any).
+            self._emit_aliased_event(event, "endpoints_changed")
+
+            # To avoid unnecessary application restarts do not trigger
+            # "read_only_endpoints_changed" event if "endpoints_changed" is triggered.
+            return
+
+        # Emit a read only endpoints changed event if the database
+        # added or changed this info in the relation databag.
+        if "read-only-endpoints" in diff.added or "read-only-endpoints" in diff.changed:
+            # Emit the default event (the one without an alias).
+            logger.info("read-only-endpoints changed on %s", datetime.now())
+            self.on.read_only_endpoints_changed.emit(
+                event.relation, app=event.app, unit=event.unit
+            )
+
+            # Emit the aliased event (if any).
+            self._emit_aliased_event(event, "read_only_endpoints_changed")
+
+
+# Kafka related events
+
+
+class KafkaProvidesEvent(RelationEvent):
+    """Base class for Kafka events."""
+
+    @property
+    def topic(self) -> Optional[str]:
+        """Returns the topic that was requested."""
+        return self.relation.data[self.relation.app].get("topic")
+
+
+class TopicRequestedEvent(KafkaProvidesEvent, ExtraRoleEvent):
+    """Event emitted when a new topic is requested for use on this relation."""
+
+
+class KafkaProvidesEvents(CharmEvents):
+    """Kafka events.
+
+    This class defines the events that Kafka can emit.
+    """
+
+    topic_requested = EventSource(TopicRequestedEvent)
+
+
+class KafkaRequiresEvent(RelationEvent):
+    """Base class for Kafka events."""
+
+    @property
+    def bootstrap_server(self) -> Optional[str]:
+        """Returns a comma-separated list of broker URIs."""
+        return self.relation.data[self.relation.app].get("endpoints")
+
+    @property
+    def consumer_group_prefix(self) -> Optional[str]:
+        """Returns the consumer-group-prefix."""
+        return self.relation.data[self.relation.app].get("consumer-group-prefix")
+
+    @property
+    def zookeeper_uris(self) -> Optional[str]:
+        """Returns a comma-separated list of ZooKeeper URIs."""
+        return self.relation.data[self.relation.app].get("zookeeper-uris")
+
+
+class TopicCreatedEvent(AuthenticationEvent, KafkaRequiresEvent):
+    """Event emitted when a new topic is created for use on this relation."""
+
+
+class BootstrapServerChangedEvent(AuthenticationEvent, KafkaRequiresEvent):
+    """Event emitted when the bootstrap server is changed."""
+
+
+class KafkaRequiresEvents(CharmEvents):
+    """Kafka events.
+
+    This class defines the events that Kafka can emit.
+    """
+
+    topic_created = EventSource(TopicCreatedEvent)
+    bootstrap_server_changed = EventSource(BootstrapServerChangedEvent)
+
+
+# Kafka Provides and Requires
+
+
+class KafkaProvides(DataProvides):
+    """Provider-side of the Kafka relation."""
+
+    on = KafkaProvidesEvents()
+
+    def __init__(self, charm: CharmBase, relation_name: str) -> None:
+        super().__init__(charm, relation_name)
+
+    def _on_relation_changed(self, event: RelationChangedEvent) -> None:
+        """Event emitted when the relation has changed."""
+        # Only the leader should handle this event.
+        if not self.local_unit.is_leader():
+            return
+
+        # Check which data has changed to emit custom events.
+        diff = self._diff(event)
+
+        # Emit a topic requested event if the setup key (topic name and optional
+        # extra user roles) was added to the relation databag by the application.
+        if "topic" in diff.added:
+            self.on.topic_requested.emit(event.relation, app=event.app, unit=event.unit)
+
+    def set_bootstrap_server(self, relation_id: int, bootstrap_server: str) -> None:
+        """Set the bootstrap server in the application relation databag.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            bootstrap_server: the bootstrap server address.
+        """
+        self._update_relation_data(relation_id, {"endpoints": bootstrap_server})
+
+    def set_consumer_group_prefix(self, relation_id: int, consumer_group_prefix: str) -> None:
+        """Set the consumer group prefix in the application relation databag.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            consumer_group_prefix: the consumer group prefix string.
+        """
+        self._update_relation_data(relation_id, {"consumer-group-prefix": consumer_group_prefix})
+
+    def set_zookeeper_uris(self, relation_id: int, zookeeper_uris: str) -> None:
+        """Set the zookeeper uris in the application relation databag.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            zookeeper_uris: comma-separated list of ZooKeeper server URIs.
+        """
+        self._update_relation_data(relation_id, {"zookeeper-uris": zookeeper_uris})
+
+
+class KafkaRequires(DataRequires):
+    """Requires-side of the Kafka relation."""
+
+    on = KafkaRequiresEvents()
+
+    def __init__(self, charm, relation_name: str, topic: str, extra_user_roles: Optional[str] = None):
+        """Manager of Kafka client relations."""
+        super().__init__(charm, relation_name, extra_user_roles)
+        self.charm = charm
+        self.topic = topic
+
+    def _on_relation_joined_event(self, event: RelationJoinedEvent) -> None:
+        """Event emitted when the application joins the Kafka relation."""
+        # Sets both topic and extra user roles in the relation
+        # if the roles are provided. Otherwise, sets only the topic.
+        self._update_relation_data(
+            event.relation.id,
+            {
+                "topic": self.topic,
+                "extra-user-roles": self.extra_user_roles,
+            }
+            if self.extra_user_roles is not None
+            else {"topic": self.topic},
+        )
+
+    def _on_relation_changed_event(self, event: RelationChangedEvent) -> None:
+        """Event emitted when the Kafka relation has changed."""
+        # Check which data has changed to emit custom events.
+        diff = self._diff(event)
+
+        # Check if the topic is created
+        # (the Kafka charm shared the credentials).
+        if "username" in diff.added and "password" in diff.added:
+            # Emit the default event (the one without an alias).
+            logger.info("topic created at %s", datetime.now())
+            self.on.topic_created.emit(event.relation, app=event.app, unit=event.unit)
+
+            # To avoid unnecessary application restarts do not trigger
+            # "endpoints_changed" event if "topic_created" is triggered.
+            return
+
+        # Emit a bootstrap-server changed event if Kafka
+        # added or changed the endpoints info in the relation databag.
+        if "endpoints" in diff.added or "endpoints" in diff.changed:
+            # Emit the default event (the one without an alias).
+            logger.info("endpoints changed on %s", datetime.now())
+            self.on.bootstrap_server_changed.emit(
+                event.relation, app=event.app, unit=event.unit
+            )
+            return
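The requires-side handlers above gate event emission on a diff of the relation databag: `database_created` (or `topic_created`) fires only when both credentials newly appear, and in that case the endpoint events are suppressed for the same hook to avoid needless restarts. A minimal, dependency-free sketch of that precedence using plain dictionaries — the `Diff` buckets `added`/`changed`/`deleted` are an assumption mirroring the library's `_diff` helper, which is defined outside this hunk:

```python
from collections import namedtuple

# Assumed shape of the library's diff result: keys newly added, keys
# whose value changed, and keys removed from the databag snapshot.
Diff = namedtuple("Diff", ["added", "changed", "deleted"])


def compute_diff(old: dict, new: dict) -> Diff:
    """Compare two snapshots of a relation databag."""
    added = {k for k in new if k not in old}
    changed = {k for k in new if k in old and new[k] != old[k]}
    deleted = {k for k in old if k not in new}
    return Diff(added, changed, deleted)


def events_to_emit(diff: Diff) -> list:
    """Mirror the requires-side precedence: credentials first, then
    read/write endpoints, then read-only endpoints; only one fires."""
    if "username" in diff.added and "password" in diff.added:
        return ["database_created"]  # endpoint events suppressed this hook
    if "endpoints" in diff.added or "endpoints" in diff.changed:
        return ["endpoints_changed"]
    if "read-only-endpoints" in diff.added or "read-only-endpoints" in diff.changed:
        return ["read_only_endpoints_changed"]
    return []
```

Feeding successive databag snapshots through `events_to_emit(compute_diff(...))` reproduces the one-event-per-hook behaviour of `_on_relation_changed_event`.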
index ee2f2f9..5bd1236 100644 (file)
@@ -58,7 +58,7 @@ requires:
     interface: kafka
     limit: 1
   mongodb:
-    interface: mongodb
+    interface: mongodb_client
     limit: 1
   keystone:
     interface: keystone
index d0d4a5b..16cf0f4 100644 (file)
@@ -50,7 +50,3 @@ ignore = ["W503", "E501", "D107"]
 # D100, D101, D102, D103: Ignore missing docstrings in tests
 per-file-ignores = ["tests/*:D100,D101,D102,D103,D104"]
 docstring-convention = "google"
-# Check for properly formatted copyright header in each file
-copyright-check = "True"
-copyright-author = "Canonical Ltd."
-copyright-regexp = "Copyright\\s\\d{4}([-,]\\d{4})*\\s+%(author)s"
index cb303a3..398d4ad 100644 (file)
@@ -17,7 +17,7 @@
 #
 # To get in touch with the maintainers, please contact:
 # osm-charmers@lists.launchpad.net
-ops >= 1.2.0
+ops < 2.2
 lightkube
 lightkube-models
 # git+https://github.com/charmed-osm/config-validator/
index 176f896..db72dfe 100755 (executable)
@@ -22,7 +22,7 @@
 #
 # Learn more at: https://juju.is/docs/sdk
 
-"""OSM NBI charm.
+"""OSM MON charm.
 
 See more: https://charmhub.io/osm
 """
@@ -30,6 +30,7 @@ See more: https://charmhub.io/osm
 import logging
 from typing import Any, Dict
 
+from charms.data_platform_libs.v0.data_interfaces import DatabaseRequires
 from charms.kafka_k8s.v0.kafka import KafkaRequires, _KafkaAvailableEvent
 from charms.observability_libs.v1.kubernetes_service_patch import KubernetesServicePatch
 from charms.osm_libs.v0.utils import (
@@ -46,7 +47,7 @@ from ops.framework import EventSource, StoredState
 from ops.main import main
 from ops.model import ActiveStatus, Container
 
-from legacy_interfaces import KeystoneClient, MongoClient, PrometheusClient
+from legacy_interfaces import KeystoneClient, PrometheusClient
 
 HOSTPATHS = [
     HostPath(
@@ -85,7 +86,7 @@ class OsmMonCharm(CharmBase):
     def __init__(self, *args):
         super().__init__(*args)
         self.kafka = KafkaRequires(self)
-        self.mongodb_client = MongoClient(self, "mongodb")
+        self.mongodb_client = DatabaseRequires(self, "mongodb", database_name="osm")
         self.prometheus_client = PrometheusClient(self, "prometheus")
         self.keystone_client = KeystoneClient(self, "keystone")
         self.vca = VcaRequires(self)
@@ -151,9 +152,7 @@ class OsmMonCharm(CharmBase):
     def _on_get_debug_mode_information_action(self, event: ActionEvent) -> None:
         """Handler for the get-debug-mode-information action event."""
         if not self.debug_mode.started:
-            event.fail(
-                "debug-mode has not started. Hint: juju config mon debug-mode=true"
-            )
+            event.fail("debug-mode has not started. Hint: juju config mon debug-mode=true")
             return
 
         debug_info = {
@@ -176,20 +175,24 @@ class OsmMonCharm(CharmBase):
             self.on.vca_data_changed: self._on_config_changed,
             self.on.kafka_available: self._on_config_changed,
             self.on["kafka"].relation_broken: self._on_required_relation_broken,
+            self.mongodb_client.on.database_created: self._on_config_changed,
+            self.on["mongodb"].relation_broken: self._on_required_relation_broken,
             # Action events
             self.on.get_debug_mode_information_action: self._on_get_debug_mode_information_action,
         }
-        for relation in [
-            self.on[rel_name] for rel_name in ["mongodb", "prometheus", "keystone"]
-        ]:
+        for relation in [self.on[rel_name] for rel_name in ["prometheus", "keystone"]]:
             event_handler_mapping[relation.relation_changed] = self._on_config_changed
-            event_handler_mapping[
-                relation.relation_broken
-            ] = self._on_required_relation_broken
+            event_handler_mapping[relation.relation_broken] = self._on_required_relation_broken
 
         for event, handler in event_handler_mapping.items():
             self.framework.observe(event, handler)
 
+    def _is_database_available(self) -> bool:
+        try:
+            return self.mongodb_client.is_resource_created()
+        except KeyError:
+            return False
+
     def _validate_config(self) -> None:
         """Validate charm configuration.
 
@@ -209,7 +212,7 @@ class OsmMonCharm(CharmBase):
 
         if not self.kafka.host or not self.kafka.port:
             missing_relations.append("kafka")
-        if self.mongodb_client.is_missing_data_in_unit():
+        if not self._is_database_available():
             missing_relations.append("mongodb")
         if self.prometheus_client.is_missing_data_in_app():
             missing_relations.append("prometheus")
@@ -219,9 +222,7 @@ class OsmMonCharm(CharmBase):
         if missing_relations:
             relations_str = ", ".join(missing_relations)
             one_relation_missing = len(missing_relations) == 1
-            error_msg = (
-                f'need {relations_str} relation{"" if one_relation_missing else "s"}'
-            )
+            error_msg = f'need {relations_str} relation{"" if one_relation_missing else "s"}'
             logger.warning(error_msg)
             raise CharmError(error_msg)
 
@@ -236,9 +237,7 @@ class OsmMonCharm(CharmBase):
         environment = {
             # General configuration
             "OSMMON_GLOBAL_LOGLEVEL": self.config["log-level"],
-            "OSMMON_OPENSTACK_DEFAULT_GRANULARITY": self.config[
-                "openstack-default-granularity"
-            ],
+            "OSMMON_OPENSTACK_DEFAULT_GRANULARITY": self.config["openstack-default-granularity"],
             "OSMMON_GLOBAL_REQUEST_TIMEOUT": self.config["global-request-timeout"],
             "OSMMON_COLLECTOR_INTERVAL": self.config["collector-interval"],
             "OSMMON_EVALUATOR_INTERVAL": self.config["evaluator-interval"],
@@ -249,7 +248,7 @@ class OsmMonCharm(CharmBase):
             "OSMMON_MESSAGE_PORT": self.kafka.port,
             # Database configuration
             "OSMMON_DATABASE_DRIVER": "mongo",
-            "OSMMON_DATABASE_URI": self.mongodb_client.connection_string,
+            "OSMMON_DATABASE_URI": self._get_mongodb_uri(),
             "OSMMON_DATABASE_COMMONKEY": self.config["database-commonkey"],
             # Prometheus/grafana configuration
             "OSMMON_PROMETHEUS_URL": f"http://{self.prometheus_client.hostname}:{self.prometheus_client.port}",
@@ -288,6 +287,9 @@ class OsmMonCharm(CharmBase):
             },
         }
 
+    def _get_mongodb_uri(self):
+        return list(self.mongodb_client.fetch_relation_data().values())[0]["uris"]
+
     def _patch_k8s_service(self) -> None:
         port = ServicePort(SERVICE_PORT, name=f"{self.app.name}")
         self.service_patcher = KubernetesServicePatch(self, [port])
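`_get_mongodb_uri` above indexes straight into the first databag returned by `fetch_relation_data()`, so it raises until MongoDB has shared its data; the charm only calls it after the availability check has passed. A hedged, dependency-free sketch of that lookup — the return shape of `fetch_relation_data()` (a mapping of relation id to databag dict) is an assumption based on how the charm indexes it:

```python
def get_mongodb_uri(relation_data: dict) -> str:
    """Return the first shared connection URI from
    fetch_relation_data()-style output: {relation_id: databag_dict}.
    Unlike the one-liner above, it fails with a clear message instead
    of an opaque KeyError/IndexError when no databag has "uris" yet."""
    for databag in relation_data.values():
        if "uris" in databag:
            return databag["uris"]
    raise LookupError("mongodb relation has not shared a connection URI yet")
```

Using `next(iter(...))` on an empty mapping or a missing `"uris"` key is exactly the failure mode `_is_database_available` guards against in the charm.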
diff --git a/installers/charm/osm-mon/tests/integration/test_charm.py b/installers/charm/osm-mon/tests/integration/test_charm.py
new file mode 100644 (file)
index 0000000..c5807e9
--- /dev/null
@@ -0,0 +1,209 @@
+#!/usr/bin/env python3
+# Copyright 2022 Canonical Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+#
+# For those usages not covered by the Apache License, Version 2.0 please
+# contact: legal@canonical.com
+#
+# To get in touch with the maintainers, please contact:
+# osm-charmers@lists.launchpad.net
+#
+# Learn more about testing at: https://juju.is/docs/sdk/testing
+
+import asyncio
+import logging
+import shlex
+from pathlib import Path
+
+import pytest
+import yaml
+from pytest_operator.plugin import OpsTest
+
+logger = logging.getLogger(__name__)
+
+METADATA = yaml.safe_load(Path("./metadata.yaml").read_text())
+MON_APP = METADATA["name"]
+KAFKA_CHARM = "kafka-k8s"
+KAFKA_APP = "kafka"
+KEYSTONE_CHARM = "osm-keystone"
+KEYSTONE_APP = "keystone"
+MARIADB_CHARM = "charmed-osm-mariadb-k8s"
+MARIADB_APP = "mariadb"
+MONGO_DB_CHARM = "mongodb-k8s"
+MONGO_DB_APP = "mongodb"
+PROMETHEUS_CHARM = "osm-prometheus"
+PROMETHEUS_APP = "prometheus"
+ZOOKEEPER_CHARM = "zookeeper-k8s"
+ZOOKEEPER_APP = "zookeeper"
+VCA_CHARM = "osm-vca-integrator"
+VCA_APP = "vca"
+APPS = [KAFKA_APP, ZOOKEEPER_APP, KEYSTONE_APP, MONGO_DB_APP, MARIADB_APP, PROMETHEUS_APP, MON_APP]
+
+
+@pytest.mark.abort_on_fail
+async def test_mon_is_deployed(ops_test: OpsTest):
+    charm = await ops_test.build_charm(".")
+    resources = {"mon-image": METADATA["resources"]["mon-image"]["upstream-source"]}
+
+    await asyncio.gather(
+        ops_test.model.deploy(
+            charm, resources=resources, application_name=MON_APP, series="focal"
+        ),
+        ops_test.model.deploy(KAFKA_CHARM, application_name=KAFKA_APP, channel="stable"),
+        ops_test.model.deploy(MONGO_DB_CHARM, application_name=MONGO_DB_APP, channel="edge"),
+        ops_test.model.deploy(MARIADB_CHARM, application_name=MARIADB_APP, channel="stable"),
+        ops_test.model.deploy(PROMETHEUS_CHARM, application_name=PROMETHEUS_APP, channel="stable"),
+        ops_test.model.deploy(ZOOKEEPER_CHARM, application_name=ZOOKEEPER_APP, channel="stable"),
+    )
+    cmd = f"juju deploy {KEYSTONE_CHARM} {KEYSTONE_APP} --resource keystone-image=opensourcemano/keystone:12"
+    await ops_test.run(*shlex.split(cmd), check=True)
+
+    async with ops_test.fast_forward():
+        await ops_test.model.wait_for_idle(
+            apps=APPS,
+        )
+    assert ops_test.model.applications[MON_APP].status == "blocked"
+    unit = ops_test.model.applications[MON_APP].units[0]
+    assert unit.workload_status_message == "need kafka, mongodb, prometheus, keystone relations"
+
+    logger.info("Adding relations for other components")
+    await ops_test.model.add_relation(KAFKA_APP, ZOOKEEPER_APP)
+    await ops_test.model.add_relation(MARIADB_APP, KEYSTONE_APP)
+
+    logger.info("Adding relations")
+    await ops_test.model.add_relation(MON_APP, MONGO_DB_APP)
+    await ops_test.model.add_relation(MON_APP, KAFKA_APP)
+    await ops_test.model.add_relation(MON_APP, KEYSTONE_APP)
+    await ops_test.model.add_relation(MON_APP, PROMETHEUS_APP)
+
+    async with ops_test.fast_forward():
+        await ops_test.model.wait_for_idle(
+            apps=APPS,
+            status="active",
+        )
+
+
+@pytest.mark.abort_on_fail
+async def test_mon_scales_up(ops_test: OpsTest):
+    logger.info("Scaling up osm-mon")
+    expected_units = 3
+    assert len(ops_test.model.applications[MON_APP].units) == 1
+    await ops_test.model.applications[MON_APP].scale(expected_units)
+    async with ops_test.fast_forward():
+        await ops_test.model.wait_for_idle(
+            apps=[MON_APP], status="active", wait_for_exact_units=expected_units
+        )
+
+
+@pytest.mark.abort_on_fail
+@pytest.mark.parametrize(
+    "relation_to_remove", [KAFKA_APP, MONGO_DB_APP, PROMETHEUS_APP, KEYSTONE_APP]
+)
+async def test_mon_blocks_without_relation(ops_test: OpsTest, relation_to_remove):
+    logger.info("Removing relation: %s", relation_to_remove)
+    # The MongoDB charm's endpoint is named "database"
+    local_relation = relation_to_remove
+    if relation_to_remove == MONGO_DB_APP:
+        local_relation = "database"
+    await ops_test.model.applications[relation_to_remove].remove_relation(local_relation, MON_APP)
+    async with ops_test.fast_forward():
+        await ops_test.model.wait_for_idle(apps=[MON_APP])
+    assert ops_test.model.applications[MON_APP].status == "blocked"
+    for unit in ops_test.model.applications[MON_APP].units:
+        assert unit.workload_status_message == f"need {relation_to_remove} relation"
+    await ops_test.model.add_relation(MON_APP, relation_to_remove)
+    async with ops_test.fast_forward():
+        await ops_test.model.wait_for_idle(
+            apps=APPS,
+            status="active",
+        )
+
+
+@pytest.mark.abort_on_fail
+async def test_mon_action_debug_mode_disabled(ops_test: OpsTest):
+    async with ops_test.fast_forward():
+        await ops_test.model.wait_for_idle(
+            apps=APPS,
+            status="active",
+        )
+    logger.info("Running action 'get-debug-mode-information'")
+    action = (
+        await ops_test.model.applications[MON_APP]
+        .units[0]
+        .run_action("get-debug-mode-information")
+    )
+    async with ops_test.fast_forward():
+        await ops_test.model.wait_for_idle(apps=[MON_APP])
+    status = await ops_test.model.get_action_status(uuid_or_prefix=action.entity_id)
+    assert status[action.entity_id] == "failed"
+
+
+@pytest.mark.abort_on_fail
+async def test_mon_action_debug_mode_enabled(ops_test: OpsTest):
+    await ops_test.model.applications[MON_APP].set_config({"debug-mode": "true"})
+    async with ops_test.fast_forward():
+        await ops_test.model.wait_for_idle(
+            apps=APPS,
+            status="active",
+        )
+    logger.info("Running action 'get-debug-mode-information'")
+    # list of units is not ordered
+    unit = list(
+        filter(
+            lambda x: (x.entity_id == f"{MON_APP}/0"), ops_test.model.applications[MON_APP].units
+        )
+    )[0]
+    action = await unit.run_action("get-debug-mode-information")
+    async with ops_test.fast_forward():
+        await ops_test.model.wait_for_idle(apps=[MON_APP])
+    status = await ops_test.model.get_action_status(uuid_or_prefix=action.entity_id)
+    message = await ops_test.model.get_action_output(action_uuid=action.entity_id)
+    assert status[action.entity_id] == "completed"
+    assert "command" in message
+    assert "password" in message
+
+
+@pytest.mark.abort_on_fail
+async def test_mon_integration_vca(ops_test: OpsTest):
+    await asyncio.gather(
+        ops_test.model.deploy(VCA_CHARM, application_name=VCA_APP, channel="beta"),
+    )
+    async with ops_test.fast_forward():
+        await ops_test.model.wait_for_idle(
+            apps=[VCA_APP],
+        )
+    controllers = (Path.home() / ".local/share/juju/controllers.yaml").read_text()
+    accounts = (Path.home() / ".local/share/juju/accounts.yaml").read_text()
+    public_key = (Path.home() / ".local/share/juju/ssh/juju_id_rsa.pub").read_text()
+    await ops_test.model.applications[VCA_APP].set_config(
+        {
+            "controllers": controllers,
+            "accounts": accounts,
+            "public-key": public_key,
+            "k8s-cloud": "microk8s",
+        }
+    )
+    async with ops_test.fast_forward():
+        await ops_test.model.wait_for_idle(
+            apps=APPS + [VCA_APP],
+            status="active",
+        )
+    await ops_test.model.add_relation(MON_APP, VCA_APP)
+    async with ops_test.fast_forward():
+        await ops_test.model.wait_for_idle(
+            apps=APPS + [VCA_APP],
+            status="active",
+        )
index 3ea173a..33598fe 100644 (file)
@@ -37,6 +37,7 @@ def harness(mocker: MockerFixture):
     mocker.patch("charm.KubernetesServicePatch", lambda x, y: None)
     harness = Harness(OsmMonCharm)
     harness.begin()
+    harness.container_pebble_ready(container_name)
     yield harness
     harness.cleanup()
 
@@ -71,19 +72,21 @@ def _add_relations(harness: Harness):
     relation_id = harness.add_relation("mongodb", "mongodb")
     harness.add_relation_unit(relation_id, "mongodb/0")
     harness.update_relation_data(
-        relation_id, "mongodb/0", {"connection_string": "mongodb://:1234"}
+        relation_id,
+        "mongodb",
+        {"uris": "mongodb://:1234", "username": "user", "password": "password"},
     )
     relation_ids.append(relation_id)
     # Add kafka relation
     relation_id = harness.add_relation("kafka", "kafka")
     harness.add_relation_unit(relation_id, "kafka/0")
-    harness.update_relation_data(relation_id, "kafka", {"host": "kafka", "port": 9092})
+    harness.update_relation_data(relation_id, "kafka", {"host": "kafka", "port": "9092"})
     relation_ids.append(relation_id)
     # Add prometheus relation
     relation_id = harness.add_relation("prometheus", "prometheus")
     harness.add_relation_unit(relation_id, "prometheus/0")
     harness.update_relation_data(
-        relation_id, "prometheus", {"hostname": "prometheus", "port": 9090}
+        relation_id, "prometheus", {"hostname": "prometheus", "port": "9090"}
     )
     relation_ids.append(relation_id)
     # Add keystone relation
index 56c095b..64bab10 100644 (file)
@@ -21,7 +21,7 @@
 [tox]
 skipsdist=True
 skip_missing_interpreters = True
-envlist = lint, unit
+envlist = lint, unit, integration
 
 [vars]
 src_path = {toxinidir}/src/
@@ -29,6 +29,7 @@ tst_path = {toxinidir}/tests/
 all_path = {[vars]src_path} {[vars]tst_path}
 
 [testenv]
+basepython = python3.8
 setenv =
   PYTHONPATH = {toxinidir}:{toxinidir}/lib:{[vars]src_path}
   PYTHONBREAKPOINT=ipdb.set_trace
@@ -53,14 +54,13 @@ deps =
     black
     flake8
     flake8-docstrings
-    flake8-copyright
     flake8-builtins
     pyproject-flake8
     pep8-naming
     isort
     codespell
 commands =
-    codespell {toxinidir}/. --skip {toxinidir}/.git --skip {toxinidir}/.tox \
+    codespell {toxinidir} --skip {toxinidir}/.git --skip {toxinidir}/.tox \
       --skip {toxinidir}/build --skip {toxinidir}/lib --skip {toxinidir}/venv \
       --skip {toxinidir}/.mypy_cache --skip {toxinidir}/icon.svg
     # pflake8 wrapper supports config from pyproject.toml
@@ -85,8 +85,8 @@ commands =
 description = Run integration tests
 deps =
     pytest
-    juju
+    juju<3
     pytest-operator
     -r{toxinidir}/requirements.txt
 commands =
-    pytest -v --tb native --ignore={[vars]tst_path}unit --log-cli-level=INFO -s {posargs}
+    pytest -v --tb native --ignore={[vars]tst_path}unit --log-cli-level=INFO -s {posargs} --cloud microk8s
index cd049ec..85e637a 100644 (file)
@@ -72,14 +72,14 @@ options:
     type: boolean
     description: |
       Great for OSM Developers! (Not recommended for production deployments)
-        
+
       This action activates the Debug Mode, which sets up the container to be ready for debugging.
       As part of the setup, SSH is enabled and a VSCode workspace file is automatically populated.
 
       After enabling the debug-mode, execute the following command to get the information you need
       to start debugging:
-        `juju run-action get-debug-mode-information <unit name> --wait`
-      
+        `juju run-action <unit name> get-debug-mode-information --wait`
+
       The previous command returns the command you need to execute, and the SSH password that was set.
 
       See also:
@@ -96,7 +96,7 @@ options:
         $ git clone "https://osm.etsi.org/gerrit/osm/NBI" /home/ubuntu/NBI
         $ juju config nbi nbi-hostpath=/home/ubuntu/NBI
 
-      This configuration only applies if option `debug-mode` is set to true. 
+      This configuration only applies if option `debug-mode` is set to true.
 
   common-hostpath:
     type: string
@@ -108,4 +108,4 @@ options:
         $ git clone "https://osm.etsi.org/gerrit/osm/common" /home/ubuntu/common
         $ juju config nbi common-hostpath=/home/ubuntu/common
 
-      This configuration only applies if option `debug-mode` is set to true. 
+      This configuration only applies if option `debug-mode` is set to true.
diff --git a/installers/charm/osm-nbi/lib/charms/data_platform_libs/v0/data_interfaces.py b/installers/charm/osm-nbi/lib/charms/data_platform_libs/v0/data_interfaces.py
new file mode 100644 (file)
index 0000000..b3da5aa
--- /dev/null
@@ -0,0 +1,1130 @@
+# Copyright 2023 Canonical Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Library to manage the relation for the data-platform products.
+
+This library contains the Requires and Provides classes for handling the relation
+between an application and multiple managed applications supported by the data team:
+MySQL, Postgresql, MongoDB, Redis, and Kafka.
+
+### Database (MySQL, Postgresql, MongoDB, and Redis)
+
+#### Requires Charm
+This library is a uniform interface to a selection of common database
+metadata, with added custom events that add convenience to database management,
+and methods to consume the application related data.
+
+
+The following is an example of using the DatabaseCreatedEvent in the context of the
+application charm code:
+
+```python
+
+from charms.data_platform_libs.v0.data_interfaces import (
+    DatabaseCreatedEvent,
+    DatabaseRequires,
+)
+
+class ApplicationCharm(CharmBase):
+    # Application charm that connects to database charms.
+
+    def __init__(self, *args):
+        super().__init__(*args)
+
+        # Charm events defined in the database requires charm library.
+        self.database = DatabaseRequires(self, relation_name="database", database_name="database")
+        self.framework.observe(self.database.on.database_created, self._on_database_created)
+
+    def _on_database_created(self, event: DatabaseCreatedEvent) -> None:
+        # Handle the created database
+
+        # Create configuration file for app
+        config_file = self._render_app_config_file(
+            event.username,
+            event.password,
+            event.endpoints,
+        )
+
+        # Start application with rendered configuration
+        self._start_application(config_file)
+
+        # Set active status
+        self.unit.status = ActiveStatus("received database credentials")
+```
+
+As shown above, the library provides some custom events to handle specific situations,
+which are listed below:
+
+-  database_created: event emitted when the requested database is created.
+-  endpoints_changed: event emitted when the read/write endpoints of the database have changed.
+-  read_only_endpoints_changed: event emitted when the read-only endpoints of the database
+  have changed. Event is not triggered if read/write endpoints changed too.
+
+If you need to connect multiple database clusters to the same relation endpoint,
+the application charm can implement the same code as if it connected to only
+one database cluster (as in the code example above).
+
+To differentiate multiple clusters connected to the same relation endpoint,
+the application charm can use the name of the remote application:
+
+```python
+
+def _on_database_created(self, event: DatabaseCreatedEvent) -> None:
+    # Get the remote app name of the cluster that triggered this event
+    cluster = event.relation.app.name
+```
+
+It is also possible to provide an alias for each different database cluster/relation.
+
+So, it is possible to differentiate the clusters in two ways.
+The first is to use the remote application name, i.e., `event.relation.app.name`, as above.
+
+The second way is to use a different event handler for each cluster's events.
+The implementation would look like the following code:
+
+```python
+
+from charms.data_platform_libs.v0.data_interfaces import (
+    DatabaseCreatedEvent,
+    DatabaseRequires,
+)
+
+class ApplicationCharm(CharmBase):
+    # Application charm that connects to database charms.
+
+    def __init__(self, *args):
+        super().__init__(*args)
+
+        # Define the cluster aliases and one handler for each cluster database created event.
+        self.database = DatabaseRequires(
+            self,
+            relation_name="database",
+            database_name="database",
+            relations_aliases=["cluster1", "cluster2"],
+        )
+        self.framework.observe(
+            self.database.on.cluster1_database_created, self._on_cluster1_database_created
+        )
+        self.framework.observe(
+            self.database.on.cluster2_database_created, self._on_cluster2_database_created
+        )
+
+    def _on_cluster1_database_created(self, event: DatabaseCreatedEvent) -> None:
+        # Handle the created database on the cluster named cluster1
+
+        # Create configuration file for app
+        config_file = self._render_app_config_file(
+            event.username,
+            event.password,
+            event.endpoints,
+        )
+        ...
+
+    def _on_cluster2_database_created(self, event: DatabaseCreatedEvent) -> None:
+        # Handle the created database on the cluster named cluster2
+
+        # Create configuration file for app
+        config_file = self._render_app_config_file(
+            event.username,
+            event.password,
+            event.endpoints,
+        )
+        ...
+
+```
+
+#### Provider Charm
+
+The following is an example of using the DatabaseRequestedEvent in the context of the
+database charm code:
+
+```python
+from charms.data_platform_libs.v0.data_interfaces import DatabaseProvides
+
+class SampleCharm(CharmBase):
+
+    def __init__(self, *args):
+        super().__init__(*args)
+        # Charm events defined in the database provides charm library.
+        self.provided_database = DatabaseProvides(self, relation_name="database")
+        self.framework.observe(self.provided_database.on.database_requested,
+            self._on_database_requested)
+        # Database generic helper
+        self.database = DatabaseHelper()
+
+    def _on_database_requested(self, event: DatabaseRequestedEvent) -> None:
+        # Handle the event triggered by a new database requested in the relation
+        # Retrieve the database name using the charm library.
+        db_name = event.database
+        # generate a new user credential
+        username = self.database.generate_user()
+        password = self.database.generate_password()
+        # set the credentials for the relation
+        self.provided_database.set_credentials(event.relation.id, username, password)
+        # set other connection variables for the relation, for example TLS
+        self.provided_database.set_tls(event.relation.id, "False")
+```
+As shown above, the library provides a custom event (database_requested) to handle
+the situation when an application charm requests a new database to be created.
+It is preferred to subscribe to this event instead of the relation changed event, to avoid
+creating a new database when information other than a database name is
+exchanged in the relation databag.
+
+### Kafka
+
+This library is the interface to use and interact with the Kafka charm. This library contains
+custom events that add convenience to manage Kafka, and provides methods to consume the
+application related data.
+
+#### Requirer Charm
+
+```python
+
+from charms.data_platform_libs.v0.data_interfaces import (
+    BootstrapServerChangedEvent,
+    KafkaRequires,
+    TopicCreatedEvent,
+)
+
+class ApplicationCharm(CharmBase):
+
+    def __init__(self, *args):
+        super().__init__(*args)
+        self.kafka = KafkaRequires(self, "kafka_client", "test-topic")
+        self.framework.observe(
+            self.kafka.on.bootstrap_server_changed, self._on_kafka_bootstrap_server_changed
+        )
+        self.framework.observe(
+            self.kafka.on.topic_created, self._on_kafka_topic_created
+        )
+
+    def _on_kafka_bootstrap_server_changed(self, event: BootstrapServerChangedEvent):
+        # Event triggered when a bootstrap server was changed for this application
+
+        new_bootstrap_server = event.bootstrap_server
+        ...
+
+    def _on_kafka_topic_created(self, event: TopicCreatedEvent):
+        # Event triggered when a topic was created for this application
+        username = event.username
+        password = event.password
+        tls = event.tls
+        tls_ca = event.tls_ca
+        bootstrap_server = event.bootstrap_server
+        consumer_group_prefix = event.consumer_group_prefix
+        zookeeper_uris = event.zookeeper_uris
+        ...
+
+```
+
+As shown above, the library provides some custom events to handle specific situations,
+which are listed below:
+
+- topic_created: event emitted when the requested topic is created.
+- bootstrap_server_changed: event emitted when the bootstrap server has changed.
+- credential_changed: event emitted when the Kafka credentials have changed.
+
+#### Provider Charm
+
+Following the previous example, this is an example of the provider charm.
+
+```python
+from charms.data_platform_libs.v0.data_interfaces import (
+    KafkaProvides,
+    TopicRequestedEvent,
+)
+
+class SampleCharm(CharmBase):
+
+    def __init__(self, *args):
+        super().__init__(*args)
+
+        # Default charm events.
+        self.framework.observe(self.on.start, self._on_start)
+
+        # Charm events defined in the Kafka Provides charm library.
+        self.kafka_provider = KafkaProvides(self, relation_name="kafka_client")
+        self.framework.observe(self.kafka_provider.on.topic_requested, self._on_topic_requested)
+        # Kafka generic helper
+        self.kafka = KafkaHelper()
+
+    def _on_topic_requested(self, event: TopicRequestedEvent):
+        # Handle the on_topic_requested event.
+
+        topic = event.topic
+        relation_id = event.relation.id
+        # generate new user credentials via the generic helper
+        username = self.kafka.generate_user()
+        password = self.kafka.generate_password()
+        # set connection info in the relation databag
+        self.kafka_provider.set_bootstrap_server(relation_id, self.kafka.get_bootstrap_server())
+        self.kafka_provider.set_credentials(relation_id, username=username, password=password)
+        self.kafka_provider.set_consumer_group_prefix(relation_id, ...)
+        self.kafka_provider.set_tls(relation_id, "False")
+        self.kafka_provider.set_zookeeper_uris(relation_id, ...)
+
+```
+As shown above, the library provides a custom event (topic_requested) to handle
+the situation when an application charm requests a new topic to be created.
+It is preferred to subscribe to this event instead of the relation changed event, to avoid
+creating a new topic when information other than a topic name is
+exchanged in the relation databag.
+"""
+
+import json
+import logging
+from abc import ABC, abstractmethod
+from collections import namedtuple
+from datetime import datetime
+from typing import List, Optional
+
+from ops.charm import (
+    CharmBase,
+    CharmEvents,
+    RelationChangedEvent,
+    RelationEvent,
+    RelationJoinedEvent,
+)
+from ops.framework import EventSource, Object
+from ops.model import Relation
+
+# The unique Charmhub library identifier, never change it
+LIBID = "6c3e6b6680d64e9c89e611d1a15f65be"
+
+# Increment this major API version when introducing breaking changes
+LIBAPI = 0
+
+# Increment this PATCH version before using `charmcraft publish-lib` or reset
+# to 0 if you are raising the major API version
+LIBPATCH = 7
+
+PYDEPS = ["ops>=2.0.0"]
+
+logger = logging.getLogger(__name__)
+
+Diff = namedtuple("Diff", "added changed deleted")
+Diff.__doc__ = """
+A tuple for storing the diff between two data mappings.
+
+added - keys that were added
+changed - keys that still exist but have new values
+deleted - keys that were deleted"""
+
+
+def diff(event: RelationChangedEvent, bucket: str) -> Diff:
+    """Retrieves the diff of the data in the relation changed databag.
+
+    Args:
+        event: relation changed event.
+        bucket: bucket of the databag (app or unit)
+
+    Returns:
+        a Diff instance containing the added, deleted and changed
+            keys from the event relation databag.
+    """
+    # Retrieve the old data from the data key in the application relation databag.
+    old_data = json.loads(event.relation.data[bucket].get("data", "{}"))
+    # Retrieve the new data from the event relation databag.
+    new_data = {
+        key: value for key, value in event.relation.data[event.app].items() if key != "data"
+    }
+
+    # These are the keys that were added to the databag and triggered this event.
+    added = new_data.keys() - old_data.keys()
+    # These are the keys that were removed from the databag and triggered this event.
+    deleted = old_data.keys() - new_data.keys()
+    # These are the keys that already existed in the databag,
+    # but had their values changed.
+    changed = {key for key in old_data.keys() & new_data.keys() if old_data[key] != new_data[key]}
+    # Convert the new_data to a serializable format and save it for a next diff check.
+    event.relation.data[bucket].update({"data": json.dumps(new_data)})
+
+    # Return the diff with all possible changes.
+    return Diff(added, changed, deleted)
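The snapshot-and-compare logic above can be exercised outside a charm. Below is a hedged, standalone sketch reproducing the same semantics with plain dicts standing in for relation databags; `dict_diff` and its inputs are illustrative names, not part of this library:

```python
# Standalone sketch of the diff() logic above: the previous snapshot of the
# remote databag is cached as JSON under the "data" key, and each call
# compares it against the fresh data before storing the new snapshot.
import json
from collections import namedtuple

Diff = namedtuple("Diff", "added changed deleted")

def dict_diff(bucket: dict, new_data: dict) -> Diff:
    # Previous snapshot, stored as JSON under the "data" key.
    old_data = json.loads(bucket.get("data", "{}"))
    # Keys that appeared, disappeared, or kept their key but changed value.
    added = new_data.keys() - old_data.keys()
    deleted = old_data.keys() - new_data.keys()
    changed = {k for k in old_data.keys() & new_data.keys() if old_data[k] != new_data[k]}
    # Persist the new snapshot for the next comparison.
    bucket["data"] = json.dumps(new_data)
    return Diff(added, changed, deleted)
```

A second call with unchanged data then yields empty sets for all three fields, which is what lets the event handlers above react only to real databag changes.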
+
+
+# Base DataProvides and DataRequires
+
+
+class DataProvides(Object, ABC):
+    """Base provides-side of the data products relation."""
+
+    def __init__(self, charm: CharmBase, relation_name: str) -> None:
+        super().__init__(charm, relation_name)
+        self.charm = charm
+        self.local_app = self.charm.model.app
+        self.local_unit = self.charm.unit
+        self.relation_name = relation_name
+        self.framework.observe(
+            charm.on[relation_name].relation_changed,
+            self._on_relation_changed,
+        )
+
+    def _diff(self, event: RelationChangedEvent) -> Diff:
+        """Retrieves the diff of the data in the relation changed databag.
+
+        Args:
+            event: relation changed event.
+
+        Returns:
+            a Diff instance containing the added, deleted and changed
+                keys from the event relation databag.
+        """
+        return diff(event, self.local_app)
+
+    @abstractmethod
+    def _on_relation_changed(self, event: RelationChangedEvent) -> None:
+        """Event emitted when the relation data has changed."""
+        raise NotImplementedError
+
+    def fetch_relation_data(self) -> dict:
+        """Retrieves data from relation.
+
+        This function can be used to retrieve data from a relation
+        in the charm code when outside an event callback.
+
+        Returns:
+            a dict of the values stored in the relation data bag
+                for all relation instances (indexed by the relation id).
+        """
+        data = {}
+        for relation in self.relations:
+            data[relation.id] = {
+                key: value for key, value in relation.data[relation.app].items() if key != "data"
+            }
+        return data
+
+    def _update_relation_data(self, relation_id: int, data: dict) -> None:
+        """Updates a set of key-value pairs in the relation.
+
+        This function writes in the application data bag, therefore,
+        only the leader unit can call it.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            data: dict containing the key-value pairs
+                that should be updated in the relation.
+        """
+        if self.local_unit.is_leader():
+            relation = self.charm.model.get_relation(self.relation_name, relation_id)
+            relation.data[self.local_app].update(data)
+
+    @property
+    def relations(self) -> List[Relation]:
+        """The list of Relation instances associated with this relation_name."""
+        return list(self.charm.model.relations[self.relation_name])
+
+    def set_credentials(self, relation_id: int, username: str, password: str) -> None:
+        """Set credentials.
+
+        This function writes in the application data bag, therefore,
+        only the leader unit can call it.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            username: user that was created.
+            password: password of the created user.
+        """
+        self._update_relation_data(
+            relation_id,
+            {
+                "username": username,
+                "password": password,
+            },
+        )
+
+    def set_tls(self, relation_id: int, tls: str) -> None:
+        """Set whether TLS is enabled.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            tls: whether tls is enabled (True or False).
+        """
+        self._update_relation_data(relation_id, {"tls": tls})
+
+    def set_tls_ca(self, relation_id: int, tls_ca: str) -> None:
+        """Set the TLS CA in the application relation databag.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            tls_ca: TLS certification authority.
+        """
+        self._update_relation_data(relation_id, {"tls_ca": tls_ca})
+
+
+class DataRequires(Object, ABC):
+    """Requires-side of the relation."""
+
+    def __init__(
+        self,
+        charm,
+        relation_name: str,
+        extra_user_roles: Optional[str] = None,
+    ):
+        """Manager of base client relations."""
+        super().__init__(charm, relation_name)
+        self.charm = charm
+        self.extra_user_roles = extra_user_roles
+        self.local_app = self.charm.model.app
+        self.local_unit = self.charm.unit
+        self.relation_name = relation_name
+        self.framework.observe(
+            self.charm.on[relation_name].relation_joined, self._on_relation_joined_event
+        )
+        self.framework.observe(
+            self.charm.on[relation_name].relation_changed, self._on_relation_changed_event
+        )
+
+    @abstractmethod
+    def _on_relation_joined_event(self, event: RelationJoinedEvent) -> None:
+        """Event emitted when the application joins the relation."""
+        raise NotImplementedError
+
+    @abstractmethod
+    def _on_relation_changed_event(self, event: RelationChangedEvent) -> None:
+        """Event emitted when the relation data has changed."""
+        raise NotImplementedError
+
+    def fetch_relation_data(self) -> dict:
+        """Retrieves data from relation.
+
+        This function can be used to retrieve data from a relation
+        in the charm code when outside an event callback.
+        Function cannot be used in `*-relation-broken` events and will raise an exception.
+
+        Returns:
+            a dict of the values stored in the relation data bag
+                for all relation instances (indexed by the relation ID).
+        """
+        data = {}
+        for relation in self.relations:
+            data[relation.id] = {
+                key: value for key, value in relation.data[relation.app].items() if key != "data"
+            }
+        return data
+
+    def _update_relation_data(self, relation_id: int, data: dict) -> None:
+        """Updates a set of key-value pairs in the relation.
+
+        This function writes in the application data bag, therefore,
+        only the leader unit can call it.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            data: dict containing the key-value pairs
+                that should be updated in the relation.
+        """
+        if self.local_unit.is_leader():
+            relation = self.charm.model.get_relation(self.relation_name, relation_id)
+            relation.data[self.local_app].update(data)
+
+    def _diff(self, event: RelationChangedEvent) -> Diff:
+        """Retrieves the diff of the data in the relation changed databag.
+
+        Args:
+            event: relation changed event.
+
+        Returns:
+            a Diff instance containing the added, deleted and changed
+                keys from the event relation databag.
+        """
+        return diff(event, self.local_unit)
+
+    @property
+    def relations(self) -> List[Relation]:
+        """The list of Relation instances associated with this relation_name."""
+        return [
+            relation
+            for relation in self.charm.model.relations[self.relation_name]
+            if self._is_relation_active(relation)
+        ]
+
+    @staticmethod
+    def _is_relation_active(relation: Relation):
+        try:
+            _ = repr(relation.data)
+            return True
+        except RuntimeError:
+            return False
+
+    @staticmethod
+    def _is_resource_created_for_relation(relation: Relation):
+        return (
+            "username" in relation.data[relation.app] and "password" in relation.data[relation.app]
+        )
+
+    def is_resource_created(self, relation_id: Optional[int] = None) -> bool:
+        """Check if the resource has been created.
+
+        This function can be used to check if the Provider answered with data in the charm code
+        when outside an event callback.
+
+        Args:
+            relation_id (int, optional): When provided the check is done only for the relation id
+                provided, otherwise the check is done for all relations
+
+        Returns:
+            True or False
+
+        Raises:
+            IndexError: If relation_id is provided but that relation does not exist
+        """
+        if relation_id is not None:
+            try:
+                relation = [relation for relation in self.relations if relation.id == relation_id][
+                    0
+                ]
+                return self._is_resource_created_for_relation(relation)
+            except IndexError:
+                raise IndexError(f"relation id {relation_id} cannot be accessed")
+        else:
+            return (
+                all(
+                    [
+                        self._is_resource_created_for_relation(relation)
+                        for relation in self.relations
+                    ]
+                )
+                if self.relations
+                else False
+            )
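The aggregation rule in `is_resource_created` above (a per-relation check when an id is given, `all(...)` across relations otherwise, and `False` when no relations exist) can be sketched standalone with plain dicts in place of relation databags; `resource_created` is an illustrative name, not part of this library:

```python
# Standalone sketch of the is_resource_created() aggregation above: a relation
# counts as created once both "username" and "password" are present; without a
# relation_id the check must hold for every relation, and it is False when
# there are no relations at all.
from typing import Optional

def resource_created(databags: dict, relation_id: Optional[int] = None) -> bool:
    def created(bag: dict) -> bool:
        # Mirrors _is_resource_created_for_relation: both credentials present.
        return "username" in bag and "password" in bag

    if relation_id is not None:
        try:
            return created(databags[relation_id])
        except KeyError:
            # Mirrors the IndexError raised for an unknown relation id.
            raise IndexError(f"relation id {relation_id} cannot be accessed")
    return all(created(bag) for bag in databags.values()) if databags else False
```

The explicit `else False` matters: a bare `all()` over zero relations would report `True`, which would wrongly signal readiness before any relation is joined.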
+
+
+# General events
+
+
+class ExtraRoleEvent(RelationEvent):
+    """Base class for data events."""
+
+    @property
+    def extra_user_roles(self) -> Optional[str]:
+        """Returns the extra user roles that were requested."""
+        return self.relation.data[self.relation.app].get("extra-user-roles")
+
+
+class AuthenticationEvent(RelationEvent):
+    """Base class for authentication fields for events."""
+
+    @property
+    def username(self) -> Optional[str]:
+        """Returns the created username."""
+        return self.relation.data[self.relation.app].get("username")
+
+    @property
+    def password(self) -> Optional[str]:
+        """Returns the password for the created user."""
+        return self.relation.data[self.relation.app].get("password")
+
+    @property
+    def tls(self) -> Optional[str]:
+        """Returns whether TLS is configured."""
+        return self.relation.data[self.relation.app].get("tls")
+
+    @property
+    def tls_ca(self) -> Optional[str]:
+        """Returns TLS CA."""
+        return self.relation.data[self.relation.app].get("tls-ca")
+
+
+# Database related events and fields
+
+
+class DatabaseProvidesEvent(RelationEvent):
+    """Base class for database events."""
+
+    @property
+    def database(self) -> Optional[str]:
+        """Returns the database that was requested."""
+        return self.relation.data[self.relation.app].get("database")
+
+
+class DatabaseRequestedEvent(DatabaseProvidesEvent, ExtraRoleEvent):
+    """Event emitted when a new database is requested for use on this relation."""
+
+
+class DatabaseProvidesEvents(CharmEvents):
+    """Database events.
+
+    This class defines the events that the database can emit.
+    """
+
+    database_requested = EventSource(DatabaseRequestedEvent)
+
+
+class DatabaseRequiresEvent(RelationEvent):
+    """Base class for database events."""
+
+    @property
+    def endpoints(self) -> Optional[str]:
+        """Returns a comma separated list of read/write endpoints."""
+        return self.relation.data[self.relation.app].get("endpoints")
+
+    @property
+    def read_only_endpoints(self) -> Optional[str]:
+        """Returns a comma separated list of read only endpoints."""
+        return self.relation.data[self.relation.app].get("read-only-endpoints")
+
+    @property
+    def replset(self) -> Optional[str]:
+        """Returns the replica set name.
+
+        MongoDB only.
+        """
+        return self.relation.data[self.relation.app].get("replset")
+
+    @property
+    def uris(self) -> Optional[str]:
+        """Returns the connection URIs.
+
+        MongoDB, Redis, OpenSearch.
+        """
+        return self.relation.data[self.relation.app].get("uris")
+
+    @property
+    def version(self) -> Optional[str]:
+        """Returns the version of the database.
+
+        Version as informed by the database daemon.
+        """
+        return self.relation.data[self.relation.app].get("version")
+
+
+class DatabaseCreatedEvent(AuthenticationEvent, DatabaseRequiresEvent):
+    """Event emitted when a new database is created for use on this relation."""
+
+
+class DatabaseEndpointsChangedEvent(AuthenticationEvent, DatabaseRequiresEvent):
+    """Event emitted when the read/write endpoints are changed."""
+
+
+class DatabaseReadOnlyEndpointsChangedEvent(AuthenticationEvent, DatabaseRequiresEvent):
+    """Event emitted when the read only endpoints are changed."""
+
+
+class DatabaseRequiresEvents(CharmEvents):
+    """Database events.
+
+    This class defines the events that the database can emit.
+    """
+
+    database_created = EventSource(DatabaseCreatedEvent)
+    endpoints_changed = EventSource(DatabaseEndpointsChangedEvent)
+    read_only_endpoints_changed = EventSource(DatabaseReadOnlyEndpointsChangedEvent)
+
+
+# Database Provider and Requires
+
+
+class DatabaseProvides(DataProvides):
+    """Provider-side of the database relations."""
+
+    on = DatabaseProvidesEvents()
+
+    def __init__(self, charm: CharmBase, relation_name: str) -> None:
+        super().__init__(charm, relation_name)
+
+    def _on_relation_changed(self, event: RelationChangedEvent) -> None:
+        """Event emitted when the relation has changed."""
+        # Only the leader should handle this event.
+        if not self.local_unit.is_leader():
+            return
+
+        # Check which data has changed to emit custom events.
+        diff = self._diff(event)
+
+        # Emit a database requested event if the setup key (database name and optional
+        # extra user roles) was added to the relation databag by the application.
+        if "database" in diff.added:
+            self.on.database_requested.emit(event.relation, app=event.app, unit=event.unit)
+
+    def set_endpoints(self, relation_id: int, connection_strings: str) -> None:
+        """Set database primary connections.
+
+        This function writes to the application data bag; therefore,
+        only the leader unit can call it.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            connection_strings: database hosts and ports comma separated list.
+        """
+        self._update_relation_data(relation_id, {"endpoints": connection_strings})
+
+    def set_read_only_endpoints(self, relation_id: int, connection_strings: str) -> None:
+        """Set database replicas connection strings.
+
+        This function writes to the application data bag; therefore,
+        only the leader unit can call it.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            connection_strings: database hosts and ports comma separated list.
+        """
+        self._update_relation_data(relation_id, {"read-only-endpoints": connection_strings})
+
+    def set_replset(self, relation_id: int, replset: str) -> None:
+        """Set replica set name in the application relation databag.
+
+        MongoDB only.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            replset: replica set name.
+        """
+        self._update_relation_data(relation_id, {"replset": replset})
+
+    def set_uris(self, relation_id: int, uris: str) -> None:
+        """Set the database connection URIs in the application relation databag.
+
+        MongoDB, Redis, and OpenSearch only.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            uris: connection URIs.
+        """
+        self._update_relation_data(relation_id, {"uris": uris})
+
+    def set_version(self, relation_id: int, version: str) -> None:
+        """Set the database version in the application relation databag.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            version: database version.
+        """
+        self._update_relation_data(relation_id, {"version": version})
+
+
+class DatabaseRequires(DataRequires):
+    """Requires-side of the database relation."""
+
+    on = DatabaseRequiresEvents()
+
+    def __init__(
+        self,
+        charm,
+        relation_name: str,
+        database_name: str,
+        extra_user_roles: Optional[str] = None,
+        relations_aliases: Optional[List[str]] = None,
+    ):
+        """Manager of database client relations."""
+        super().__init__(charm, relation_name, extra_user_roles)
+        self.database = database_name
+        self.relations_aliases = relations_aliases
+
+        # Define custom event names for each alias.
+        if relations_aliases:
+            # Ensure the number of aliases does not exceed the maximum
+            # of connections allowed in the specific relation.
+            relation_connection_limit = self.charm.meta.requires[relation_name].limit
+            if len(relations_aliases) != relation_connection_limit:
+                raise ValueError(
+                    f"The number of aliases must match the maximum number of connections allowed in the relation. "
+                    f"Expected {relation_connection_limit}, got {len(relations_aliases)}"
+                )
+
+            for relation_alias in relations_aliases:
+                self.on.define_event(f"{relation_alias}_database_created", DatabaseCreatedEvent)
+                self.on.define_event(
+                    f"{relation_alias}_endpoints_changed", DatabaseEndpointsChangedEvent
+                )
+                self.on.define_event(
+                    f"{relation_alias}_read_only_endpoints_changed",
+                    DatabaseReadOnlyEndpointsChangedEvent,
+                )
+
+    def _assign_relation_alias(self, relation_id: int) -> None:
+        """Assigns an alias to a relation.
+
+        This function writes to the unit data bag.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+        """
+        # If no aliases were provided, return immediately.
+        if not self.relations_aliases:
+            return
+
+        # Return if an alias was already assigned to this relation
+        # (as when more than one unit joins the relation).
+        if (
+            self.charm.model.get_relation(self.relation_name, relation_id)
+            .data[self.local_unit]
+            .get("alias")
+        ):
+            return
+
+        # Retrieve the available aliases (the ones that weren't assigned to any relation).
+        available_aliases = self.relations_aliases[:]
+        for relation in self.charm.model.relations[self.relation_name]:
+            alias = relation.data[self.local_unit].get("alias")
+            if alias:
+                logger.debug("Alias %s was already assigned to relation %d", alias, relation.id)
+                available_aliases.remove(alias)
+
+        # Set the alias in the unit relation databag of the specific relation.
+        relation = self.charm.model.get_relation(self.relation_name, relation_id)
+        relation.data[self.local_unit].update({"alias": available_aliases[0]})
+
+    def _emit_aliased_event(self, event: RelationChangedEvent, event_name: str) -> None:
+        """Emit an aliased event to a particular relation if it has an alias.
+
+        Args:
+            event: the relation changed event that was received.
+            event_name: the name of the event to emit.
+        """
+        alias = self._get_relation_alias(event.relation.id)
+        if alias:
+            getattr(self.on, f"{alias}_{event_name}").emit(
+                event.relation, app=event.app, unit=event.unit
+            )
+
+    def _get_relation_alias(self, relation_id: int) -> Optional[str]:
+        """Returns the relation alias.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+
+        Returns:
+            the relation alias or None if the relation was not found.
+        """
+        for relation in self.charm.model.relations[self.relation_name]:
+            if relation.id == relation_id:
+                return relation.data[self.local_unit].get("alias")
+        return None
+
+    def _on_relation_joined_event(self, event: RelationJoinedEvent) -> None:
+        """Event emitted when the application joins the database relation."""
+        # If relations aliases were provided, assign one to the relation.
+        self._assign_relation_alias(event.relation.id)
+
+        # Sets both database and extra user roles in the relation
+        # if the roles are provided. Otherwise, sets only the database.
+        if self.extra_user_roles:
+            self._update_relation_data(
+                event.relation.id,
+                {
+                    "database": self.database,
+                    "extra-user-roles": self.extra_user_roles,
+                },
+            )
+        else:
+            self._update_relation_data(event.relation.id, {"database": self.database})
+
+    def _on_relation_changed_event(self, event: RelationChangedEvent) -> None:
+        """Event emitted when the database relation has changed."""
+        # Check which data has changed to emit custom events.
+        diff = self._diff(event)
+
+        # Check if the database is created
+        # (the database charm shared the credentials).
+        if "username" in diff.added and "password" in diff.added:
+            # Emit the default event (the one without an alias).
+            logger.info("database created at %s", datetime.now())
+            self.on.database_created.emit(event.relation, app=event.app, unit=event.unit)
+
+            # Emit the aliased event (if any).
+            self._emit_aliased_event(event, "database_created")
+
+            # To avoid unnecessary application restarts, do not trigger the
+            # "endpoints_changed" event if "database_created" is triggered.
+            return
+
+        # Emit an endpoints changed event if the database
+        # added or changed this info in the relation databag.
+        if "endpoints" in diff.added or "endpoints" in diff.changed:
+            # Emit the default event (the one without an alias).
+            logger.info("endpoints changed on %s", datetime.now())
+            self.on.endpoints_changed.emit(event.relation, app=event.app, unit=event.unit)
+
+            # Emit the aliased event (if any).
+            self._emit_aliased_event(event, "endpoints_changed")
+
+            # To avoid unnecessary application restarts, do not trigger the
+            # "read_only_endpoints_changed" event if "endpoints_changed" is triggered.
+            return
+
+        # Emit a read only endpoints changed event if the database
+        # added or changed this info in the relation databag.
+        if "read-only-endpoints" in diff.added or "read-only-endpoints" in diff.changed:
+            # Emit the default event (the one without an alias).
+            logger.info("read-only-endpoints changed on %s", datetime.now())
+            self.on.read_only_endpoints_changed.emit(
+                event.relation, app=event.app, unit=event.unit
+            )
+
+            # Emit the aliased event (if any).
+            self._emit_aliased_event(event, "read_only_endpoints_changed")
+
+
+# Kafka related events
+
+
+class KafkaProvidesEvent(RelationEvent):
+    """Base class for Kafka events."""
+
+    @property
+    def topic(self) -> Optional[str]:
+        """Returns the topic that was requested."""
+        return self.relation.data[self.relation.app].get("topic")
+
+
+class TopicRequestedEvent(KafkaProvidesEvent, ExtraRoleEvent):
+    """Event emitted when a new topic is requested for use on this relation."""
+
+
+class KafkaProvidesEvents(CharmEvents):
+    """Kafka events.
+
+    This class defines the events that Kafka can emit.
+    """
+
+    topic_requested = EventSource(TopicRequestedEvent)
+
+
+class KafkaRequiresEvent(RelationEvent):
+    """Base class for Kafka events."""
+
+    @property
+    def bootstrap_server(self) -> Optional[str]:
+        """Returns a comma-separated list of broker URIs."""
+        return self.relation.data[self.relation.app].get("endpoints")
+
+    @property
+    def consumer_group_prefix(self) -> Optional[str]:
+        """Returns the consumer-group-prefix."""
+        return self.relation.data[self.relation.app].get("consumer-group-prefix")
+
+    @property
+    def zookeeper_uris(self) -> Optional[str]:
+        """Returns a comma-separated list of ZooKeeper URIs."""
+        return self.relation.data[self.relation.app].get("zookeeper-uris")
+
+
+class TopicCreatedEvent(AuthenticationEvent, KafkaRequiresEvent):
+    """Event emitted when a new topic is created for use on this relation."""
+
+
+class BootstrapServerChangedEvent(AuthenticationEvent, KafkaRequiresEvent):
+    """Event emitted when the bootstrap server is changed."""
+
+
+class KafkaRequiresEvents(CharmEvents):
+    """Kafka events.
+
+    This class defines the events that Kafka can emit.
+    """
+
+    topic_created = EventSource(TopicCreatedEvent)
+    bootstrap_server_changed = EventSource(BootstrapServerChangedEvent)
+
+
+# Kafka Provides and Requires
+
+
+class KafkaProvides(DataProvides):
+    """Provider-side of the Kafka relation."""
+
+    on = KafkaProvidesEvents()
+
+    def __init__(self, charm: CharmBase, relation_name: str) -> None:
+        super().__init__(charm, relation_name)
+
+    def _on_relation_changed(self, event: RelationChangedEvent) -> None:
+        """Event emitted when the relation has changed."""
+        # Only the leader should handle this event.
+        if not self.local_unit.is_leader():
+            return
+
+        # Check which data has changed to emit custom events.
+        diff = self._diff(event)
+
+        # Emit a topic requested event if the setup key (topic name and optional
+        # extra user roles) was added to the relation databag by the application.
+        if "topic" in diff.added:
+            self.on.topic_requested.emit(event.relation, app=event.app, unit=event.unit)
+
+    def set_bootstrap_server(self, relation_id: int, bootstrap_server: str) -> None:
+        """Set the bootstrap server in the application relation databag.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            bootstrap_server: the bootstrap server address.
+        """
+        self._update_relation_data(relation_id, {"endpoints": bootstrap_server})
+
+    def set_consumer_group_prefix(self, relation_id: int, consumer_group_prefix: str) -> None:
+        """Set the consumer group prefix in the application relation databag.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            consumer_group_prefix: the consumer group prefix string.
+        """
+        self._update_relation_data(relation_id, {"consumer-group-prefix": consumer_group_prefix})
+
+    def set_zookeeper_uris(self, relation_id: int, zookeeper_uris: str) -> None:
+        """Set the ZooKeeper URIs in the application relation databag.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            zookeeper_uris: comma-separated list of ZooKeeper server URIs.
+        """
+        self._update_relation_data(relation_id, {"zookeeper-uris": zookeeper_uris})
+
+
+class KafkaRequires(DataRequires):
+    """Requires-side of the Kafka relation."""
+
+    on = KafkaRequiresEvents()
+
+    def __init__(
+        self, charm, relation_name: str, topic: str, extra_user_roles: Optional[str] = None
+    ):
+        """Manager of Kafka client relations."""
+        super().__init__(charm, relation_name, extra_user_roles)
+        self.charm = charm
+        self.topic = topic
+
+    def _on_relation_joined_event(self, event: RelationJoinedEvent) -> None:
+        """Event emitted when the application joins the Kafka relation."""
+        # Sets both topic and extra user roles in the relation
+        # if the roles are provided. Otherwise, sets only the topic.
+        self._update_relation_data(
+            event.relation.id,
+            {
+                "topic": self.topic,
+                "extra-user-roles": self.extra_user_roles,
+            }
+            if self.extra_user_roles is not None
+            else {"topic": self.topic},
+        )
+
+    def _on_relation_changed_event(self, event: RelationChangedEvent) -> None:
+        """Event emitted when the Kafka relation has changed."""
+        # Check which data has changed to emit custom events.
+        diff = self._diff(event)
+
+        # Check if the topic is created
+        # (the Kafka charm shared the credentials).
+        if "username" in diff.added and "password" in diff.added:
+            # Emit the default event (the one without an alias).
+            logger.info("topic created at %s", datetime.now())
+            self.on.topic_created.emit(event.relation, app=event.app, unit=event.unit)
+
+            # To avoid unnecessary application restarts, do not trigger the
+            # "bootstrap_server_changed" event if "topic_created" is triggered.
+            return
+
+        # Emit an endpoints (bootstrap-server) changed event if the Kafka charm
+        # added or changed this info in the relation databag.
+        if "endpoints" in diff.added or "endpoints" in diff.changed:
+            # Emit the default event (the one without an alias).
+            logger.info("endpoints changed on %s", datetime.now())
+            self.on.bootstrap_server_changed.emit(
+                event.relation, app=event.app, unit=event.unit
+            )
+            return
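The requires-side handlers above only emit their "created" events when both credentials land in the databag at once, based on a diff of the remote application databag against the last seen state. The following is a standalone sketch of that diff logic (it mirrors the library's internal `_diff` helper; the `Diff` field names here are assumptions, not the library's exact API):

```python
from collections import namedtuple

Diff = namedtuple("Diff", ["added", "changed", "deleted"])


def compute_diff(old: dict, new: dict) -> Diff:
    """Classify databag keys as added, changed, or deleted between two snapshots."""
    added = new.keys() - old.keys()
    changed = {key for key in old.keys() & new.keys() if old[key] != new[key]}
    deleted = old.keys() - new.keys()
    return Diff(added, changed, deleted)


# database_created fires only when both credentials appear together, which is
# why _on_relation_changed_event checks "username" AND "password" in diff.added.
old = {"database": "osm"}
new = {"database": "osm", "username": "user", "password": "secret"}
diff = compute_diff(old, new)
print("username" in diff.added and "password" in diff.added)  # True
```

The same pattern drives `endpoints_changed`: an `endpoints` key showing up in `diff.added` or `diff.changed` triggers the event, and the early `return` statements keep a single relation-changed hook from firing more than one custom event.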
index d739ba6..02d46db 100644 (file)
@@ -235,12 +235,14 @@ wait
 @dataclass
 class SubModule:
     """Represent RO Submodules."""
+
     sub_module_path: str
     container_path: str
 
 
 class HostPath:
     """Represents a hostpath."""
+
     def __init__(self, config: str, container_path: str, submodules: dict = None) -> None:
         mount_path_items = config.split("-")
         mount_path_items.reverse()
@@ -250,13 +252,18 @@ class HostPath:
         if submodules:
             for submodule in submodules.keys():
                 self.sub_module_dict[submodule] = SubModule(
-                    sub_module_path=self.mount_path + "/" + submodule + "/" + submodules[submodule].split("/")[-1],
+                    sub_module_path=self.mount_path
+                    + "/"
+                    + submodule
+                    + "/"
+                    + submodules[submodule].split("/")[-1],
                     container_path=submodules[submodule],
                 )
         else:
             self.container_path = container_path
             self.module_name = container_path.split("/")[-1]
 
+
 class DebugMode(Object):
     """Class to handle the debug-mode."""
 
@@ -432,7 +439,9 @@ class DebugMode(Object):
             logger.debug(f"adding symlink for {hostpath.config}")
             if len(hostpath.sub_module_dict) > 0:
                 for sub_module in hostpath.sub_module_dict.keys():
-                    self.container.exec(["rm", "-rf", hostpath.sub_module_dict[sub_module].container_path]).wait_output()
+                    self.container.exec(
+                        ["rm", "-rf", hostpath.sub_module_dict[sub_module].container_path]
+                    ).wait_output()
                     self.container.exec(
                         [
                             "ln",
@@ -506,7 +515,6 @@ class DebugMode(Object):
     def _delete_hostpath_from_statefulset(self, hostpath: HostPath, statefulset: StatefulSet):
         hostpath_unmounted = False
         for volume in statefulset.spec.template.spec.volumes:
-
             if hostpath.config != volume.name:
                 continue
 
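The reflowed hunk above builds each submodule's host-side path by concatenating the mount path, the submodule name, and the last component of the submodule's container path. A minimal sketch of that construction (function name and example values are hypothetical, for illustration only):

```python
def sub_module_path(mount_path: str, submodule: str, container_path: str) -> str:
    # Mirrors the concatenation in HostPath.__init__:
    # <mount_path>/<submodule>/<last component of the submodule's container path>
    return mount_path + "/" + submodule + "/" + container_path.split("/")[-1]


# Hypothetical values:
print(sub_module_path("/home/ubuntu/NG-RO", "RO-plugin", "/usr/lib/python3/dist-packages/osm_ro_plugin"))
# /home/ubuntu/NG-RO/RO-plugin/osm_ro_plugin
```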
index 0e564e5..3b737ba 100644 (file)
@@ -62,7 +62,7 @@ requires:
     interface: kafka
     limit: 1
   mongodb:
-    interface: mongodb
+    interface: mongodb_client
     limit: 1
   keystone:
     interface: keystone
index d0d4a5b..16cf0f4 100644 (file)
@@ -50,7 +50,3 @@ ignore = ["W503", "E501", "D107"]
 # D100, D101, D102, D103: Ignore missing docstrings in tests
 per-file-ignores = ["tests/*:D100,D101,D102,D103,D104"]
 docstring-convention = "google"
-# Check for properly formatted copyright header in each file
-copyright-check = "True"
-copyright-author = "Canonical Ltd."
-copyright-regexp = "Copyright\\s\\d{4}([-,]\\d{4})*\\s+%(author)s"
index 5ee9d5c..761edd8 100644 (file)
@@ -17,7 +17,7 @@
 #
 # To get in touch with the maintainers, please contact:
 # osm-charmers@lists.launchpad.net
-ops >= 1.2.0
+ops < 2.2
 lightkube
 lightkube-models
 git+https://github.com/charmed-osm/config-validator/
index 77c8a18..484841a 100755 (executable)
@@ -30,6 +30,7 @@ See more: https://charmhub.io/osm
 import logging
 from typing import Any, Dict
 
+from charms.data_platform_libs.v0.data_interfaces import DatabaseRequires
 from charms.kafka_k8s.v0.kafka import KafkaEvents, KafkaRequires
 from charms.nginx_ingress_integrator.v0.ingress import IngressRequires
 from charms.observability_libs.v1.kubernetes_service_patch import KubernetesServicePatch
@@ -48,7 +49,7 @@ from ops.framework import StoredState
 from ops.main import main
 from ops.model import ActiveStatus, Container
 
-from legacy_interfaces import KeystoneClient, MongoClient, PrometheusClient
+from legacy_interfaces import KeystoneClient, PrometheusClient
 
 HOSTPATHS = [
     HostPath(
@@ -84,7 +85,9 @@ class OsmNbiCharm(CharmBase):
         self.kafka = KafkaRequires(self)
         self.nbi = NbiProvides(self)
         self.temporal = TemporalRequires(self)
-        self.mongodb_client = MongoClient(self, "mongodb")
+        self.mongodb_client = DatabaseRequires(
+            self, "mongodb", database_name="osm", extra_user_roles="admin"
+        )
         self.prometheus_client = PrometheusClient(self, "prometheus")
         self.keystone_client = KeystoneClient(self, "keystone")
         self._observe_charm_events()
@@ -181,19 +184,27 @@ class OsmNbiCharm(CharmBase):
             # Relation events
             self.on.kafka_available: self._on_config_changed,
             self.on["kafka"].relation_broken: self._on_required_relation_broken,
+            self.mongodb_client.on.database_created: self._on_config_changed,
+            self.on["mongodb"].relation_broken: self._on_required_relation_broken,
             # Action events
             self.on.get_debug_mode_information_action: self._on_get_debug_mode_information_action,
             self.on.nbi_relation_joined: self._update_nbi_relation,
             self.on["temporal"].relation_changed: self._on_config_changed,
             self.on["temporal"].relation_broken: self._on_required_relation_broken,
         }
-        for relation in [self.on[rel_name] for rel_name in ["mongodb", "prometheus", "keystone"]]:
+        for relation in [self.on[rel_name] for rel_name in ["prometheus", "keystone"]]:
             event_handler_mapping[relation.relation_changed] = self._on_config_changed
             event_handler_mapping[relation.relation_broken] = self._on_required_relation_broken
 
         for event, handler in event_handler_mapping.items():
             self.framework.observe(event, handler)
 
+    def _is_database_available(self) -> bool:
+        try:
+            return self.mongodb_client.is_resource_created()
+        except KeyError:
+            return False
+
     def _validate_config(self) -> None:
         """Validate charm configuration.
 
@@ -213,7 +224,7 @@ class OsmNbiCharm(CharmBase):
 
         if not self.kafka.host or not self.kafka.port:
             missing_relations.append("kafka")
-        if self.mongodb_client.is_missing_data_in_unit():
+        if not self._is_database_available():
             missing_relations.append("mongodb")
         if self.prometheus_client.is_missing_data_in_app():
             missing_relations.append("prometheus")
@@ -269,13 +280,13 @@ class OsmNbiCharm(CharmBase):
                         "OSMNBI_MESSAGE_DRIVER": "kafka",
                         # Database configuration
                         "OSMNBI_DATABASE_DRIVER": "mongo",
-                        "OSMNBI_DATABASE_URI": self.mongodb_client.connection_string,
+                        "OSMNBI_DATABASE_URI": self._get_mongodb_uri(),
                         "OSMNBI_DATABASE_COMMONKEY": self.config["database-commonkey"],
                         # Storage configuration
                         "OSMNBI_STORAGE_DRIVER": "mongo",
                         "OSMNBI_STORAGE_PATH": "/app/storage",
                         "OSMNBI_STORAGE_COLLECTION": "files",
-                        "OSMNBI_STORAGE_URI": self.mongodb_client.connection_string,
+                        "OSMNBI_STORAGE_URI": self._get_mongodb_uri(),
                         # Prometheus configuration
                         "OSMNBI_PROMETHEUS_HOST": self.prometheus_client.hostname,
                         "OSMNBI_PROMETHEUS_PORT": self.prometheus_client.port,
@@ -303,6 +314,9 @@ class OsmNbiCharm(CharmBase):
             },
         }
 
+    def _get_mongodb_uri(self):
+        return list(self.mongodb_client.fetch_relation_data().values())[0]["uris"]
+
 
 if __name__ == "__main__":  # pragma: no cover
     main(OsmNbiCharm)
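The `_get_mongodb_uri` helper added above assumes `fetch_relation_data()` returns a mapping of relation id to the databag published by the MongoDB charm, and takes the `uris` field from the first entry. A standalone sketch of that extraction (the databag shape and example URI are assumptions for illustration):

```python
def get_mongodb_uri(relation_data: dict) -> str:
    # Same logic as _get_mongodb_uri: the first related application's databag
    # is expected to carry a "uris" field with the MongoDB connection string.
    return list(relation_data.values())[0]["uris"]


# Hypothetical fetch_relation_data() result (relation id -> databag):
data = {0: {"uris": "mongodb://user:secret@mongodb-k8s-0:27017/osm?replicaSet=rs0"}}
print(get_mongodb_uri(data))
# mongodb://user:secret@mongodb-k8s-0:27017/osm?replicaSet=rs0
```

Note this raises `IndexError`/`KeyError` when no relation data exists yet, which is why the charm gates its use behind `_is_database_available()`.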
index ac35ea6..b853bb5 100644 (file)
@@ -42,13 +42,24 @@ MONGO_DB_CHARM = "mongodb-k8s"
 MONGO_DB_APP = "mongodb"
 KEYSTONE_CHARM = "osm-keystone"
 KEYSTONE_APP = "keystone"
+TEMPORAL_CHARM = "osm-temporal"
+TEMPORAL_APP = "temporal"
 PROMETHEUS_CHARM = "osm-prometheus"
 PROMETHEUS_APP = "prometheus"
 ZOOKEEPER_CHARM = "zookeeper-k8s"
 ZOOKEEPER_APP = "zookeeper"
 INGRESS_CHARM = "nginx-ingress-integrator"
 INGRESS_APP = "ingress"
-APPS = [KAFKA_APP, MONGO_DB_APP, MARIADB_APP, ZOOKEEPER_APP, KEYSTONE_APP, PROMETHEUS_APP, NBI_APP]
+APPS = [
+    KAFKA_APP,
+    MONGO_DB_APP,
+    MARIADB_APP,
+    ZOOKEEPER_APP,
+    KEYSTONE_APP,
+    TEMPORAL_APP,
+    PROMETHEUS_APP,
+    NBI_APP,
+]
 
 
 @pytest.mark.abort_on_fail
@@ -61,7 +72,7 @@ async def test_nbi_is_deployed(ops_test: OpsTest):
             charm, resources=resources, application_name=NBI_APP, series="focal"
         ),
         ops_test.model.deploy(KAFKA_CHARM, application_name=KAFKA_APP, channel="stable"),
-        ops_test.model.deploy(MONGO_DB_CHARM, application_name=MONGO_DB_APP, channel="stable"),
+        ops_test.model.deploy(MONGO_DB_CHARM, application_name=MONGO_DB_APP, channel="edge"),
         ops_test.model.deploy(MARIADB_CHARM, application_name=MARIADB_APP, channel="stable"),
         ops_test.model.deploy(ZOOKEEPER_CHARM, application_name=ZOOKEEPER_APP, channel="stable"),
         ops_test.model.deploy(PROMETHEUS_CHARM, application_name=PROMETHEUS_APP, channel="stable"),
@@ -71,6 +82,8 @@ async def test_nbi_is_deployed(ops_test: OpsTest):
     # prevents setting correctly the resources
     cmd = f"juju deploy {KEYSTONE_CHARM} {KEYSTONE_APP} --resource keystone-image=opensourcemano/keystone:12"
     await ops_test.run(*shlex.split(cmd), check=True)
+    cmd = f"juju deploy {TEMPORAL_CHARM} {TEMPORAL_APP} --resource temporal-server-image=temporalio/auto-setup:1.20 --series focal --channel=latest/edge"
+    await ops_test.run(*shlex.split(cmd), check=True)
 
     async with ops_test.fast_forward():
         await ops_test.model.wait_for_idle(
@@ -78,17 +91,22 @@ async def test_nbi_is_deployed(ops_test: OpsTest):
         )
     assert ops_test.model.applications[NBI_APP].status == "blocked"
     unit = ops_test.model.applications[NBI_APP].units[0]
-    assert unit.workload_status_message == "need kafka, mongodb, prometheus, keystone relations"
+    assert (
+        unit.workload_status_message
+        == "need kafka, mongodb, prometheus, keystone, temporal relations"
+    )
 
     logger.info("Adding relations for other components")
     await ops_test.model.add_relation(KAFKA_APP, ZOOKEEPER_APP)
     await ops_test.model.add_relation(MARIADB_APP, KEYSTONE_APP)
+    await ops_test.model.add_relation(MARIADB_APP, f"{TEMPORAL_APP}:db")
 
     logger.info("Adding relations")
     await ops_test.model.add_relation(NBI_APP, MONGO_DB_APP)
     await ops_test.model.add_relation(NBI_APP, KAFKA_APP)
     await ops_test.model.add_relation(NBI_APP, PROMETHEUS_APP)
     await ops_test.model.add_relation(NBI_APP, KEYSTONE_APP)
+    await ops_test.model.add_relation(NBI_APP, TEMPORAL_APP)
 
     async with ops_test.fast_forward():
         await ops_test.model.wait_for_idle(
@@ -111,7 +129,7 @@ async def test_nbi_scales_up(ops_test: OpsTest):
 
 @pytest.mark.abort_on_fail
 @pytest.mark.parametrize(
-    "relation_to_remove", [KAFKA_APP, MONGO_DB_APP, PROMETHEUS_APP, KEYSTONE_APP]
+    "relation_to_remove", [KAFKA_APP, MONGO_DB_APP, PROMETHEUS_APP, KEYSTONE_APP, TEMPORAL_APP]
 )
 async def test_nbi_blocks_without_relation(ops_test: OpsTest, relation_to_remove):
     logger.info("Removing relation: %s", relation_to_remove)
index 87afafa..f4d10c7 100644 (file)
@@ -37,6 +37,7 @@ def harness(mocker: MockerFixture):
     mocker.patch("charm.KubernetesServicePatch", lambda x, y: None)
     harness = Harness(OsmNbiCharm)
     harness.begin()
+    harness.container_pebble_ready(container_name)
     yield harness
     harness.cleanup()
 
@@ -46,7 +47,7 @@ def test_missing_relations(harness: Harness):
     assert type(harness.charm.unit.status) == BlockedStatus
     assert all(
         relation in harness.charm.unit.status.message
-        for relation in ["mongodb", "kafka", "prometheus", "keystone"]
+        for relation in ["mongodb", "kafka", "prometheus", "keystone", "temporal"]
     )
 
 
@@ -81,7 +82,9 @@ def _add_relations(harness: Harness):
     relation_id = harness.add_relation("mongodb", "mongodb")
     harness.add_relation_unit(relation_id, "mongodb/0")
     harness.update_relation_data(
-        relation_id, "mongodb/0", {"connection_string": "mongodb://:1234"}
+        relation_id,
+        "mongodb",
+        {"uris": "mongodb://:1234", "username": "user", "password": "password"},
     )
     relation_ids.append(relation_id)
     # Add kafka relation
@@ -118,4 +121,9 @@ def _add_relations(harness: Harness):
         },
     )
     relation_ids.append(relation_id)
+    # Add temporal relation
+    relation_id = harness.add_relation("temporal", "temporal")
+    harness.add_relation_unit(relation_id, "temporal/0")
+    harness.update_relation_data(relation_id, "temporal", {"host": "temporal", "port": "7233"})
+    relation_ids.append(relation_id)
     return relation_ids
index c1bada0..07ea16d 100644 (file)
@@ -30,6 +30,7 @@ lib_path = {toxinidir}/lib/charms/osm_nbi
 all_path = {[vars]src_path} {[vars]tst_path} 
 
 [testenv]
+basepython = python3.8
 setenv =
   PYTHONPATH = {toxinidir}:{toxinidir}/lib:{[vars]src_path}
   PYTHONBREAKPOINT=ipdb.set_trace
@@ -54,7 +55,6 @@ deps =
     black
     flake8
     flake8-docstrings
-    flake8-copyright
     flake8-builtins
     pyproject-flake8
     pep8-naming
@@ -63,7 +63,7 @@ deps =
 commands =
     # uncomment the following line if this charm owns a lib
     codespell {[vars]lib_path}
-    codespell {toxinidir}/. --skip {toxinidir}/.git --skip {toxinidir}/.tox \
+    codespell {toxinidir} --skip {toxinidir}/.git --skip {toxinidir}/.tox \
       --skip {toxinidir}/build --skip {toxinidir}/lib --skip {toxinidir}/venv \
       --skip {toxinidir}/.mypy_cache --skip {toxinidir}/icon.svg
     # pflake8 wrapper supports config from pyproject.toml
@@ -88,7 +88,7 @@ commands =
 description = Run integration tests
 deps =
     pytest
-    juju
+    juju<3
     pytest-operator
     -r{toxinidir}/requirements.txt
 commands =
index d0d4a5b..16cf0f4 100644 (file)
@@ -50,7 +50,3 @@ ignore = ["W503", "E501", "D107"]
 # D100, D101, D102, D103: Ignore missing docstrings in tests
 per-file-ignores = ["tests/*:D100,D101,D102,D103,D104"]
 docstring-convention = "google"
-# Check for properly formatted copyright header in each file
-copyright-check = "True"
-copyright-author = "Canonical Ltd."
-copyright-regexp = "Copyright\\s\\d{4}([-,]\\d{4})*\\s+%(author)s"
index 5ee9d5c..761edd8 100644 (file)
@@ -17,7 +17,7 @@
 #
 # To get in touch with the maintainers, please contact:
 # osm-charmers@lists.launchpad.net
-ops >= 1.2.0
+ops < 2.2
 lightkube
 lightkube-models
 git+https://github.com/charmed-osm/config-validator/
index 235461f..ca517b3 100755 (executable)
@@ -43,7 +43,7 @@ from lightkube.models.core_v1 import ServicePort
 from ops.charm import CharmBase
 from ops.framework import StoredState
 from ops.main import main
-from ops.model import ActiveStatus, Container
+from ops.model import ActiveStatus, BlockedStatus, Container
 
 SERVICE_PORT = 80
 
@@ -113,8 +113,8 @@ class OsmNgUiCharm(CharmBase):
             logger.debug(e.message)
             self.unit.status = e.status
 
-    def _on_required_relation_broken(self, _) -> None:
-        """Handler for the kafka-broken event."""
+    def _on_nbi_relation_broken(self, _) -> None:
+        """Handler for the nbi relation broken event."""
         # Check Pebble has started in the container
         try:
             check_container_ready(self.container)
@@ -124,7 +124,7 @@ class OsmNgUiCharm(CharmBase):
         except CharmError:
             pass
         finally:
-            self._on_update_status()
+            self.unit.status = BlockedStatus("need nbi relation")
 
     # ---------------------------------------------------------------------------
     #   Validation and configuration and more
@@ -142,7 +142,7 @@ class OsmNgUiCharm(CharmBase):
             self.on.update_status: self._on_update_status,
             # Relation events
             self.on["nbi"].relation_changed: self._on_config_changed,
-            self.on["nbi"].relation_broken: self._on_required_relation_broken,
+            self.on["nbi"].relation_broken: self._on_nbi_relation_broken,
         }
         for event, handler in event_handler_mapping.items():
             self.framework.observe(event, handler)
diff --git a/installers/charm/osm-ng-ui/tests/integration/test_charm.py b/installers/charm/osm-ng-ui/tests/integration/test_charm.py
new file mode 100644 (file)
index 0000000..b9aa910
--- /dev/null
@@ -0,0 +1,155 @@
+#!/usr/bin/env python3
+# Copyright 2023 Canonical Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+#
+# For those usages not covered by the Apache License, Version 2.0 please
+# contact: legal@canonical.com
+#
+# To get in touch with the maintainers, please contact:
+# osm-charmers@lists.launchpad.net
+#
+# Learn more about testing at: https://juju.is/docs/sdk/testing
+
+import asyncio
+import logging
+import shlex
+from pathlib import Path
+
+import pytest
+import yaml
+from pytest_operator.plugin import OpsTest
+
+logger = logging.getLogger(__name__)
+
+METADATA = yaml.safe_load(Path("./metadata.yaml").read_text())
+NG_UI_APP = METADATA["name"]
+
+# Required charms (needed by NG UI)
+NBI_CHARM = "osm-nbi"
+NBI_APP = "nbi"
+KAFKA_CHARM = "kafka-k8s"
+KAFKA_APP = "kafka"
+MONGO_DB_CHARM = "mongodb-k8s"
+MONGO_DB_APP = "mongodb"
+PROMETHEUS_CHARM = "osm-prometheus"
+PROMETHEUS_APP = "prometheus"
+KEYSTONE_CHARM = "osm-keystone"
+KEYSTONE_APP = "keystone"
+MYSQL_CHARM = "charmed-osm-mariadb-k8s"
+MYSQL_APP = "mysql"
+ZOOKEEPER_CHARM = "zookeeper-k8s"
+ZOOKEEPER_APP = "zookeeper"
+
+INGRESS_CHARM = "nginx-ingress-integrator"
+INGRESS_APP = "ingress"
+
+ALL_APPS = [
+    NBI_APP,
+    NG_UI_APP,
+    KAFKA_APP,
+    MONGO_DB_APP,
+    PROMETHEUS_APP,
+    KEYSTONE_APP,
+    MYSQL_APP,
+    ZOOKEEPER_APP,
+]
+
+
+@pytest.mark.abort_on_fail
+async def test_ng_ui_is_deployed(ops_test: OpsTest):
+    ng_ui_charm = await ops_test.build_charm(".")
+    ng_ui_resources = {"ng-ui-image": METADATA["resources"]["ng-ui-image"]["upstream-source"]}
+    keystone_deploy_cmd = f"juju deploy -m {ops_test.model_full_name} {KEYSTONE_CHARM} {KEYSTONE_APP} --resource keystone-image=opensourcemano/keystone:testing-daily"
+
+    await asyncio.gather(
+        ops_test.model.deploy(
+            ng_ui_charm, resources=ng_ui_resources, application_name=NG_UI_APP, series="focal"
+        ),
+        ops_test.model.deploy(NBI_CHARM, application_name=NBI_APP, channel="beta"),
+        ops_test.model.deploy(KAFKA_CHARM, application_name=KAFKA_APP, channel="stable"),
+        ops_test.model.deploy(
+            MONGO_DB_CHARM, application_name=MONGO_DB_APP, channel="latest/stable"
+        ),
+        ops_test.model.deploy(
+            PROMETHEUS_CHARM, application_name=PROMETHEUS_APP, channel="latest/edge"
+        ),
+        ops_test.model.deploy(ZOOKEEPER_CHARM, application_name=ZOOKEEPER_APP, channel="stable"),
+        ops_test.model.deploy(MYSQL_CHARM, application_name=MYSQL_APP, channel="stable"),
+        # Keystone is deployed separately because the juju python library has a bug where resources
+        # are not properly deployed. See https://github.com/juju/python-libjuju/issues/766
+        ops_test.run(*shlex.split(keystone_deploy_cmd), check=True),
+    )
+
+    async with ops_test.fast_forward():
+        await ops_test.model.wait_for_idle(apps=ALL_APPS, timeout=300)
+    logger.info("Adding relations for other components")
+    await asyncio.gather(
+        ops_test.model.relate(MYSQL_APP, KEYSTONE_APP),
+        ops_test.model.relate(KAFKA_APP, ZOOKEEPER_APP),
+        ops_test.model.relate(KEYSTONE_APP, NBI_APP),
+        ops_test.model.relate(KAFKA_APP, NBI_APP),
+        ops_test.model.relate(MONGO_DB_APP, NBI_APP),
+        ops_test.model.relate(PROMETHEUS_APP, NBI_APP),
+    )
+
+    async with ops_test.fast_forward():
+        await ops_test.model.wait_for_idle(apps=ALL_APPS, timeout=300)
+
+    assert ops_test.model.applications[NG_UI_APP].status == "blocked"
+    unit = ops_test.model.applications[NG_UI_APP].units[0]
+    assert unit.workload_status_message == "need nbi relation"
+
+    logger.info("Adding relations")
+    await ops_test.model.relate(NG_UI_APP, NBI_APP)
+
+    async with ops_test.fast_forward():
+        await ops_test.model.wait_for_idle(apps=ALL_APPS, status="active", timeout=300)
+
+
+@pytest.mark.abort_on_fail
+async def test_ng_ui_scales_up(ops_test: OpsTest):
+    logger.info("Scaling up osm-ng-ui")
+    expected_units = 3
+    assert len(ops_test.model.applications[NG_UI_APP].units) == 1
+    await ops_test.model.applications[NG_UI_APP].scale(expected_units)
+    async with ops_test.fast_forward():
+        await ops_test.model.wait_for_idle(
+            apps=[NG_UI_APP], status="active", wait_for_exact_units=expected_units
+        )
+
+
+@pytest.mark.abort_on_fail
+async def test_ng_ui_blocks_without_relation(ops_test: OpsTest):
+    await asyncio.gather(ops_test.model.applications[NBI_APP].remove_relation(NBI_APP, NG_UI_APP))
+    async with ops_test.fast_forward():
+        await ops_test.model.wait_for_idle(apps=[NG_UI_APP])
+    assert ops_test.model.applications[NG_UI_APP].status == "blocked"
+    for unit in ops_test.model.applications[NG_UI_APP].units:
+        assert unit.workload_status_message == "need nbi relation"
+    await ops_test.model.relate(NG_UI_APP, NBI_APP)
+    async with ops_test.fast_forward():
+        await ops_test.model.wait_for_idle(apps=ALL_APPS, status="active")
+
+
+@pytest.mark.abort_on_fail
+async def test_ng_ui_integration_ingress(ops_test: OpsTest):
+    await asyncio.gather(
+        ops_test.model.deploy(INGRESS_CHARM, application_name=INGRESS_APP, channel="beta"),
+    )
+    async with ops_test.fast_forward():
+        await ops_test.model.wait_for_idle(apps=ALL_APPS + [INGRESS_APP])
+
+    await ops_test.model.relate(NG_UI_APP, INGRESS_APP)
+    async with ops_test.fast_forward():
+        await ops_test.model.wait_for_idle(apps=ALL_APPS + [INGRESS_APP], status="active")
index 006da99..f4d4571 100644 (file)
@@ -57,9 +57,9 @@ def harness(mocker: MockerFixture):
     mocker.patch("charm.KubernetesServicePatch", lambda x, y: None)
     harness = Harness(OsmNgUiCharm)
     harness.begin()
-    harness.charm.unit.get_container("ng-ui").push(
-        "/etc/nginx/sites-available/default", sites_default, make_dirs=True
-    )
+    container = harness.charm.unit.get_container("ng-ui")
+    harness.set_can_connect(container, True)
+    container.push("/etc/nginx/sites-available/default", sites_default, make_dirs=True)
     yield harness
     harness.cleanup()
 
@@ -71,23 +71,24 @@ def test_missing_relations(harness: Harness):
 
 
 def test_ready(harness: Harness):
-    _add_relation(harness)
+    _add_nbi_relation(harness)
     assert harness.charm.unit.status == ActiveStatus()
 
 
 def test_container_stops_after_relation_broken(harness: Harness):
     harness.charm.on[container_name].pebble_ready.emit(container_name)
     container = harness.charm.unit.get_container(container_name)
-    relation_id = _add_relation(harness)
+    relation_id = _add_nbi_relation(harness)
     check_service_active(container, service_name)
     harness.remove_relation(relation_id)
     with pytest.raises(CharmError):
         check_service_active(container, service_name)
+    assert type(harness.charm.unit.status) == BlockedStatus
+    assert harness.charm.unit.status.message == "need nbi relation"
 
 
-def _add_relation(harness: Harness):
-    # Add nbi relation
+def _add_nbi_relation(harness: Harness):
     relation_id = harness.add_relation("nbi", "nbi")
     harness.add_relation_unit(relation_id, "nbi/0")
-    harness.update_relation_data(relation_id, "nbi", {"host": "nbi", "port": 9999})
+    harness.update_relation_data(relation_id, "nbi", {"host": "nbi", "port": "9999"})
     return relation_id
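One detail worth noting in the hunk above: the nbi port is now passed as the string `"9999"`, because Juju relation databags only accept string values. A minimal standalone sketch of that convention (the `to_databag` helper is hypothetical, not part of the charm under test):

```python
# Hypothetical helper illustrating the string-only databag convention:
# Juju rejects non-string values, so coerce them before writing.
def to_databag(data: dict) -> dict:
    """Coerce values to the string-only format Juju relation databags accept."""
    for key in data:
        if not isinstance(key, str):
            raise TypeError(f"databag key {key!r} must be a string")
    return {key: str(value) for key, value in data.items()}


bag = to_databag({"host": "nbi", "port": 9999})
print(bag)  # {'host': 'nbi', 'port': '9999'}
```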
index 13c9735..8c614b8 100644 (file)
 [tox]
 skipsdist=True
 skip_missing_interpreters = True
-envlist = lint, unit
+envlist = lint, unit, integration
 
 [vars]
-src_path = {toxinidir}/src/
-tst_path = {toxinidir}/tests/
-lib_path = {toxinidir}/lib/charms/osm_ng_ui
-all_path = {[vars]src_path} {[vars]tst_path} 
+src_path = {toxinidir}/src
+tst_path = {toxinidir}/tests
+all_path = {[vars]src_path} {[vars]tst_path}
 
 [testenv]
+basepython = python3.8
 setenv =
   PYTHONPATH = {toxinidir}:{toxinidir}/lib:{[vars]src_path}
   PYTHONBREAKPOINT=ipdb.set_trace
@@ -54,7 +54,6 @@ deps =
     black
     flake8
     flake8-docstrings
-    flake8-copyright
     flake8-builtins
     pyproject-flake8
     pep8-naming
@@ -62,8 +61,7 @@ deps =
     codespell
 commands =
     # uncomment the following line if this charm owns a lib
-    codespell {[vars]lib_path}
-    codespell {toxinidir}/. --skip {toxinidir}/.git --skip {toxinidir}/.tox \
+    codespell {toxinidir} --skip {toxinidir}/.git --skip {toxinidir}/.tox \
       --skip {toxinidir}/build --skip {toxinidir}/lib --skip {toxinidir}/venv \
       --skip {toxinidir}/.mypy_cache --skip {toxinidir}/icon.svg
     # pflake8 wrapper supports config from pyproject.toml
@@ -79,17 +77,17 @@ deps =
     coverage[toml]
     -r{toxinidir}/requirements.txt
 commands =
-    coverage run --source={[vars]src_path},{[vars]lib_path} \
-        -m pytest --ignore={[vars]tst_path}integration -v --tb native -s {posargs}
+    coverage run --source={[vars]src_path} \
+        -m pytest {[vars]tst_path}/unit -v --tb native -s {posargs}
     coverage report
     coverage xml
 
 [testenv:integration]
 description = Run integration tests
 deps =
+    juju<3.0.0
     pytest
-    juju
     pytest-operator
     -r{toxinidir}/requirements.txt
 commands =
-    pytest -v --tb native --ignore={[vars]tst_path}unit --log-cli-level=INFO -s {posargs}
+    pytest -v --tb native {[vars]tst_path}/integration --log-cli-level=INFO -s {posargs} --cloud microk8s
diff --git a/installers/charm/osm-nglcm/lib/charms/data_platform_libs/v0/data_interfaces.py b/installers/charm/osm-nglcm/lib/charms/data_platform_libs/v0/data_interfaces.py
new file mode 100644 (file)
index 0000000..b3da5aa
--- /dev/null
@@ -0,0 +1,1130 @@
+# Copyright 2023 Canonical Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Library to manage the relation for the data-platform products.
+
+This library contains the Requires and Provides classes for handling the relation
+between an application and multiple managed applications supported by the data team:
+MySQL, PostgreSQL, MongoDB, Redis, and Kafka.
+
+### Database (MySQL, Postgresql, MongoDB, and Redis)
+
+#### Requires Charm
+This library is a uniform interface to a selection of common database
+metadata, with added custom events that add convenience to database management,
+and methods to consume the application related data.
+
+
+The following is an example of using the DatabaseCreatedEvent in the context of
+the application charm code:
+
+```python
+
+from charms.data_platform_libs.v0.data_interfaces import (
+    DatabaseCreatedEvent,
+    DatabaseRequires,
+)
+
+class ApplicationCharm(CharmBase):
+    # Application charm that connects to database charms.
+
+    def __init__(self, *args):
+        super().__init__(*args)
+
+        # Charm events defined in the database requires charm library.
+        self.database = DatabaseRequires(self, relation_name="database", database_name="database")
+        self.framework.observe(self.database.on.database_created, self._on_database_created)
+
+    def _on_database_created(self, event: DatabaseCreatedEvent) -> None:
+        # Handle the created database
+
+        # Create configuration file for app
+        config_file = self._render_app_config_file(
+            event.username,
+            event.password,
+            event.endpoints,
+        )
+
+        # Start application with rendered configuration
+        self._start_application(config_file)
+
+        # Set active status
+        self.unit.status = ActiveStatus("received database credentials")
+```
+
+As shown above, the library provides some custom events to handle specific situations,
+which are listed below:
+
+-  database_created: event emitted when the requested database is created.
+-  endpoints_changed: event emitted when the read/write endpoints of the database have changed.
+-  read_only_endpoints_changed: event emitted when the read-only endpoints of the database
+  have changed. Event is not triggered if read/write endpoints changed too.
+
+If you need to connect multiple database clusters to the same relation endpoint,
+the application charm can implement the same code as if it connected to only
+one database cluster (like the above code example).
+
+To differentiate multiple clusters connected to the same relation endpoint
+the application charm can use the name of the remote application:
+
+```python
+
+def _on_database_created(self, event: DatabaseCreatedEvent) -> None:
+    # Get the remote app name of the cluster that triggered this event
+    cluster = event.relation.app.name
+```
+
+It is also possible to provide an alias for each different database cluster/relation.
+
+So, it is possible to differentiate the clusters in two ways.
+The first is to use the remote application name, i.e., `event.relation.app.name`, as above.
+
+The second way is to use a different event handler for each cluster's events.
+The implementation would be something like the following code:
+
+```python
+
+from charms.data_platform_libs.v0.data_interfaces import (
+    DatabaseCreatedEvent,
+    DatabaseRequires,
+)
+
+class ApplicationCharm(CharmBase):
+    # Application charm that connects to database charms.
+
+    def __init__(self, *args):
+        super().__init__(*args)
+
+        # Define the cluster aliases and one handler for each cluster database created event.
+        self.database = DatabaseRequires(
+            self,
+            relation_name="database",
+            database_name="database",
+            relations_aliases=["cluster1", "cluster2"],
+        )
+        self.framework.observe(
+            self.database.on.cluster1_database_created, self._on_cluster1_database_created
+        )
+        self.framework.observe(
+            self.database.on.cluster2_database_created, self._on_cluster2_database_created
+        )
+
+    def _on_cluster1_database_created(self, event: DatabaseCreatedEvent) -> None:
+        # Handle the created database on the cluster named cluster1
+
+        # Create configuration file for app
+        config_file = self._render_app_config_file(
+            event.username,
+            event.password,
+            event.endpoints,
+        )
+        ...
+
+    def _on_cluster2_database_created(self, event: DatabaseCreatedEvent) -> None:
+        # Handle the created database on the cluster named cluster2
+
+        # Create configuration file for app
+        config_file = self._render_app_config_file(
+            event.username,
+            event.password,
+            event.endpoints,
+        )
+        ...
+
+```
+
+#### Provider Charm
+
+The following is an example of using the DatabaseRequestedEvent in the context of
+the database charm code:
+
+```python
+from charms.data_platform_libs.v0.data_interfaces import DatabaseProvides
+
+class SampleCharm(CharmBase):
+
+    def __init__(self, *args):
+        super().__init__(*args)
+        # Charm events defined in the database provides charm library.
+        self.provided_database = DatabaseProvides(self, relation_name="database")
+        self.framework.observe(self.provided_database.on.database_requested,
+            self._on_database_requested)
+        # Database generic helper
+        self.database = DatabaseHelper()
+
+    def _on_database_requested(self, event: DatabaseRequestedEvent) -> None:
+        # Handle the event triggered by a new database requested in the relation
+        # Retrieve the database name using the charm library.
+        db_name = event.database
+        # generate a new user credential
+        username = self.database.generate_user()
+        password = self.database.generate_password()
+        # set the credentials for the relation
+        self.provided_database.set_credentials(event.relation.id, username, password)
+        # set other variables for the relation event.set_tls("False")
+```
+As shown above, the library provides a custom event (database_requested) to handle
+the situation when an application charm requests a new database to be created.
+It is preferred to subscribe to this event instead of the relation changed event,
+to avoid creating a new database when information other than a database name is
+exchanged in the relation databag.
+
+### Kafka
+
+This library is the interface to use and interact with the Kafka charm. This library contains
+custom events that add convenience to manage Kafka, and provides methods to consume the
+application related data.
+
+#### Requirer Charm
+
+```python
+
+from charms.data_platform_libs.v0.data_interfaces import (
+    BootstrapServerChangedEvent,
+    KafkaRequires,
+    TopicCreatedEvent,
+)
+
+class ApplicationCharm(CharmBase):
+
+    def __init__(self, *args):
+        super().__init__(*args)
+        self.kafka = KafkaRequires(self, "kafka_client", "test-topic")
+        self.framework.observe(
+            self.kafka.on.bootstrap_server_changed, self._on_kafka_bootstrap_server_changed
+        )
+        self.framework.observe(
+            self.kafka.on.topic_created, self._on_kafka_topic_created
+        )
+
+    def _on_kafka_bootstrap_server_changed(self, event: BootstrapServerChangedEvent):
+        # Event triggered when a bootstrap server was changed for this application
+
+        new_bootstrap_server = event.bootstrap_server
+        ...
+
+    def _on_kafka_topic_created(self, event: TopicCreatedEvent):
+        # Event triggered when a topic was created for this application
+        username = event.username
+        password = event.password
+        tls = event.tls
+        tls_ca = event.tls_ca
+        bootstrap_server = event.bootstrap_server
+        consumer_group_prefix = event.consumer_group_prefix
+        zookeeper_uris = event.zookeeper_uris
+        ...
+
+```
+
+As shown above, the library provides some custom events to handle specific situations,
+which are listed below:
+
+- topic_created: event emitted when the requested topic is created.
+- bootstrap_server_changed: event emitted when the bootstrap server has changed.
+- credential_changed: event emitted when the Kafka credentials have changed.
+
+#### Provider Charm
+
+Following the previous example, this is an example of the provider charm.
+
+```python
+from charms.data_platform_libs.v0.data_interfaces import (
+    KafkaProvides,
+    TopicRequestedEvent,
+)
+
+class SampleCharm(CharmBase):
+
+    def __init__(self, *args):
+        super().__init__(*args)
+
+        # Default charm events.
+        self.framework.observe(self.on.start, self._on_start)
+
+        # Charm events defined in the Kafka Provides charm library.
+        self.kafka_provider = KafkaProvides(self, relation_name="kafka_client")
+        self.framework.observe(self.kafka_provider.on.topic_requested, self._on_topic_requested)
+        # Kafka generic helper
+        self.kafka = KafkaHelper()
+
+    def _on_topic_requested(self, event: TopicRequestedEvent):
+        # Handle the on_topic_requested event.
+
+        topic = event.topic
+        relation_id = event.relation.id
+        # set connection info in the databag relation
+        self.kafka_provider.set_bootstrap_server(relation_id, self.kafka.get_bootstrap_server())
+        self.kafka_provider.set_credentials(relation_id, username=username, password=password)
+        self.kafka_provider.set_consumer_group_prefix(relation_id, ...)
+        self.kafka_provider.set_tls(relation_id, "False")
+        self.kafka_provider.set_zookeeper_uris(relation_id, ...)
+
+```
+As shown above, the library provides a custom event (topic_requested) to handle
+the situation when an application charm requests a new topic to be created.
+It is preferred to subscribe to this event instead of the relation changed event,
+to avoid creating a new topic when information other than a topic name is
+exchanged in the relation databag.
+"""
+
+import json
+import logging
+from abc import ABC, abstractmethod
+from collections import namedtuple
+from datetime import datetime
+from typing import List, Optional
+
+from ops.charm import (
+    CharmBase,
+    CharmEvents,
+    RelationChangedEvent,
+    RelationEvent,
+    RelationJoinedEvent,
+)
+from ops.framework import EventSource, Object
+from ops.model import Relation
+
+# The unique Charmhub library identifier, never change it
+LIBID = "6c3e6b6680d64e9c89e611d1a15f65be"
+
+# Increment this major API version when introducing breaking changes
+LIBAPI = 0
+
+# Increment this PATCH version before using `charmcraft publish-lib` or reset
+# to 0 if you are raising the major API version
+LIBPATCH = 7
+
+PYDEPS = ["ops>=2.0.0"]
+
+logger = logging.getLogger(__name__)
+
+Diff = namedtuple("Diff", "added changed deleted")
+Diff.__doc__ = """
+A tuple for storing the diff between two data mappings.
+
+added - keys that were added
+changed - keys that still exist but have new values
+deleted - keys that were deleted"""
+
+
+def diff(event: RelationChangedEvent, bucket: str) -> Diff:
+    """Retrieves the diff of the data in the relation changed databag.
+
+    Args:
+        event: relation changed event.
+        bucket: bucket of the databag (app or unit)
+
+    Returns:
+        a Diff instance containing the added, deleted and changed
+            keys from the event relation databag.
+    """
+    # Retrieve the old data from the data key in the application relation databag.
+    old_data = json.loads(event.relation.data[bucket].get("data", "{}"))
+    # Retrieve the new data from the event relation databag.
+    new_data = {
+        key: value for key, value in event.relation.data[event.app].items() if key != "data"
+    }
+
+    # These are the keys that were added to the databag and triggered this event.
+    added = new_data.keys() - old_data.keys()
+    # These are the keys that were removed from the databag and triggered this event.
+    deleted = old_data.keys() - new_data.keys()
+    # These are the keys that already existed in the databag,
+    # but had their values changed.
+    changed = {key for key in old_data.keys() & new_data.keys() if old_data[key] != new_data[key]}
+    # Convert the new_data to a serializable format and save it for a next diff check.
+    event.relation.data[bucket].update({"data": json.dumps(new_data)})
+
+    # Return the diff with all possible changes.
+    return Diff(added, changed, deleted)
+
+
+# Base DataProvides and DataRequires
+
+
+class DataProvides(Object, ABC):
+    """Base provides-side of the data products relation."""
+
+    def __init__(self, charm: CharmBase, relation_name: str) -> None:
+        super().__init__(charm, relation_name)
+        self.charm = charm
+        self.local_app = self.charm.model.app
+        self.local_unit = self.charm.unit
+        self.relation_name = relation_name
+        self.framework.observe(
+            charm.on[relation_name].relation_changed,
+            self._on_relation_changed,
+        )
+
+    def _diff(self, event: RelationChangedEvent) -> Diff:
+        """Retrieves the diff of the data in the relation changed databag.
+
+        Args:
+            event: relation changed event.
+
+        Returns:
+            a Diff instance containing the added, deleted and changed
+                keys from the event relation databag.
+        """
+        return diff(event, self.local_app)
+
+    @abstractmethod
+    def _on_relation_changed(self, event: RelationChangedEvent) -> None:
+        """Event emitted when the relation data has changed."""
+        raise NotImplementedError
+
+    def fetch_relation_data(self) -> dict:
+        """Retrieves data from relation.
+
+        This function can be used to retrieve data from a relation
+        in the charm code when outside an event callback.
+
+        Returns:
+            a dict of the values stored in the relation data bag
+                for all relation instances (indexed by the relation id).
+        """
+        data = {}
+        for relation in self.relations:
+            data[relation.id] = {
+                key: value for key, value in relation.data[relation.app].items() if key != "data"
+            }
+        return data
+
+    def _update_relation_data(self, relation_id: int, data: dict) -> None:
+        """Updates a set of key-value pairs in the relation.
+
+        This function writes in the application data bag, therefore,
+        only the leader unit can call it.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            data: dict containing the key-value pairs
+                that should be updated in the relation.
+        """
+        if self.local_unit.is_leader():
+            relation = self.charm.model.get_relation(self.relation_name, relation_id)
+            relation.data[self.local_app].update(data)
+
+    @property
+    def relations(self) -> List[Relation]:
+        """The list of Relation instances associated with this relation_name."""
+        return list(self.charm.model.relations[self.relation_name])
+
+    def set_credentials(self, relation_id: int, username: str, password: str) -> None:
+        """Set credentials.
+
+        This function writes in the application data bag, therefore,
+        only the leader unit can call it.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            username: user that was created.
+            password: password of the created user.
+        """
+        self._update_relation_data(
+            relation_id,
+            {
+                "username": username,
+                "password": password,
+            },
+        )
+
+    def set_tls(self, relation_id: int, tls: str) -> None:
+        """Set whether TLS is enabled.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            tls: whether tls is enabled (True or False).
+        """
+        self._update_relation_data(relation_id, {"tls": tls})
+
+    def set_tls_ca(self, relation_id: int, tls_ca: str) -> None:
+        """Set the TLS CA in the application relation databag.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            tls_ca: TLS certification authority.
+        """
+        self._update_relation_data(relation_id, {"tls_ca": tls_ca})
+
+
+class DataRequires(Object, ABC):
+    """Requires-side of the relation."""
+
+    def __init__(
+        self,
+        charm,
+        relation_name: str,
+        extra_user_roles: Optional[str] = None,
+    ):
+        """Manager of base client relations."""
+        super().__init__(charm, relation_name)
+        self.charm = charm
+        self.extra_user_roles = extra_user_roles
+        self.local_app = self.charm.model.app
+        self.local_unit = self.charm.unit
+        self.relation_name = relation_name
+        self.framework.observe(
+            self.charm.on[relation_name].relation_joined, self._on_relation_joined_event
+        )
+        self.framework.observe(
+            self.charm.on[relation_name].relation_changed, self._on_relation_changed_event
+        )
+
+    @abstractmethod
+    def _on_relation_joined_event(self, event: RelationJoinedEvent) -> None:
+        """Event emitted when the application joins the relation."""
+        raise NotImplementedError
+
+    @abstractmethod
+    def _on_relation_changed_event(self, event: RelationChangedEvent) -> None:
+        """Event emitted when the relation data has changed."""
+        raise NotImplementedError
+
+    def fetch_relation_data(self) -> dict:
+        """Retrieves data from relation.
+
+        This function can be used to retrieve data from a relation
+        in the charm code when outside an event callback.
+        This function cannot be used in `*-relation-broken` events; doing so raises an exception.
+
+        Returns:
+            a dict of the values stored in the relation data bag
+                for all relation instances (indexed by the relation ID).
+        """
+        data = {}
+        for relation in self.relations:
+            data[relation.id] = {
+                key: value for key, value in relation.data[relation.app].items() if key != "data"
+            }
+        return data
+
+    def _update_relation_data(self, relation_id: int, data: dict) -> None:
+        """Updates a set of key-value pairs in the relation.
+
+        This function writes in the application data bag, therefore,
+        only the leader unit can call it.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            data: dict containing the key-value pairs
+                that should be updated in the relation.
+        """
+        if self.local_unit.is_leader():
+            relation = self.charm.model.get_relation(self.relation_name, relation_id)
+            relation.data[self.local_app].update(data)
+
+    def _diff(self, event: RelationChangedEvent) -> Diff:
+        """Retrieves the diff of the data in the relation changed databag.
+
+        Args:
+            event: relation changed event.
+
+        Returns:
+            a Diff instance containing the added, deleted and changed
+                keys from the event relation databag.
+        """
+        return diff(event, self.local_unit)
+
+    @property
+    def relations(self) -> List[Relation]:
+        """The list of Relation instances associated with this relation_name."""
+        return [
+            relation
+            for relation in self.charm.model.relations[self.relation_name]
+            if self._is_relation_active(relation)
+        ]
+
+    @staticmethod
+    def _is_relation_active(relation: Relation):
+        try:
+            _ = repr(relation.data)
+            return True
+        except RuntimeError:
+            return False
+
+    @staticmethod
+    def _is_resource_created_for_relation(relation: Relation):
+        return (
+            "username" in relation.data[relation.app] and "password" in relation.data[relation.app]
+        )
+
+    def is_resource_created(self, relation_id: Optional[int] = None) -> bool:
+        """Check if the resource has been created.
+
+        This function can be used in the charm code, outside an event callback,
+        to check whether the Provider has answered with data.
+
+        Args:
+            relation_id (int, optional): When provided the check is done only for the relation id
+                provided, otherwise the check is done for all relations
+
+        Returns:
+            True or False
+
+        Raises:
+            IndexError: If relation_id is provided but that relation does not exist
+        """
+        if relation_id is not None:
+            try:
+                relation = [relation for relation in self.relations if relation.id == relation_id][
+                    0
+                ]
+                return self._is_resource_created_for_relation(relation)
+            except IndexError:
+                raise IndexError(f"relation id {relation_id} cannot be accessed")
+        else:
+            return (
+                all(
+                    [
+                        self._is_resource_created_for_relation(relation)
+                        for relation in self.relations
+                    ]
+                )
+                if self.relations
+                else False
+            )
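The check above reduces to a small predicate over the relation databags. A minimal, illustrative sketch with plain dicts standing in for relation data (the function name and data shapes here are assumptions for demonstration, not part of the library):

```python
from typing import Optional

def is_resource_created(relations: dict, relation_id: Optional[int] = None) -> bool:
    """Mirror DataRequires.is_resource_created over a {relation_id: databag} dict."""
    def created(bag: dict) -> bool:
        # The provider signals readiness by publishing both credentials.
        return "username" in bag and "password" in bag

    if relation_id is not None:
        if relation_id not in relations:
            raise IndexError(f"relation id {relation_id} cannot be accessed")
        return created(relations[relation_id])
    # Without an id: True only when every relation has credentials,
    # and False when there are no relations at all.
    return all(created(bag) for bag in relations.values()) if relations else False
```

Note the asymmetry: with no relations the aggregate check is False, while with several relations every one of them must have published credentials.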
+
+
+# General events
+
+
+class ExtraRoleEvent(RelationEvent):
+    """Base class for data events."""
+
+    @property
+    def extra_user_roles(self) -> Optional[str]:
+        """Returns the extra user roles that were requested."""
+        return self.relation.data[self.relation.app].get("extra-user-roles")
+
+
+class AuthenticationEvent(RelationEvent):
+    """Base class for authentication fields for events."""
+
+    @property
+    def username(self) -> Optional[str]:
+        """Returns the created username."""
+        return self.relation.data[self.relation.app].get("username")
+
+    @property
+    def password(self) -> Optional[str]:
+        """Returns the password for the created user."""
+        return self.relation.data[self.relation.app].get("password")
+
+    @property
+    def tls(self) -> Optional[str]:
+        """Returns whether TLS is configured."""
+        return self.relation.data[self.relation.app].get("tls")
+
+    @property
+    def tls_ca(self) -> Optional[str]:
+        """Returns TLS CA."""
+        return self.relation.data[self.relation.app].get("tls-ca")
+
+
+# Database related events and fields
+
+
+class DatabaseProvidesEvent(RelationEvent):
+    """Base class for database events."""
+
+    @property
+    def database(self) -> Optional[str]:
+        """Returns the database that was requested."""
+        return self.relation.data[self.relation.app].get("database")
+
+
+class DatabaseRequestedEvent(DatabaseProvidesEvent, ExtraRoleEvent):
+    """Event emitted when a new database is requested for use on this relation."""
+
+
+class DatabaseProvidesEvents(CharmEvents):
+    """Database events.
+
+    This class defines the events that the database can emit.
+    """
+
+    database_requested = EventSource(DatabaseRequestedEvent)
+
+
+class DatabaseRequiresEvent(RelationEvent):
+    """Base class for database events."""
+
+    @property
+    def endpoints(self) -> Optional[str]:
+        """Returns a comma separated list of read/write endpoints."""
+        return self.relation.data[self.relation.app].get("endpoints")
+
+    @property
+    def read_only_endpoints(self) -> Optional[str]:
+        """Returns a comma separated list of read only endpoints."""
+        return self.relation.data[self.relation.app].get("read-only-endpoints")
+
+    @property
+    def replset(self) -> Optional[str]:
+        """Returns the replicaset name.
+
+        MongoDB only.
+        """
+        return self.relation.data[self.relation.app].get("replset")
+
+    @property
+    def uris(self) -> Optional[str]:
+        """Returns the connection URIs.
+
+        MongoDB, Redis, OpenSearch.
+        """
+        return self.relation.data[self.relation.app].get("uris")
+
+    @property
+    def version(self) -> Optional[str]:
+        """Returns the version of the database.
+
+        Version as informed by the database daemon.
+        """
+        return self.relation.data[self.relation.app].get("version")
+
+
+class DatabaseCreatedEvent(AuthenticationEvent, DatabaseRequiresEvent):
+    """Event emitted when a new database is created for use on this relation."""
+
+
+class DatabaseEndpointsChangedEvent(AuthenticationEvent, DatabaseRequiresEvent):
+    """Event emitted when the read/write endpoints are changed."""
+
+
+class DatabaseReadOnlyEndpointsChangedEvent(AuthenticationEvent, DatabaseRequiresEvent):
+    """Event emitted when the read only endpoints are changed."""
+
+
+class DatabaseRequiresEvents(CharmEvents):
+    """Database events.
+
+    This class defines the events that the database can emit.
+    """
+
+    database_created = EventSource(DatabaseCreatedEvent)
+    endpoints_changed = EventSource(DatabaseEndpointsChangedEvent)
+    read_only_endpoints_changed = EventSource(DatabaseReadOnlyEndpointsChangedEvent)
+
+
+# Database Provider and Requires
+
+
+class DatabaseProvides(DataProvides):
+    """Provider-side of the database relations."""
+
+    on = DatabaseProvidesEvents()
+
+    def __init__(self, charm: CharmBase, relation_name: str) -> None:
+        super().__init__(charm, relation_name)
+
+    def _on_relation_changed(self, event: RelationChangedEvent) -> None:
+        """Event emitted when the relation has changed."""
+        # Only the leader should handle this event.
+        if not self.local_unit.is_leader():
+            return
+
+        # Check which data has changed to emit custom events.
+        diff = self._diff(event)
+
+        # Emit a database requested event if the setup key (database name and optional
+        # extra user roles) was added to the relation databag by the application.
+        if "database" in diff.added:
+            self.on.database_requested.emit(event.relation, app=event.app, unit=event.unit)
+
+    def set_endpoints(self, relation_id: int, connection_strings: str) -> None:
+        """Set database primary connections.
+
+        This function writes in the application data bag, therefore,
+        only the leader unit can call it.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            connection_strings: database hosts and ports comma separated list.
+        """
+        self._update_relation_data(relation_id, {"endpoints": connection_strings})
+
+    def set_read_only_endpoints(self, relation_id: int, connection_strings: str) -> None:
+        """Set database replicas connection strings.
+
+        This function writes in the application data bag, therefore,
+        only the leader unit can call it.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            connection_strings: database hosts and ports comma separated list.
+        """
+        self._update_relation_data(relation_id, {"read-only-endpoints": connection_strings})
+
+    def set_replset(self, relation_id: int, replset: str) -> None:
+        """Set replica set name in the application relation databag.
+
+        MongoDB only.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            replset: replica set name.
+        """
+        self._update_relation_data(relation_id, {"replset": replset})
+
+    def set_uris(self, relation_id: int, uris: str) -> None:
+        """Set the database connection URIs in the application relation databag.
+
+        MongoDB, Redis, and OpenSearch only.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            uris: connection URIs.
+        """
+        self._update_relation_data(relation_id, {"uris": uris})
+
+    def set_version(self, relation_id: int, version: str) -> None:
+        """Set the database version in the application relation databag.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            version: database version.
+        """
+        self._update_relation_data(relation_id, {"version": version})
+
+
+class DatabaseRequires(DataRequires):
+    """Requires-side of the database relation."""
+
+    on = DatabaseRequiresEvents()
+
+    def __init__(
+        self,
+        charm,
+        relation_name: str,
+        database_name: str,
+        extra_user_roles: str = None,
+        relations_aliases: List[str] = None,
+    ):
+        """Manager of database client relations."""
+        super().__init__(charm, relation_name, extra_user_roles)
+        self.database = database_name
+        self.relations_aliases = relations_aliases
+
+        # Define custom event names for each alias.
+        if relations_aliases:
+            # Ensure the number of aliases matches the maximum
+            # of connections allowed in the specific relation.
+            relation_connection_limit = self.charm.meta.requires[relation_name].limit
+            if len(relations_aliases) != relation_connection_limit:
+                raise ValueError(
+                    f"The number of aliases must match the maximum number of connections allowed in the relation. "
+                    f"Expected {relation_connection_limit}, got {len(relations_aliases)}"
+                )
+
+            for relation_alias in relations_aliases:
+                self.on.define_event(f"{relation_alias}_database_created", DatabaseCreatedEvent)
+                self.on.define_event(
+                    f"{relation_alias}_endpoints_changed", DatabaseEndpointsChangedEvent
+                )
+                self.on.define_event(
+                    f"{relation_alias}_read_only_endpoints_changed",
+                    DatabaseReadOnlyEndpointsChangedEvent,
+                )
+
+    def _assign_relation_alias(self, relation_id: int) -> None:
+        """Assigns an alias to a relation.
+
+        This function writes in the unit data bag.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+        """
+        # If no aliases were provided, return immediately.
+        if not self.relations_aliases:
+            return
+
+        # Return if an alias was already assigned to this relation
+        # (as when more than one unit joins the relation).
+        if (
+            self.charm.model.get_relation(self.relation_name, relation_id)
+            .data[self.local_unit]
+            .get("alias")
+        ):
+            return
+
+        # Retrieve the available aliases (the ones that weren't assigned to any relation).
+        available_aliases = self.relations_aliases[:]
+        for relation in self.charm.model.relations[self.relation_name]:
+            alias = relation.data[self.local_unit].get("alias")
+            if alias:
+                logger.debug("Alias %s was already assigned to relation %d", alias, relation.id)
+                available_aliases.remove(alias)
+
+        # Set the alias in the unit relation databag of the specific relation.
+        relation = self.charm.model.get_relation(self.relation_name, relation_id)
+        relation.data[self.local_unit].update({"alias": available_aliases[0]})
+
+    def _emit_aliased_event(self, event: RelationChangedEvent, event_name: str) -> None:
+        """Emit an aliased event to a particular relation if it has an alias.
+
+        Args:
+            event: the relation changed event that was received.
+            event_name: the name of the event to emit.
+        """
+        alias = self._get_relation_alias(event.relation.id)
+        if alias:
+            getattr(self.on, f"{alias}_{event_name}").emit(
+                event.relation, app=event.app, unit=event.unit
+            )
+
+    def _get_relation_alias(self, relation_id: int) -> Optional[str]:
+        """Returns the relation alias.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+
+        Returns:
+            the relation alias or None if the relation was not found.
+        """
+        for relation in self.charm.model.relations[self.relation_name]:
+            if relation.id == relation_id:
+                return relation.data[self.local_unit].get("alias")
+        return None
+
+    def _on_relation_joined_event(self, event: RelationJoinedEvent) -> None:
+        """Event emitted when the application joins the database relation."""
+        # If relations aliases were provided, assign one to the relation.
+        self._assign_relation_alias(event.relation.id)
+
+        # Sets both database and extra user roles in the relation
+        # if the roles are provided. Otherwise, sets only the database.
+        if self.extra_user_roles:
+            self._update_relation_data(
+                event.relation.id,
+                {
+                    "database": self.database,
+                    "extra-user-roles": self.extra_user_roles,
+                },
+            )
+        else:
+            self._update_relation_data(event.relation.id, {"database": self.database})
+
+    def _on_relation_changed_event(self, event: RelationChangedEvent) -> None:
+        """Event emitted when the database relation has changed."""
+        # Check which data has changed to emit custom events.
+        diff = self._diff(event)
+
+        # Check if the database is created
+        # (the database charm shared the credentials).
+        if "username" in diff.added and "password" in diff.added:
+            # Emit the default event (the one without an alias).
+            logger.info("database created at %s", datetime.now())
+            self.on.database_created.emit(event.relation, app=event.app, unit=event.unit)
+
+            # Emit the aliased event (if any).
+            self._emit_aliased_event(event, "database_created")
+
+            # To avoid unnecessary application restarts do not trigger
+            # "endpoints_changed" event if "database_created" is triggered.
+            return
+
+        # Emit an endpoints changed event if the database
+        # added or changed this info in the relation databag.
+        if "endpoints" in diff.added or "endpoints" in diff.changed:
+            # Emit the default event (the one without an alias).
+            logger.info("endpoints changed on %s", datetime.now())
+            self.on.endpoints_changed.emit(event.relation, app=event.app, unit=event.unit)
+
+            # Emit the aliased event (if any).
+            self._emit_aliased_event(event, "endpoints_changed")
+
+            # To avoid unnecessary application restarts do not trigger
+            # "read_only_endpoints_changed" event if "endpoints_changed" is triggered.
+            return
+
+        # Emit a read only endpoints changed event if the database
+        # added or changed this info in the relation databag.
+        if "read-only-endpoints" in diff.added or "read-only-endpoints" in diff.changed:
+            # Emit the default event (the one without an alias).
+            logger.info("read-only-endpoints changed on %s", datetime.now())
+            self.on.read_only_endpoints_changed.emit(
+                event.relation, app=event.app, unit=event.unit
+            )
+
+            # Emit the aliased event (if any).
+            self._emit_aliased_event(event, "read_only_endpoints_changed")
+
+
+# Kafka related events
+
+
+class KafkaProvidesEvent(RelationEvent):
+    """Base class for Kafka events."""
+
+    @property
+    def topic(self) -> Optional[str]:
+        """Returns the topic that was requested."""
+        return self.relation.data[self.relation.app].get("topic")
+
+
+class TopicRequestedEvent(KafkaProvidesEvent, ExtraRoleEvent):
+    """Event emitted when a new topic is requested for use on this relation."""
+
+
+class KafkaProvidesEvents(CharmEvents):
+    """Kafka events.
+
+    This class defines the events that Kafka can emit.
+    """
+
+    topic_requested = EventSource(TopicRequestedEvent)
+
+
+class KafkaRequiresEvent(RelationEvent):
+    """Base class for Kafka events."""
+
+    @property
+    def bootstrap_server(self) -> Optional[str]:
+        """Returns a a comma-seperated list of broker uris."""
+        return self.relation.data[self.relation.app].get("endpoints")
+
+    @property
+    def consumer_group_prefix(self) -> Optional[str]:
+        """Returns the consumer-group-prefix."""
+        return self.relation.data[self.relation.app].get("consumer-group-prefix")
+
+    @property
+    def zookeeper_uris(self) -> Optional[str]:
+        """Returns a comma separated list of Zookeeper uris."""
+        return self.relation.data[self.relation.app].get("zookeeper-uris")
+
+
+class TopicCreatedEvent(AuthenticationEvent, KafkaRequiresEvent):
+    """Event emitted when a new topic is created for use on this relation."""
+
+
+class BootstrapServerChangedEvent(AuthenticationEvent, KafkaRequiresEvent):
+    """Event emitted when the bootstrap server is changed."""
+
+
+class KafkaRequiresEvents(CharmEvents):
+    """Kafka events.
+
+    This class defines the events that Kafka can emit.
+    """
+
+    topic_created = EventSource(TopicCreatedEvent)
+    bootstrap_server_changed = EventSource(BootstrapServerChangedEvent)
+
+
+# Kafka Provides and Requires
+
+
+class KafkaProvides(DataProvides):
+    """Provider-side of the Kafka relation."""
+
+    on = KafkaProvidesEvents()
+
+    def __init__(self, charm: CharmBase, relation_name: str) -> None:
+        super().__init__(charm, relation_name)
+
+    def _on_relation_changed(self, event: RelationChangedEvent) -> None:
+        """Event emitted when the relation has changed."""
+        # Only the leader should handle this event.
+        if not self.local_unit.is_leader():
+            return
+
+        # Check which data has changed to emit custom events.
+        diff = self._diff(event)
+
+        # Emit a topic requested event if the setup key (topic name and optional
+        # extra user roles) was added to the relation databag by the application.
+        if "topic" in diff.added:
+            self.on.topic_requested.emit(event.relation, app=event.app, unit=event.unit)
+
+    def set_bootstrap_server(self, relation_id: int, bootstrap_server: str) -> None:
+        """Set the bootstrap server in the application relation databag.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            bootstrap_server: the bootstrap server address.
+        """
+        self._update_relation_data(relation_id, {"endpoints": bootstrap_server})
+
+    def set_consumer_group_prefix(self, relation_id: int, consumer_group_prefix: str) -> None:
+        """Set the consumer group prefix in the application relation databag.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            consumer_group_prefix: the consumer group prefix string.
+        """
+        self._update_relation_data(relation_id, {"consumer-group-prefix": consumer_group_prefix})
+
+    def set_zookeeper_uris(self, relation_id: int, zookeeper_uris: str) -> None:
+        """Set the zookeeper uris in the application relation databag.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            zookeeper_uris: comma-separated list of ZooKeeper server URIs.
+        """
+        self._update_relation_data(relation_id, {"zookeeper-uris": zookeeper_uris})
+
+
+class KafkaRequires(DataRequires):
+    """Requires-side of the Kafka relation."""
+
+    on = KafkaRequiresEvents()
+
+    def __init__(self, charm, relation_name: str, topic: str, extra_user_roles: str = None):
+        """Manager of Kafka client relations."""
+        super().__init__(charm, relation_name, extra_user_roles)
+        self.charm = charm
+        self.topic = topic
+
+    def _on_relation_joined_event(self, event: RelationJoinedEvent) -> None:
+        """Event emitted when the application joins the Kafka relation."""
+        # Sets both topic and extra user roles in the relation
+        # if the roles are provided. Otherwise, sets only the topic.
+        self._update_relation_data(
+            event.relation.id,
+            {
+                "topic": self.topic,
+                "extra-user-roles": self.extra_user_roles,
+            }
+            if self.extra_user_roles is not None
+            else {"topic": self.topic},
+        )
+
+    def _on_relation_changed_event(self, event: RelationChangedEvent) -> None:
+        """Event emitted when the Kafka relation has changed."""
+        # Check which data has changed to emit custom events.
+        diff = self._diff(event)
+
+        # Check if the topic is created
+        # (the Kafka charm shared the credentials).
+        if "username" in diff.added and "password" in diff.added:
+            # Emit the default event (the one without an alias).
+            logger.info("topic created at %s", datetime.now())
+            self.on.topic_created.emit(event.relation, app=event.app, unit=event.unit)
+
+            # To avoid unnecessary application restarts do not trigger
+            # "endpoints_changed" event if "topic_created" is triggered.
+            return
+
+        # Emit an endpoints (bootstrap-server) changed event if the Kafka charm
+        # added or changed this info in the relation databag.
+        if "endpoints" in diff.added or "endpoints" in diff.changed:
+            # Emit the default event (the one without an alias).
+            logger.info("endpoints changed on %s", datetime.now())
+            self.on.bootstrap_server_changed.emit(
+                event.relation, app=event.app, unit=event.unit
+            )
+            return
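On the consuming side, the `endpoints` field published by the Kafka provider is a comma-separated list of broker addresses. A small illustrative helper (not part of the library) for splitting it before handing it to a Kafka client:

```python
def parse_bootstrap_servers(bootstrap_server: str) -> list:
    """Split 'host1:9092,host2:9092' into (host, port) tuples."""
    pairs = []
    for entry in bootstrap_server.split(","):
        # rpartition tolerates hosts that themselves contain ':' (e.g. IPv6-ish forms).
        host, _, port = entry.strip().rpartition(":")
        pairs.append((host, int(port)))
    return pairs
```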
index df3da94..efc6d74 100644 (file)
@@ -235,12 +235,14 @@ wait
 @dataclass
 class SubModule:
     """Represent RO Submodules."""
+
     sub_module_path: str
     container_path: str
 
 
 class HostPath:
     """Represents a hostpath."""
+
     def __init__(self, config: str, container_path: str, submodules: dict = None) -> None:
         mount_path_items = config.split("-")
         mount_path_items.reverse()
@@ -257,6 +259,7 @@ class HostPath:
             self.container_path = container_path
             self.module_name = container_path.split("/")[-1]
 
+
 class DebugMode(Object):
     """Class to handle the debug-mode."""
 
@@ -432,7 +435,9 @@ class DebugMode(Object):
             logger.debug(f"adding symlink for {hostpath.config}")
             if len(hostpath.sub_module_dict) > 0:
                 for sub_module in hostpath.sub_module_dict.keys():
-                    self.container.exec(["rm", "-rf", hostpath.sub_module_dict[sub_module].container_path]).wait_output()
+                    self.container.exec(
+                        ["rm", "-rf", hostpath.sub_module_dict[sub_module].container_path]
+                    ).wait_output()
                     self.container.exec(
                         [
                             "ln",
@@ -506,7 +511,6 @@ class DebugMode(Object):
     def _delete_hostpath_from_statefulset(self, hostpath: HostPath, statefulset: StatefulSet):
         hostpath_unmounted = False
         for volume in statefulset.spec.template.spec.volumes:
-
             if hostpath.config != volume.name:
                 continue
 
index 638d13e..65fc37d 100644 (file)
@@ -54,7 +54,7 @@ resources:
 
 requires:
   mongodb:
-    interface: mongodb
+    interface: mongodb_client
     limit: 1
   temporal:
     interface: frontend
index cb303a3..398d4ad 100644 (file)
@@ -17,7 +17,7 @@
 #
 # To get in touch with the maintainers, please contact:
 # osm-charmers@lists.launchpad.net
-ops >= 1.2.0
+ops < 2.2
 lightkube
 lightkube-models
 # git+https://github.com/charmed-osm/config-validator/
index feec54f..4169894 100755 (executable)
@@ -30,6 +30,7 @@ See more: https://charmhub.io/osm
 import logging
 from typing import Any, Dict
 
+from charms.data_platform_libs.v0.data_interfaces import DatabaseRequires
 from charms.osm_libs.v0.utils import (
     CharmError,
     DebugMode,
@@ -44,8 +45,6 @@ from ops.framework import EventSource, StoredState
 from ops.main import main
 from ops.model import ActiveStatus, Container
 
-from legacy_interfaces import MongoClient
-
 HOSTPATHS = [
     HostPath(
         config="lcm-hostpath",
@@ -82,7 +81,9 @@ class OsmNGLcmCharm(CharmBase):
         super().__init__(*args)
         self.vca = VcaRequires(self)
         self.temporal = TemporalRequires(self)
-        self.mongodb_client = MongoClient(self, "mongodb")
+        self.mongodb_client = DatabaseRequires(
+            self, "mongodb", database_name="osm", extra_user_roles="admin"
+        )
         self._observe_charm_events()
         self.container: Container = self.unit.get_container(self.container_name)
         self.debug_mode = DebugMode(self, self._stored, self.container, HOSTPATHS)
@@ -171,7 +172,7 @@ class OsmNGLcmCharm(CharmBase):
             self.on.config_changed: self._on_config_changed,
             self.on.update_status: self._on_update_status,
             # Relation events
-            self.on["mongodb"].relation_changed: self._on_config_changed,
+            self.mongodb_client.on.database_created: self._on_config_changed,
             self.on["mongodb"].relation_broken: self._on_required_relation_broken,
             self.on["temporal"].relation_changed: self._on_config_changed,
             self.on["temporal"].relation_broken: self._on_required_relation_broken,
@@ -192,7 +193,7 @@ class OsmNGLcmCharm(CharmBase):
         logger.debug("check for missing relations")
         missing_relations = []
 
-        if self.mongodb_client.is_missing_data_in_unit():
+        if not self._is_database_available():
             missing_relations.append("mongodb")
         if not self.temporal.host or not self.temporal.port:
             missing_relations.append("temporal")
@@ -204,6 +205,12 @@ class OsmNGLcmCharm(CharmBase):
             logger.warning(error_msg)
             raise CharmError(error_msg)
 
+    def _is_database_available(self) -> bool:
+        try:
+            return self.mongodb_client.is_resource_created()
+        except KeyError:
+            return False
+
     def _configure_service(self, container: Container) -> None:
         """Add Pebble layer with the lcm service."""
         logger.debug(f"configuring {self.app.name} service")
@@ -217,13 +224,13 @@ class OsmNGLcmCharm(CharmBase):
             "OSMLCM_GLOBAL_LOGLEVEL": self.config["log-level"].upper(),
             # Database configuration
             "OSMLCM_DATABASE_DRIVER": "mongo",
-            "OSMLCM_DATABASE_URI": self.mongodb_client.connection_string,
+            "OSMLCM_DATABASE_URI": self._get_mongodb_uri(),
             "OSMLCM_DATABASE_COMMONKEY": self.config["database-commonkey"],
             # Storage configuration
             "OSMLCM_STORAGE_DRIVER": "mongo",
             "OSMLCM_STORAGE_PATH": "/app/storage",
             "OSMLCM_STORAGE_COLLECTION": "files",
-            "OSMLCM_STORAGE_URI": self.mongodb_client.connection_string,
+            "OSMLCM_STORAGE_URI": self._get_mongodb_uri(),
             "OSMLCM_VCA_HELM_CA_CERTS": self.config["helm-ca-certs"],
             "OSMLCM_VCA_STABLEREPOURL": self.config["helm-stable-repo-url"],
             # Temporal configuration
@@ -264,6 +271,9 @@ class OsmNGLcmCharm(CharmBase):
         logger.info(f"Layer: {layer_config}")
         return layer_config
 
+    def _get_mongodb_uri(self):
+        return list(self.mongodb_client.fetch_relation_data().values())[0]["uris"]
+
 
 if __name__ == "__main__":  # pragma: no cover
     main(OsmNGLcmCharm)
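`_get_mongodb_uri` above depends on the shape returned by `fetch_relation_data`: a dict keyed by relation id whose values are the fields published by the provider. A sketch with illustrative sample data (the URI and field values below are made up for demonstration):

```python
def get_mongodb_uri(relation_data: dict) -> str:
    """Pick the 'uris' field published by the first related database application."""
    # Raises IndexError/KeyError if no relation exists or 'uris' is not yet
    # published, which is why the charm first checks resource availability.
    return list(relation_data.values())[0]["uris"]

# Hypothetical databag as a MongoDB provider might publish it over the relation.
sample = {
    7: {
        "username": "osm",
        "password": "secret",
        "uris": "mongodb://osm:secret@mongodb-0:27017/osm?replicaSet=rs0",
    }
}
```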
diff --git a/installers/charm/osm-nglcm/src/legacy_interfaces.py b/installers/charm/osm-nglcm/src/legacy_interfaces.py
deleted file mode 100644 (file)
index d56f31d..0000000
+++ /dev/null
@@ -1,107 +0,0 @@
-#!/usr/bin/env python3
-# Copyright 2022 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-#
-# flake8: noqa
-
-import ops
-
-
-class BaseRelationClient(ops.framework.Object):
-    """Requires side of a Kafka Endpoint"""
-
-    def __init__(
-        self, charm: ops.charm.CharmBase, relation_name: str, mandatory_fields: list = []
-    ):
-        super().__init__(charm, relation_name)
-        self.relation_name = relation_name
-        self.mandatory_fields = mandatory_fields
-        self._update_relation()
-
-    def get_data_from_unit(self, key: str):
-        if not self.relation:
-            # This update relation doesn't seem to be needed, but I added it because apparently
-            # the data is empty in the unit tests.
-            # In reality, the constructor is called in every hook.
-            # In the unit tests when doing an update_relation_data, apparently it is not called.
-            self._update_relation()
-        if self.relation:
-            for unit in self.relation.units:
-                data = self.relation.data[unit].get(key)
-                if data:
-                    return data
-
-    def get_data_from_app(self, key: str):
-        if not self.relation or self.relation.app not in self.relation.data:
-            # This update relation doesn't seem to be needed, but I added it because apparently
-            # the data is empty in the unit tests.
-            # In reality, the constructor is called in every hook.
-            # In the unit tests when doing an update_relation_data, apparently it is not called.
-            self._update_relation()
-        if self.relation and self.relation.app in self.relation.data:
-            data = self.relation.data[self.relation.app].get(key)
-            if data:
-                return data
-
-    def is_missing_data_in_unit(self):
-        return not all([self.get_data_from_unit(field) for field in self.mandatory_fields])
-
-    def is_missing_data_in_app(self):
-        return not all([self.get_data_from_app(field) for field in self.mandatory_fields])
-
-    def _update_relation(self):
-        self.relation = self.framework.model.get_relation(self.relation_name)
-
-
-class MongoClient(BaseRelationClient):
-    """Requires side of a Mongo Endpoint"""
-
-    mandatory_fields_mapping = {
-        "reactive": ["connection_string"],
-        "ops": ["replica_set_uri", "replica_set_name"],
-    }
-
-    def __init__(self, charm: ops.charm.CharmBase, relation_name: str):
-        super().__init__(charm, relation_name, mandatory_fields=[])
-
-    @property
-    def connection_string(self):
-        if self.is_opts():
-            replica_set_uri = self.get_data_from_unit("replica_set_uri")
-            replica_set_name = self.get_data_from_unit("replica_set_name")
-            return f"{replica_set_uri}?replicaSet={replica_set_name}"
-        else:
-            return self.get_data_from_unit("connection_string")
-
-    def is_opts(self):
-        return not self.is_missing_data_in_unit_ops()
-
-    def is_missing_data_in_unit(self):
-        return self.is_missing_data_in_unit_ops() and self.is_missing_data_in_unit_reactive()
-
-    def is_missing_data_in_unit_ops(self):
-        return not all(
-            [self.get_data_from_unit(field) for field in self.mandatory_fields_mapping["ops"]]
-        )
-
-    def is_missing_data_in_unit_reactive(self):
-        return not all(
-            [self.get_data_from_unit(field) for field in self.mandatory_fields_mapping["reactive"]]
-        )
index 56c7ab8..78f9b1b 100644 (file)
@@ -26,15 +26,15 @@ from ops.model import ActiveStatus, BlockedStatus
 from ops.testing import Harness
 from pytest_mock import MockerFixture
 
-from charm import CharmError, OsmLcmCharm, check_service_active
+from charm import CharmError, OsmNGLcmCharm, check_service_active
 
-container_name = "lcm"
-service_name = "lcm"
+container_name = "nglcm"
+service_name = "nglcm"
 
 
 @pytest.fixture
 def harness(mocker: MockerFixture):
-    harness = Harness(OsmLcmCharm)
+    harness = Harness(OsmNGLcmCharm)
     harness.begin()
     yield harness
     harness.cleanup()
@@ -44,7 +44,7 @@ def test_missing_relations(harness: Harness):
     harness.charm.on.config_changed.emit()
     assert type(harness.charm.unit.status) == BlockedStatus
     assert all(
-        relation in harness.charm.unit.status.message for relation in ["mongodb", "kafka", "ro"]
+        relation in harness.charm.unit.status.message for relation in ["mongodb", "temporal"]
     )
 
 
@@ -69,17 +69,14 @@ def _add_relations(harness: Harness):
     relation_id = harness.add_relation("mongodb", "mongodb")
     harness.add_relation_unit(relation_id, "mongodb/0")
     harness.update_relation_data(
-        relation_id, "mongodb/0", {"connection_string": "mongodb://:1234"}
+        relation_id,
+        "mongodb",
+        {"uris": "mongodb://:1234", "username": "user", "password": "password"},
     )
     relation_ids.append(relation_id)
-    # Add kafka relation
-    relation_id = harness.add_relation("kafka", "kafka")
-    harness.add_relation_unit(relation_id, "kafka/0")
-    harness.update_relation_data(relation_id, "kafka", {"host": "kafka", "port": 9092})
-    relation_ids.append(relation_id)
-    # Add ro relation
-    relation_id = harness.add_relation("ro", "ro")
-    harness.add_relation_unit(relation_id, "ro/0")
-    harness.update_relation_data(relation_id, "ro", {"host": "ro", "port": 9090})
+    # Add temporal relation
+    relation_id = harness.add_relation("temporal", "temporal")
+    harness.add_relation_unit(relation_id, "temporal/0")
+    harness.update_relation_data(relation_id, "temporal", {"host": "temporal", "port": "7233"})
     relation_ids.append(relation_id)
     return relation_ids
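The updated fixture writes the MongoDB credentials to the application databag (`"mongodb"`) rather than the unit databag (`"mongodb/0"`), because the data_platform_libs interface reads `relation.data[relation.app]`. A dependency-free sketch of that distinction, with made-up values:

```python
# Simulated relation databags, keyed the way the Harness addresses them:
# the application bag ("mongodb") vs. a per-unit bag ("mongodb/0").
relation_data = {
    "mongodb": {"uris": "mongodb://:1234", "username": "user", "password": "password"},
    "mongodb/0": {},  # credentials are no longer published per unit
}

app_bag = relation_data["mongodb"]
missing = [field for field in ("uris", "username", "password") if field not in app_bag]
print(missing)  # an empty list means the charm has everything it needs from this relation
```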
index 275137c..2d95eca 100644 (file)
@@ -21,7 +21,7 @@
 [tox]
 skipsdist=True
 skip_missing_interpreters = True
-envlist = lint, unit
+envlist = lint, unit, integration
 
 [vars]
 src_path = {toxinidir}/src/
@@ -29,6 +29,7 @@ tst_path = {toxinidir}/tests/
 all_path = {[vars]src_path} {[vars]tst_path} 
 
 [testenv]
+basepython = python3.8
 setenv =
   PYTHONPATH = {toxinidir}:{toxinidir}/lib:{[vars]src_path}
   PYTHONBREAKPOINT=ipdb.set_trace
@@ -53,14 +54,13 @@ deps =
     black
     flake8
     flake8-docstrings
-    flake8-copyright
     flake8-builtins
     pyproject-flake8
     pep8-naming
     isort
     codespell
 commands =
-    codespell {toxinidir}/. --skip {toxinidir}/.git --skip {toxinidir}/.tox \
+    codespell {toxinidir} --skip {toxinidir}/.git --skip {toxinidir}/.tox \
       --skip {toxinidir}/build --skip {toxinidir}/lib --skip {toxinidir}/venv \
       --skip {toxinidir}/.mypy_cache --skip {toxinidir}/icon.svg
     # pflake8 wrapper supports config from pyproject.toml
@@ -85,8 +85,8 @@ commands =
 description = Run integration tests
 deps =
     pytest
-    juju
+    juju<3
     pytest-operator
     -r{toxinidir}/requirements.txt
 commands =
-    pytest -v --tb native --ignore={[vars]tst_path}unit --log-cli-level=INFO -s {posargs}
+    pytest -v --tb native --ignore={[vars]tst_path}unit --log-cli-level=INFO -s {posargs} --cloud microk8s
index df275c9..a92100d 100644 (file)
@@ -41,7 +41,7 @@ options:
     description: |
       Mysql URI with the following format:
         mysql://<user>:<password>@<mysql_host>:<mysql_port>/<database>
-      
+
       This should be removed after the mysql-integrator charm is made.
 
       If provided, this config will override the mysql relation.
@@ -51,21 +51,21 @@ options:
     type: boolean
     description: |
       Great for OSM Developers! (Not recommended for production deployments)
-        
+
       This action activates the Debug Mode, which sets up the container to be ready for debugging.
       As part of the setup, SSH is enabled and a VSCode workspace file is automatically populated.
 
       After enabling the debug-mode, execute the following command to get the information you need
       to start debugging:
-        `juju run-action get-debug-mode-information <unit name> --wait`
-      
+        `juju run-action <unit name> get-debug-mode-information --wait`
+
       The previous command returns the command you need to execute, and the SSH password that was set.
 
       See also:
         - https://charmhub.io/osm-pol/configure#pol-hostpath
         - https://charmhub.io/osm-pol/configure#common-hostpath
     default: false
-  
+
   pol-hostpath:
     type: string
     description: |
@@ -76,7 +76,7 @@ options:
         $ git clone "https://osm.etsi.org/gerrit/osm/POL" /home/ubuntu/POL
         $ juju config pol pol-hostpath=/home/ubuntu/POL
 
-      This configuration only applies if option `debug-mode` is set to true. 
+      This configuration only applies if option `debug-mode` is set to true.
 
   common-hostpath:
     type: string
@@ -88,4 +88,4 @@ options:
         $ git clone "https://osm.etsi.org/gerrit/osm/common" /home/ubuntu/common
         $ juju config pol common-hostpath=/home/ubuntu/common
 
-      This configuration only applies if option `debug-mode` is set to true. 
+      This configuration only applies if option `debug-mode` is set to true.
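The `mysql-uri` option documented earlier in this config.yaml follows the standard URL shape (`mysql://<user>:<password>@<mysql_host>:<mysql_port>/<database>`). A quick sketch of pulling the pieces apart with the standard library (the URI value is made up):

```python
from urllib.parse import urlparse

# Example value for the mysql-uri config option (made-up credentials):
uri = "mysql://osm:secret@mysql-host:3306/pol"

parsed = urlparse(uri)
# urlparse splits out the credentials, host, port, and database path.
print(parsed.username, parsed.hostname, parsed.port, parsed.path.lstrip("/"))
```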
diff --git a/installers/charm/osm-pol/lib/charms/data_platform_libs/v0/data_interfaces.py b/installers/charm/osm-pol/lib/charms/data_platform_libs/v0/data_interfaces.py
new file mode 100644 (file)
index 0000000..b3da5aa
--- /dev/null
@@ -0,0 +1,1130 @@
+# Copyright 2023 Canonical Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Library to manage the relation for the data-platform products.
+
+This library contains the Requires and Provides classes for handling the relation
+between an application and multiple managed applications supported by the data team:
+MySQL, Postgresql, MongoDB, Redis, and Kafka.
+
+### Database (MySQL, Postgresql, MongoDB, and Redis)
+
+#### Requires Charm
+This library is a uniform interface to a selection of common database
+metadata, with added custom events that add convenience to database management,
+and methods to consume the application-related data.
+
+
+The following is an example of using the DatabaseCreatedEvent in the context of the
+application charm code:
+
+```python
+
+from charms.data_platform_libs.v0.data_interfaces import (
+    DatabaseCreatedEvent,
+    DatabaseRequires,
+)
+
+class ApplicationCharm(CharmBase):
+    # Application charm that connects to database charms.
+
+    def __init__(self, *args):
+        super().__init__(*args)
+
+        # Charm events defined in the database requires charm library.
+        self.database = DatabaseRequires(self, relation_name="database", database_name="database")
+        self.framework.observe(self.database.on.database_created, self._on_database_created)
+
+    def _on_database_created(self, event: DatabaseCreatedEvent) -> None:
+        # Handle the created database
+
+        # Create configuration file for app
+        config_file = self._render_app_config_file(
+            event.username,
+            event.password,
+            event.endpoints,
+        )
+
+        # Start application with rendered configuration
+        self._start_application(config_file)
+
+        # Set active status
+        self.unit.status = ActiveStatus("received database credentials")
+```
+
+As shown above, the library provides some custom events to handle specific situations,
+which are listed below:
+
+-  database_created: event emitted when the requested database is created.
+-  endpoints_changed: event emitted when the read/write endpoints of the database have changed.
+-  read_only_endpoints_changed: event emitted when the read-only endpoints of the database
+  have changed. Event is not triggered if read/write endpoints changed too.
+
+If multiple database clusters need to be connected to the same relation endpoint,
+the application charm can use the same code as if it were connecting to only
+one database cluster (as in the code example above).
+
+To differentiate multiple clusters connected to the same relation endpoint
+the application charm can use the name of the remote application:
+
+```python
+
+def _on_database_created(self, event: DatabaseCreatedEvent) -> None:
+    # Get the remote app name of the cluster that triggered this event
+    cluster = event.relation.app.name
+```
+
+It is also possible to provide an alias for each different database cluster/relation.
+
+So, it is possible to differentiate the clusters in two ways.
+The first is to use the remote application name, i.e., `event.relation.app.name`, as above.
+
+The second way is to use a different event handler for each cluster's events.
+The implementation would be something like the following code:
+
+```python
+
+from charms.data_platform_libs.v0.data_interfaces import (
+    DatabaseCreatedEvent,
+    DatabaseRequires,
+)
+
+class ApplicationCharm(CharmBase):
+    # Application charm that connects to database charms.
+
+    def __init__(self, *args):
+        super().__init__(*args)
+
+        # Define the cluster aliases and one handler for each cluster database created event.
+        self.database = DatabaseRequires(
+            self,
+            relation_name="database",
+            database_name="database",
+            relations_aliases = ["cluster1", "cluster2"],
+        )
+        self.framework.observe(
+            self.database.on.cluster1_database_created, self._on_cluster1_database_created
+        )
+        self.framework.observe(
+            self.database.on.cluster2_database_created, self._on_cluster2_database_created
+        )
+
+    def _on_cluster1_database_created(self, event: DatabaseCreatedEvent) -> None:
+        # Handle the created database on the cluster named cluster1
+
+        # Create configuration file for app
+        config_file = self._render_app_config_file(
+            event.username,
+            event.password,
+            event.endpoints,
+        )
+        ...
+
+    def _on_cluster2_database_created(self, event: DatabaseCreatedEvent) -> None:
+        # Handle the created database on the cluster named cluster2
+
+        # Create configuration file for app
+        config_file = self._render_app_config_file(
+            event.username,
+            event.password,
+            event.endpoints,
+        )
+        ...
+
+```
+
+#### Provider Charm
+
+The following is an example of using the DatabaseRequestedEvent in the context of the
+database charm code:
+
+```python
+from charms.data_platform_libs.v0.data_interfaces import DatabaseProvides
+
+class SampleCharm(CharmBase):
+
+    def __init__(self, *args):
+        super().__init__(*args)
+        # Charm events defined in the database provides charm library.
+        self.provided_database = DatabaseProvides(self, relation_name="database")
+        self.framework.observe(self.provided_database.on.database_requested,
+            self._on_database_requested)
+        # Database generic helper
+        self.database = DatabaseHelper()
+
+    def _on_database_requested(self, event: DatabaseRequestedEvent) -> None:
+        # Handle the event triggered by a new database requested in the relation
+        # Retrieve the database name using the charm library.
+        db_name = event.database
+        # generate a new user credential
+        username = self.database.generate_user()
+        password = self.database.generate_password()
+        # set the credentials for the relation
+        self.provided_database.set_credentials(event.relation.id, username, password)
+        # set other variables for the relation
+        self.provided_database.set_tls(event.relation.id, "False")
+```
+As shown above, the library provides a custom event (database_requested) to handle
+the situation when an application charm requests a new database to be created.
+It is preferred to subscribe to this event instead of the relation-changed event to avoid
+creating a new database when information other than a database name is
+exchanged in the relation databag.
+
+### Kafka
+
+This library is the interface to use and interact with the Kafka charm. This library contains
+custom events that add convenience to managing Kafka, and provides methods to consume the
+application-related data.
+
+#### Requirer Charm
+
+```python
+
+from charms.data_platform_libs.v0.data_interfaces import (
+    BootstrapServerChangedEvent,
+    KafkaRequires,
+    TopicCreatedEvent,
+)
+
+class ApplicationCharm(CharmBase):
+
+    def __init__(self, *args):
+        super().__init__(*args)
+        self.kafka = KafkaRequires(self, "kafka_client", "test-topic")
+        self.framework.observe(
+            self.kafka.on.bootstrap_server_changed, self._on_kafka_bootstrap_server_changed
+        )
+        self.framework.observe(
+            self.kafka.on.topic_created, self._on_kafka_topic_created
+        )
+
+    def _on_kafka_bootstrap_server_changed(self, event: BootstrapServerChangedEvent):
+        # Event triggered when a bootstrap server was changed for this application
+
+        new_bootstrap_server = event.bootstrap_server
+        ...
+
+    def _on_kafka_topic_created(self, event: TopicCreatedEvent):
+        # Event triggered when a topic was created for this application
+        username = event.username
+        password = event.password
+        tls = event.tls
+        tls_ca = event.tls_ca
+        bootstrap_server = event.bootstrap_server
+        consumer_group_prefix = event.consumer_group_prefix
+        zookeeper_uris = event.zookeeper_uris
+        ...
+
+```
+
+As shown above, the library provides some custom events to handle specific situations,
+which are listed below:
+
+- topic_created: event emitted when the requested topic is created.
+- bootstrap_server_changed: event emitted when the bootstrap server has changed.
+- credential_changed: event emitted when the Kafka credentials have changed.
+
+#### Provider Charm
+
+Following the previous example, this is an example of the provider charm.
+
+```python
+from charms.data_platform_libs.v0.data_interfaces import (
+    KafkaProvides,
+    TopicRequestedEvent,
+)
+
+class SampleCharm(CharmBase):
+
+    def __init__(self, *args):
+        super().__init__(*args)
+
+        # Default charm events.
+        self.framework.observe(self.on.start, self._on_start)
+
+        # Charm events defined in the Kafka Provides charm library.
+        self.kafka_provider = KafkaProvides(self, relation_name="kafka_client")
+        self.framework.observe(self.kafka_provider.on.topic_requested, self._on_topic_requested)
+        # Kafka generic helper
+        self.kafka = KafkaHelper()
+
+    def _on_topic_requested(self, event: TopicRequestedEvent):
+        # Handle the on_topic_requested event.
+
+        topic = event.topic
+        relation_id = event.relation.id
+        # set connection info in the databag relation
+        self.kafka_provider.set_bootstrap_server(relation_id, self.kafka.get_bootstrap_server())
+        self.kafka_provider.set_credentials(relation_id, username=username, password=password)
+        self.kafka_provider.set_consumer_group_prefix(relation_id, ...)
+        self.kafka_provider.set_tls(relation_id, "False")
+        self.kafka_provider.set_zookeeper_uris(relation_id, ...)
+
+```
+As shown above, the library provides a custom event (topic_requested) to handle
+the situation when an application charm requests a new topic to be created.
+It is preferred to subscribe to this event instead of the relation-changed event to avoid
+creating a new topic when information other than a topic name is
+exchanged in the relation databag.
+"""
+
+import json
+import logging
+from abc import ABC, abstractmethod
+from collections import namedtuple
+from datetime import datetime
+from typing import List, Optional
+
+from ops.charm import (
+    CharmBase,
+    CharmEvents,
+    RelationChangedEvent,
+    RelationEvent,
+    RelationJoinedEvent,
+)
+from ops.framework import EventSource, Object
+from ops.model import Relation
+
+# The unique Charmhub library identifier, never change it
+LIBID = "6c3e6b6680d64e9c89e611d1a15f65be"
+
+# Increment this major API version when introducing breaking changes
+LIBAPI = 0
+
+# Increment this PATCH version before using `charmcraft publish-lib` or reset
+# to 0 if you are raising the major API version
+LIBPATCH = 7
+
+PYDEPS = ["ops>=2.0.0"]
+
+logger = logging.getLogger(__name__)
+
+Diff = namedtuple("Diff", "added changed deleted")
+Diff.__doc__ = """
+A tuple for storing the diff between two data mappings.
+
+added - keys that were added
+changed - keys that still exist but have new values
+deleted - keys that were deleted"""
+
+
+def diff(event: RelationChangedEvent, bucket: str) -> Diff:
+    """Retrieves the diff of the data in the relation changed databag.
+
+    Args:
+        event: relation changed event.
+        bucket: bucket of the databag (app or unit)
+
+    Returns:
+        a Diff instance containing the added, deleted and changed
+            keys from the event relation databag.
+    """
+    # Retrieve the old data from the data key in the application relation databag.
+    old_data = json.loads(event.relation.data[bucket].get("data", "{}"))
+    # Retrieve the new data from the event relation databag.
+    new_data = {
+        key: value for key, value in event.relation.data[event.app].items() if key != "data"
+    }
+
+    # These are the keys that were added to the databag and triggered this event.
+    added = new_data.keys() - old_data.keys()
+    # These are the keys that were removed from the databag and triggered this event.
+    deleted = old_data.keys() - new_data.keys()
+    # These are the keys that already existed in the databag,
+    # but had their values changed.
+    changed = {key for key in old_data.keys() & new_data.keys() if old_data[key] != new_data[key]}
+    # Convert the new_data to a serializable format and save it for a next diff check.
+    event.relation.data[bucket].update({"data": json.dumps(new_data)})
+
+    # Return the diff with all possible changes.
+    return Diff(added, changed, deleted)
+
+
+# Base DataProvides and DataRequires
+
+
+class DataProvides(Object, ABC):
+    """Base provides-side of the data products relation."""
+
+    def __init__(self, charm: CharmBase, relation_name: str) -> None:
+        super().__init__(charm, relation_name)
+        self.charm = charm
+        self.local_app = self.charm.model.app
+        self.local_unit = self.charm.unit
+        self.relation_name = relation_name
+        self.framework.observe(
+            charm.on[relation_name].relation_changed,
+            self._on_relation_changed,
+        )
+
+    def _diff(self, event: RelationChangedEvent) -> Diff:
+        """Retrieves the diff of the data in the relation changed databag.
+
+        Args:
+            event: relation changed event.
+
+        Returns:
+            a Diff instance containing the added, deleted and changed
+                keys from the event relation databag.
+        """
+        return diff(event, self.local_app)
+
+    @abstractmethod
+    def _on_relation_changed(self, event: RelationChangedEvent) -> None:
+        """Event emitted when the relation data has changed."""
+        raise NotImplementedError
+
+    def fetch_relation_data(self) -> dict:
+        """Retrieves data from relation.
+
+        This function can be used to retrieve data from a relation
+        in the charm code when outside an event callback.
+
+        Returns:
+            a dict of the values stored in the relation data bag
+                for all relation instances (indexed by the relation id).
+        """
+        data = {}
+        for relation in self.relations:
+            data[relation.id] = {
+                key: value for key, value in relation.data[relation.app].items() if key != "data"
+            }
+        return data
+
+    def _update_relation_data(self, relation_id: int, data: dict) -> None:
+        """Updates a set of key-value pairs in the relation.
+
+        This function writes in the application data bag, therefore,
+        only the leader unit can call it.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            data: dict containing the key-value pairs
+                that should be updated in the relation.
+        """
+        if self.local_unit.is_leader():
+            relation = self.charm.model.get_relation(self.relation_name, relation_id)
+            relation.data[self.local_app].update(data)
+
+    @property
+    def relations(self) -> List[Relation]:
+        """The list of Relation instances associated with this relation_name."""
+        return list(self.charm.model.relations[self.relation_name])
+
+    def set_credentials(self, relation_id: int, username: str, password: str) -> None:
+        """Set credentials.
+
+        This function writes in the application data bag, therefore,
+        only the leader unit can call it.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            username: user that was created.
+            password: password of the created user.
+        """
+        self._update_relation_data(
+            relation_id,
+            {
+                "username": username,
+                "password": password,
+            },
+        )
+
+    def set_tls(self, relation_id: int, tls: str) -> None:
+        """Set whether TLS is enabled.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            tls: whether tls is enabled (True or False).
+        """
+        self._update_relation_data(relation_id, {"tls": tls})
+
+    def set_tls_ca(self, relation_id: int, tls_ca: str) -> None:
+        """Set the TLS CA in the application relation databag.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            tls_ca: TLS certification authority.
+        """
+        self._update_relation_data(relation_id, {"tls_ca": tls_ca})
+
+
+class DataRequires(Object, ABC):
+    """Requires-side of the relation."""
+
+    def __init__(
+        self,
+        charm,
+        relation_name: str,
+        extra_user_roles: str = None,
+    ):
+        """Manager of base client relations."""
+        super().__init__(charm, relation_name)
+        self.charm = charm
+        self.extra_user_roles = extra_user_roles
+        self.local_app = self.charm.model.app
+        self.local_unit = self.charm.unit
+        self.relation_name = relation_name
+        self.framework.observe(
+            self.charm.on[relation_name].relation_joined, self._on_relation_joined_event
+        )
+        self.framework.observe(
+            self.charm.on[relation_name].relation_changed, self._on_relation_changed_event
+        )
+
+    @abstractmethod
+    def _on_relation_joined_event(self, event: RelationJoinedEvent) -> None:
+        """Event emitted when the application joins the relation."""
+        raise NotImplementedError
+
+    @abstractmethod
+    def _on_relation_changed_event(self, event: RelationChangedEvent) -> None:
+        raise NotImplementedError
+
+    def fetch_relation_data(self) -> dict:
+        """Retrieves data from relation.
+
+        This function can be used to retrieve data from a relation
+        in the charm code when outside an event callback.
+        Function cannot be used in `*-relation-broken` events and will raise an exception.
+
+        Returns:
+            a dict of the values stored in the relation data bag
+                for all relation instances (indexed by the relation ID).
+        """
+        data = {}
+        for relation in self.relations:
+            data[relation.id] = {
+                key: value for key, value in relation.data[relation.app].items() if key != "data"
+            }
+        return data
+
+    def _update_relation_data(self, relation_id: int, data: dict) -> None:
+        """Updates a set of key-value pairs in the relation.
+
+        This function writes in the application data bag, therefore,
+        only the leader unit can call it.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            data: dict containing the key-value pairs
+                that should be updated in the relation.
+        """
+        if self.local_unit.is_leader():
+            relation = self.charm.model.get_relation(self.relation_name, relation_id)
+            relation.data[self.local_app].update(data)
+
+    def _diff(self, event: RelationChangedEvent) -> Diff:
+        """Retrieves the diff of the data in the relation changed databag.
+
+        Args:
+            event: relation changed event.
+
+        Returns:
+            a Diff instance containing the added, deleted and changed
+                keys from the event relation databag.
+        """
+        return diff(event, self.local_unit)
+
+    @property
+    def relations(self) -> List[Relation]:
+        """The list of Relation instances associated with this relation_name."""
+        return [
+            relation
+            for relation in self.charm.model.relations[self.relation_name]
+            if self._is_relation_active(relation)
+        ]
+
+    @staticmethod
+    def _is_relation_active(relation: Relation):
+        try:
+            _ = repr(relation.data)
+            return True
+        except RuntimeError:
+            return False
+
+    @staticmethod
+    def _is_resource_created_for_relation(relation: Relation):
+        return (
+            "username" in relation.data[relation.app] and "password" in relation.data[relation.app]
+        )
+
+    def is_resource_created(self, relation_id: Optional[int] = None) -> bool:
+        """Check if the resource has been created.
+
+        This function can be used to check if the Provider answered with data in the charm code
+        when outside an event callback.
+
+        Args:
+            relation_id (int, optional): When provided the check is done only for the relation id
+                provided, otherwise the check is done for all relations
+
+        Returns:
+            True or False
+
+        Raises:
+            IndexError: If relation_id is provided but that relation does not exist
+        """
+        if relation_id is not None:
+            try:
+                relation = [relation for relation in self.relations if relation.id == relation_id][
+                    0
+                ]
+                return self._is_resource_created_for_relation(relation)
+            except IndexError:
+                raise IndexError(f"relation id {relation_id} cannot be accessed")
+        else:
+            return (
+                all(
+                    [
+                        self._is_resource_created_for_relation(relation)
+                        for relation in self.relations
+                    ]
+                )
+                if self.relations
+                else False
+            )
+
+
+# General events
+
+
+class ExtraRoleEvent(RelationEvent):
+    """Base class for data events."""
+
+    @property
+    def extra_user_roles(self) -> Optional[str]:
+        """Returns the extra user roles that were requested."""
+        return self.relation.data[self.relation.app].get("extra-user-roles")
+
+
+class AuthenticationEvent(RelationEvent):
+    """Base class for authentication fields for events."""
+
+    @property
+    def username(self) -> Optional[str]:
+        """Returns the created username."""
+        return self.relation.data[self.relation.app].get("username")
+
+    @property
+    def password(self) -> Optional[str]:
+        """Returns the password for the created user."""
+        return self.relation.data[self.relation.app].get("password")
+
+    @property
+    def tls(self) -> Optional[str]:
+        """Returns whether TLS is configured."""
+        return self.relation.data[self.relation.app].get("tls")
+
+    @property
+    def tls_ca(self) -> Optional[str]:
+        """Returns TLS CA."""
+        return self.relation.data[self.relation.app].get("tls-ca")
+
+
+# Database related events and fields
+
+
+class DatabaseProvidesEvent(RelationEvent):
+    """Base class for database events."""
+
+    @property
+    def database(self) -> Optional[str]:
+        """Returns the database that was requested."""
+        return self.relation.data[self.relation.app].get("database")
+
+
+class DatabaseRequestedEvent(DatabaseProvidesEvent, ExtraRoleEvent):
+    """Event emitted when a new database is requested for use on this relation."""
+
+
+class DatabaseProvidesEvents(CharmEvents):
+    """Database events.
+
+    This class defines the events that the database can emit.
+    """
+
+    database_requested = EventSource(DatabaseRequestedEvent)
+
+
+class DatabaseRequiresEvent(RelationEvent):
+    """Base class for database events."""
+
+    @property
+    def endpoints(self) -> Optional[str]:
+        """Returns a comma separated list of read/write endpoints."""
+        return self.relation.data[self.relation.app].get("endpoints")
+
+    @property
+    def read_only_endpoints(self) -> Optional[str]:
+        """Returns a comma separated list of read only endpoints."""
+        return self.relation.data[self.relation.app].get("read-only-endpoints")
+
+    @property
+    def replset(self) -> Optional[str]:
+        """Returns the replicaset name.
+
+        MongoDB only.
+        """
+        return self.relation.data[self.relation.app].get("replset")
+
+    @property
+    def uris(self) -> Optional[str]:
+        """Returns the connection URIs.
+
+        MongoDB, Redis, OpenSearch.
+        """
+        return self.relation.data[self.relation.app].get("uris")
+
+    @property
+    def version(self) -> Optional[str]:
+        """Returns the version of the database.
+
+        Version as informed by the database daemon.
+        """
+        return self.relation.data[self.relation.app].get("version")
+
+
+class DatabaseCreatedEvent(AuthenticationEvent, DatabaseRequiresEvent):
+    """Event emitted when a new database is created for use on this relation."""
+
+
+class DatabaseEndpointsChangedEvent(AuthenticationEvent, DatabaseRequiresEvent):
+    """Event emitted when the read/write endpoints are changed."""
+
+
+class DatabaseReadOnlyEndpointsChangedEvent(AuthenticationEvent, DatabaseRequiresEvent):
+    """Event emitted when the read only endpoints are changed."""
+
+
+class DatabaseRequiresEvents(CharmEvents):
+    """Database events.
+
+    This class defines the events that the database can emit.
+    """
+
+    database_created = EventSource(DatabaseCreatedEvent)
+    endpoints_changed = EventSource(DatabaseEndpointsChangedEvent)
+    read_only_endpoints_changed = EventSource(DatabaseReadOnlyEndpointsChangedEvent)
+
+
+# Database Provider and Requires
+
+
+class DatabaseProvides(DataProvides):
+    """Provider-side of the database relations."""
+
+    on = DatabaseProvidesEvents()
+
+    def __init__(self, charm: CharmBase, relation_name: str) -> None:
+        super().__init__(charm, relation_name)
+
+    def _on_relation_changed(self, event: RelationChangedEvent) -> None:
+        """Event emitted when the relation has changed."""
+        # Only the leader should handle this event.
+        if not self.local_unit.is_leader():
+            return
+
+        # Check which data has changed to emit custom events.
+        diff = self._diff(event)
+
+        # Emit a database requested event if the setup key (database name and optional
+        # extra user roles) was added to the relation databag by the application.
+        if "database" in diff.added:
+            self.on.database_requested.emit(event.relation, app=event.app, unit=event.unit)
+
+    def set_endpoints(self, relation_id: int, connection_strings: str) -> None:
+        """Set database primary connections.
+
+        This function writes in the application data bag, therefore,
+        only the leader unit can call it.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            connection_strings: database hosts and ports comma separated list.
+        """
+        self._update_relation_data(relation_id, {"endpoints": connection_strings})
+
+    def set_read_only_endpoints(self, relation_id: int, connection_strings: str) -> None:
+        """Set database replicas connection strings.
+
+        This function writes in the application data bag, therefore,
+        only the leader unit can call it.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            connection_strings: database hosts and ports comma separated list.
+        """
+        self._update_relation_data(relation_id, {"read-only-endpoints": connection_strings})
+
+    def set_replset(self, relation_id: int, replset: str) -> None:
+        """Set replica set name in the application relation databag.
+
+        MongoDB only.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            replset: replica set name.
+        """
+        self._update_relation_data(relation_id, {"replset": replset})
+
+    def set_uris(self, relation_id: int, uris: str) -> None:
+        """Set the database connection URIs in the application relation databag.
+
+        MongoDB, Redis, and OpenSearch only.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            uris: connection URIs.
+        """
+        self._update_relation_data(relation_id, {"uris": uris})
+
+    def set_version(self, relation_id: int, version: str) -> None:
+        """Set the database version in the application relation databag.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            version: database version.
+        """
+        self._update_relation_data(relation_id, {"version": version})
+
+
+class DatabaseRequires(DataRequires):
+    """Requires-side of the database relation."""
+
+    on = DatabaseRequiresEvents()
+
+    def __init__(
+        self,
+        charm,
+        relation_name: str,
+        database_name: str,
+        extra_user_roles: Optional[str] = None,
+        relations_aliases: Optional[List[str]] = None,
+    ):
+        """Manager of database client relations."""
+        super().__init__(charm, relation_name, extra_user_roles)
+        self.database = database_name
+        self.relations_aliases = relations_aliases
+
+        # Define custom event names for each alias.
+        if relations_aliases:
+            # Ensure the number of aliases matches the maximum number
+            # of connections allowed in the specific relation.
+            relation_connection_limit = self.charm.meta.requires[relation_name].limit
+            if len(relations_aliases) != relation_connection_limit:
+                raise ValueError(
+                    f"The number of aliases must match the maximum number of connections allowed in the relation. "
+                    f"Expected {relation_connection_limit}, got {len(relations_aliases)}"
+                )
+
+            for relation_alias in relations_aliases:
+                self.on.define_event(f"{relation_alias}_database_created", DatabaseCreatedEvent)
+                self.on.define_event(
+                    f"{relation_alias}_endpoints_changed", DatabaseEndpointsChangedEvent
+                )
+                self.on.define_event(
+                    f"{relation_alias}_read_only_endpoints_changed",
+                    DatabaseReadOnlyEndpointsChangedEvent,
+                )
+
+    def _assign_relation_alias(self, relation_id: int) -> None:
+        """Assigns an alias to a relation.
+
+        This function writes in the unit data bag.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+        """
+        # If no aliases were provided, return immediately.
+        if not self.relations_aliases:
+            return
+
+        # Return if an alias was already assigned to this relation
+        # (like when there are more than one unit joining the relation).
+        if (
+            self.charm.model.get_relation(self.relation_name, relation_id)
+            .data[self.local_unit]
+            .get("alias")
+        ):
+            return
+
+        # Retrieve the available aliases (the ones that weren't assigned to any relation).
+        available_aliases = self.relations_aliases[:]
+        for relation in self.charm.model.relations[self.relation_name]:
+            alias = relation.data[self.local_unit].get("alias")
+            if alias:
+                logger.debug("Alias %s was already assigned to relation %d", alias, relation.id)
+                available_aliases.remove(alias)
+
+        # Set the alias in the unit relation databag of the specific relation.
+        relation = self.charm.model.get_relation(self.relation_name, relation_id)
+        relation.data[self.local_unit].update({"alias": available_aliases[0]})
+
+    def _emit_aliased_event(self, event: RelationChangedEvent, event_name: str) -> None:
+        """Emit an aliased event to a particular relation if it has an alias.
+
+        Args:
+            event: the relation changed event that was received.
+            event_name: the name of the event to emit.
+        """
+        alias = self._get_relation_alias(event.relation.id)
+        if alias:
+            getattr(self.on, f"{alias}_{event_name}").emit(
+                event.relation, app=event.app, unit=event.unit
+            )
+
+    def _get_relation_alias(self, relation_id: int) -> Optional[str]:
+        """Returns the relation alias.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+
+        Returns:
+            the relation alias or None if the relation was not found.
+        """
+        for relation in self.charm.model.relations[self.relation_name]:
+            if relation.id == relation_id:
+                return relation.data[self.local_unit].get("alias")
+        return None
+
+    def _on_relation_joined_event(self, event: RelationJoinedEvent) -> None:
+        """Event emitted when the application joins the database relation."""
+        # If relations aliases were provided, assign one to the relation.
+        self._assign_relation_alias(event.relation.id)
+
+        # Sets both database and extra user roles in the relation
+        # if the roles are provided. Otherwise, sets only the database.
+        if self.extra_user_roles:
+            self._update_relation_data(
+                event.relation.id,
+                {
+                    "database": self.database,
+                    "extra-user-roles": self.extra_user_roles,
+                },
+            )
+        else:
+            self._update_relation_data(event.relation.id, {"database": self.database})
+
+    def _on_relation_changed_event(self, event: RelationChangedEvent) -> None:
+        """Event emitted when the database relation has changed."""
+        # Check which data has changed to emit custom events.
+        diff = self._diff(event)
+
+        # Check if the database is created
+        # (the database charm shared the credentials).
+        if "username" in diff.added and "password" in diff.added:
+            # Emit the default event (the one without an alias).
+            logger.info("database created at %s", datetime.now())
+            self.on.database_created.emit(event.relation, app=event.app, unit=event.unit)
+
+            # Emit the aliased event (if any).
+            self._emit_aliased_event(event, "database_created")
+
+            # To avoid unnecessary application restarts do not trigger
+            # "endpoints_changed" event if "database_created" is triggered.
+            return
+
+        # Emit an endpoints changed event if the database
+        # added or changed this info in the relation databag.
+        if "endpoints" in diff.added or "endpoints" in diff.changed:
+            # Emit the default event (the one without an alias).
+            logger.info("endpoints changed on %s", datetime.now())
+            self.on.endpoints_changed.emit(event.relation, app=event.app, unit=event.unit)
+
+            # Emit the aliased event (if any).
+            self._emit_aliased_event(event, "endpoints_changed")
+
+            # To avoid unnecessary application restarts do not trigger
+            # "read_only_endpoints_changed" event if "endpoints_changed" is triggered.
+            return
+
+        # Emit a read only endpoints changed event if the database
+        # added or changed this info in the relation databag.
+        if "read-only-endpoints" in diff.added or "read-only-endpoints" in diff.changed:
+            # Emit the default event (the one without an alias).
+            logger.info("read-only-endpoints changed on %s", datetime.now())
+            self.on.read_only_endpoints_changed.emit(
+                event.relation, app=event.app, unit=event.unit
+            )
+
+            # Emit the aliased event (if any).
+            self._emit_aliased_event(event, "read_only_endpoints_changed")
+
+
+# Kafka related events
+
+
+class KafkaProvidesEvent(RelationEvent):
+    """Base class for Kafka events."""
+
+    @property
+    def topic(self) -> Optional[str]:
+        """Returns the topic that was requested."""
+        return self.relation.data[self.relation.app].get("topic")
+
+
+class TopicRequestedEvent(KafkaProvidesEvent, ExtraRoleEvent):
+    """Event emitted when a new topic is requested for use on this relation."""
+
+
+class KafkaProvidesEvents(CharmEvents):
+    """Kafka events.
+
+    This class defines the events that Kafka can emit.
+    """
+
+    topic_requested = EventSource(TopicRequestedEvent)
+
+
+class KafkaRequiresEvent(RelationEvent):
+    """Base class for Kafka events."""
+
+    @property
+    def bootstrap_server(self) -> Optional[str]:
+        """Returns a a comma-seperated list of broker uris."""
+        return self.relation.data[self.relation.app].get("endpoints")
+
+    @property
+    def consumer_group_prefix(self) -> Optional[str]:
+        """Returns the consumer-group-prefix."""
+        return self.relation.data[self.relation.app].get("consumer-group-prefix")
+
+    @property
+    def zookeeper_uris(self) -> Optional[str]:
+        """Returns a comma separated list of Zookeeper uris."""
+        return self.relation.data[self.relation.app].get("zookeeper-uris")
+
+
+class TopicCreatedEvent(AuthenticationEvent, KafkaRequiresEvent):
+    """Event emitted when a new topic is created for use on this relation."""
+
+
+class BootstrapServerChangedEvent(AuthenticationEvent, KafkaRequiresEvent):
+    """Event emitted when the bootstrap server is changed."""
+
+
+class KafkaRequiresEvents(CharmEvents):
+    """Kafka events.
+
+    This class defines the events that Kafka can emit.
+    """
+
+    topic_created = EventSource(TopicCreatedEvent)
+    bootstrap_server_changed = EventSource(BootstrapServerChangedEvent)
+
+
+# Kafka Provides and Requires
+
+
+class KafkaProvides(DataProvides):
+    """Provider-side of the Kafka relation."""
+
+    on = KafkaProvidesEvents()
+
+    def __init__(self, charm: CharmBase, relation_name: str) -> None:
+        super().__init__(charm, relation_name)
+
+    def _on_relation_changed(self, event: RelationChangedEvent) -> None:
+        """Event emitted when the relation has changed."""
+        # Only the leader should handle this event.
+        if not self.local_unit.is_leader():
+            return
+
+        # Check which data has changed to emit custom events.
+        diff = self._diff(event)
+
+        # Emit a topic requested event if the setup key (topic name and optional
+        # extra user roles) was added to the relation databag by the application.
+        if "topic" in diff.added:
+            self.on.topic_requested.emit(event.relation, app=event.app, unit=event.unit)
+
+    def set_bootstrap_server(self, relation_id: int, bootstrap_server: str) -> None:
+        """Set the bootstrap server in the application relation databag.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            bootstrap_server: the bootstrap server address.
+        """
+        self._update_relation_data(relation_id, {"endpoints": bootstrap_server})
+
+    def set_consumer_group_prefix(self, relation_id: int, consumer_group_prefix: str) -> None:
+        """Set the consumer group prefix in the application relation databag.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            consumer_group_prefix: the consumer group prefix string.
+        """
+        self._update_relation_data(relation_id, {"consumer-group-prefix": consumer_group_prefix})
+
+    def set_zookeeper_uris(self, relation_id: int, zookeeper_uris: str) -> None:
+        """Set the zookeeper uris in the application relation databag.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            zookeeper_uris: comma-separated list of ZooKeeper server URIs.
+        """
+        self._update_relation_data(relation_id, {"zookeeper-uris": zookeeper_uris})
+
+
+class KafkaRequires(DataRequires):
+    """Requires-side of the Kafka relation."""
+
+    on = KafkaRequiresEvents()
+
+    def __init__(self, charm, relation_name: str, topic: str, extra_user_roles: Optional[str] = None):
+        """Manager of Kafka client relations."""
+        super().__init__(charm, relation_name, extra_user_roles)
+        self.charm = charm
+        self.topic = topic
+
+    def _on_relation_joined_event(self, event: RelationJoinedEvent) -> None:
+        """Event emitted when the application joins the Kafka relation."""
+        # Sets both topic and extra user roles in the relation
+        # if the roles are provided. Otherwise, sets only the topic.
+        self._update_relation_data(
+            event.relation.id,
+            {
+                "topic": self.topic,
+                "extra-user-roles": self.extra_user_roles,
+            }
+            if self.extra_user_roles is not None
+            else {"topic": self.topic},
+        )
+
+    def _on_relation_changed_event(self, event: RelationChangedEvent) -> None:
+        """Event emitted when the Kafka relation has changed."""
+        # Check which data has changed to emit custom events.
+        diff = self._diff(event)
+
+        # Check if the topic is created
+        # (the Kafka charm shared the credentials).
+        if "username" in diff.added and "password" in diff.added:
+            # Emit the default event (the one without an alias).
+            logger.info("topic created at %s", datetime.now())
+            self.on.topic_created.emit(event.relation, app=event.app, unit=event.unit)
+
+            # To avoid unnecessary application restarts do not trigger
+            # "endpoints_changed" event if "topic_created" is triggered.
+            return
+
+        # Emit a bootstrap-server changed event if Kafka added or
+        # changed the endpoints info in the relation databag.
+        if "endpoints" in diff.added or "endpoints" in diff.changed:
+            # Emit the default event (the one without an alias).
+            logger.info("endpoints changed on %s", datetime.now())
+            self.on.bootstrap_server_changed.emit(
+                event.relation, app=event.app, unit=event.unit
+            )
+            return
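Both the provider and requirer handlers in this library branch on `self._diff(event)`, which partitions relation-databag keys into added, changed, and deleted sets. The following is a minimal, self-contained sketch of that classification (the names here are illustrative, not the library's; the real helper also persists the previous databag in the charm's stored state):

```python
from collections import namedtuple

Diff = namedtuple("Diff", ["added", "changed", "deleted"])


def compute_diff(old: dict, new: dict) -> Diff:
    """Classify databag changes into added/changed/deleted key sets."""
    added = new.keys() - old.keys()
    changed = {key for key in old.keys() & new.keys() if old[key] != new[key]}
    deleted = old.keys() - new.keys()
    return Diff(added, changed, deleted)


# A requirer charm wrote "database": the provider sees it in diff.added
# and emits database_requested.
assert "database" in compute_diff({}, {"database": "osm"}).added

# Later the provider rewrites "endpoints": the requirer sees it in
# diff.changed and emits endpoints_changed.
assert "endpoints" in compute_diff(
    {"endpoints": "10.0.0.1:27017"},
    {"endpoints": "10.0.0.1:27017,10.0.0.2:27017"},
).changed
```

This is why `database_created` fires only when `username` and `password` both land in `diff.added`, while endpoint updates surface through `diff.changed`.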
index d6bb35a..adf189a 100644 (file)
@@ -56,7 +56,7 @@ requires:
     interface: kafka
     limit: 1
   mongodb:
-    interface: mongodb
+    interface: mongodb_client
     limit: 1
   mysql:
     interface: mysql
index d0d4a5b..16cf0f4 100644 (file)
@@ -50,7 +50,3 @@ ignore = ["W503", "E501", "D107"]
 # D100, D101, D102, D103: Ignore missing docstrings in tests
 per-file-ignores = ["tests/*:D100,D101,D102,D103,D104"]
 docstring-convention = "google"
-# Check for properly formatted copyright header in each file
-copyright-check = "True"
-copyright-author = "Canonical Ltd."
-copyright-regexp = "Copyright\\s\\d{4}([-,]\\d{4})*\\s+%(author)s"
index cb303a3..398d4ad 100644 (file)
@@ -17,7 +17,7 @@
 #
 # To get in touch with the maintainers, please contact:
 # osm-charmers@lists.launchpad.net
-ops >= 1.2.0
+ops < 2.2
 lightkube
 lightkube-models
 # git+https://github.com/charmed-osm/config-validator/
index 2749ddb..07bf87e 100755 (executable)
@@ -30,6 +30,7 @@ See more: https://charmhub.io/osm
 import logging
 from typing import Any, Dict
 
+from charms.data_platform_libs.v0.data_interfaces import DatabaseRequires
 from charms.kafka_k8s.v0.kafka import KafkaEvents, KafkaRequires
 from charms.osm_libs.v0.utils import (
     CharmError,
@@ -43,7 +44,7 @@ from ops.framework import StoredState
 from ops.main import main
 from ops.model import ActiveStatus, Container
 
-from legacy_interfaces import MongoClient, MysqlClient
+from legacy_interfaces import MysqlClient
 
 HOSTPATHS = [
     HostPath(
@@ -71,7 +72,7 @@ class OsmPolCharm(CharmBase):
         super().__init__(*args)
 
         self.kafka = KafkaRequires(self)
-        self.mongodb_client = MongoClient(self, "mongodb")
+        self.mongodb_client = DatabaseRequires(self, "mongodb", database_name="osm")
         self.mysql_client = MysqlClient(self, "mysql")
         self._observe_charm_events()
         self.container: Container = self.unit.get_container(self.container_name)
@@ -145,16 +146,23 @@ class OsmPolCharm(CharmBase):
             # Relation events
             self.on.kafka_available: self._on_config_changed,
             self.on["kafka"].relation_broken: self._on_required_relation_broken,
+            self.on["mysql"].relation_changed: self._on_config_changed,
+            self.on["mysql"].relation_broken: self._on_config_changed,
+            self.mongodb_client.on.database_created: self._on_config_changed,
+            self.on["mongodb"].relation_broken: self._on_required_relation_broken,
             # Action events
             self.on.get_debug_mode_information_action: self._on_get_debug_mode_information_action,
         }
-        for relation in [self.on[rel_name] for rel_name in ["mongodb", "mysql"]]:
-            event_handler_mapping[relation.relation_changed] = self._on_config_changed
-            event_handler_mapping[relation.relation_broken] = self._on_required_relation_broken
 
         for event, handler in event_handler_mapping.items():
             self.framework.observe(event, handler)
 
+    def _is_database_available(self) -> bool:
+        try:
+            return self.mongodb_client.is_resource_created()
+        except KeyError:
+            return False
+
     def _validate_config(self) -> None:
         """Validate charm configuration.
 
@@ -174,7 +182,7 @@ class OsmPolCharm(CharmBase):
 
         if not self.kafka.host or not self.kafka.port:
             missing_relations.append("kafka")
-        if self.mongodb_client.is_missing_data_in_unit():
+        if not self._is_database_available():
             missing_relations.append("mongodb")
         if not self.config.get("mysql-uri") and self.mysql_client.is_missing_data_in_unit():
             missing_relations.append("mysql")
@@ -214,7 +222,7 @@ class OsmPolCharm(CharmBase):
                         "OSMPOL_MESSAGE_DRIVER": "kafka",
                         # Database Mongodb configuration
                         "OSMPOL_DATABASE_DRIVER": "mongo",
-                        "OSMPOL_DATABASE_URI": self.mongodb_client.connection_string,
+                        "OSMPOL_DATABASE_URI": self._get_mongodb_uri(),
                         # Database MySQL configuration
                         "OSMPOL_SQL_DATABASE_URI": self._get_mysql_uri(),
                     },
@@ -225,6 +233,9 @@ class OsmPolCharm(CharmBase):
     def _get_mysql_uri(self):
         return self.config.get("mysql-uri") or self.mysql_client.get_root_uri("pol")
 
+    def _get_mongodb_uri(self):
+        return list(self.mongodb_client.fetch_relation_data().values())[0]["uris"]
+
 
 if __name__ == "__main__":  # pragma: no cover
     main(OsmPolCharm)
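`_get_mongodb_uri` above takes the first value of `fetch_relation_data()` and indexes `["uris"]`, which raises before the database charm has published its URIs; that is the error `_is_database_available` guards against. A standalone sketch of the same lookup (the dict shape is assumed from the library's `fetch_relation_data` return value, `{relation_id: {field: value}}`):

```python
def get_mongodb_uri(relation_data: dict) -> str:
    """Return the first published 'uris' value from relation data.

    Raises KeyError when no related application has shared its
    connection URIs yet, mirroring why the charm wraps the lookup
    in a try/except before declaring the database available.
    """
    for databag in relation_data.values():
        if "uris" in databag:
            return databag["uris"]
    raise KeyError("uris")


assert get_mongodb_uri(
    {0: {"uris": "mongodb://user:pw@host:27017/osm"}}
) == "mongodb://user:pw@host:27017/osm"
```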
diff --git a/installers/charm/osm-pol/tests/integration/test_charm.py b/installers/charm/osm-pol/tests/integration/test_charm.py
new file mode 100644 (file)
index 0000000..87132d1
--- /dev/null
@@ -0,0 +1,168 @@
+#!/usr/bin/env python3
+# Copyright 2022 Canonical Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+#
+# For those usages not covered by the Apache License, Version 2.0 please
+# contact: legal@canonical.com
+#
+# To get in touch with the maintainers, please contact:
+# osm-charmers@lists.launchpad.net
+#
+# Learn more about testing at: https://juju.is/docs/sdk/testing
+
+import asyncio
+import logging
+from pathlib import Path
+
+import pytest
+import yaml
+from pytest_operator.plugin import OpsTest
+
+logger = logging.getLogger(__name__)
+
+METADATA = yaml.safe_load(Path("./metadata.yaml").read_text())
+POL_APP = METADATA["name"]
+KAFKA_CHARM = "kafka-k8s"
+KAFKA_APP = "kafka"
+MONGO_DB_CHARM = "mongodb-k8s"
+MONGO_DB_APP = "mongodb"
+MARIADB_CHARM = "charmed-osm-mariadb-k8s"
+MARIADB_APP = "mariadb"
+ZOOKEEPER_CHARM = "zookeeper-k8s"
+ZOOKEEPER_APP = "zookeeper"
+APPS = [KAFKA_APP, ZOOKEEPER_APP, MONGO_DB_APP, MARIADB_APP, POL_APP]
+
+
+@pytest.mark.abort_on_fail
+async def test_pol_is_deployed(ops_test: OpsTest):
+    charm = await ops_test.build_charm(".")
+    resources = {"pol-image": METADATA["resources"]["pol-image"]["upstream-source"]}
+
+    await asyncio.gather(
+        ops_test.model.deploy(
+            charm, resources=resources, application_name=POL_APP, series="focal"
+        ),
+        ops_test.model.deploy(KAFKA_CHARM, application_name=KAFKA_APP, channel="stable"),
+        ops_test.model.deploy(MONGO_DB_CHARM, application_name=MONGO_DB_APP, channel="edge"),
+        ops_test.model.deploy(MARIADB_CHARM, application_name=MARIADB_APP, channel="stable"),
+        ops_test.model.deploy(ZOOKEEPER_CHARM, application_name=ZOOKEEPER_APP, channel="stable"),
+    )
+
+    async with ops_test.fast_forward():
+        await ops_test.model.wait_for_idle(
+            apps=APPS,
+        )
+    assert ops_test.model.applications[POL_APP].status == "blocked"
+    unit = ops_test.model.applications[POL_APP].units[0]
+    assert unit.workload_status_message == "need kafka, mongodb, mysql relations"
+
+    logger.info("Adding relations for other components")
+    await ops_test.model.add_relation(KAFKA_APP, ZOOKEEPER_APP)
+
+    logger.info("Adding relations")
+    await ops_test.model.add_relation(POL_APP, KAFKA_APP)
+    await ops_test.model.add_relation(POL_APP, MONGO_DB_APP)
+    await ops_test.model.add_relation(POL_APP, MARIADB_APP)
+
+    async with ops_test.fast_forward():
+        await ops_test.model.wait_for_idle(
+            apps=APPS,
+            status="active",
+        )
+
+
+@pytest.mark.abort_on_fail
+async def test_pol_scales_up(ops_test: OpsTest):
+    logger.info("Scaling up osm-pol")
+    expected_units = 3
+    assert len(ops_test.model.applications[POL_APP].units) == 1
+    await ops_test.model.applications[POL_APP].scale(expected_units)
+    async with ops_test.fast_forward():
+        await ops_test.model.wait_for_idle(
+            apps=[POL_APP], status="active", wait_for_exact_units=expected_units
+        )
+
+
+@pytest.mark.abort_on_fail
+@pytest.mark.parametrize("relation_to_remove", [KAFKA_APP, MONGO_DB_APP, MARIADB_APP])
+async def test_pol_blocks_without_relation(ops_test: OpsTest, relation_to_remove):
+    logger.info("Removing relation: %s", relation_to_remove)
+    # mongoDB relation is named "database"
+    local_relation = relation_to_remove
+    if relation_to_remove == MONGO_DB_APP:
+        local_relation = "database"
+    # mariaDB relation is named "mysql"
+    if relation_to_remove == MARIADB_APP:
+        local_relation = "mysql"
+    await asyncio.gather(
+        ops_test.model.applications[relation_to_remove].remove_relation(local_relation, POL_APP)
+    )
+    async with ops_test.fast_forward():
+        await ops_test.model.wait_for_idle(apps=[POL_APP])
+    assert ops_test.model.applications[POL_APP].status == "blocked"
+    for unit in ops_test.model.applications[POL_APP].units:
+        assert (
+            unit.workload_status_message
+            == f"need {'mysql' if relation_to_remove == MARIADB_APP else relation_to_remove} relation"
+        )
+    await ops_test.model.add_relation(POL_APP, relation_to_remove)
+    async with ops_test.fast_forward():
+        await ops_test.model.wait_for_idle(
+            apps=APPS,
+            status="active",
+        )
+
+
+@pytest.mark.abort_on_fail
+async def test_pol_action_debug_mode_disabled(ops_test: OpsTest):
+    async with ops_test.fast_forward():
+        await ops_test.model.wait_for_idle(
+            apps=APPS,
+            status="active",
+        )
+    logger.info("Running action 'get-debug-mode-information'")
+    action = (
+        await ops_test.model.applications[POL_APP]
+        .units[0]
+        .run_action("get-debug-mode-information")
+    )
+    async with ops_test.fast_forward():
+        await ops_test.model.wait_for_idle(apps=[POL_APP])
+    status = await ops_test.model.get_action_status(uuid_or_prefix=action.entity_id)
+    assert status[action.entity_id] == "failed"
+
+
+@pytest.mark.abort_on_fail
+async def test_pol_action_debug_mode_enabled(ops_test: OpsTest):
+    await ops_test.model.applications[POL_APP].set_config({"debug-mode": "true"})
+    async with ops_test.fast_forward():
+        await ops_test.model.wait_for_idle(
+            apps=APPS,
+            status="active",
+        )
+    logger.info("Running action 'get-debug-mode-information'")
+    # list of units is not ordered
+    unit_id = list(
+        filter(
+            lambda x: (x.entity_id == f"{POL_APP}/0"), ops_test.model.applications[POL_APP].units
+        )
+    )[0]
+    action = await unit_id.run_action("get-debug-mode-information")
+    async with ops_test.fast_forward():
+        await ops_test.model.wait_for_idle(apps=[POL_APP])
+    status = await ops_test.model.get_action_status(uuid_or_prefix=action.entity_id)
+    message = await ops_test.model.get_action_output(action_uuid=action.entity_id)
+    assert status[action.entity_id] == "completed"
+    assert "command" in message
+    assert "password" in message
index 3767539..1b5013a 100644 (file)
@@ -36,6 +36,7 @@ service_name = "pol"
 def harness(mocker: MockerFixture):
     harness = Harness(OsmPolCharm)
     harness.begin()
+    harness.container_pebble_ready(container_name)
     yield harness
     harness.cleanup()
 
@@ -69,13 +70,15 @@ def _add_relations(harness: Harness):
     relation_id = harness.add_relation("mongodb", "mongodb")
     harness.add_relation_unit(relation_id, "mongodb/0")
     harness.update_relation_data(
-        relation_id, "mongodb/0", {"connection_string": "mongodb://:1234"}
+        relation_id,
+        "mongodb",
+        {"uris": "mongodb://:1234", "username": "user", "password": "password"},
     )
     relation_ids.append(relation_id)
     # Add kafka relation
     relation_id = harness.add_relation("kafka", "kafka")
     harness.add_relation_unit(relation_id, "kafka/0")
-    harness.update_relation_data(relation_id, "kafka", {"host": "kafka", "port": 9092})
+    harness.update_relation_data(relation_id, "kafka", {"host": "kafka", "port": "9092"})
     relation_ids.append(relation_id)
     # Add mysql relation
     relation_id = harness.add_relation("mysql", "mysql")
@@ -85,7 +88,7 @@ def _add_relations(harness: Harness):
         "mysql/0",
         {
             "host": "mysql",
-            "port": 3306,
+            "port": "3306",
             "user": "mano",
             "password": "manopw",
             "root_password": "rootmanopw",
index 275137c..2d95eca 100644 (file)
@@ -21,7 +21,7 @@
 [tox]
 skipsdist=True
 skip_missing_interpreters = True
-envlist = lint, unit
+envlist = lint, unit, integration
 
 [vars]
 src_path = {toxinidir}/src/
@@ -29,6 +29,7 @@ tst_path = {toxinidir}/tests/
 all_path = {[vars]src_path} {[vars]tst_path} 
 
 [testenv]
+basepython = python3.8
 setenv =
   PYTHONPATH = {toxinidir}:{toxinidir}/lib:{[vars]src_path}
   PYTHONBREAKPOINT=ipdb.set_trace
@@ -53,14 +54,13 @@ deps =
     black
     flake8
     flake8-docstrings
-    flake8-copyright
     flake8-builtins
     pyproject-flake8
     pep8-naming
     isort
     codespell
 commands =
-    codespell {toxinidir}/. --skip {toxinidir}/.git --skip {toxinidir}/.tox \
+    codespell {toxinidir} --skip {toxinidir}/.git --skip {toxinidir}/.tox \
       --skip {toxinidir}/build --skip {toxinidir}/lib --skip {toxinidir}/venv \
       --skip {toxinidir}/.mypy_cache --skip {toxinidir}/icon.svg
     # pflake8 wrapper supports config from pyproject.toml
@@ -85,8 +85,8 @@ commands =
 description = Run integration tests
 deps =
     pytest
-    juju
+    juju<3
     pytest-operator
     -r{toxinidir}/requirements.txt
 commands =
-    pytest -v --tb native --ignore={[vars]tst_path}unit --log-cli-level=INFO -s {posargs}
+    pytest -v --tb native --ignore={[vars]tst_path}unit --log-cli-level=INFO -s {posargs} --cloud microk8s
diff --git a/installers/charm/osm-ro/lib/charms/data_platform_libs/v0/data_interfaces.py b/installers/charm/osm-ro/lib/charms/data_platform_libs/v0/data_interfaces.py
new file mode 100644 (file)
index 0000000..b3da5aa
--- /dev/null
@@ -0,0 +1,1130 @@
+# Copyright 2023 Canonical Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Library to manage the relation for the data-platform products.
+
+This library contains the Requires and Provides classes for handling the relation
+between an application and multiple managed applications supported by the data team:
+MySQL, PostgreSQL, MongoDB, Redis, and Kafka.
+
+### Database (MySQL, PostgreSQL, MongoDB, and Redis)
+
+#### Requires Charm
+This library is a uniform interface to a selection of common database
+metadata, with added custom events that add convenience to database management,
+and methods to consume the application-related data.
+
+
+The following is an example of using the DatabaseCreatedEvent in the context of the
+application charm code:
+
+```python
+
+from charms.data_platform_libs.v0.data_interfaces import (
+    DatabaseCreatedEvent,
+    DatabaseRequires,
+)
+
+class ApplicationCharm(CharmBase):
+    # Application charm that connects to database charms.
+
+    def __init__(self, *args):
+        super().__init__(*args)
+
+        # Charm events defined in the database requires charm library.
+        self.database = DatabaseRequires(self, relation_name="database", database_name="database")
+        self.framework.observe(self.database.on.database_created, self._on_database_created)
+
+    def _on_database_created(self, event: DatabaseCreatedEvent) -> None:
+        # Handle the created database
+
+        # Create configuration file for app
+        config_file = self._render_app_config_file(
+            event.username,
+            event.password,
+            event.endpoints,
+        )
+
+        # Start application with rendered configuration
+        self._start_application(config_file)
+
+        # Set active status
+        self.unit.status = ActiveStatus("received database credentials")
+```
+
+As shown above, the library provides some custom events to handle specific situations,
+which are listed below:
+
+-  database_created: event emitted when the requested database is created.
+-  endpoints_changed: event emitted when the read/write endpoints of the database have changed.
+-  read_only_endpoints_changed: event emitted when the read-only endpoints of the database
+  have changed. Event is not triggered if read/write endpoints changed too.
+
+If you need to connect multiple database clusters to the same relation endpoint,
+the application charm can use the same code as if it were connecting to only
+one database cluster (as in the code example above).
+
+To differentiate multiple clusters connected to the same relation endpoint
+the application charm can use the name of the remote application:
+
+```python
+
+def _on_database_created(self, event: DatabaseCreatedEvent) -> None:
+    # Get the remote app name of the cluster that triggered this event
+    cluster = event.relation.app.name
+```
+
+It is also possible to provide an alias for each different database cluster/relation.
+
+So, it is possible to differentiate the clusters in two ways.
+The first is to use the remote application name, i.e., `event.relation.app.name`, as above.
+
+The second way is to use a different event handler for each cluster's events.
+The implementation would look like the following code:
+
+```python
+
+from charms.data_platform_libs.v0.data_interfaces import (
+    DatabaseCreatedEvent,
+    DatabaseRequires,
+)
+
+class ApplicationCharm(CharmBase):
+    # Application charm that connects to database charms.
+
+    def __init__(self, *args):
+        super().__init__(*args)
+
+        # Define the cluster aliases and one handler for each cluster database created event.
+        self.database = DatabaseRequires(
+            self,
+            relation_name="database",
+            database_name="database",
+            relations_aliases=["cluster1", "cluster2"],
+        )
+        self.framework.observe(
+            self.database.on.cluster1_database_created, self._on_cluster1_database_created
+        )
+        self.framework.observe(
+            self.database.on.cluster2_database_created, self._on_cluster2_database_created
+        )
+
+    def _on_cluster1_database_created(self, event: DatabaseCreatedEvent) -> None:
+        # Handle the created database on the cluster named cluster1
+
+        # Create configuration file for app
+        config_file = self._render_app_config_file(
+            event.username,
+            event.password,
+            event.endpoints,
+        )
+        ...
+
+    def _on_cluster2_database_created(self, event: DatabaseCreatedEvent) -> None:
+        # Handle the created database on the cluster named cluster2
+
+        # Create configuration file for app
+        config_file = self._render_app_config_file(
+            event.username,
+            event.password,
+            event.endpoints,
+        )
+        ...
+
+```
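+
+Outside an event callback, the requirer side can also read the relation data
+directly with `fetch_relation_data()`, which returns a dict indexed by relation
+ID (it cannot be used in `*-relation-broken` events). A minimal sketch, assuming
+a `self.database` DatabaseRequires instance as above (`_on_update_status` is an
+illustrative handler name):
+
+```python
+
+def _on_update_status(self, _) -> None:
+    # fetch_relation_data returns {relation_id: {key: value, ...}}
+    for relation_id, data in self.database.fetch_relation_data().items():
+        endpoints = data.get("endpoints")
+        ...
+```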
+
+#### Provider Charm
+
+The following is an example of using the DatabaseRequestedEvent in the context of the
+database charm code:
+
+```python
+from charms.data_platform_libs.v0.data_interfaces import DatabaseProvides
+
+class SampleCharm(CharmBase):
+
+    def __init__(self, *args):
+        super().__init__(*args)
+        # Charm events defined in the database provides charm library.
+        self.provided_database = DatabaseProvides(self, relation_name="database")
+        self.framework.observe(self.provided_database.on.database_requested,
+            self._on_database_requested)
+        # Database generic helper
+        self.database = DatabaseHelper()
+
+    def _on_database_requested(self, event: DatabaseRequestedEvent) -> None:
+        # Handle the event triggered by a new database requested in the relation
+        # Retrieve the database name using the charm library.
+        db_name = event.database
+        # generate a new user credential
+        username = self.database.generate_user()
+        password = self.database.generate_password()
+        # set the credentials for the relation
+        self.provided_database.set_credentials(event.relation.id, username, password)
+        # set other variables for the relation, e.g. whether TLS is enabled
+        self.provided_database.set_tls(event.relation.id, "False")
+```
+As shown above, the library provides a custom event (database_requested) to handle
+the situation when an application charm requests a new database to be created.
+It is preferred to subscribe to this event instead of the relation changed event,
+to avoid creating a new database when information other than the database name is
+exchanged in the relation databag.
+
+### Kafka
+
+This library is the interface to use and interact with the Kafka charm. It contains
+custom events that make it convenient to manage Kafka, and provides methods to consume
+the application-related data.
+
+#### Requirer Charm
+
+```python
+
+from charms.data_platform_libs.v0.data_interfaces import (
+    BootstrapServerChangedEvent,
+    KafkaRequires,
+    TopicCreatedEvent,
+)
+
+class ApplicationCharm(CharmBase):
+
+    def __init__(self, *args):
+        super().__init__(*args)
+        self.kafka = KafkaRequires(self, "kafka_client", "test-topic")
+        self.framework.observe(
+            self.kafka.on.bootstrap_server_changed, self._on_kafka_bootstrap_server_changed
+        )
+        self.framework.observe(
+            self.kafka.on.topic_created, self._on_kafka_topic_created
+        )
+
+    def _on_kafka_bootstrap_server_changed(self, event: BootstrapServerChangedEvent):
+        # Event triggered when a bootstrap server was changed for this application
+
+        new_bootstrap_server = event.bootstrap_server
+        ...
+
+    def _on_kafka_topic_created(self, event: TopicCreatedEvent):
+        # Event triggered when a topic was created for this application
+        username = event.username
+        password = event.password
+        tls = event.tls
+        tls_ca = event.tls_ca
+        bootstrap_server = event.bootstrap_server
+        consumer_group_prefix = event.consumer_group_prefix
+        zookeeper_uris = event.zookeeper_uris
+        ...
+
+```
+
+As shown above, the library provides some custom events to handle specific situations,
+which are listed below:
+
+- topic_created: event emitted when the requested topic is created.
+- bootstrap_server_changed: event emitted when the bootstrap server has changed.
+- credential_changed: event emitted when the Kafka credentials have changed.
+
+#### Provider Charm
+
+Continuing the previous example, this is an example of the provider charm.
+
+```python
+from charms.data_platform_libs.v0.data_interfaces import (
+    KafkaProvides,
+    TopicRequestedEvent,
+)
+
+class SampleCharm(CharmBase):
+
+    def __init__(self, *args):
+        super().__init__(*args)
+
+        # Default charm events.
+        self.framework.observe(self.on.start, self._on_start)
+
+        # Charm events defined in the Kafka Provides charm library.
+        self.kafka_provider = KafkaProvides(self, relation_name="kafka_client")
+        self.framework.observe(self.kafka_provider.on.topic_requested, self._on_topic_requested)
+        # Kafka generic helper
+        self.kafka = KafkaHelper()
+
+    def _on_topic_requested(self, event: TopicRequestedEvent):
+        # Handle the on_topic_requested event.
+
+        topic = event.topic
+        relation_id = event.relation.id
+        # set connection info in the relation databag
+        self.kafka_provider.set_bootstrap_server(relation_id, self.kafka.get_bootstrap_server())
+        self.kafka_provider.set_credentials(relation_id, username=username, password=password)
+        self.kafka_provider.set_consumer_group_prefix(relation_id, ...)
+        self.kafka_provider.set_tls(relation_id, "False")
+        self.kafka_provider.set_zookeeper_uris(relation_id, ...)
+
+```
+As shown above, the library provides a custom event (topic_requested) to handle
+the situation when an application charm requests a new topic to be created.
+It is preferred to subscribe to this event instead of the relation changed event,
+to avoid creating a new topic when information other than the topic name is
+exchanged in the relation databag.
+"""
+
+import json
+import logging
+from abc import ABC, abstractmethod
+from collections import namedtuple
+from datetime import datetime
+from typing import List, Optional
+
+from ops.charm import (
+    CharmBase,
+    CharmEvents,
+    RelationChangedEvent,
+    RelationEvent,
+    RelationJoinedEvent,
+)
+from ops.framework import EventSource, Object
+from ops.model import Relation
+
+# The unique Charmhub library identifier, never change it
+LIBID = "6c3e6b6680d64e9c89e611d1a15f65be"
+
+# Increment this major API version when introducing breaking changes
+LIBAPI = 0
+
+# Increment this PATCH version before using `charmcraft publish-lib` or reset
+# to 0 if you are raising the major API version
+LIBPATCH = 7
+
+PYDEPS = ["ops>=2.0.0"]
+
+logger = logging.getLogger(__name__)
+
+Diff = namedtuple("Diff", "added changed deleted")
+Diff.__doc__ = """
+A tuple for storing the diff between two data mappings.
+
+added - keys that were added
+changed - keys that still exist but have new values
+deleted - keys that were deleted"""
+
+
+def diff(event: RelationChangedEvent, bucket: str) -> Diff:
+    """Retrieves the diff of the data in the relation changed databag.
+
+    Args:
+        event: relation changed event.
+        bucket: bucket of the databag (app or unit)
+
+    Returns:
+        a Diff instance containing the added, deleted and changed
+            keys from the event relation databag.
+    """
+    # Retrieve the old data from the data key in the application relation databag.
+    old_data = json.loads(event.relation.data[bucket].get("data", "{}"))
+    # Retrieve the new data from the event relation databag.
+    new_data = {
+        key: value for key, value in event.relation.data[event.app].items() if key != "data"
+    }
+
+    # These are the keys that were added to the databag and triggered this event.
+    added = new_data.keys() - old_data.keys()
+    # These are the keys that were removed from the databag and triggered this event.
+    deleted = old_data.keys() - new_data.keys()
+    # These are the keys that already existed in the databag,
+    # but had their values changed.
+    changed = {key for key in old_data.keys() & new_data.keys() if old_data[key] != new_data[key]}
+    # Convert the new_data to a serializable format and save it for a next diff check.
+    event.relation.data[bucket].update({"data": json.dumps(new_data)})
+
+    # Return the diff with all possible changes.
+    return Diff(added, changed, deleted)
+
+
+# Base DataProvides and DataRequires
+
+
+class DataProvides(Object, ABC):
+    """Base provides-side of the data products relation."""
+
+    def __init__(self, charm: CharmBase, relation_name: str) -> None:
+        super().__init__(charm, relation_name)
+        self.charm = charm
+        self.local_app = self.charm.model.app
+        self.local_unit = self.charm.unit
+        self.relation_name = relation_name
+        self.framework.observe(
+            charm.on[relation_name].relation_changed,
+            self._on_relation_changed,
+        )
+
+    def _diff(self, event: RelationChangedEvent) -> Diff:
+        """Retrieves the diff of the data in the relation changed databag.
+
+        Args:
+            event: relation changed event.
+
+        Returns:
+            a Diff instance containing the added, deleted and changed
+                keys from the event relation databag.
+        """
+        return diff(event, self.local_app)
+
+    @abstractmethod
+    def _on_relation_changed(self, event: RelationChangedEvent) -> None:
+        """Event emitted when the relation data has changed."""
+        raise NotImplementedError
+
+    def fetch_relation_data(self) -> dict:
+        """Retrieves data from relation.
+
+        This function can be used to retrieve data from a relation
+        in the charm code when outside an event callback.
+
+        Returns:
+            a dict of the values stored in the relation data bag
+                for all relation instances (indexed by the relation id).
+        """
+        data = {}
+        for relation in self.relations:
+            data[relation.id] = {
+                key: value for key, value in relation.data[relation.app].items() if key != "data"
+            }
+        return data
+
+    def _update_relation_data(self, relation_id: int, data: dict) -> None:
+        """Updates a set of key-value pairs in the relation.
+
+        This function writes in the application data bag, therefore,
+        only the leader unit can call it.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            data: dict containing the key-value pairs
+                that should be updated in the relation.
+        """
+        if self.local_unit.is_leader():
+            relation = self.charm.model.get_relation(self.relation_name, relation_id)
+            relation.data[self.local_app].update(data)
+
+    @property
+    def relations(self) -> List[Relation]:
+        """The list of Relation instances associated with this relation_name."""
+        return list(self.charm.model.relations[self.relation_name])
+
+    def set_credentials(self, relation_id: int, username: str, password: str) -> None:
+        """Set credentials.
+
+        This function writes in the application data bag, therefore,
+        only the leader unit can call it.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            username: user that was created.
+            password: password of the created user.
+        """
+        self._update_relation_data(
+            relation_id,
+            {
+                "username": username,
+                "password": password,
+            },
+        )
+
+    def set_tls(self, relation_id: int, tls: str) -> None:
+        """Set whether TLS is enabled.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            tls: whether tls is enabled (True or False).
+        """
+        self._update_relation_data(relation_id, {"tls": tls})
+
+    def set_tls_ca(self, relation_id: int, tls_ca: str) -> None:
+        """Set the TLS CA in the application relation databag.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            tls_ca: TLS certification authority.
+        """
+        self._update_relation_data(relation_id, {"tls_ca": tls_ca})
+
+
+class DataRequires(Object, ABC):
+    """Requires-side of the relation."""
+
+    def __init__(
+        self,
+        charm,
+        relation_name: str,
+        extra_user_roles: Optional[str] = None,
+    ):
+        """Manager of base client relations."""
+        super().__init__(charm, relation_name)
+        self.charm = charm
+        self.extra_user_roles = extra_user_roles
+        self.local_app = self.charm.model.app
+        self.local_unit = self.charm.unit
+        self.relation_name = relation_name
+        self.framework.observe(
+            self.charm.on[relation_name].relation_joined, self._on_relation_joined_event
+        )
+        self.framework.observe(
+            self.charm.on[relation_name].relation_changed, self._on_relation_changed_event
+        )
+
+    @abstractmethod
+    def _on_relation_joined_event(self, event: RelationJoinedEvent) -> None:
+        """Event emitted when the application joins the relation."""
+        raise NotImplementedError
+
+    @abstractmethod
+    def _on_relation_changed_event(self, event: RelationChangedEvent) -> None:
+        raise NotImplementedError
+
+    def fetch_relation_data(self) -> dict:
+        """Retrieves data from relation.
+
+        This function can be used to retrieve data from a relation
+        in the charm code when outside an event callback.
+        Function cannot be used in `*-relation-broken` events and will raise an exception.
+
+        Returns:
+            a dict of the values stored in the relation data bag
+                for all relation instances (indexed by the relation ID).
+        """
+        data = {}
+        for relation in self.relations:
+            data[relation.id] = {
+                key: value for key, value in relation.data[relation.app].items() if key != "data"
+            }
+        return data
+
+    def _update_relation_data(self, relation_id: int, data: dict) -> None:
+        """Updates a set of key-value pairs in the relation.
+
+        This function writes in the application data bag, therefore,
+        only the leader unit can call it.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            data: dict containing the key-value pairs
+                that should be updated in the relation.
+        """
+        if self.local_unit.is_leader():
+            relation = self.charm.model.get_relation(self.relation_name, relation_id)
+            relation.data[self.local_app].update(data)
+
+    def _diff(self, event: RelationChangedEvent) -> Diff:
+        """Retrieves the diff of the data in the relation changed databag.
+
+        Args:
+            event: relation changed event.
+
+        Returns:
+            a Diff instance containing the added, deleted and changed
+                keys from the event relation databag.
+        """
+        return diff(event, self.local_unit)
+
+    @property
+    def relations(self) -> List[Relation]:
+        """The list of Relation instances associated with this relation_name."""
+        return [
+            relation
+            for relation in self.charm.model.relations[self.relation_name]
+            if self._is_relation_active(relation)
+        ]
+
+    @staticmethod
+    def _is_relation_active(relation: Relation):
+        try:
+            _ = repr(relation.data)
+            return True
+        except RuntimeError:
+            return False
+
+    @staticmethod
+    def _is_resource_created_for_relation(relation: Relation):
+        return (
+            "username" in relation.data[relation.app] and "password" in relation.data[relation.app]
+        )
+
+    def is_resource_created(self, relation_id: Optional[int] = None) -> bool:
+        """Check if the resource has been created.
+
+        This function can be used to check if the Provider answered with data in the charm code
+        when outside an event callback.
+
+        Args:
+            relation_id (int, optional): When provided the check is done only for the relation id
+                provided, otherwise the check is done for all relations
+
+        Returns:
+            True or False
+
+        Raises:
+            IndexError: If relation_id is provided but that relation does not exist
+        """
+        if relation_id is not None:
+            try:
+                relation = [relation for relation in self.relations if relation.id == relation_id][
+                    0
+                ]
+                return self._is_resource_created_for_relation(relation)
+            except IndexError:
+                raise IndexError(f"relation id {relation_id} cannot be accessed")
+        else:
+            return (
+                all(
+                    [
+                        self._is_resource_created_for_relation(relation)
+                        for relation in self.relations
+                    ]
+                )
+                if self.relations
+                else False
+            )
+
+
+# General events
+
+
+class ExtraRoleEvent(RelationEvent):
+    """Base class for data events."""
+
+    @property
+    def extra_user_roles(self) -> Optional[str]:
+        """Returns the extra user roles that were requested."""
+        return self.relation.data[self.relation.app].get("extra-user-roles")
+
+
+class AuthenticationEvent(RelationEvent):
+    """Base class for authentication fields for events."""
+
+    @property
+    def username(self) -> Optional[str]:
+        """Returns the created username."""
+        return self.relation.data[self.relation.app].get("username")
+
+    @property
+    def password(self) -> Optional[str]:
+        """Returns the password for the created user."""
+        return self.relation.data[self.relation.app].get("password")
+
+    @property
+    def tls(self) -> Optional[str]:
+        """Returns whether TLS is configured."""
+        return self.relation.data[self.relation.app].get("tls")
+
+    @property
+    def tls_ca(self) -> Optional[str]:
+        """Returns TLS CA."""
+        return self.relation.data[self.relation.app].get("tls-ca")
+
+
+# Database related events and fields
+
+
+class DatabaseProvidesEvent(RelationEvent):
+    """Base class for database events."""
+
+    @property
+    def database(self) -> Optional[str]:
+        """Returns the database that was requested."""
+        return self.relation.data[self.relation.app].get("database")
+
+
+class DatabaseRequestedEvent(DatabaseProvidesEvent, ExtraRoleEvent):
+    """Event emitted when a new database is requested for use on this relation."""
+
+
+class DatabaseProvidesEvents(CharmEvents):
+    """Database events.
+
+    This class defines the events that the database can emit.
+    """
+
+    database_requested = EventSource(DatabaseRequestedEvent)
+
+
+class DatabaseRequiresEvent(RelationEvent):
+    """Base class for database events."""
+
+    @property
+    def endpoints(self) -> Optional[str]:
+        """Returns a comma separated list of read/write endpoints."""
+        return self.relation.data[self.relation.app].get("endpoints")
+
+    @property
+    def read_only_endpoints(self) -> Optional[str]:
+        """Returns a comma separated list of read only endpoints."""
+        return self.relation.data[self.relation.app].get("read-only-endpoints")
+
+    @property
+    def replset(self) -> Optional[str]:
+        """Returns the replicaset name.
+
+        MongoDB only.
+        """
+        return self.relation.data[self.relation.app].get("replset")
+
+    @property
+    def uris(self) -> Optional[str]:
+        """Returns the connection URIs.
+
+        MongoDB, Redis, OpenSearch.
+        """
+        return self.relation.data[self.relation.app].get("uris")
+
+    @property
+    def version(self) -> Optional[str]:
+        """Returns the version of the database.
+
+        Version as informed by the database daemon.
+        """
+        return self.relation.data[self.relation.app].get("version")
+
+
+class DatabaseCreatedEvent(AuthenticationEvent, DatabaseRequiresEvent):
+    """Event emitted when a new database is created for use on this relation."""
+
+
+class DatabaseEndpointsChangedEvent(AuthenticationEvent, DatabaseRequiresEvent):
+    """Event emitted when the read/write endpoints are changed."""
+
+
+class DatabaseReadOnlyEndpointsChangedEvent(AuthenticationEvent, DatabaseRequiresEvent):
+    """Event emitted when the read only endpoints are changed."""
+
+
+class DatabaseRequiresEvents(CharmEvents):
+    """Database events.
+
+    This class defines the events that the database can emit.
+    """
+
+    database_created = EventSource(DatabaseCreatedEvent)
+    endpoints_changed = EventSource(DatabaseEndpointsChangedEvent)
+    read_only_endpoints_changed = EventSource(DatabaseReadOnlyEndpointsChangedEvent)
+
+
+# Database Provider and Requires
+
+
+class DatabaseProvides(DataProvides):
+    """Provider-side of the database relations."""
+
+    on = DatabaseProvidesEvents()
+
+    def __init__(self, charm: CharmBase, relation_name: str) -> None:
+        super().__init__(charm, relation_name)
+
+    def _on_relation_changed(self, event: RelationChangedEvent) -> None:
+        """Event emitted when the relation has changed."""
+        # Only the leader should handle this event.
+        if not self.local_unit.is_leader():
+            return
+
+        # Check which data has changed to emit customs events.
+        diff = self._diff(event)
+
+        # Emit a database requested event if the setup key (database name and optional
+        # extra user roles) was added to the relation databag by the application.
+        if "database" in diff.added:
+            self.on.database_requested.emit(event.relation, app=event.app, unit=event.unit)
+
+    def set_endpoints(self, relation_id: int, connection_strings: str) -> None:
+        """Set database primary connections.
+
+        This function writes in the application data bag, therefore,
+        only the leader unit can call it.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            connection_strings: database hosts and ports comma separated list.
+        """
+        self._update_relation_data(relation_id, {"endpoints": connection_strings})
+
+    def set_read_only_endpoints(self, relation_id: int, connection_strings: str) -> None:
+        """Set database replicas connection strings.
+
+        This function writes in the application data bag, therefore,
+        only the leader unit can call it.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            connection_strings: database hosts and ports comma separated list.
+        """
+        self._update_relation_data(relation_id, {"read-only-endpoints": connection_strings})
+
+    def set_replset(self, relation_id: int, replset: str) -> None:
+        """Set replica set name in the application relation databag.
+
+        MongoDB only.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            replset: replica set name.
+        """
+        self._update_relation_data(relation_id, {"replset": replset})
+
+    def set_uris(self, relation_id: int, uris: str) -> None:
+        """Set the database connection URIs in the application relation databag.
+
+        MongoDB, Redis, and OpenSearch only.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            uris: connection URIs.
+        """
+        self._update_relation_data(relation_id, {"uris": uris})
+
+    def set_version(self, relation_id: int, version: str) -> None:
+        """Set the database version in the application relation databag.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            version: database version.
+        """
+        self._update_relation_data(relation_id, {"version": version})
+
+
+class DatabaseRequires(DataRequires):
+    """Requires-side of the database relation."""
+
+    on = DatabaseRequiresEvents()
+
+    def __init__(
+        self,
+        charm,
+        relation_name: str,
+        database_name: str,
+        extra_user_roles: Optional[str] = None,
+        relations_aliases: Optional[List[str]] = None,
+    ):
+        """Manager of database client relations."""
+        super().__init__(charm, relation_name, extra_user_roles)
+        self.database = database_name
+        self.relations_aliases = relations_aliases
+
+        # Define custom event names for each alias.
+        if relations_aliases:
+            # Ensure the number of aliases does not exceed the maximum
+            # of connections allowed in the specific relation.
+            relation_connection_limit = self.charm.meta.requires[relation_name].limit
+            if len(relations_aliases) != relation_connection_limit:
+                raise ValueError(
+                    f"The number of aliases must match the maximum number of connections allowed in the relation. "
+                    f"Expected {relation_connection_limit}, got {len(relations_aliases)}"
+                )
+
+            for relation_alias in relations_aliases:
+                self.on.define_event(f"{relation_alias}_database_created", DatabaseCreatedEvent)
+                self.on.define_event(
+                    f"{relation_alias}_endpoints_changed", DatabaseEndpointsChangedEvent
+                )
+                self.on.define_event(
+                    f"{relation_alias}_read_only_endpoints_changed",
+                    DatabaseReadOnlyEndpointsChangedEvent,
+                )
+
+    def _assign_relation_alias(self, relation_id: int) -> None:
+        """Assigns an alias to a relation.
+
+        This function writes in the unit data bag.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+        """
+        # If no aliases were provided, return immediately.
+        if not self.relations_aliases:
+            return
+
+        # Return if an alias was already assigned to this relation
+        # (as happens when more than one unit joins the relation).
+        if (
+            self.charm.model.get_relation(self.relation_name, relation_id)
+            .data[self.local_unit]
+            .get("alias")
+        ):
+            return
+
+        # Retrieve the available aliases (the ones that weren't assigned to any relation).
+        available_aliases = self.relations_aliases[:]
+        for relation in self.charm.model.relations[self.relation_name]:
+            alias = relation.data[self.local_unit].get("alias")
+            if alias:
+                logger.debug("Alias %s was already assigned to relation %d", alias, relation.id)
+                available_aliases.remove(alias)
+
+        # Set the alias in the unit relation databag of the specific relation.
+        relation = self.charm.model.get_relation(self.relation_name, relation_id)
+        relation.data[self.local_unit].update({"alias": available_aliases[0]})
+
+    def _emit_aliased_event(self, event: RelationChangedEvent, event_name: str) -> None:
+        """Emit an aliased event to a particular relation if it has an alias.
+
+        Args:
+            event: the relation changed event that was received.
+            event_name: the name of the event to emit.
+        """
+        alias = self._get_relation_alias(event.relation.id)
+        if alias:
+            getattr(self.on, f"{alias}_{event_name}").emit(
+                event.relation, app=event.app, unit=event.unit
+            )
+
+    def _get_relation_alias(self, relation_id: int) -> Optional[str]:
+        """Returns the relation alias.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+
+        Returns:
+            the relation alias or None if the relation was not found.
+        """
+        for relation in self.charm.model.relations[self.relation_name]:
+            if relation.id == relation_id:
+                return relation.data[self.local_unit].get("alias")
+        return None
+
+    def _on_relation_joined_event(self, event: RelationJoinedEvent) -> None:
+        """Event emitted when the application joins the database relation."""
+        # If relations aliases were provided, assign one to the relation.
+        self._assign_relation_alias(event.relation.id)
+
+        # Sets both database and extra user roles in the relation
+        # if the roles are provided. Otherwise, sets only the database.
+        if self.extra_user_roles:
+            self._update_relation_data(
+                event.relation.id,
+                {
+                    "database": self.database,
+                    "extra-user-roles": self.extra_user_roles,
+                },
+            )
+        else:
+            self._update_relation_data(event.relation.id, {"database": self.database})
+
+    def _on_relation_changed_event(self, event: RelationChangedEvent) -> None:
+        """Event emitted when the database relation has changed."""
+        # Check which data has changed to emit custom events.
+        diff = self._diff(event)
+
+        # Check if the database is created
+        # (the database charm shared the credentials).
+        if "username" in diff.added and "password" in diff.added:
+            # Emit the default event (the one without an alias).
+            logger.info("database created at %s", datetime.now())
+            self.on.database_created.emit(event.relation, app=event.app, unit=event.unit)
+
+            # Emit the aliased event (if any).
+            self._emit_aliased_event(event, "database_created")
+
+            # To avoid unnecessary application restarts do not trigger
+            # "endpoints_changed" event if "database_created" is triggered.
+            return
+
+        # Emit an endpoints changed event if the database
+        # added or changed this info in the relation databag.
+        if "endpoints" in diff.added or "endpoints" in diff.changed:
+            # Emit the default event (the one without an alias).
+            logger.info("endpoints changed on %s", datetime.now())
+            self.on.endpoints_changed.emit(event.relation, app=event.app, unit=event.unit)
+
+            # Emit the aliased event (if any).
+            self._emit_aliased_event(event, "endpoints_changed")
+
+            # To avoid unnecessary application restarts do not trigger
+            # "read_only_endpoints_changed" event if "endpoints_changed" is triggered.
+            return
+
+        # Emit a read only endpoints changed event if the database
+        # added or changed this info in the relation databag.
+        if "read-only-endpoints" in diff.added or "read-only-endpoints" in diff.changed:
+            # Emit the default event (the one without an alias).
+            logger.info("read-only-endpoints changed on %s", datetime.now())
+            self.on.read_only_endpoints_changed.emit(
+                event.relation, app=event.app, unit=event.unit
+            )
+
+            # Emit the aliased event (if any).
+            self._emit_aliased_event(event, "read_only_endpoints_changed")
+
+
+# Kafka related events
+
+
+class KafkaProvidesEvent(RelationEvent):
+    """Base class for Kafka events."""
+
+    @property
+    def topic(self) -> Optional[str]:
+        """Returns the topic that was requested."""
+        return self.relation.data[self.relation.app].get("topic")
+
+
+class TopicRequestedEvent(KafkaProvidesEvent, ExtraRoleEvent):
+    """Event emitted when a new topic is requested for use on this relation."""
+
+
+class KafkaProvidesEvents(CharmEvents):
+    """Kafka events.
+
+    This class defines the events that Kafka can emit.
+    """
+
+    topic_requested = EventSource(TopicRequestedEvent)
+
+
+class KafkaRequiresEvent(RelationEvent):
+    """Base class for Kafka events."""
+
+    @property
+    def bootstrap_server(self) -> Optional[str]:
+        """Returns a comma-separated list of broker URIs."""
+        return self.relation.data[self.relation.app].get("endpoints")
+
+    @property
+    def consumer_group_prefix(self) -> Optional[str]:
+        """Returns the consumer-group-prefix."""
+        return self.relation.data[self.relation.app].get("consumer-group-prefix")
+
+    @property
+    def zookeeper_uris(self) -> Optional[str]:
+        """Returns a comma-separated list of ZooKeeper URIs."""
+        return self.relation.data[self.relation.app].get("zookeeper-uris")
+
+
+class TopicCreatedEvent(AuthenticationEvent, KafkaRequiresEvent):
+    """Event emitted when a new topic is created for use on this relation."""
+
+
+class BootstrapServerChangedEvent(AuthenticationEvent, KafkaRequiresEvent):
+    """Event emitted when the bootstrap server is changed."""
+
+
+class KafkaRequiresEvents(CharmEvents):
+    """Kafka events.
+
+    This class defines the events that Kafka can emit.
+    """
+
+    topic_created = EventSource(TopicCreatedEvent)
+    bootstrap_server_changed = EventSource(BootstrapServerChangedEvent)
+
+
+# Kafka Provides and Requires
+
+
+class KafkaProvides(DataProvides):
+    """Provider-side of the Kafka relation."""
+
+    on = KafkaProvidesEvents()
+
+    def __init__(self, charm: CharmBase, relation_name: str) -> None:
+        super().__init__(charm, relation_name)
+
+    def _on_relation_changed(self, event: RelationChangedEvent) -> None:
+        """Event emitted when the relation has changed."""
+        # Only the leader should handle this event.
+        if not self.local_unit.is_leader():
+            return
+
+        # Check which data has changed to emit custom events.
+        diff = self._diff(event)
+
+        # Emit a topic requested event if the setup key (topic name and optional
+        # extra user roles) was added to the relation databag by the application.
+        if "topic" in diff.added:
+            self.on.topic_requested.emit(event.relation, app=event.app, unit=event.unit)
+
+    def set_bootstrap_server(self, relation_id: int, bootstrap_server: str) -> None:
+        """Set the bootstrap server in the application relation databag.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            bootstrap_server: the bootstrap server address.
+        """
+        self._update_relation_data(relation_id, {"endpoints": bootstrap_server})
+
+    def set_consumer_group_prefix(self, relation_id: int, consumer_group_prefix: str) -> None:
+        """Set the consumer group prefix in the application relation databag.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            consumer_group_prefix: the consumer group prefix string.
+        """
+        self._update_relation_data(relation_id, {"consumer-group-prefix": consumer_group_prefix})
+
+    def set_zookeeper_uris(self, relation_id: int, zookeeper_uris: str) -> None:
+        """Set the ZooKeeper URIs in the application relation databag.
+
+        Args:
+            relation_id: the identifier for a particular relation.
+            zookeeper_uris: comma-separated list of ZooKeeper server URIs.
+        """
+        self._update_relation_data(relation_id, {"zookeeper-uris": zookeeper_uris})
+
+
+class KafkaRequires(DataRequires):
+    """Requires-side of the Kafka relation."""
+
+    on = KafkaRequiresEvents()
+
+    def __init__(self, charm, relation_name: str, topic: str, extra_user_roles: str = None):
+        """Manager of Kafka client relations."""
+        super().__init__(charm, relation_name, extra_user_roles)
+        self.charm = charm
+        self.topic = topic
+
+    def _on_relation_joined_event(self, event: RelationJoinedEvent) -> None:
+        """Event emitted when the application joins the Kafka relation."""
+        # Sets both topic and extra user roles in the relation
+        # if the roles are provided. Otherwise, sets only the topic.
+        self._update_relation_data(
+            event.relation.id,
+            {
+                "topic": self.topic,
+                "extra-user-roles": self.extra_user_roles,
+            }
+            if self.extra_user_roles is not None
+            else {"topic": self.topic},
+        )
+
+    def _on_relation_changed_event(self, event: RelationChangedEvent) -> None:
+        """Event emitted when the Kafka relation has changed."""
+        # Check which data has changed to emit custom events.
+        diff = self._diff(event)
+
+        # Check if the topic is created
+        # (the Kafka charm shared the credentials).
+        if "username" in diff.added and "password" in diff.added:
+            # Emit the default event (the one without an alias).
+            logger.info("topic created at %s", datetime.now())
+            self.on.topic_created.emit(event.relation, app=event.app, unit=event.unit)
+
+            # To avoid unnecessary application restarts do not trigger
+            # "endpoints_changed" event if "topic_created" is triggered.
+            return
+
+        # Emit an endpoints (bootstrap-server) changed event if the Kafka charm
+        # added or changed this info in the relation databag.
+        if "endpoints" in diff.added or "endpoints" in diff.changed:
+            # Emit the default event (the one without an alias).
+            logger.info("endpoints changed on %s", datetime.now())
+            self.on.bootstrap_server_changed.emit(
+                event.relation, app=event.app, unit=event.unit
+            )
+            return
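The relation-changed handlers above all branch on a databag diff (`diff.added`, `diff.changed`): credentials appearing together trigger `database_created`/`topic_created`, while endpoint keys trigger the endpoint events. A minimal standalone sketch of that diff logic, assuming it compares the previously stored databag against the current one (`diff_databags` is illustrative, not the library's actual `_diff` helper):

```python
from collections import namedtuple

Diff = namedtuple("Diff", ["added", "changed", "deleted"])


def diff_databags(old: dict, new: dict) -> Diff:
    """Report key-level changes between two relation databag snapshots."""
    old_keys, new_keys = set(old), set(new)
    # Keys only in the new databag were added; keys only in the old were deleted.
    added = new_keys - old_keys
    deleted = old_keys - new_keys
    # Keys present in both but whose values differ have changed.
    changed = {k for k in old_keys & new_keys if old[k] != new[k]}
    return Diff(added, changed, deleted)


# Credentials appearing together is the cue for a "created" event.
d = diff_databags({"database": "osm"}, {"database": "osm", "username": "u", "password": "p"})
print(sorted(d.added))  # ['password', 'username']
```

With a diff like this, each handler can return early after the highest-priority event (as the code above does) so a single relation change never triggers redundant restarts.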
index 4336ded..a94036a 100644 (file)
@@ -58,7 +58,7 @@ requires:
     interface: kafka
     limit: 1
   mongodb:
-    interface: mongodb
+    interface: mongodb_client
     limit: 1
 
 provides:
index d0d4a5b..16cf0f4 100644 (file)
@@ -50,7 +50,3 @@ ignore = ["W503", "E501", "D107"]
 # D100, D101, D102, D103: Ignore missing docstrings in tests
 per-file-ignores = ["tests/*:D100,D101,D102,D103,D104"]
 docstring-convention = "google"
-# Check for properly formatted copyright header in each file
-copyright-check = "True"
-copyright-author = "Canonical Ltd."
-copyright-regexp = "Copyright\\s\\d{4}([-,]\\d{4})*\\s+%(author)s"
index cb303a3..398d4ad 100644 (file)
@@ -17,7 +17,7 @@
 #
 # To get in touch with the maintainers, please contact:
 # osm-charmers@lists.launchpad.net
-ops >= 1.2.0
+ops < 2.2
 lightkube
 lightkube-models
 # git+https://github.com/charmed-osm/config-validator/
index e112d4c..84c0ee3 100755 (executable)
@@ -31,6 +31,7 @@ import base64
 import logging
 from typing import Any, Dict
 
+from charms.data_platform_libs.v0.data_interfaces import DatabaseRequires
 from charms.kafka_k8s.v0.kafka import KafkaEvents, KafkaRequires
 from charms.observability_libs.v1.kubernetes_service_patch import KubernetesServicePatch
 from charms.osm_libs.v0.utils import (
@@ -47,8 +48,6 @@ from ops.framework import StoredState
 from ops.main import main
 from ops.model import ActiveStatus, Container
 
-from legacy_interfaces import MongoClient
-
 ro_host_paths = {
     "NG-RO": "/usr/lib/python3/dist-packages/osm_ng_ro",
     "RO-plugin": "/usr/lib/python3/dist-packages/osm_ro_plugin",
@@ -101,7 +100,7 @@ class OsmRoCharm(CharmBase):
         super().__init__(*args)
         self._stored.set_default(certificates=set())
         self.kafka = KafkaRequires(self)
-        self.mongodb_client = MongoClient(self, "mongodb")
+        self.mongodb_client = DatabaseRequires(self, "mongodb", database_name="osm")
         self._observe_charm_events()
         self._patch_k8s_service()
         self.ro = RoProvides(self)
@@ -197,7 +196,7 @@ class OsmRoCharm(CharmBase):
             # Relation events
             self.on.kafka_available: self._on_config_changed,
             self.on["kafka"].relation_broken: self._on_required_relation_broken,
-            self.on["mongodb"].relation_changed: self._on_config_changed,
+            self.mongodb_client.on.database_created: self._on_config_changed,
             self.on["mongodb"].relation_broken: self._on_required_relation_broken,
             self.on.ro_relation_joined: self._update_ro_relation,
             # Action events
@@ -207,6 +206,12 @@ class OsmRoCharm(CharmBase):
         for event, handler in event_handler_mapping.items():
             self.framework.observe(event, handler)
 
+    def _is_database_available(self) -> bool:
+        try:
+            return self.mongodb_client.is_resource_created()
+        except KeyError:
+            return False
+
     def _validate_config(self) -> None:
         """Validate charm configuration.
 
@@ -241,7 +246,7 @@ class OsmRoCharm(CharmBase):
 
         if not self.kafka.host or not self.kafka.port:
             missing_relations.append("kafka")
-        if self.mongodb_client.is_missing_data_in_unit():
+        if not self._is_database_available():
             missing_relations.append("mongodb")
 
         if missing_relations:
@@ -310,13 +315,13 @@ class OsmRoCharm(CharmBase):
                         "OSMRO_MESSAGE_DRIVER": "kafka",
                         # Database configuration
                         "OSMRO_DATABASE_DRIVER": "mongo",
-                        "OSMRO_DATABASE_URI": self.mongodb_client.connection_string,
+                        "OSMRO_DATABASE_URI": self._get_mongodb_uri(),
                         "OSMRO_DATABASE_COMMONKEY": self.config["database-commonkey"],
                         # Storage configuration
                         "OSMRO_STORAGE_DRIVER": "mongo",
                         "OSMRO_STORAGE_PATH": "/app/storage",
                         "OSMRO_STORAGE_COLLECTION": "files",
-                        "OSMRO_STORAGE_URI": self.mongodb_client.connection_string,
+                        "OSMRO_STORAGE_URI": self._get_mongodb_uri(),
                         "OSMRO_PERIOD_REFRESH_ACTIVE": self.config.get("period_refresh_active")
                         or 60,
                     },
@@ -324,6 +329,9 @@ class OsmRoCharm(CharmBase):
             },
         }
 
+    def _get_mongodb_uri(self):
+        return list(self.mongodb_client.fetch_relation_data().values())[0]["uris"]
+
 
 if __name__ == "__main__":  # pragma: no cover
     main(OsmRoCharm)
index c39c47a..38dc40f 100644 (file)
@@ -51,7 +51,7 @@ async def test_ro_is_deployed(ops_test: OpsTest):
         ops_test.model.deploy(charm, resources=resources, application_name=RO_APP),
         ops_test.model.deploy(ZOOKEEPER_CHARM, application_name=ZOOKEEPER_APP, channel="stable"),
         ops_test.model.deploy(KAFKA_CHARM, application_name=KAFKA_APP, channel="stable"),
-        ops_test.model.deploy(MONGO_DB_CHARM, application_name=MONGO_DB_APP, channel="stable"),
+        ops_test.model.deploy(MONGO_DB_CHARM, application_name=MONGO_DB_APP, channel="edge"),
     )
 
     async with ops_test.fast_forward():
index 05206d0..d0353ab 100644 (file)
@@ -37,6 +37,7 @@ def harness(mocker: MockerFixture):
     mocker.patch("charm.KubernetesServicePatch", lambda x, y: None)
     harness = Harness(OsmRoCharm)
     harness.begin()
+    harness.container_pebble_ready(container_name)
     yield harness
     harness.cleanup()
 
@@ -88,7 +89,9 @@ def _add_relations(harness: Harness):
     relation_id = harness.add_relation("mongodb", "mongodb")
     harness.add_relation_unit(relation_id, "mongodb/0")
     harness.update_relation_data(
-        relation_id, "mongodb/0", {"connection_string": "mongodb://:1234"}
+        relation_id,
+        "mongodb",
+        {"uris": "mongodb://:1234", "username": "user", "password": "password"},
     )
     relation_ids.append(relation_id)
     # Add kafka relation
index 0083afe..c6cc629 100644 (file)
@@ -30,6 +30,7 @@ lib_path = {toxinidir}/lib/charms/osm_ro
 all_path = {[vars]src_path} {[vars]tst_path}
 
 [testenv]
+basepython = python3.8
 setenv =
   PYTHONPATH = {toxinidir}:{toxinidir}/lib:{[vars]src_path}
   PYTHONBREAKPOINT=ipdb.set_trace
@@ -54,7 +55,6 @@ deps =
     black
     flake8==4.0.1
     flake8-docstrings
-    flake8-copyright
     flake8-builtins
     pyproject-flake8
     pep8-naming
@@ -63,7 +63,7 @@ deps =
 commands =
     # uncomment the following line if this charm owns a lib
     codespell {[vars]lib_path} --ignore-words-list=Ro,RO,ro
-    codespell {toxinidir}/. --skip {toxinidir}/.git --skip {toxinidir}/.tox \
+    codespell {toxinidir} --skip {toxinidir}/.git --skip {toxinidir}/.tox \
       --skip {toxinidir}/build --skip {toxinidir}/lib --skip {toxinidir}/venv \
       --skip {toxinidir}/.mypy_cache --skip {toxinidir}/icon.svg --ignore-words-list=Ro,RO,ro
     # pflake8 wrapper supports config from pyproject.toml
@@ -88,7 +88,7 @@ commands =
 description = Run integration tests
 deps =
     pytest
-    juju
+    juju<3
     pytest-operator
     -r{toxinidir}/requirements.txt
 commands =
index df3da94..efc6d74 100644 (file)
@@ -235,12 +235,14 @@ wait
 @dataclass
 class SubModule:
     """Represent RO Submodules."""
+
     sub_module_path: str
     container_path: str
 
 
 class HostPath:
     """Represents a hostpath."""
+
     def __init__(self, config: str, container_path: str, submodules: dict = None) -> None:
         mount_path_items = config.split("-")
         mount_path_items.reverse()
@@ -257,6 +259,7 @@ class HostPath:
             self.container_path = container_path
             self.module_name = container_path.split("/")[-1]
 
+
 class DebugMode(Object):
     """Class to handle the debug-mode."""
 
@@ -432,7 +435,9 @@ class DebugMode(Object):
             logger.debug(f"adding symlink for {hostpath.config}")
             if len(hostpath.sub_module_dict) > 0:
                 for sub_module in hostpath.sub_module_dict.keys():
-                    self.container.exec(["rm", "-rf", hostpath.sub_module_dict[sub_module].container_path]).wait_output()
+                    self.container.exec(
+                        ["rm", "-rf", hostpath.sub_module_dict[sub_module].container_path]
+                    ).wait_output()
                     self.container.exec(
                         [
                             "ln",
@@ -506,7 +511,6 @@ class DebugMode(Object):
     def _delete_hostpath_from_statefulset(self, hostpath: HostPath, statefulset: StatefulSet):
         hostpath_unmounted = False
         for volume in statefulset.spec.template.spec.volumes:
-
             if hostpath.config != volume.name:
                 continue
 
index 07da477..74a728c 100755 (executable)
@@ -50,7 +50,7 @@ from ops.model import ActiveStatus, Container
 from legacy_interfaces import MysqlClient
 
 logger = logging.getLogger(__name__)
-SERVICE_PORT=7233
+SERVICE_PORT = 7233
 
 
 class OsmTemporalCharm(CharmBase):
@@ -179,7 +179,9 @@ class OsmTemporalCharm(CharmBase):
         """Handler for the temporal-relation-joined event."""
         logger.info(f"isLeader? {self.unit.is_leader()}")
         if self.unit.is_leader():
-            self.temporal.set_host_info(self.app.name, SERVICE_PORT, event.relation if event else None)
+            self.temporal.set_host_info(
+                self.app.name, SERVICE_PORT, event.relation if event else None
+            )
             logger.info(f"temporal host info set to {self.app.name} : {SERVICE_PORT}")
 
     def _patch_k8s_service(self) -> None:
@@ -206,7 +208,9 @@ class OsmTemporalCharm(CharmBase):
                     "startup": "enabled",
                     "user": "root",
                     "group": "root",
-                    "ports": [7233,],
+                    "ports": [
+                        7233,
+                    ],
                     "environment": {
                         "DB": "mysql",
                         "DB_PORT": self.db_client.port,
index 80a4648..2c9273b 100644 (file)
@@ -25,7 +25,6 @@ import ops
 
 
 class BaseRelationClient(ops.framework.Object):
-
     def __init__(
         self,
         charm: ops.charm.CharmBase,
index f2a0a23..ee871fe 100644 (file)
@@ -43,9 +43,7 @@ def harness(mocker: MockerFixture):
 def test_missing_relations(harness: Harness):
     harness.charm.on.config_changed.emit()
     assert type(harness.charm.unit.status) == BlockedStatus
-    assert all(
-        relation in harness.charm.unit.status.message for relation in ["mysql"]
-    )
+    assert all(relation in harness.charm.unit.status.message for relation in ["mysql"])
 
 
 def test_ready(harness: Harness):
diff --git a/installers/charm/osm-update-db-operator/.gitignore b/installers/charm/osm-update-db-operator/.gitignore
new file mode 100644 (file)
index 0000000..c250157
--- /dev/null
@@ -0,0 +1,23 @@
+# Copyright 2022 Canonical Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+venv/
+build/
+*.charm
+.coverage
+coverage.xml
+__pycache__/
+*.py[cod]
+.vscode
+.tox
diff --git a/installers/charm/osm-update-db-operator/.jujuignore b/installers/charm/osm-update-db-operator/.jujuignore
new file mode 100644 (file)
index 0000000..ddb544e
--- /dev/null
@@ -0,0 +1,17 @@
+# Copyright 2022 Canonical Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+/venv
+*.py[cod]
+*.charm
diff --git a/installers/charm/osm-update-db-operator/CONTRIBUTING.md b/installers/charm/osm-update-db-operator/CONTRIBUTING.md
new file mode 100644 (file)
index 0000000..4d70671
--- /dev/null
@@ -0,0 +1,74 @@
+<!-- Copyright 2022 Canonical Ltd.
+
+Licensed under the Apache License, Version 2.0 (the "License"); you may
+not use this file except in compliance with the License. You may obtain
+a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+License for the specific language governing permissions and limitations
+under the License.
+-->
+# Contributing
+
+## Overview
+
+This document explains the processes and practices recommended for contributing enhancements to
+the Update DB charm.
+
+- Generally, before developing enhancements to this charm, you should consider [opening an issue
+  ](https://github.com/gcalvinos/update-db-operator/issues) explaining your use case.
+- If you would like to chat with us about your use-cases or proposed implementation, you can reach
+  us at [Canonical Mattermost public channel](https://chat.charmhub.io/charmhub/channels/charm-dev)
+  or [Discourse](https://discourse.charmhub.io/). The primary author of this charm is available on
+  the Mattermost channel as `@davigar15`.
+- Familiarising yourself with the [Charmed Operator Framework](https://juju.is/docs/sdk) library
+  will help you a lot when working on new features or bug fixes.
+- All enhancements require review before being merged. Code review typically examines
+  - code quality
+  - test coverage
+  - user experience for Juju administrators of this charm.
+- Please help us keep branches easy to review by rebasing your pull request branch onto the
+  `main` branch. This avoids merge commits and keeps the Git commit history linear.
+
+## Developing
+
+You can use the environments created by `tox` for development:
+
+```shell
+tox --notest -e unit
+source .tox/unit/bin/activate
+```
+
+### Testing
+
+```shell
+tox -e fmt           # update your code according to linting rules
+tox -e lint          # code style
+tox -e unit          # unit tests
+# tox -e integration   # integration tests
+tox                  # runs 'lint' and 'unit' environments
+```
+
+## Build charm
+
+Build the charm in this git repository using:
+
+```shell
+charmcraft pack
+```
+
+### Deploy
+
+```bash
+# Create a model
+juju add-model test-update-db
+# Enable DEBUG logging
+juju model-config logging-config="<root>=INFO;unit=DEBUG"
+# Deploy the charm
+juju deploy ./update-db_ubuntu-20.04-amd64.charm \
+  --resource update-db-image=ubuntu:latest
+```
diff --git a/installers/charm/osm-update-db-operator/LICENSE b/installers/charm/osm-update-db-operator/LICENSE
new file mode 100644 (file)
index 0000000..d645695
--- /dev/null
@@ -0,0 +1,202 @@
+
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   APPENDIX: How to apply the Apache License to your work.
+
+      To apply the Apache License to your work, attach the following
+      boilerplate notice, with the fields enclosed by brackets "[]"
+      replaced with your own identifying information. (Don't include
+      the brackets!)  The text should be enclosed in the appropriate
+      comment syntax for the file format. We also recommend that a
+      file or class name and description of purpose be included on the
+      same "printed page" as the copyright notice for easier
+      identification within third-party archives.
+
+   Copyright [yyyy] [name of copyright owner]
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
diff --git a/installers/charm/osm-update-db-operator/README.md b/installers/charm/osm-update-db-operator/README.md
new file mode 100644 (file)
index 0000000..2ee8f6e
--- /dev/null
@@ -0,0 +1,80 @@
+<!-- Copyright 2022 Canonical Ltd.
+
+Licensed under the Apache License, Version 2.0 (the "License"); you may
+not use this file except in compliance with the License. You may obtain
+a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+License for the specific language governing permissions and limitations
+under the License.
+-->
+
+# OSM Update DB Operator
+
+[![code style](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black/tree/main)
+
+## Description
+
+Charm used to update the OSM databases during an OSM upgrade process. To use it, you should have a running instance of OSM that you want to upgrade.
+
+## Usage
+
+### Deploy the charm (locally)
+
+```shell
+juju add-model update-db
+juju deploy osm-update-db-operator --series focal
+```
+
+Set MongoDB and MySQL URIs:
+
+```shell
+juju config osm-update-db-operator mysql-uri=<mysql_uri>
+juju config osm-update-db-operator mongodb-uri=<mongodb_uri>
+```
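+
+For example, with placeholder endpoints (the hosts, ports, and credentials below are illustrative only, not defaults):
+
+```shell
+juju config osm-update-db-operator mongodb-uri=mongodb://10.0.0.1:27017/
+juju config osm-update-db-operator mysql-uri=mysql://root:secret@10.0.0.2:3306/osm
+```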
+
+### Updating the databases
+
+To update both databases, run the following command:
+
+```shell
+juju run-action osm-update-db-operator/0 update-db current-version=<Number_of_current_version> target-version=<Number_of_target_version>
+# Example:
+juju run-action osm-update-db-operator/0 update-db current-version=9 target-version=10
+```
+
+If you only want to update MongoDB, use the `mongodb-only=True` flag:
+
+```shell
+juju run-action osm-update-db-operator/0 update-db current-version=9 target-version=10 mongodb-only=True
+```
+
+If you only want to update the MySQL database, use the `mysql-only=True` flag:
+
+```shell
+juju run-action osm-update-db-operator/0 update-db current-version=9 target-version=10 mysql-only=True
+```
+
+You can check whether the database update completed successfully by inspecting the result of the action:
+
+```shell
+juju show-action-output <Number_of_the_action>
+```
+
+### Applying bug fixes
+
+Updates the database to apply the changes needed to fix a bug. You must specify the bug number. Example:
+
+```shell
+juju run-action osm-update-db-operator/0 apply-patch bug-number=1837
+```
+
+## Contributing
+
+Please see the [Juju SDK docs](https://juju.is/docs/sdk) for guidelines
+on enhancements to this charm following best practice guidelines, and
+`CONTRIBUTING.md` for developer guidance.
diff --git a/installers/charm/osm-update-db-operator/actions.yaml b/installers/charm/osm-update-db-operator/actions.yaml
new file mode 100644 (file)
index 0000000..aba1ee3
--- /dev/null
@@ -0,0 +1,42 @@
+# Copyright 2022 Canonical Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+update-db:
+  description: |
+    Updates the MongoDB and MySQL databases with the new data needed for the target OSM version
+  params:
+    current-version:
+      type: integer
+      description: "Current version of Charmed OSM - Example: 9"
+    target-version:
+      type: integer
+      description: "Final version of OSM after the update - Example: 10"
+    mysql-only:
+      type: boolean
+      description: "If True, the update is only applied to the MySQL database"
+    mongodb-only:
+      type: boolean
+      description: "If True, the update is only applied to the MongoDB database"
+  required:
+    - current-version
+    - target-version
+apply-patch:
+  description: |
+    Updates the database to apply the changes needed to fix a bug
+  params:
+    bug-number:
+      type: integer
+      description: "The number of the bug that needs to be fixed"
+  required:
+    - bug-number
diff --git a/installers/charm/osm-update-db-operator/charmcraft.yaml b/installers/charm/osm-update-db-operator/charmcraft.yaml
new file mode 100644 (file)
index 0000000..31c233b
--- /dev/null
@@ -0,0 +1,26 @@
+# Copyright 2022 Canonical Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+type: "charm"
+bases:
+  - build-on:
+      - name: "ubuntu"
+        channel: "20.04"
+    run-on:
+      - name: "ubuntu"
+        channel: "20.04"
+parts:
+  charm:
+    build-packages:
+      - git
diff --git a/installers/charm/osm-update-db-operator/config.yaml b/installers/charm/osm-update-db-operator/config.yaml
new file mode 100644 (file)
index 0000000..3b7190b
--- /dev/null
@@ -0,0 +1,29 @@
+# Copyright 2022 Canonical Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+options:
+  log-level:
+    description: "Log Level"
+    type: string
+    default: "INFO"
+  mongodb-uri:
+    type: string
+    description: |
+      MongoDB URI (external database)
+      mongodb://<mongo_host>:<mongo_port>/
+  mysql-uri:
+    type: string
+    description: |
+      MySQL URI with the following format:
+        mysql://<user>:<password>@<mysql_host>:<mysql_port>/<database>
diff --git a/installers/charm/osm-update-db-operator/metadata.yaml b/installers/charm/osm-update-db-operator/metadata.yaml
new file mode 100644 (file)
index 0000000..b058591
--- /dev/null
@@ -0,0 +1,19 @@
+# Copyright 2022 Canonical Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+name: osm-update-db-operator
+description: |
+  Charm to update the OSM databases
+summary: |
+  Charm to update the OSM databases
diff --git a/installers/charm/osm-update-db-operator/pyproject.toml b/installers/charm/osm-update-db-operator/pyproject.toml
new file mode 100644 (file)
index 0000000..3fae174
--- /dev/null
@@ -0,0 +1,53 @@
+# Copyright 2022 Canonical Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+# Testing tools configuration
+[tool.coverage.run]
+branch = true
+
+[tool.coverage.report]
+show_missing = true
+
+[tool.pytest.ini_options]
+minversion = "6.0"
+log_cli_level = "INFO"
+
+# Formatting tools configuration
+[tool.black]
+line-length = 99
+target-version = ["py38"]
+
+[tool.isort]
+profile = "black"
+
+# Linting tools configuration
+[tool.flake8]
+max-line-length = 99
+max-doc-length = 99
+max-complexity = 10
+exclude = [".git", "__pycache__", ".tox", "build", "dist", "*.egg_info", "venv"]
+select = ["E", "W", "F", "C", "N", "R", "D", "H"]
+# Ignore W503, E501 because using black creates errors with this
+# Ignore D107 Missing docstring in __init__
+ignore = ["W503", "E501", "D107"]
+# D100, D101, D102, D103: Ignore missing docstrings in tests
+per-file-ignores = ["tests/*:D100,D101,D102,D103,D104"]
+docstring-convention = "google"
+# Check for properly formatted copyright header in each file
+copyright-check = "True"
+copyright-author = "Canonical Ltd."
+copyright-regexp = "Copyright\\s\\d{4}([-,]\\d{4})*\\s+%(author)s"
+
+[tool.bandit]
+tests = ["B201", "B301"]
diff --git a/installers/charm/osm-update-db-operator/requirements.txt b/installers/charm/osm-update-db-operator/requirements.txt
new file mode 100644 (file)
index 0000000..b488dba
--- /dev/null
@@ -0,0 +1,16 @@
+# Copyright 2022 Canonical Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+ops < 2.2
+pymongo == 3.12.3
diff --git a/installers/charm/osm-update-db-operator/src/charm.py b/installers/charm/osm-update-db-operator/src/charm.py
new file mode 100755 (executable)
index 0000000..32db2f7
--- /dev/null
@@ -0,0 +1,119 @@
+#!/usr/bin/env python3
+# Copyright 2022 Canonical Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+"""Update DB charm module."""
+
+import logging
+
+from ops.charm import CharmBase
+from ops.framework import StoredState
+from ops.main import main
+from ops.model import ActiveStatus, BlockedStatus
+
+from db_upgrade import MongoUpgrade, MysqlUpgrade
+
+logger = logging.getLogger(__name__)
+
+
+class UpgradeDBCharm(CharmBase):
+    """Upgrade DB Charm operator."""
+
+    _stored = StoredState()
+
+    def __init__(self, *args):
+        super().__init__(*args)
+
+        # Observe events
+        event_observe_mapping = {
+            self.on.update_db_action: self._on_update_db_action,
+            self.on.apply_patch_action: self._on_apply_patch_action,
+            self.on.config_changed: self._on_config_changed,
+        }
+        for event, observer in event_observe_mapping.items():
+            self.framework.observe(event, observer)
+
+    @property
+    def mongo(self):
+        """Create MongoUpgrade object if the configuration has been set."""
+        mongo_uri = self.config.get("mongodb-uri")
+        return MongoUpgrade(mongo_uri) if mongo_uri else None
+
+    @property
+    def mysql(self):
+        """Create MysqlUpgrade object if the configuration has been set."""
+        mysql_uri = self.config.get("mysql-uri")
+        return MysqlUpgrade(mysql_uri) if mysql_uri else None
+
+    def _on_config_changed(self, _):
+        mongo_uri = self.config.get("mongodb-uri")
+        mysql_uri = self.config.get("mysql-uri")
+        if not mongo_uri and not mysql_uri:
+            self.unit.status = BlockedStatus("mongodb-uri and/or mysql-uri must be set")
+            return
+        self.unit.status = ActiveStatus()
+
+    def _on_update_db_action(self, event):
+        """Handle the update-db action."""
+        current_version = str(event.params["current-version"])
+        target_version = str(event.params["target-version"])
+        mysql_only = event.params.get("mysql-only")
+        mongodb_only = event.params.get("mongodb-only")
+        try:
+            results = {}
+            if mysql_only and mongodb_only:
+                raise Exception("cannot set both mysql-only and mongodb-only options to True")
+            if mysql_only:
+                self._upgrade_mysql(current_version, target_version)
+                results["mysql"] = "Upgraded successfully"
+            elif mongodb_only:
+                self._upgrade_mongodb(current_version, target_version)
+                results["mongodb"] = "Upgraded successfully"
+            else:
+                self._upgrade_mysql(current_version, target_version)
+                results["mysql"] = "Upgraded successfully"
+                self._upgrade_mongodb(current_version, target_version)
+                results["mongodb"] = "Upgraded successfully"
+            event.set_results(results)
+        except Exception as e:
+            event.fail(f"Failed DB Upgrade: {e}")
+
+    def _upgrade_mysql(self, current_version, target_version):
+        logger.debug("Upgrading mysql")
+        if self.mysql:
+            self.mysql.upgrade(current_version, target_version)
+        else:
+            raise Exception("mysql-uri not set")
+
+    def _upgrade_mongodb(self, current_version, target_version):
+        logger.debug("Upgrading mongodb")
+        if self.mongo:
+            self.mongo.upgrade(current_version, target_version)
+        else:
+            raise Exception("mongodb-uri not set")
+
+    def _on_apply_patch_action(self, event):
+        bug_number = event.params["bug-number"]
+        logger.debug("Patching bug number {}".format(str(bug_number)))
+        try:
+            if self.mongo:
+                self.mongo.apply_patch(bug_number)
+            else:
+                raise Exception("mongo-uri not set")
+        except Exception as e:
+            event.fail(f"Failed Patch Application: {e}")
+
+
+if __name__ == "__main__":  # pragma: no cover
+    main(UpgradeDBCharm, use_juju_for_storage=True)
diff --git a/installers/charm/osm-update-db-operator/src/db_upgrade.py b/installers/charm/osm-update-db-operator/src/db_upgrade.py
new file mode 100644 (file)
index 0000000..05cc0a0
--- /dev/null
@@ -0,0 +1,275 @@
+# Copyright 2022 Canonical Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+"""Upgrade DB charm module."""
+
+import json
+import logging
+
+from pymongo import MongoClient
+
+logger = logging.getLogger(__name__)
+
+
+class MongoUpgrade1012:
+    """Upgrade MongoDB Database from OSM v10 to v12."""
+
+    @staticmethod
+    def _remove_namespace_from_k8s(nsrs, nsr):
+        namespace = "kube-system:"
+        if nsr["_admin"].get("deployed"):
+            k8s_list = []
+            for k8s in nsr["_admin"]["deployed"].get("K8s", []):
+                if k8s.get("k8scluster-uuid"):
+                    k8s["k8scluster-uuid"] = k8s["k8scluster-uuid"].replace(namespace, "", 1)
+                k8s_list.append(k8s)
+            myquery = {"_id": nsr["_id"]}
+            nsrs.update_one(myquery, {"$set": {"_admin.deployed.K8s": k8s_list}})
+
+    @staticmethod
+    def _update_nsr(osm_db):
+        """Update nsr.
+
+        Add vim_message = None if it does not exist.
+        Remove "namespace:" from k8scluster-uuid.
+        """
+        if "nsrs" not in osm_db.list_collection_names():
+            return
+        logger.info("Entering in MongoUpgrade1012._update_nsr function")
+
+        nsrs = osm_db["nsrs"]
+        for nsr in nsrs.find():
+            logger.debug(f"Updating {nsr['_id']} nsr")
+            for key, values in nsr.items():
+                if isinstance(values, list):
+                    item_list = []
+                    for value in values:
+                        if isinstance(value, dict) and value.get("vim_info"):
+                            index = list(value["vim_info"].keys())[0]
+                            if not value["vim_info"][index].get("vim_message"):
+                                value["vim_info"][index]["vim_message"] = None
+                        # Append every element, not only those carrying vim_info,
+                        # so the update does not drop other entries from the list.
+                        item_list.append(value)
+                    myquery = {"_id": nsr["_id"]}
+                    nsrs.update_one(myquery, {"$set": {key: item_list}})
+            MongoUpgrade1012._remove_namespace_from_k8s(nsrs, nsr)
+
+    @staticmethod
+    def _update_vnfr(osm_db):
+        """Update vnfr.
+
+        Add vim_message to vdur if it does not exist.
+        Copy content of interfaces into interfaces_backup.
+        """
+        if "vnfrs" not in osm_db.list_collection_names():
+            return
+        logger.info("Entering in MongoUpgrade1012._update_vnfr function")
+        mycol = osm_db["vnfrs"]
+        for vnfr in mycol.find():
+            logger.debug(f"Updating {vnfr['_id']} vnfr")
+            vdur_list = []
+            for vdur in vnfr["vdur"]:
+                if vdur.get("vim_info"):
+                    index = list(vdur["vim_info"].keys())[0]
+                    if not vdur["vim_info"][index].get("vim_message"):
+                        vdur["vim_info"][index]["vim_message"] = None
+                    if vdur["vim_info"][index].get(
+                        "interfaces", "Not found"
+                    ) != "Not found" and not vdur["vim_info"][index].get("interfaces_backup"):
+                        vdur["vim_info"][index]["interfaces_backup"] = vdur["vim_info"][index][
+                            "interfaces"
+                        ]
+                vdur_list.append(vdur)
+            myquery = {"_id": vnfr["_id"]}
+            mycol.update_one(myquery, {"$set": {"vdur": vdur_list}})
+
+    @staticmethod
+    def _update_k8scluster(osm_db):
+        """Remove namespace from helm-chart and helm-chart-v3 id."""
+        if "k8sclusters" not in osm_db.list_collection_names():
+            return
+        logger.info("Entering in MongoUpgrade1012._update_k8scluster function")
+        namespace = "kube-system:"
+        k8sclusters = osm_db["k8sclusters"]
+        for k8scluster in k8sclusters.find():
+            if k8scluster["_admin"].get("helm-chart") and k8scluster["_admin"]["helm-chart"].get(
+                "id"
+            ):
+                if k8scluster["_admin"]["helm-chart"]["id"].startswith(namespace):
+                    k8scluster["_admin"]["helm-chart"]["id"] = k8scluster["_admin"]["helm-chart"][
+                        "id"
+                    ].replace(namespace, "", 1)
+            if k8scluster["_admin"].get("helm-chart-v3") and k8scluster["_admin"][
+                "helm-chart-v3"
+            ].get("id"):
+                if k8scluster["_admin"]["helm-chart-v3"]["id"].startswith(namespace):
+                    k8scluster["_admin"]["helm-chart-v3"]["id"] = k8scluster["_admin"][
+                        "helm-chart-v3"
+                    ]["id"].replace(namespace, "", 1)
+            myquery = {"_id": k8scluster["_id"]}
+            k8sclusters.update_one(myquery, {"$set": k8scluster})
+
+    @staticmethod
+    def upgrade(mongo_uri):
+        """Upgrade nsr, vnfr and k8scluster in DB."""
+        logger.info("Entering in MongoUpgrade1012.upgrade function")
+        myclient = MongoClient(mongo_uri)
+        osm_db = myclient["osm"]
+        MongoUpgrade1012._update_nsr(osm_db)
+        MongoUpgrade1012._update_vnfr(osm_db)
+        MongoUpgrade1012._update_k8scluster(osm_db)
+
+
+class MongoUpgrade910:
+    """Upgrade MongoDB Database from OSM v9 to v10."""
+
+    @staticmethod
+    def upgrade(mongo_uri):
+        """Add parameter alarm status = OK if not found in alarms collection."""
+        myclient = MongoClient(mongo_uri)
+        osm_db = myclient["osm"]
+        collist = osm_db.list_collection_names()
+
+        if "alarms" in collist:
+            mycol = osm_db["alarms"]
+            for x in mycol.find():
+                if not x.get("alarm_status"):
+                    myquery = {"_id": x["_id"]}
+                    mycol.update_one(myquery, {"$set": {"alarm_status": "ok"}})
+
+
+class MongoPatch1837:
+    """Patch Bug 1837 on MongoDB."""
+
+    @staticmethod
+    def _update_nslcmops_params(osm_db):
+        """Updates the nslcmops collection to change the additional params to a string."""
+        logger.info("Entering in MongoPatch1837._update_nslcmops_params function")
+        if "nslcmops" in osm_db.list_collection_names():
+            nslcmops = osm_db["nslcmops"]
+            for nslcmop in nslcmops.find():
+                if nslcmop.get("operationParams"):
+                    if nslcmop["operationParams"].get("additionalParamsForVnf") and isinstance(
+                        nslcmop["operationParams"].get("additionalParamsForVnf"), list
+                    ):
+                        string_param = json.dumps(
+                            nslcmop["operationParams"]["additionalParamsForVnf"]
+                        )
+                        myquery = {"_id": nslcmop["_id"]}
+                        nslcmops.update_one(
+                            myquery,
+                            {
+                                "$set": {
+                                    "operationParams": {"additionalParamsForVnf": string_param}
+                                }
+                            },
+                        )
+                    elif nslcmop["operationParams"].get("primitive_params") and isinstance(
+                        nslcmop["operationParams"].get("primitive_params"), dict
+                    ):
+                        string_param = json.dumps(nslcmop["operationParams"]["primitive_params"])
+                        myquery = {"_id": nslcmop["_id"]}
+                        nslcmops.update_one(
+                            myquery,
+                            {"$set": {"operationParams": {"primitive_params": string_param}}},
+                        )
+
+    @staticmethod
+    def _update_vnfrs_params(osm_db):
+        """Updates the vnfrs collection to change the additional params to a string."""
+        logger.info("Entering in MongoPatch1837._update_vnfrs_params function")
+        if "vnfrs" in osm_db.list_collection_names():
+            mycol = osm_db["vnfrs"]
+            for vnfr in mycol.find():
+                if vnfr.get("kdur"):
+                    kdur_list = []
+                    for kdur in vnfr["kdur"]:
+                        if kdur.get("additionalParams") and not isinstance(
+                            kdur["additionalParams"], str
+                        ):
+                            kdur["additionalParams"] = json.dumps(kdur["additionalParams"])
+                        kdur_list.append(kdur)
+                    myquery = {"_id": vnfr["_id"]}
+                    mycol.update_one(
+                        myquery,
+                        {"$set": {"kdur": kdur_list}},
+                    )
+                    vnfr["kdur"] = kdur_list
+
+    @staticmethod
+    def patch(mongo_uri):
+        """Updates the database to change the additional params from dict to a string."""
+        logger.info("Entering in MongoPatch1837.patch function")
+        myclient = MongoClient(mongo_uri)
+        osm_db = myclient["osm"]
+        MongoPatch1837._update_nslcmops_params(osm_db)
+        MongoPatch1837._update_vnfrs_params(osm_db)
+
+
+MONGODB_UPGRADE_FUNCTIONS = {
+    "9": {"10": [MongoUpgrade910.upgrade]},
+    "10": {"12": [MongoUpgrade1012.upgrade]},
+}
+MYSQL_UPGRADE_FUNCTIONS = {}
+BUG_FIXES = {
+    1837: MongoPatch1837.patch,
+}
+
+
+class MongoUpgrade:
+    """Upgrade MongoDB Database."""
+
+    def __init__(self, mongo_uri):
+        self.mongo_uri = mongo_uri
+
+    def upgrade(self, current, target):
+        """Validates the upgrade path and upgrades the DB."""
+        self._validate_upgrade(current, target)
+        for function in MONGODB_UPGRADE_FUNCTIONS[current][target]:
+            function(self.mongo_uri)
+
+    def _validate_upgrade(self, current, target):
+        """Check whether the chosen upgrade path is supported."""
+        logger.info("Validating the upgrade path")
+        if current not in MONGODB_UPGRADE_FUNCTIONS:
+            raise Exception(f"cannot upgrade from {current} version.")
+        if target not in MONGODB_UPGRADE_FUNCTIONS[current]:
+            raise Exception(f"cannot upgrade from version {current} to {target}.")
+
+    def apply_patch(self, bug_number: int) -> None:
+        """Checks the bug number and applies the corresponding fix to the database."""
+        if bug_number not in BUG_FIXES:
+            raise Exception(f"There is no patch for bug {bug_number}")
+        patch_function = BUG_FIXES[bug_number]
+        patch_function(self.mongo_uri)
+
+
+class MysqlUpgrade:
+    """Upgrade Mysql Database."""
+
+    def __init__(self, mysql_uri):
+        self.mysql_uri = mysql_uri
+
+    def upgrade(self, current, target):
+        """Validates the upgrade path and upgrades the DB."""
+        self._validate_upgrade(current, target)
+        for function in MYSQL_UPGRADE_FUNCTIONS[current][target]:
+            function(self.mysql_uri)
+
+    def _validate_upgrade(self, current, target):
+        """Check whether the chosen upgrade path is supported."""
+        logger.info("Validating the upgrade path")
+        if current not in MYSQL_UPGRADE_FUNCTIONS:
+            raise Exception(f"cannot upgrade from {current} version.")
+        if target not in MYSQL_UPGRADE_FUNCTIONS[current]:
+            raise Exception(f"cannot upgrade from version {current} to {target}.")
diff --git a/installers/charm/osm-update-db-operator/tests/integration/test_charm.py b/installers/charm/osm-update-db-operator/tests/integration/test_charm.py
new file mode 100644 (file)
index 0000000..cc9e0be
--- /dev/null
@@ -0,0 +1,48 @@
+# Copyright 2022 Canonical Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import base64
+import logging
+from pathlib import Path
+
+import pytest
+import yaml
+from pytest_operator.plugin import OpsTest
+
+logger = logging.getLogger(__name__)
+
+METADATA = yaml.safe_load(Path("./metadata.yaml").read_text())
+
+
+@pytest.mark.abort_on_fail
+async def test_build_and_deploy(ops_test: OpsTest):
+    """Build the charm-under-test and deploy it.
+
+    Assert on the unit status before any relations/configurations take place.
+    """
+    await ops_test.model.set_config({"update-status-hook-interval": "10s"})
+    # build and deploy charm from local source folder
+    charm = await ops_test.build_charm(".")
+    resources = {
+        "update-db-image": METADATA["resources"]["update-db-image"]["upstream-source"],
+    }
+    await ops_test.model.deploy(charm, resources=resources, application_name="update-db")
+    await ops_test.model.wait_for_idle(apps=["update-db"], status="active", timeout=1000)
+    assert ops_test.model.applications["update-db"].units[0].workload_status == "active"
+
+    await ops_test.model.set_config({"update-status-hook-interval": "60m"})
+
+
+def base64_encode(phrase: str) -> str:
+    return base64.b64encode(phrase.encode("utf-8")).decode("utf-8")
diff --git a/installers/charm/osm-update-db-operator/tests/unit/test_charm.py b/installers/charm/osm-update-db-operator/tests/unit/test_charm.py
new file mode 100644 (file)
index 0000000..a0f625d
--- /dev/null
@@ -0,0 +1,165 @@
+# Copyright 2022 Canonical Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import unittest
+from unittest.mock import Mock, patch
+
+from ops.model import ActiveStatus, BlockedStatus, MaintenanceStatus
+from ops.testing import Harness
+
+from charm import UpgradeDBCharm
+
+
+class TestCharm(unittest.TestCase):
+    def setUp(self):
+        self.harness = Harness(UpgradeDBCharm)
+        self.addCleanup(self.harness.cleanup)
+        self.harness.begin()
+
+    def test_initial_config(self):
+        self.assertEqual(self.harness.model.unit.status, MaintenanceStatus(""))
+
+    def test_config_changed(self):
+        self.harness.update_config({"mongodb-uri": "foo"})
+        self.assertEqual(self.harness.model.unit.status, ActiveStatus())
+
+    def test_config_changed_blocked(self):
+        self.harness.update_config({"log-level": "DEBUG"})
+        self.assertEqual(
+            self.harness.model.unit.status,
+            BlockedStatus("mongodb-uri and/or mysql-uri must be set"),
+        )
+
+    def test_update_db_fail_only_params(self):
+        action_event = Mock(
+            params={
+                "current-version": 9,
+                "target-version": 10,
+                "mysql-only": True,
+                "mongodb-only": True,
+            }
+        )
+        self.harness.charm._on_update_db_action(action_event)
+        self.assertEqual(
+            action_event.fail.call_args,
+            [("Failed DB Upgrade: cannot set both mysql-only and mongodb-only options to True",)],
+        )
+
+    @patch("charm.MongoUpgrade")
+    @patch("charm.MysqlUpgrade")
+    def test_update_db_mysql(self, mock_mysql_upgrade, mock_mongo_upgrade):
+        self.harness.update_config({"mysql-uri": "foo"})
+        action_event = Mock(
+            params={
+                "current-version": 9,
+                "target-version": 10,
+                "mysql-only": True,
+                "mongodb-only": False,
+            }
+        )
+        self.harness.charm._on_update_db_action(action_event)
+        mock_mysql_upgrade().upgrade.assert_called_once()
+        mock_mongo_upgrade.assert_not_called()
+
+    @patch("charm.MongoUpgrade")
+    @patch("charm.MysqlUpgrade")
+    def test_update_db_mongo(self, mock_mysql_upgrade, mock_mongo_upgrade):
+        self.harness.update_config({"mongodb-uri": "foo"})
+        action_event = Mock(
+            params={
+                "current-version": 7,
+                "target-version": 10,
+                "mysql-only": False,
+                "mongodb-only": True,
+            }
+        )
+        self.harness.charm._on_update_db_action(action_event)
+        mock_mongo_upgrade().upgrade.assert_called_once()
+        mock_mysql_upgrade.assert_not_called()
+
+    @patch("charm.MongoUpgrade")
+    def test_update_db_not_configured_mongo_fail(self, mock_mongo_upgrade):
+        action_event = Mock(
+            params={
+                "current-version": 7,
+                "target-version": 10,
+                "mysql-only": False,
+                "mongodb-only": True,
+            }
+        )
+        self.harness.charm._on_update_db_action(action_event)
+        mock_mongo_upgrade.assert_not_called()
+        self.assertEqual(
+            action_event.fail.call_args,
+            [("Failed DB Upgrade: mongo-uri not set",)],
+        )
+
+    @patch("charm.MysqlUpgrade")
+    def test_update_db_not_configured_mysql_fail(self, mock_mysql_upgrade):
+        action_event = Mock(
+            params={
+                "current-version": 7,
+                "target-version": 10,
+                "mysql-only": True,
+                "mongodb-only": False,
+            }
+        )
+        self.harness.charm._on_update_db_action(action_event)
+        mock_mysql_upgrade.assert_not_called()
+        self.assertEqual(
+            action_event.fail.call_args,
+            [("Failed DB Upgrade: mysql-uri not set",)],
+        )
+
+    @patch("charm.MongoUpgrade")
+    @patch("charm.MysqlUpgrade")
+    def test_update_db_mongodb_and_mysql(self, mock_mysql_upgrade, mock_mongo_upgrade):
+        self.harness.update_config({"mongodb-uri": "foo"})
+        self.harness.update_config({"mysql-uri": "foo"})
+        action_event = Mock(
+            params={
+                "current-version": 7,
+                "target-version": 10,
+                "mysql-only": False,
+                "mongodb-only": False,
+            }
+        )
+        self.harness.charm._on_update_db_action(action_event)
+        mock_mysql_upgrade().upgrade.assert_called_once()
+        mock_mongo_upgrade().upgrade.assert_called_once()
+
+    @patch("charm.MongoUpgrade")
+    def test_apply_patch(self, mock_mongo_upgrade):
+        self.harness.update_config({"mongodb-uri": "foo"})
+        action_event = Mock(
+            params={
+                "bug-number": 57,
+            }
+        )
+        self.harness.charm._on_apply_patch_action(action_event)
+        mock_mongo_upgrade().apply_patch.assert_called_once()
+
+    @patch("charm.MongoUpgrade")
+    def test_apply_patch_fail(self, mock_mongo_upgrade):
+        action_event = Mock(
+            params={
+                "bug-number": 57,
+            }
+        )
+        self.harness.charm._on_apply_patch_action(action_event)
+        mock_mongo_upgrade.assert_not_called()
+        self.assertEqual(
+            action_event.fail.call_args,
+            [("Failed Patch Application: mongo-uri not set",)],
+        )
diff --git a/installers/charm/osm-update-db-operator/tests/unit/test_db_upgrade.py b/installers/charm/osm-update-db-operator/tests/unit/test_db_upgrade.py
new file mode 100644 (file)
index 0000000..50affdd
--- /dev/null
@@ -0,0 +1,413 @@
+# Copyright 2022 Canonical Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import logging
+import unittest
+from unittest.mock import MagicMock, Mock, call, patch
+
+import db_upgrade
+from db_upgrade import (
+    MongoPatch1837,
+    MongoUpgrade,
+    MongoUpgrade910,
+    MongoUpgrade1012,
+    MysqlUpgrade,
+)
+
+logger = logging.getLogger(__name__)
+
+
+class TestUpgradeMongo910(unittest.TestCase):
+    @patch("db_upgrade.MongoClient")
+    def test_upgrade_mongo_9_10(self, mock_mongo_client):
+        mock_db = MagicMock()
+        alarms = Mock()
+        alarms.find.return_value = [{"_id": "1", "alarm_status": "1"}]
+        collection_dict = {"alarms": alarms, "other": {}}
+        mock_db.list_collection_names.return_value = collection_dict
+        mock_db.__getitem__.side_effect = collection_dict.__getitem__
+        mock_mongo_client.return_value = {"osm": mock_db}
+        MongoUpgrade910.upgrade("mongo_uri")
+        alarms.update_one.assert_not_called()
+
+    @patch("db_upgrade.MongoClient")
+    def test_upgrade_mongo_9_10_no_alarms(self, mock_mongo_client):
+        mock_db = Mock()
+        mock_db.__getitem__ = Mock()
+
+        mock_db.list_collection_names.return_value = {"other": {}}
+        mock_db.alarms.return_value = None
+        mock_mongo_client.return_value = {"osm": mock_db}
+        self.assertIsNone(MongoUpgrade910.upgrade("mongo_uri"))
+
+    @patch("db_upgrade.MongoClient")
+    def test_upgrade_mongo_9_10_no_alarm_status(self, mock_mongo_client):
+        mock_db = MagicMock()
+        alarms = Mock()
+        alarms.find.return_value = [{"_id": "1"}]
+        collection_dict = {"alarms": alarms, "other": {}}
+        mock_db.list_collection_names.return_value = collection_dict
+        mock_db.__getitem__.side_effect = collection_dict.__getitem__
+        mock_db.alarms.return_value = alarms
+        mock_mongo_client.return_value = {"osm": mock_db}
+        MongoUpgrade910.upgrade("mongo_uri")
+        alarms.update_one.assert_called_once_with({"_id": "1"}, {"$set": {"alarm_status": "ok"}})
+
+
+class TestUpgradeMongo1012(unittest.TestCase):
+    def setUp(self):
+        self.mock_db = MagicMock()
+        self.nsrs = Mock()
+        self.vnfrs = Mock()
+        self.k8s_clusters = Mock()
+
+    @patch("db_upgrade.MongoClient")
+    def test_update_nsr_empty_nsrs(self, mock_mongo_client):
+        self.nsrs.find.return_value = []
+        collection_list = {"nsrs": self.nsrs}
+        self.mock_db.__getitem__.side_effect = collection_list.__getitem__
+        self.mock_db.list_collection_names.return_value = collection_list
+        mock_mongo_client.return_value = {"osm": self.mock_db}
+        MongoUpgrade1012.upgrade("mongo_uri")
+
+    @patch("db_upgrade.MongoClient")
+    def test_update_nsr_empty_nsr(self, mock_mongo_client):
+        nsr = MagicMock()
+        nsr_values = {"_id": "2", "_admin": {}}
+        nsr.__getitem__.side_effect = nsr_values.__getitem__
+        nsr.items.return_value = []
+        self.nsrs.find.return_value = [nsr]
+        collection_list = {"nsrs": self.nsrs}
+        self.mock_db.__getitem__.side_effect = collection_list.__getitem__
+        self.mock_db.list_collection_names.return_value = collection_list
+        mock_mongo_client.return_value = {"osm": self.mock_db}
+        MongoUpgrade1012.upgrade("mongo_uri")
+
+    @patch("db_upgrade.MongoClient")
+    def test_update_nsr_add_vim_message(self, mock_mongo_client):
+        nsr = MagicMock()
+        vim_info1 = {"vim_info_key1": {}}
+        vim_info2 = {"vim_info_key2": {"vim_message": "Hello"}}
+        nsr_items = {"nsr_item_key": [{"vim_info": vim_info1}, {"vim_info": vim_info2}]}
+        nsr_values = {"_id": "2", "_admin": {}}
+        nsr.__getitem__.side_effect = nsr_values.__getitem__
+        nsr.items.return_value = nsr_items.items()
+        self.nsrs.find.return_value = [nsr]
+        collection_list = {"nsrs": self.nsrs}
+        self.mock_db.__getitem__.side_effect = collection_list.__getitem__
+        self.mock_db.list_collection_names.return_value = collection_list
+        mock_mongo_client.return_value = {"osm": self.mock_db}
+        MongoUpgrade1012.upgrade("mongo_uri")
+        expected_vim_info = {"vim_info_key1": {"vim_message": None}}
+        expected_vim_info2 = {"vim_info_key2": {"vim_message": "Hello"}}
+        self.assertEqual(vim_info1, expected_vim_info)
+        self.assertEqual(vim_info2, expected_vim_info2)
+        self.nsrs.update_one.assert_called_once_with({"_id": "2"}, {"$set": nsr_items})
+
+    @patch("db_upgrade.MongoClient")
+    def test_update_nsr_admin(self, mock_mongo_client):
+        nsr = MagicMock()
+        k8s = [{"k8scluster-uuid": "namespace"}, {"k8scluster-uuid": "kube-system:k8s"}]
+        admin = {"deployed": {"K8s": k8s}}
+        nsr_values = {"_id": "2", "_admin": admin}
+        nsr.__getitem__.side_effect = nsr_values.__getitem__
+        nsr_items = {}
+        nsr.items.return_value = nsr_items.items()
+        self.nsrs.find.return_value = [nsr]
+        collection_list = {"nsrs": self.nsrs}
+        self.mock_db.__getitem__.side_effect = collection_list.__getitem__
+        self.mock_db.list_collection_names.return_value = collection_list
+        mock_mongo_client.return_value = {"osm": self.mock_db}
+        MongoUpgrade1012.upgrade("mongo_uri")
+        expected_k8s = [{"k8scluster-uuid": "namespace"}, {"k8scluster-uuid": "k8s"}]
+        self.nsrs.update_one.assert_called_once_with(
+            {"_id": "2"}, {"$set": {"_admin.deployed.K8s": expected_k8s}}
+        )
+
+    @patch("db_upgrade.MongoClient")
+    def test_update_vnfr_empty_vnfrs(self, mock_mongo_client):
+        self.vnfrs.find.return_value = [{"_id": "10", "vdur": []}]
+        collection_list = {"vnfrs": self.vnfrs}
+        self.mock_db.__getitem__.side_effect = collection_list.__getitem__
+        self.mock_db.list_collection_names.return_value = collection_list
+        mock_mongo_client.return_value = {"osm": self.mock_db}
+        MongoUpgrade1012.upgrade("mongo_uri")
+        self.vnfrs.update_one.assert_called_once_with({"_id": "10"}, {"$set": {"vdur": []}})
+
+    @patch("db_upgrade.MongoClient")
+    def test_update_vnfr_no_vim_info(self, mock_mongo_client):
+        vdur = {"other": {}}
+        vnfr = {"_id": "10", "vdur": [vdur]}
+        self.vnfrs.find.return_value = [vnfr]
+        collection_list = {"vnfrs": self.vnfrs}
+        self.mock_db.__getitem__.side_effect = collection_list.__getitem__
+        self.mock_db.list_collection_names.return_value = collection_list
+        mock_mongo_client.return_value = {"osm": self.mock_db}
+        MongoUpgrade1012.upgrade("mongo_uri")
+        self.assertEqual(vdur, {"other": {}})
+        self.vnfrs.update_one.assert_called_once_with({"_id": "10"}, {"$set": {"vdur": [vdur]}})
+
+    @patch("db_upgrade.MongoClient")
+    def test_update_vnfr_vim_message_not_conditions_matched(self, mock_mongo_client):
+        vim_info = {"vim_message": "HelloWorld"}
+        vim_infos = {"key1": vim_info, "key2": "value2"}
+        vdur = {"vim_info": vim_infos, "other": {}}
+        vnfr = {"_id": "10", "vdur": [vdur]}
+        self.vnfrs.find.return_value = [vnfr]
+        collection_list = {"vnfrs": self.vnfrs}
+        self.mock_db.__getitem__.side_effect = collection_list.__getitem__
+        self.mock_db.list_collection_names.return_value = collection_list
+        mock_mongo_client.return_value = {"osm": self.mock_db}
+        MongoUpgrade1012.upgrade("mongo_uri")
+        expected_vim_info = {"vim_message": "HelloWorld"}
+        self.assertEqual(vim_info, expected_vim_info)
+        self.vnfrs.update_one.assert_called_once_with({"_id": "10"}, {"$set": {"vdur": [vdur]}})
+
+    @patch("db_upgrade.MongoClient")
+    def test_update_vnfr_vim_message_is_missing(self, mock_mongo_client):
+        vim_info = {"interfaces_backup": "HelloWorld"}
+        vim_infos = {"key1": vim_info, "key2": "value2"}
+        vdur = {"vim_info": vim_infos, "other": {}}
+        vnfr = {"_id": "10", "vdur": [vdur]}
+        self.vnfrs.find.return_value = [vnfr]
+        collection_list = {"vnfrs": self.vnfrs}
+        self.mock_db.__getitem__.side_effect = collection_list.__getitem__
+        self.mock_db.list_collection_names.return_value = collection_list
+        mock_mongo_client.return_value = {"osm": self.mock_db}
+        MongoUpgrade1012.upgrade("mongo_uri")
+        expected_vim_info = {"vim_message": None, "interfaces_backup": "HelloWorld"}
+        self.assertEqual(vim_info, expected_vim_info)
+        self.vnfrs.update_one.assert_called_once_with({"_id": "10"}, {"$set": {"vdur": [vdur]}})
+
+    @patch("db_upgrade.MongoClient")
+    def test_update_vnfr_interfaces_backup_is_updated(self, mock_mongo_client):
+        vim_info = {"interfaces": "HelloWorld", "vim_message": "ByeWorld"}
+        vim_infos = {"key1": vim_info, "key2": "value2"}
+        vdur = {"vim_info": vim_infos, "other": {}}
+        vnfr = {"_id": "10", "vdur": [vdur]}
+        self.vnfrs.find.return_value = [vnfr]
+        collection_list = {"vnfrs": self.vnfrs}
+        self.mock_db.__getitem__.side_effect = collection_list.__getitem__
+        self.mock_db.list_collection_names.return_value = collection_list
+        mock_mongo_client.return_value = {"osm": self.mock_db}
+        MongoUpgrade1012.upgrade("mongo_uri")
+        expected_vim_info = {
+            "interfaces": "HelloWorld",
+            "vim_message": "ByeWorld",
+            "interfaces_backup": "HelloWorld",
+        }
+        self.assertEqual(vim_info, expected_vim_info)
+        self.vnfrs.update_one.assert_called_once_with({"_id": "10"}, {"$set": {"vdur": [vdur]}})
+
+    @patch("db_upgrade.MongoClient")
+    def test_update_k8scluster_empty_k8scluster(self, mock_mongo_client):
+        self.k8s_clusters.find.return_value = []
+        collection_list = {"k8sclusters": self.k8s_clusters}
+        self.mock_db.__getitem__.side_effect = collection_list.__getitem__
+        self.mock_db.list_collection_names.return_value = collection_list
+        mock_mongo_client.return_value = {"osm": self.mock_db}
+        MongoUpgrade1012.upgrade("mongo_uri")
+
+    @patch("db_upgrade.MongoClient")
+    def test_update_k8scluster_replace_namespace_in_helm_chart(self, mock_mongo_client):
+        helm_chart = {"id": "kube-system:Hello", "other": {}}
+        k8s_cluster = {"_id": "8", "_admin": {"helm-chart": helm_chart}}
+        self.k8s_clusters.find.return_value = [k8s_cluster]
+        collection_list = {"k8sclusters": self.k8s_clusters}
+        self.mock_db.__getitem__.side_effect = collection_list.__getitem__
+        self.mock_db.list_collection_names.return_value = collection_list
+        mock_mongo_client.return_value = {"osm": self.mock_db}
+        MongoUpgrade1012.upgrade("mongo_uri")
+        expected_helm_chart = {"id": "Hello", "other": {}}
+        expected_k8s_cluster = {"_id": "8", "_admin": {"helm-chart": expected_helm_chart}}
+        self.k8s_clusters.update_one.assert_called_once_with(
+            {"_id": "8"}, {"$set": expected_k8s_cluster}
+        )
+
+    @patch("db_upgrade.MongoClient")
+    def test_update_k8scluster_replace_namespace_in_helm_chart_v3(self, mock_mongo_client):
+        helm_chart_v3 = {"id": "kube-system:Hello", "other": {}}
+        k8s_cluster = {"_id": "8", "_admin": {"helm-chart-v3": helm_chart_v3}}
+        self.k8s_clusters.find.return_value = [k8s_cluster]
+        collection_list = {"k8sclusters": self.k8s_clusters}
+        self.mock_db.__getitem__.side_effect = collection_list.__getitem__
+        self.mock_db.list_collection_names.return_value = collection_list
+        mock_mongo_client.return_value = {"osm": self.mock_db}
+        MongoUpgrade1012.upgrade("mongo_uri")
+        expected_helm_chart_v3 = {"id": "Hello", "other": {}}
+        expected_k8s_cluster = {"_id": "8", "_admin": {"helm-chart-v3": expected_helm_chart_v3}}
+        self.k8s_clusters.update_one.assert_called_once_with(
+            {"_id": "8"}, {"$set": expected_k8s_cluster}
+        )
+
+
+class TestPatch1837(unittest.TestCase):
+    def setUp(self):
+        self.mock_db = MagicMock()
+        self.vnfrs = Mock()
+        self.nslcmops = Mock()
+
+    @patch("db_upgrade.MongoClient")
+    def test_update_vnfrs_params_no_vnfrs_or_nslcmops(self, mock_mongo_client):
+        collection_dict = {"other": {}}
+        self.mock_db.list_collection_names.return_value = collection_dict
+        mock_mongo_client.return_value = {"osm": self.mock_db}
+        MongoPatch1837.patch("mongo_uri")
+
+    @patch("db_upgrade.MongoClient")
+    def test_update_vnfrs_params_no_kdur(self, mock_mongo_client):
+        self.vnfrs.find.return_value = {"_id": "1"}
+        collection_dict = {"vnfrs": self.vnfrs, "other": {}}
+        self.mock_db.list_collection_names.return_value = collection_dict
+        mock_mongo_client.return_value = {"osm": self.mock_db}
+        MongoPatch1837.patch("mongo_uri")
+
+    @patch("db_upgrade.MongoClient")
+    def test_update_vnfrs_params_kdur_without_additional_params(self, mock_mongo_client):
+        kdur = [{"other": {}}]
+        self.vnfrs.find.return_value = [{"_id": "1", "kdur": kdur}]
+        collection_dict = {"vnfrs": self.vnfrs, "other": {}}
+        self.mock_db.list_collection_names.return_value = collection_dict
+        self.mock_db.__getitem__.side_effect = collection_dict.__getitem__
+        mock_mongo_client.return_value = {"osm": self.mock_db}
+        MongoPatch1837.patch("mongo_uri")
+        self.vnfrs.update_one.assert_called_once_with({"_id": "1"}, {"$set": {"kdur": kdur}})
+
+    @patch("db_upgrade.MongoClient")
+    def test_update_vnfrs_params_kdur_two_additional_params(self, mock_mongo_client):
+        kdur1 = {"additionalParams": "additional_params", "other": {}}
+        kdur2 = {"additionalParams": 4, "other": {}}
+        kdur = [kdur1, kdur2]
+        self.vnfrs.find.return_value = [{"_id": "1", "kdur": kdur}]
+        collection_dict = {"vnfrs": self.vnfrs, "other": {}}
+        self.mock_db.list_collection_names.return_value = collection_dict
+        self.mock_db.__getitem__.side_effect = collection_dict.__getitem__
+        mock_mongo_client.return_value = {"osm": self.mock_db}
+        MongoPatch1837.patch("mongo_uri")
+        self.vnfrs.update_one.assert_called_once_with(
+            {"_id": "1"}, {"$set": {"kdur": [kdur1, {"additionalParams": "4", "other": {}}]}}
+        )
+
+    @patch("db_upgrade.MongoClient")
+    def test_update_nslcmops_params_no_nslcmops(self, mock_mongo_client):
+        self.nslcmops.find.return_value = []
+        collection_dict = {"nslcmops": self.nslcmops, "other": {}}
+        self.mock_db.list_collection_names.return_value = collection_dict
+        self.mock_db.__getitem__.side_effect = collection_dict.__getitem__
+        mock_mongo_client.return_value = {"osm": self.mock_db}
+        MongoPatch1837.patch("mongo_uri")
+
+    @patch("db_upgrade.MongoClient")
+    def test_update_nslcmops_additional_params(self, mock_mongo_client):
+        operation_params_list = {"additionalParamsForVnf": [1, 2, 3]}
+        operation_params_dict = {"primitive_params": {"dict_key": 5}}
+        nslcmops1 = {"_id": "1", "other": {}}
+        nslcmops2 = {"_id": "2", "operationParams": operation_params_list, "other": {}}
+        nslcmops3 = {"_id": "3", "operationParams": operation_params_dict, "other": {}}
+        self.nslcmops.find.return_value = [nslcmops1, nslcmops2, nslcmops3]
+        collection_dict = {"nslcmops": self.nslcmops, "other": {}}
+        self.mock_db.list_collection_names.return_value = collection_dict
+        self.mock_db.__getitem__.side_effect = collection_dict.__getitem__
+        mock_mongo_client.return_value = {"osm": self.mock_db}
+        MongoPatch1837.patch("mongo_uri")
+        call1 = call(
+            {"_id": "2"}, {"$set": {"operationParams": {"additionalParamsForVnf": "[1, 2, 3]"}}}
+        )
+        call2 = call(
+            {"_id": "3"}, {"$set": {"operationParams": {"primitive_params": '{"dict_key": 5}'}}}
+        )
+        expected_calls = [call1, call2]
+        self.nslcmops.update_one.assert_has_calls(expected_calls)
+
+
+class TestMongoUpgrade(unittest.TestCase):
+    def setUp(self):
+        self.mongo = MongoUpgrade("http://fake_mongo:27017")
+        self.upgrade_function = Mock()
+        self.patch_function = Mock()
+        db_upgrade.MONGODB_UPGRADE_FUNCTIONS = {"9": {"10": [self.upgrade_function]}}
+        db_upgrade.BUG_FIXES = {1837: self.patch_function}
+
+    def test_validate_upgrade_fail_target(self):
+        valid_current = "9"
+        invalid_target = "7"
+        with self.assertRaises(Exception) as context:
+            self.mongo._validate_upgrade(valid_current, invalid_target)
+        self.assertEqual("cannot upgrade from version 9 to 7.", str(context.exception))
+
+    def test_validate_upgrade_fail_current(self):
+        invalid_current = "7"
+        invalid_target = "8"
+        with self.assertRaises(Exception) as context:
+            self.mongo._validate_upgrade(invalid_current, invalid_target)
+        self.assertEqual("cannot upgrade from 7 version.", str(context.exception))
+
+    def test_validate_upgrade_pass(self):
+        valid_current = "9"
+        valid_target = "10"
+        self.assertIsNone(self.mongo._validate_upgrade(valid_current, valid_target))
+
+    @patch("db_upgrade.MongoUpgrade._validate_upgrade")
+    def test_update_mongo_success(self, mock_validate):
+        valid_current = "9"
+        valid_target = "10"
+        mock_validate.return_value = ""
+        self.mongo.upgrade(valid_current, valid_target)
+        self.upgrade_function.assert_called_once()
+
+    def test_validate_apply_patch(self):
+        bug_number = 1837
+        self.mongo.apply_patch(bug_number)
+        self.patch_function.assert_called_once()
+
+    def test_validate_apply_patch_invalid_bug_fail(self):
+        bug_number = 2
+        with self.assertRaises(Exception) as context:
+            self.mongo.apply_patch(bug_number)
+        self.assertEqual("There is no patch for bug 2", str(context.exception))
+        self.patch_function.assert_not_called()
+
+
+class TestMysqlUpgrade(unittest.TestCase):
+    def setUp(self):
+        self.mysql = MysqlUpgrade("mysql://fake_mysql:23023")
+        self.upgrade_function = Mock()
+        db_upgrade.MYSQL_UPGRADE_FUNCTIONS = {"9": {"10": [self.upgrade_function]}}
+
+    def test_validate_upgrade_mysql_fail_current(self):
+        invalid_current = "7"
+        invalid_target = "8"
+        with self.assertRaises(Exception) as context:
+            self.mysql._validate_upgrade(invalid_current, invalid_target)
+        self.assertEqual("cannot upgrade from 7 version.", str(context.exception))
+
+    def test_validate_upgrade_mysql_fail_target(self):
+        valid_current = "9"
+        invalid_target = "7"
+        with self.assertRaises(Exception) as context:
+            self.mysql._validate_upgrade(valid_current, invalid_target)
+        self.assertEqual("cannot upgrade from version 9 to 7.", str(context.exception))
+
+    def test_validate_upgrade_mysql_success(self):
+        valid_current = "9"
+        valid_target = "10"
+        self.assertIsNone(self.mysql._validate_upgrade(valid_current, valid_target))
+
+    @patch("db_upgrade.MysqlUpgrade._validate_upgrade")
+    def test_upgrade_mysql_success(self, mock_validate):
+        valid_current = "9"
+        valid_target = "10"
+        mock_validate.return_value = ""
+        self.mysql.upgrade(valid_current, valid_target)
+        self.upgrade_function.assert_called_once()
diff --git a/installers/charm/osm-update-db-operator/tox.ini b/installers/charm/osm-update-db-operator/tox.ini
new file mode 100644 (file)
index 0000000..bcf628a
--- /dev/null
@@ -0,0 +1,104 @@
+# Copyright 2022 Canonical Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+[tox]
+skipsdist=True
+skip_missing_interpreters = True
+envlist = lint, unit
+
+[vars]
+src_path = {toxinidir}/src/
+tst_path = {toxinidir}/tests/
+;lib_path = {toxinidir}/lib/charms/
+all_path = {[vars]src_path} {[vars]tst_path}
+
+[testenv]
+basepython = python3
+setenv =
+  PYTHONPATH = {toxinidir}:{toxinidir}/lib:{[vars]src_path}
+  PYTHONBREAKPOINT=ipdb.set_trace
+passenv =
+  PYTHONPATH
+  HOME
+  PATH
+  CHARM_BUILD_DIR
+  MODEL_SETTINGS
+  HTTP_PROXY
+  HTTPS_PROXY
+  NO_PROXY
+
+[testenv:fmt]
+description = Apply coding style standards to code
+deps =
+    black
+    isort
+commands =
+    isort {[vars]all_path}
+    black {[vars]all_path}
+
+[testenv:lint]
+description = Check code against coding style standards
+deps =
+    black
+    flake8>= 4.0.0, < 5.0.0
+    flake8-docstrings
+    flake8-copyright
+    flake8-builtins
+    # prospector[with_everything]
+    pylint
+    pyproject-flake8
+    pep8-naming
+    isort
+    codespell
+    yamllint
+    -r{toxinidir}/requirements.txt
+commands =
+    codespell {toxinidir}/*.yaml {toxinidir}/*.ini {toxinidir}/*.md \
+      {toxinidir}/*.toml {toxinidir}/*.txt {toxinidir}/.github
+    # prospector -A -F -T
+    pylint -E {[vars]src_path}
+    yamllint -d '\{extends: default, ignore: "build\n.tox" \}' .
+    # pflake8 wrapper supports config from pyproject.toml
+    pflake8 {[vars]all_path}
+    isort --check-only --diff {[vars]all_path}
+    black --check --diff {[vars]all_path}
+
+[testenv:unit]
+description = Run unit tests
+deps =
+    pytest
+    pytest-mock
+    pytest-cov
+    coverage[toml]
+    -r{toxinidir}/requirements.txt
+commands =
+    pytest --ignore={[vars]tst_path}integration --cov={[vars]src_path} --cov-report=xml
+    coverage report
+
+[testenv:security]
+description = Run security tests
+deps =
+    bandit
+    safety
+commands =
+    bandit -r {[vars]src_path}
+    - safety check
+
+[testenv:integration]
+description = Run integration tests
+deps =
+    pytest
+    pytest-operator
+commands =
+    pytest -v --tb native --ignore={[vars]tst_path}unit --log-cli-level=INFO -s {posargs}
diff --git a/installers/charm/pla/.gitignore b/installers/charm/pla/.gitignore
deleted file mode 100644 (file)
index 493739e..0000000
+++ /dev/null
@@ -1,30 +0,0 @@
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-venv
-.vscode
-build
-*.charm
-.coverage
-coverage.xml
-.stestr
-cover
-release
diff --git a/installers/charm/pla/.jujuignore b/installers/charm/pla/.jujuignore
deleted file mode 100644 (file)
index 3ae3e7d..0000000
+++ /dev/null
@@ -1,34 +0,0 @@
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-venv
-.vscode
-build
-*.charm
-.coverage
-coverage.xml
-.gitignore
-.stestr
-cover
-release
-tests/
-requirements*
-tox.ini
diff --git a/installers/charm/pla/.yamllint.yaml b/installers/charm/pla/.yamllint.yaml
deleted file mode 100644 (file)
index d71fb69..0000000
+++ /dev/null
@@ -1,34 +0,0 @@
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
----
-extends: default
-
-yaml-files:
-  - "*.yaml"
-  - "*.yml"
-  - ".yamllint"
-ignore: |
-  .tox
-  cover/
-  build/
-  venv
-  release/
diff --git a/installers/charm/pla/README.md b/installers/charm/pla/README.md
deleted file mode 100644 (file)
index 8d486d0..0000000
+++ /dev/null
@@ -1,14 +0,0 @@
-<!-- #   Copyright 2020 Canonical Ltd.
-#
-#   Licensed under the Apache License, Version 2.0 (the "License");
-#   you may not use this file except in compliance with the License.
-#   You may obtain a copy of the License at
-#
-#       http://www.apache.org/licenses/LICENSE-2.0
-#
-#   Unless required by applicable law or agreed to in writing, software
-#   distributed under the License is distributed on an "AS IS" BASIS,
-#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-#   See the License for the specific language governing permissions and
-#   limitations under the License. -->
-# PLA Charm
\ No newline at end of file
diff --git a/installers/charm/pla/charmcraft.yaml b/installers/charm/pla/charmcraft.yaml
deleted file mode 100644 (file)
index 0a285a9..0000000
+++ /dev/null
@@ -1,37 +0,0 @@
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-type: charm
-bases:
-  - build-on:
-      - name: ubuntu
-        channel: "20.04"
-        architectures: ["amd64"]
-    run-on:
-      - name: ubuntu
-        channel: "20.04"
-        architectures:
-          - amd64
-          - aarch64
-          - arm64
-parts:
-  charm:
-    build-packages: [git]
diff --git a/installers/charm/pla/config.yaml b/installers/charm/pla/config.yaml
deleted file mode 100644 (file)
index 642c165..0000000
+++ /dev/null
@@ -1,39 +0,0 @@
-# -*- coding: utf-8 -*-
-
-# Copyright 2020 Arctos Labs Scandinavia AB
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-# implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-options:
-  log_level:
-    description: Log level
-    type: string
-    default: INFO
-  database_commonkey:
-    description: Common Key for Mongo database
-    type: string
-    default: osm
-  mongodb_uri:
-    type: string
-    description: MongoDB URI (external database)
-  image_pull_policy:
-    type: string
-    description: |
-      ImagePullPolicy configuration for the pod.
-      Possible values: always, ifnotpresent, never
-    default: always
-  security_context:
-    description: Enables the security context of the pods
-    type: boolean
-    default: false
diff --git a/installers/charm/pla/lib/charms/kafka_k8s/v0/kafka.py b/installers/charm/pla/lib/charms/kafka_k8s/v0/kafka.py
deleted file mode 100644 (file)
index 1baf9a8..0000000
+++ /dev/null
@@ -1,207 +0,0 @@
-# Copyright 2022 Canonical Ltd.
-# See LICENSE file for licensing details.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-# implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""Kafka library.
-
-This [library](https://juju.is/docs/sdk/libraries) implements both sides of the
-`kafka` [interface](https://juju.is/docs/sdk/relations).
-
-The *provider* side of this interface is implemented by the
-[kafka-k8s Charmed Operator](https://charmhub.io/kafka-k8s).
-
-Any Charmed Operator that *requires* Kafka for providing its
-service should implement the *requirer* side of this interface.
-
-In a nutshell using this library to implement a Charmed Operator *requiring*
-Kafka would look like
-
-```
-$ charmcraft fetch-lib charms.kafka_k8s.v0.kafka
-```
-
-`metadata.yaml`:
-
-```
-requires:
-  kafka:
-    interface: kafka
-    limit: 1
-```
-
-`src/charm.py`:
-
-```
-from charms.kafka_k8s.v0.kafka import KafkaEvents, KafkaRequires
-from ops.charm import CharmBase
-
-
-class MyCharm(CharmBase):
-
-    on = KafkaEvents()
-
-    def __init__(self, *args):
-        super().__init__(*args)
-        self.kafka = KafkaRequires(self)
-        self.framework.observe(
-            self.on.kafka_available,
-            self._on_kafka_available,
-        )
-        self.framework.observe(
-            self.on.kafka_broken,
-            self._on_kafka_broken,
-        )
-
-    def _on_kafka_available(self, event):
-        # Get Kafka host and port
-        host: str = self.kafka.host
-        port: int = self.kafka.port
-        # host => "kafka-k8s"
-        # port => 9092
-
-    def _on_kafka_broken(self, event):
-        # Stop service
-        # ...
-        self.unit.status = BlockedStatus("need kafka relation")
-```
-
-You can file bugs
-[here](https://github.com/charmed-osm/kafka-k8s-operator/issues)!
-"""
-
-from typing import Optional
-
-from ops.charm import CharmBase, CharmEvents
-from ops.framework import EventBase, EventSource, Object
-
-# The unique Charmhub library identifier, never change it
-from ops.model import Relation
-
-LIBID = "eacc8c85082347c9aae740e0220b8376"
-
-# Increment this major API version when introducing breaking changes
-LIBAPI = 0
-
-# Increment this PATCH version before using `charmcraft publish-lib` or reset
-# to 0 if you are raising the major API version
-LIBPATCH = 3
-
-
-KAFKA_HOST_APP_KEY = "host"
-KAFKA_PORT_APP_KEY = "port"
-
-
-class _KafkaAvailableEvent(EventBase):
-    """Event emitted when Kafka is available."""
-
-
-class _KafkaBrokenEvent(EventBase):
-    """Event emitted when Kafka relation is broken."""
-
-
-class KafkaEvents(CharmEvents):
-    """Kafka events.
-
-    This class defines the events that Kafka can emit.
-
-    Events:
-        kafka_available (_KafkaAvailableEvent)
-    """
-
-    kafka_available = EventSource(_KafkaAvailableEvent)
-    kafka_broken = EventSource(_KafkaBrokenEvent)
-
-
-class KafkaRequires(Object):
-    """Requires-side of the Kafka relation."""
-
-    def __init__(self, charm: CharmBase, endpoint_name: str = "kafka") -> None:
-        super().__init__(charm, endpoint_name)
-        self.charm = charm
-        self._endpoint_name = endpoint_name
-
-        # Observe relation events
-        event_observe_mapping = {
-            charm.on[self._endpoint_name].relation_changed: self._on_relation_changed,
-            charm.on[self._endpoint_name].relation_broken: self._on_relation_broken,
-        }
-        for event, observer in event_observe_mapping.items():
-            self.framework.observe(event, observer)
-
-    def _on_relation_changed(self, event) -> None:
-        if event.relation.app and all(
-            key in event.relation.data[event.relation.app]
-            for key in (KAFKA_HOST_APP_KEY, KAFKA_PORT_APP_KEY)
-        ):
-            self.charm.on.kafka_available.emit()
-
-    def _on_relation_broken(self, _) -> None:
-        self.charm.on.kafka_broken.emit()
-
-    @property
-    def host(self) -> str:
-        relation: Relation = self.model.get_relation(self._endpoint_name)
-        return (
-            relation.data[relation.app].get(KAFKA_HOST_APP_KEY)
-            if relation and relation.app
-            else None
-        )
-
-    @property
-    def port(self) -> int:
-        relation: Relation = self.model.get_relation(self._endpoint_name)
-        return (
-            int(relation.data[relation.app].get(KAFKA_PORT_APP_KEY))
-            if relation and relation.app
-            else None
-        )
-
-
-class KafkaProvides(Object):
-    """Provides-side of the Kafka relation."""
-
-    def __init__(self, charm: CharmBase, endpoint_name: str = "kafka") -> None:
-        super().__init__(charm, endpoint_name)
-        self._endpoint_name = endpoint_name
-
-    def set_host_info(self, host: str, port: int, relation: Optional[Relation] = None) -> None:
-        """Set Kafka host and port.
-
-        This function writes in the application data of the relation, therefore,
-        only the unit leader can call it.
-
-        Args:
-            host (str): Kafka hostname or IP address.
-            port (int): Kafka port.
-            relation (Optional[Relation]): Relation to update.
-                                           If not specified, all relations will be updated.
-
-        Raises:
-            Exception: if a non-leader unit calls this function.
-        """
-        if not self.model.unit.is_leader():
-            raise Exception("only the leader set host information.")
-
-        if relation:
-            self._update_relation_data(host, port, relation)
-            return
-
-        for relation in self.model.relations[self._endpoint_name]:
-            self._update_relation_data(host, port, relation)
-
-    def _update_relation_data(self, host: str, port: int, relation: Relation) -> None:
-        """Update data in relation if needed."""
-        relation.data[self.model.app][KAFKA_HOST_APP_KEY] = host
-        relation.data[self.model.app][KAFKA_PORT_APP_KEY] = str(port)
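The library removed above boils down to a small host/port handshake over Juju relation data: the provider publishes `host` and `port` (as strings) in the application data bag, and the requirer treats Kafka as available only once both keys are present. A minimal illustration — plain dicts stand in for the relation data bags, which is a deliberate simplification of the ops framework API:

```python
# Simplified sketch of the handshake in the deleted kafka library.
# A dict stands in for relation.data[app]; the real code goes through ops.

KAFKA_HOST_APP_KEY = "host"
KAFKA_PORT_APP_KEY = "port"

def provider_set_host_info(app_data: dict, host: str, port: int) -> None:
    # Relation data values must be strings, hence str(port).
    app_data[KAFKA_HOST_APP_KEY] = host
    app_data[KAFKA_PORT_APP_KEY] = str(port)

def requirer_is_available(app_data: dict) -> bool:
    # Mirrors _on_relation_changed: emit kafka_available only when
    # both keys have been published by the provider.
    return all(k in app_data for k in (KAFKA_HOST_APP_KEY, KAFKA_PORT_APP_KEY))
```

This is also why `KafkaRequires.port` casts back with `int(...)` on read: the transport is string-only, so the provider serializes and the requirer deserializes.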
diff --git a/installers/charm/pla/metadata.yaml b/installers/charm/pla/metadata.yaml
deleted file mode 100644 (file)
index bd8b279..0000000
+++ /dev/null
@@ -1,34 +0,0 @@
-#   Copyright 2020 Canonical Ltd.
-#
-#   Licensed under the Apache License, Version 2.0 (the "License");
-#   you may not use this file except in compliance with the License.
-#   You may obtain a copy of the License at
-#
-#       http://www.apache.org/licenses/LICENSE-2.0
-#
-#   Unless required by applicable law or agreed to in writing, software
-#   distributed under the License is distributed on an "AS IS" BASIS,
-#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-#   See the License for the specific language governing permissions and
-#   limitations under the License.
-
-name: osm-pla
-summary: A Placement charm for Opensource MANO
-description: |
-  Placement module for OSM
-series:
-  - kubernetes
-min-juju-version: 2.7.0
-deployment:
-  type: stateless
-  service: cluster
-resources:
-  image:
-    type: oci-image
-    description: OSM docker image for POL
-    upstream-source: "opensourcemano/pla:latest"
-requires:
-  kafka:
-    interface: kafka
-  mongodb:
-    interface: mongodb
diff --git a/installers/charm/pla/requirements-test.txt b/installers/charm/pla/requirements-test.txt
deleted file mode 100644 (file)
index cf61dd4..0000000
+++ /dev/null
@@ -1,20 +0,0 @@
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-mock==4.0.3
diff --git a/installers/charm/pla/requirements.txt b/installers/charm/pla/requirements.txt
deleted file mode 100644 (file)
index 1a8928c..0000000
+++ /dev/null
@@ -1,22 +0,0 @@
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-git+https://github.com/charmed-osm/ops-lib-charmed-osm/@master
\ No newline at end of file
diff --git a/installers/charm/pla/src/charm.py b/installers/charm/pla/src/charm.py
deleted file mode 100755 (executable)
index d907f0b..0000000
+++ /dev/null
@@ -1,172 +0,0 @@
-#!/usr/bin/env python3
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-# pylint: disable=E0213
-
-
-import logging
-from typing import NoReturn, Optional
-
-from charms.kafka_k8s.v0.kafka import KafkaEvents, KafkaRequires
-from ops.main import main
-from opslib.osm.charm import CharmedOsmBase, RelationsMissing
-from opslib.osm.interfaces.mongo import MongoClient
-from opslib.osm.pod import (
-    ContainerV3Builder,
-    PodRestartPolicy,
-    PodSpecV3Builder,
-)
-from opslib.osm.validator import ModelValidator, validator
-
-
-logger = logging.getLogger(__name__)
-
-PORT = 9999
-
-
-class ConfigModel(ModelValidator):
-    database_commonkey: str
-    mongodb_uri: Optional[str]
-    log_level: str
-    image_pull_policy: str
-    security_context: bool
-
-    @validator("log_level")
-    def validate_log_level(cls, v):
-        if v not in {"INFO", "DEBUG"}:
-            raise ValueError("value must be INFO or DEBUG")
-        return v
-
-    @validator("mongodb_uri")
-    def validate_mongodb_uri(cls, v):
-        if v and not v.startswith("mongodb://"):
-            raise ValueError("mongodb_uri is not properly formed")
-        return v
-
-    @validator("image_pull_policy")
-    def validate_image_pull_policy(cls, v):
-        values = {
-            "always": "Always",
-            "ifnotpresent": "IfNotPresent",
-            "never": "Never",
-        }
-        v = v.lower()
-        if v not in values.keys():
-            raise ValueError("value must be always, ifnotpresent or never")
-        return values[v]
-
-
-class PlaCharm(CharmedOsmBase):
-    on = KafkaEvents()
-
-    def __init__(self, *args) -> NoReturn:
-        super().__init__(*args, oci_image="image")
-
-        self.kafka = KafkaRequires(self)
-        self.framework.observe(self.on.kafka_available, self.configure_pod)
-        self.framework.observe(self.on.kafka_broken, self.configure_pod)
-
-        self.mongodb_client = MongoClient(self, "mongodb")
-        self.framework.observe(self.on["mongodb"].relation_changed, self.configure_pod)
-        self.framework.observe(self.on["mongodb"].relation_broken, self.configure_pod)
-
-    def _check_missing_dependencies(self, config: ConfigModel):
-        missing_relations = []
-
-        if not self.kafka.host or not self.kafka.port:
-            missing_relations.append("kafka")
-        if not config.mongodb_uri and self.mongodb_client.is_missing_data_in_unit():
-            missing_relations.append("mongodb")
-
-        if missing_relations:
-            raise RelationsMissing(missing_relations)
-
-    def build_pod_spec(self, image_info):
-        # Validate config
-        config = ConfigModel(**dict(self.config))
-
-        if config.mongodb_uri and not self.mongodb_client.is_missing_data_in_unit():
-            raise Exception("Mongodb data cannot be provided via config and relation")
-
-        # Check relations
-        self._check_missing_dependencies(config)
-
-        # Create Builder for the PodSpec
-        pod_spec_builder = PodSpecV3Builder(
-            enable_security_context=config.security_context
-        )
-
-        # Add secrets to the pod
-        mongodb_secret_name = f"{self.app.name}-mongodb-secret"
-        pod_spec_builder.add_secret(
-            mongodb_secret_name,
-            {
-                "uri": config.mongodb_uri or self.mongodb_client.connection_string,
-                "commonkey": config.database_commonkey,
-            },
-        )
-
-        # Build Container
-        container_builder = ContainerV3Builder(
-            self.app.name,
-            image_info,
-            config.image_pull_policy,
-            run_as_non_root=config.security_context,
-        )
-        container_builder.add_port(name=self.app.name, port=PORT)
-        container_builder.add_envs(
-            {
-                # General configuration
-                "ALLOW_ANONYMOUS_LOGIN": "yes",
-                "OSMPLA_GLOBAL_LOG_LEVEL": config.log_level,
-                # Kafka configuration
-                "OSMPLA_MESSAGE_DRIVER": "kafka",
-                "OSMPLA_MESSAGE_HOST": self.kafka.host,
-                "OSMPLA_MESSAGE_PORT": self.kafka.port,
-                # Database configuration
-                "OSMPLA_DATABASE_DRIVER": "mongo",
-            }
-        )
-
-        container_builder.add_secret_envs(
-            secret_name=mongodb_secret_name,
-            envs={
-                "OSMPLA_DATABASE_URI": "uri",
-                "OSMPLA_DATABASE_COMMONKEY": "commonkey",
-            },
-        )
-
-        container = container_builder.build()
-
-        # Add Pod restart policy
-        restart_policy = PodRestartPolicy()
-        restart_policy.add_secrets(secret_names=(mongodb_secret_name,))
-        pod_spec_builder.set_restart_policy(restart_policy)
-
-        # Add container to pod spec
-        pod_spec_builder.add_container(container)
-
-        return pod_spec_builder.build()
-
-
-if __name__ == "__main__":
-    main(PlaCharm)
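Two checks in the deleted charm.py are worth noting because the unit tests below exercise them: the case-insensitive normalization of `image_pull_policy` to the Kubernetes spelling, and the rule that MongoDB credentials may come from config or from the relation but never both. A standalone sketch of both — plain functions are used here instead of the opslib `ModelValidator` machinery, so the names are illustrative only:

```python
# Hedged sketch of two validation rules from the deleted pla charm.py.

def normalize_image_pull_policy(value: str) -> str:
    """Map a case-insensitive policy name onto the Kubernetes spelling."""
    values = {"always": "Always", "ifnotpresent": "IfNotPresent", "never": "Never"}
    value = value.lower()
    if value not in values:
        raise ValueError("value must be always, ifnotpresent or never")
    return values[value]

def check_mongodb_exclusive(mongodb_uri: str, relation_has_data: bool) -> None:
    """MongoDB settings must come from config or relation, never both."""
    if mongodb_uri and relation_has_data:
        raise Exception("Mongodb data cannot be provided via config and relation")
```

The exclusivity check is what `test_exception_mongodb_relation_and_config` drives into `BlockedStatus` by setting both the `mongodb_uri` config option and the `mongodb` relation at once.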
diff --git a/installers/charm/pla/tests/__init__.py b/installers/charm/pla/tests/__init__.py
deleted file mode 100644 (file)
index 446d5ce..0000000
+++ /dev/null
@@ -1,40 +0,0 @@
-#!/usr/bin/env python3
-# Copyright 2020 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-"""Init mocking for unit tests."""
-
-import sys
-
-
-import mock
-
-
-class OCIImageResourceErrorMock(Exception):
-    pass
-
-
-sys.path.append("src")
-
-oci_image = mock.MagicMock()
-oci_image.OCIImageResourceError = OCIImageResourceErrorMock
-sys.modules["oci_image"] = oci_image
-sys.modules["oci_image"].OCIImageResource().fetch.return_value = {}
diff --git a/installers/charm/pla/tests/test_charm.py b/installers/charm/pla/tests/test_charm.py
deleted file mode 100644 (file)
index d577e9f..0000000
+++ /dev/null
@@ -1,122 +0,0 @@
-#!/usr/bin/env python3
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-import sys
-from typing import NoReturn
-import unittest
-
-
-from charm import PlaCharm
-from ops.model import ActiveStatus, BlockedStatus
-from ops.testing import Harness
-
-
-class TestCharm(unittest.TestCase):
-    """Pla Charm unit tests."""
-
-    def setUp(self) -> NoReturn:
-        """Test setup"""
-        self.image_info = sys.modules["oci_image"].OCIImageResource().fetch()
-        self.harness = Harness(PlaCharm)
-        self.harness.set_leader(is_leader=True)
-        self.harness.begin()
-        self.config = {
-            "log_level": "INFO",
-            "mongodb_uri": "",
-        }
-        self.harness.update_config(self.config)
-
-    def test_config_changed_no_relations(
-        self,
-    ) -> NoReturn:
-        """Test ingress resources without HTTP."""
-
-        self.harness.charm.on.config_changed.emit()
-
-        # Assertions
-        self.assertIsInstance(self.harness.charm.unit.status, BlockedStatus)
-        self.assertTrue(
-            all(
-                relation in self.harness.charm.unit.status.message
-                for relation in ["mongodb", "kafka"]
-            )
-        )
-
-    def test_config_changed_non_leader(
-        self,
-    ) -> NoReturn:
-        """Test ingress resources without HTTP."""
-        self.harness.set_leader(is_leader=False)
-        self.harness.charm.on.config_changed.emit()
-
-        # Assertions
-        self.assertIsInstance(self.harness.charm.unit.status, ActiveStatus)
-
-    def test_with_relations_and_mongodb_config(
-        self,
-    ) -> NoReturn:
-        "Test with relations and mongodb config (internal)"
-        self.initialize_kafka_relation()
-        self.initialize_mongo_config()
-        # Verifying status
-        self.assertNotIsInstance(self.harness.charm.unit.status, BlockedStatus)
-
-    def test_with_relations(
-        self,
-    ) -> NoReturn:
-        "Test with relations (internal)"
-        self.initialize_kafka_relation()
-        self.initialize_mongo_relation()
-        # Verifying status
-        self.assertNotIsInstance(self.harness.charm.unit.status, BlockedStatus)
-
-    def test_exception_mongodb_relation_and_config(
-        self,
-    ) -> NoReturn:
-        "Test with relation and config for Mongodb. Test must fail"
-        self.initialize_mongo_relation()
-        self.initialize_mongo_config()
-        # Verifying status
-        self.assertIsInstance(self.harness.charm.unit.status, BlockedStatus)
-
-    def initialize_kafka_relation(self):
-        kafka_relation_id = self.harness.add_relation("kafka", "kafka")
-        self.harness.add_relation_unit(kafka_relation_id, "kafka/0")
-        self.harness.update_relation_data(
-            kafka_relation_id, "kafka", {"host": "kafka", "port": 9092}
-        )
-
-    def initialize_mongo_config(self):
-        self.harness.update_config({"mongodb_uri": "mongodb://mongo:27017"})
-
-    def initialize_mongo_relation(self):
-        mongodb_relation_id = self.harness.add_relation("mongodb", "mongodb")
-        self.harness.add_relation_unit(mongodb_relation_id, "mongodb/0")
-        self.harness.update_relation_data(
-            mongodb_relation_id,
-            "mongodb/0",
-            {"connection_string": "mongodb://mongo:27017"},
-        )
-
-
-if __name__ == "__main__":
-    unittest.main()
diff --git a/installers/charm/pla/tox.ini b/installers/charm/pla/tox.ini
deleted file mode 100644 (file)
index f3c9144..0000000
+++ /dev/null
@@ -1,128 +0,0 @@
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-#######################################################################################
-
-[tox]
-envlist = black, cover, flake8, pylint, yamllint, safety
-skipsdist = true
-
-[tox:jenkins]
-toxworkdir = /tmp/.tox
-
-[testenv]
-basepython = python3.8
-setenv =
-  VIRTUAL_ENV={envdir}
-  PYTHONPATH = {toxinidir}:{toxinidir}/lib:{toxinidir}/src
-  PYTHONDONTWRITEBYTECODE = 1
-deps =  -r{toxinidir}/requirements.txt
-
-
-#######################################################################################
-[testenv:black]
-deps = black
-commands =
-        black --check --diff src/ tests/
-
-
-#######################################################################################
-[testenv:cover]
-deps =  {[testenv]deps}
-        -r{toxinidir}/requirements-test.txt
-        coverage
-        nose2
-commands =
-        sh -c 'rm -f nosetests.xml'
-        coverage erase
-        nose2 -C --coverage src
-        coverage report --omit='*tests*'
-        coverage html -d ./cover --omit='*tests*'
-        coverage xml -o coverage.xml --omit=*tests*
-whitelist_externals = sh
-
-
-#######################################################################################
-[testenv:flake8]
-deps =  flake8
-        flake8-import-order
-commands =
-        flake8 src/ tests/
-
-
-#######################################################################################
-[testenv:pylint]
-deps =  {[testenv]deps}
-        -r{toxinidir}/requirements-test.txt
-        pylint==2.10.2
-commands =
-    pylint -E src/ tests/
-
-
-#######################################################################################
-[testenv:safety]
-setenv =
-        LC_ALL=C.UTF-8
-        LANG=C.UTF-8
-deps =  {[testenv]deps}
-        safety
-commands =
-        - safety check --full-report
-
-
-#######################################################################################
-[testenv:yamllint]
-deps =  {[testenv]deps}
-        -r{toxinidir}/requirements-test.txt
-        yamllint
-commands = yamllint .
-
-#######################################################################################
-[testenv:build]
-passenv=HTTP_PROXY HTTPS_PROXY NO_PROXY
-whitelist_externals =
-  charmcraft
-  sh
-commands =
-  charmcraft pack
-  sh -c 'ubuntu_version=20.04; \
-        architectures="amd64-aarch64-arm64"; \
-        charm_name=`cat metadata.yaml | grep -E "^name: " | cut -f 2 -d " "`; \
-        mv $charm_name"_ubuntu-"$ubuntu_version-$architectures.charm $charm_name.charm'
-
-#######################################################################################
-[flake8]
-ignore =
-        W291,
-        W293,
-        W503,
-        E123,
-        E125,
-        E226,
-        E241,
-exclude =
-        .git,
-        __pycache__,
-        .tox,
-max-line-length = 120
-show-source = True
-builtins = _
-max-complexity = 10
-import-order-style = google
diff --git a/installers/charm/pol/.gitignore b/installers/charm/pol/.gitignore
deleted file mode 100644 (file)
index 2885df2..0000000
+++ /dev/null
@@ -1,30 +0,0 @@
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-venv
-.vscode
-build
-*.charm
-.coverage
-coverage.xml
-.stestr
-cover
-release
\ No newline at end of file
diff --git a/installers/charm/pol/.jujuignore b/installers/charm/pol/.jujuignore
deleted file mode 100644 (file)
index 3ae3e7d..0000000
+++ /dev/null
@@ -1,34 +0,0 @@
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-venv
-.vscode
-build
-*.charm
-.coverage
-coverage.xml
-.gitignore
-.stestr
-cover
-release
-tests/
-requirements*
-tox.ini
diff --git a/installers/charm/pol/.yamllint.yaml b/installers/charm/pol/.yamllint.yaml
deleted file mode 100644 (file)
index d71fb69..0000000
+++ /dev/null
@@ -1,34 +0,0 @@
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
----
-extends: default
-
-yaml-files:
-  - "*.yaml"
-  - "*.yml"
-  - ".yamllint"
-ignore: |
-  .tox
-  cover/
-  build/
-  venv
-  release/
diff --git a/installers/charm/pol/README.md b/installers/charm/pol/README.md
deleted file mode 100644 (file)
index 12e60df..0000000
+++ /dev/null
@@ -1,23 +0,0 @@
-<!-- Copyright 2020 Canonical Ltd.
-
-Licensed under the Apache License, Version 2.0 (the "License"); you may
-not use this file except in compliance with the License. You may obtain
-a copy of the License at
-
-        http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-License for the specific language governing permissions and limitations
-under the License.
-
-For those usages not covered by the Apache License, Version 2.0 please
-contact: legal@canonical.com
-
-To get in touch with the maintainers, please contact:
-osm-charmers@lists.launchpad.net -->
-
-# POL operator Charm for Kubernetes
-
-## Requirements
diff --git a/installers/charm/pol/charmcraft.yaml b/installers/charm/pol/charmcraft.yaml
deleted file mode 100644 (file)
index 0a285a9..0000000
+++ /dev/null
@@ -1,37 +0,0 @@
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-type: charm
-bases:
-  - build-on:
-      - name: ubuntu
-        channel: "20.04"
-        architectures: ["amd64"]
-    run-on:
-      - name: ubuntu
-        channel: "20.04"
-        architectures:
-          - amd64
-          - aarch64
-          - arm64
-parts:
-  charm:
-    build-packages: [git]
diff --git a/installers/charm/pol/config.yaml b/installers/charm/pol/config.yaml
deleted file mode 100644 (file)
index a2eef47..0000000
+++ /dev/null
@@ -1,69 +0,0 @@
-# Copyright 2020 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-options:
-  log_level:
-    description: "Log Level"
-    type: string
-    default: "INFO"
-  mongodb_uri:
-    type: string
-    description: MongoDB URI (external database)
-  mysql_uri:
-    type: string
-    description: |
-      Mysql URI with the following format:
-        mysql://<user>:<password>@<mysql_host>:<mysql_port>/<database>
-  image_pull_policy:
-    type: string
-    description: |
-      ImagePullPolicy configuration for the pod.
-      Possible values: always, ifnotpresent, never
-    default: always
-  debug_mode:
-    description: |
-      If true, debug mode is activated. It means that the service will not run,
-      and instead, the command for the container will be a `sleep infinity`.
-      Note: If enabled, security_context will be disabled.
-    type: boolean
-    default: false
-  debug_pubkey:
-    description: |
-      Public SSH key that will be injected to the application pod.
-    type: string
-  debug_pol_local_path:
-    description: |
-      Local full path to the POL project.
-
-      The path will be mounted to the docker image,
-      which means changes during the debugging will be saved in your local path.
-    type: string
-  debug_common_local_path:
-    description: |
-      Local full path to the COMMON project.
-
-      The path will be mounted to the docker image,
-      which means changes during the debugging will be saved in your local path.
-    type: string
-  security_context:
-    description: Enables the security context of the pods
-    type: boolean
-    default: false
diff --git a/installers/charm/pol/lib/charms/kafka_k8s/v0/kafka.py b/installers/charm/pol/lib/charms/kafka_k8s/v0/kafka.py
deleted file mode 100644 (file)
index 1baf9a8..0000000
+++ /dev/null
@@ -1,207 +0,0 @@
-# Copyright 2022 Canonical Ltd.
-# See LICENSE file for licensing details.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-# implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""Kafka library.
-
-This [library](https://juju.is/docs/sdk/libraries) implements both sides of the
-`kafka` [interface](https://juju.is/docs/sdk/relations).
-
-The *provider* side of this interface is implemented by the
-[kafka-k8s Charmed Operator](https://charmhub.io/kafka-k8s).
-
-Any Charmed Operator that *requires* Kafka for providing its
-service should implement the *requirer* side of this interface.
-
-In a nutshell using this library to implement a Charmed Operator *requiring*
-Kafka would look like
-
-```
-$ charmcraft fetch-lib charms.kafka_k8s.v0.kafka
-```
-
-`metadata.yaml`:
-
-```
-requires:
-  kafka:
-    interface: kafka
-    limit: 1
-```
-
-`src/charm.py`:
-
-```
-from charms.kafka_k8s.v0.kafka import KafkaEvents, KafkaRequires
-from ops.charm import CharmBase
-
-
-class MyCharm(CharmBase):
-
-    on = KafkaEvents()
-
-    def __init__(self, *args):
-        super().__init__(*args)
-        self.kafka = KafkaRequires(self)
-        self.framework.observe(
-            self.on.kafka_available,
-            self._on_kafka_available,
-        )
-        self.framework.observe(
-            self.on.kafka_broken,
-            self._on_kafka_broken,
-        )
-
-    def _on_kafka_available(self, event):
-        # Get Kafka host and port
-        host: str = self.kafka.host
-        port: int = self.kafka.port
-        # host => "kafka-k8s"
-        # port => 9092
-
-    def _on_kafka_broken(self, event):
-        # Stop service
-        # ...
-        self.unit.status = BlockedStatus("need kafka relation")
-```
-
-You can file bugs
-[here](https://github.com/charmed-osm/kafka-k8s-operator/issues)!
-"""
-
-from typing import Optional
-
-from ops.charm import CharmBase, CharmEvents
-from ops.framework import EventBase, EventSource, Object
-
-# The unique Charmhub library identifier, never change it
-from ops.model import Relation
-
-LIBID = "eacc8c85082347c9aae740e0220b8376"
-
-# Increment this major API version when introducing breaking changes
-LIBAPI = 0
-
-# Increment this PATCH version before using `charmcraft publish-lib` or reset
-# to 0 if you are raising the major API version
-LIBPATCH = 3
-
-
-KAFKA_HOST_APP_KEY = "host"
-KAFKA_PORT_APP_KEY = "port"
-
-
-class _KafkaAvailableEvent(EventBase):
-    """Event emitted when Kafka is available."""
-
-
-class _KafkaBrokenEvent(EventBase):
-    """Event emitted when Kafka relation is broken."""
-
-
-class KafkaEvents(CharmEvents):
-    """Kafka events.
-
-    This class defines the events that Kafka can emit.
-
-    Events:
-        kafka_available (_KafkaAvailableEvent)
-    """
-
-    kafka_available = EventSource(_KafkaAvailableEvent)
-    kafka_broken = EventSource(_KafkaBrokenEvent)
-
-
-class KafkaRequires(Object):
-    """Requires-side of the Kafka relation."""
-
-    def __init__(self, charm: CharmBase, endpoint_name: str = "kafka") -> None:
-        super().__init__(charm, endpoint_name)
-        self.charm = charm
-        self._endpoint_name = endpoint_name
-
-        # Observe relation events
-        event_observe_mapping = {
-            charm.on[self._endpoint_name].relation_changed: self._on_relation_changed,
-            charm.on[self._endpoint_name].relation_broken: self._on_relation_broken,
-        }
-        for event, observer in event_observe_mapping.items():
-            self.framework.observe(event, observer)
-
-    def _on_relation_changed(self, event) -> None:
-        if event.relation.app and all(
-            key in event.relation.data[event.relation.app]
-            for key in (KAFKA_HOST_APP_KEY, KAFKA_PORT_APP_KEY)
-        ):
-            self.charm.on.kafka_available.emit()
-
-    def _on_relation_broken(self, _) -> None:
-        self.charm.on.kafka_broken.emit()
-
-    @property
-    def host(self) -> str:
-        relation: Relation = self.model.get_relation(self._endpoint_name)
-        return (
-            relation.data[relation.app].get(KAFKA_HOST_APP_KEY)
-            if relation and relation.app
-            else None
-        )
-
-    @property
-    def port(self) -> int:
-        relation: Relation = self.model.get_relation(self._endpoint_name)
-        return (
-            int(relation.data[relation.app].get(KAFKA_PORT_APP_KEY))
-            if relation and relation.app
-            else None
-        )
-
-
-class KafkaProvides(Object):
-    """Provides-side of the Kafka relation."""
-
-    def __init__(self, charm: CharmBase, endpoint_name: str = "kafka") -> None:
-        super().__init__(charm, endpoint_name)
-        self._endpoint_name = endpoint_name
-
-    def set_host_info(self, host: str, port: int, relation: Optional[Relation] = None) -> None:
-        """Set Kafka host and port.
-
-        This function writes in the application data of the relation, therefore,
-        only the unit leader can call it.
-
-        Args:
-            host (str): Kafka hostname or IP address.
-            port (int): Kafka port.
-            relation (Optional[Relation]): Relation to update.
-                                           If not specified, all relations will be updated.
-
-        Raises:
-            Exception: if a non-leader unit calls this function.
-        """
-        if not self.model.unit.is_leader():
-            raise Exception("only the leader can set host information.")
-
-        if relation:
-            self._update_relation_data(host, port, relation)
-            return
-
-        for relation in self.model.relations[self._endpoint_name]:
-            self._update_relation_data(host, port, relation)
-
-    def _update_relation_data(self, host: str, port: int, relation: Relation) -> None:
-        """Update data in relation if needed."""
-        relation.data[self.model.app][KAFKA_HOST_APP_KEY] = host
-        relation.data[self.model.app][KAFKA_PORT_APP_KEY] = str(port)
diff --git a/installers/charm/pol/metadata.yaml b/installers/charm/pol/metadata.yaml
deleted file mode 100644 (file)
index f9f6923..0000000
+++ /dev/null
@@ -1,48 +0,0 @@
-# Copyright 2020 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-name: osm-pol
-summary: OSM Policy Module (POL)
-description: |
-  A CAAS charm to deploy OSM's Policy Module (POL).
-series:
-  - kubernetes
-tags:
-  - kubernetes
-  - osm
-  - pol
-min-juju-version: 2.8.0
-deployment:
-  type: stateless
-  service: cluster
-resources:
-  image:
-    type: oci-image
-    description: OSM docker image for POL
-    upstream-source: "opensourcemano/pol:latest"
-requires:
-  kafka:
-    interface: kafka
-  mongodb:
-    interface: mongodb
-  mysql:
-    interface: mysql
-    limit: 1
diff --git a/installers/charm/pol/requirements-test.txt b/installers/charm/pol/requirements-test.txt
deleted file mode 100644 (file)
index cf61dd4..0000000
+++ /dev/null
@@ -1,20 +0,0 @@
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-mock==4.0.3
diff --git a/installers/charm/pol/requirements.txt b/installers/charm/pol/requirements.txt
deleted file mode 100644 (file)
index 1a8928c..0000000
+++ /dev/null
@@ -1,22 +0,0 @@
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-git+https://github.com/charmed-osm/ops-lib-charmed-osm/@master
\ No newline at end of file
diff --git a/installers/charm/pol/src/charm.py b/installers/charm/pol/src/charm.py
deleted file mode 100755 (executable)
index 94f6ecb..0000000
+++ /dev/null
@@ -1,236 +0,0 @@
-#!/usr/bin/env python3
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-# pylint: disable=E0213
-
-
-import logging
-import re
-from typing import NoReturn, Optional
-
-from charms.kafka_k8s.v0.kafka import KafkaEvents, KafkaRequires
-from ops.main import main
-from opslib.osm.charm import CharmedOsmBase, RelationsMissing
-from opslib.osm.interfaces.mongo import MongoClient
-from opslib.osm.interfaces.mysql import MysqlClient
-from opslib.osm.pod import (
-    ContainerV3Builder,
-    PodRestartPolicy,
-    PodSpecV3Builder,
-)
-from opslib.osm.validator import ModelValidator, validator
-
-
-logger = logging.getLogger(__name__)
-
-PORT = 9999
-DEFAULT_MYSQL_DATABASE = "pol"
-
-
-class ConfigModel(ModelValidator):
-    log_level: str
-    mongodb_uri: Optional[str]
-    mysql_uri: Optional[str]
-    image_pull_policy: str
-    debug_mode: bool
-    security_context: bool
-
-    @validator("log_level")
-    def validate_log_level(cls, v):
-        if v not in {"INFO", "DEBUG"}:
-            raise ValueError("value must be INFO or DEBUG")
-        return v
-
-    @validator("mongodb_uri")
-    def validate_mongodb_uri(cls, v):
-        if v and not v.startswith("mongodb://"):
-            raise ValueError("mongodb_uri is not properly formed")
-        return v
-
-    @validator("mysql_uri")
-    def validate_mysql_uri(cls, v):
-        pattern = re.compile(r"^mysql://.*:.*@.*:\d+/.*$")
-        if v and not pattern.search(v):
-            raise ValueError("mysql_uri is not properly formed")
-        return v
-
-    @validator("image_pull_policy")
-    def validate_image_pull_policy(cls, v):
-        values = {
-            "always": "Always",
-            "ifnotpresent": "IfNotPresent",
-            "never": "Never",
-        }
-        v = v.lower()
-        if v not in values:
-            raise ValueError("value must be always, ifnotpresent or never")
-        return values[v]
-
-
-class PolCharm(CharmedOsmBase):
-    on = KafkaEvents()
-
-    def __init__(self, *args) -> NoReturn:
-        super().__init__(
-            *args,
-            oci_image="image",
-            vscode_workspace=VSCODE_WORKSPACE,
-        )
-        if self.config.get("debug_mode"):
-            self.enable_debug_mode(
-                pubkey=self.config.get("debug_pubkey"),
-                hostpaths={
-                    "POL": {
-                        "hostpath": self.config.get("debug_pol_local_path"),
-                        "container-path": "/usr/lib/python3/dist-packages/osm_policy_module",
-                    },
-                    "osm_common": {
-                        "hostpath": self.config.get("debug_common_local_path"),
-                        "container-path": "/usr/lib/python3/dist-packages/osm_common",
-                    },
-                },
-            )
-        self.kafka = KafkaRequires(self)
-        self.framework.observe(self.on.kafka_available, self.configure_pod)
-        self.framework.observe(self.on.kafka_broken, self.configure_pod)
-
-        self.mongodb_client = MongoClient(self, "mongodb")
-        self.framework.observe(self.on["mongodb"].relation_changed, self.configure_pod)
-        self.framework.observe(self.on["mongodb"].relation_broken, self.configure_pod)
-
-        self.mysql_client = MysqlClient(self, "mysql")
-        self.framework.observe(self.on["mysql"].relation_changed, self.configure_pod)
-        self.framework.observe(self.on["mysql"].relation_broken, self.configure_pod)
-
-    def _check_missing_dependencies(self, config: ConfigModel):
-        missing_relations = []
-
-        if not self.kafka.host or not self.kafka.port:
-            missing_relations.append("kafka")
-        if not config.mongodb_uri and self.mongodb_client.is_missing_data_in_unit():
-            missing_relations.append("mongodb")
-        if not config.mysql_uri and self.mysql_client.is_missing_data_in_unit():
-            missing_relations.append("mysql")
-        if missing_relations:
-            raise RelationsMissing(missing_relations)
-
-    def build_pod_spec(self, image_info):
-        # Validate config
-        config = ConfigModel(**dict(self.config))
-
-        if config.mongodb_uri and not self.mongodb_client.is_missing_data_in_unit():
-            raise Exception("Mongodb data cannot be provided via config and relation")
-        if config.mysql_uri and not self.mysql_client.is_missing_data_in_unit():
-            raise Exception("Mysql data cannot be provided via config and relation")
-
-        # Check relations
-        self._check_missing_dependencies(config)
-
-        security_context_enabled = (
-            config.security_context if not config.debug_mode else False
-        )
-
-        # Create Builder for the PodSpec
-        pod_spec_builder = PodSpecV3Builder(
-            enable_security_context=security_context_enabled
-        )
-
-        # Add secrets to the pod
-        mongodb_secret_name = f"{self.app.name}-mongodb-secret"
-        pod_spec_builder.add_secret(
-            mongodb_secret_name,
-            {"uri": config.mongodb_uri or self.mongodb_client.connection_string},
-        )
-        mysql_secret_name = f"{self.app.name}-mysql-secret"
-        pod_spec_builder.add_secret(
-            mysql_secret_name,
-            {
-                "uri": config.mysql_uri
-                or self.mysql_client.get_root_uri(DEFAULT_MYSQL_DATABASE)
-            },
-        )
-
-        # Build Container
-        container_builder = ContainerV3Builder(
-            self.app.name,
-            image_info,
-            config.image_pull_policy,
-            run_as_non_root=security_context_enabled,
-        )
-        container_builder.add_port(name=self.app.name, port=PORT)
-        container_builder.add_envs(
-            {
-                # General configuration
-                "ALLOW_ANONYMOUS_LOGIN": "yes",
-                "OSMPOL_GLOBAL_LOGLEVEL": config.log_level,
-                # Kafka configuration
-                "OSMPOL_MESSAGE_DRIVER": "kafka",
-                "OSMPOL_MESSAGE_HOST": self.kafka.host,
-                "OSMPOL_MESSAGE_PORT": self.kafka.port,
-                # Database configuration
-                "OSMPOL_DATABASE_DRIVER": "mongo",
-            }
-        )
-        container_builder.add_secret_envs(
-            mongodb_secret_name, {"OSMPOL_DATABASE_URI": "uri"}
-        )
-        container_builder.add_secret_envs(
-            mysql_secret_name, {"OSMPOL_SQL_DATABASE_URI": "uri"}
-        )
-        container = container_builder.build()
-
-        # Add Pod restart policy
-        restart_policy = PodRestartPolicy()
-        restart_policy.add_secrets(
-            secret_names=(mongodb_secret_name, mysql_secret_name)
-        )
-        pod_spec_builder.set_restart_policy(restart_policy)
-
-        # Add container to pod spec
-        pod_spec_builder.add_container(container)
-
-        return pod_spec_builder.build()
-
-
-VSCODE_WORKSPACE = {
-    "folders": [
-        {"path": "/usr/lib/python3/dist-packages/osm_policy_module"},
-        {"path": "/usr/lib/python3/dist-packages/osm_common"},
-    ],
-    "settings": {},
-    "launch": {
-        "version": "0.2.0",
-        "configurations": [
-            {
-                "name": "POL",
-                "type": "python",
-                "request": "launch",
-                "module": "osm_policy_module.cmd.policy_module_agent",
-                "justMyCode": False,
-            }
-        ],
-    },
-}
-
-
-if __name__ == "__main__":
-    main(PolCharm)
diff --git a/installers/charm/pol/src/pod_spec.py b/installers/charm/pol/src/pod_spec.py
deleted file mode 100644 (file)
index 5ad4217..0000000
+++ /dev/null
@@ -1,198 +0,0 @@
-#!/usr/bin/env python3
-# Copyright 2020 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-import logging
-from typing import Any, Dict, List, NoReturn
-
-logger = logging.getLogger(__name__)
-
-
-def _validate_data(
-    config_data: Dict[str, Any], relation_data: Dict[str, Any]
-) -> NoReturn:
-    """Validate input data.
-
-    Args:
-        config_data (Dict[str, Any]): configuration data.
-        relation_data (Dict[str, Any]): relation data.
-    """
-    config_validators = {
-        "log_level": lambda value, _: (
-            isinstance(value, str) and value in ("INFO", "DEBUG")
-        ),
-    }
-    relation_validators = {
-        "message_host": lambda value, _: isinstance(value, str) and len(value) > 0,
-        "message_port": lambda value, _: isinstance(value, int) and value > 0,
-        "database_uri": lambda value, _: (
-            isinstance(value, str) and value.startswith("mongodb://")
-        ),
-    }
-    problems = []
-
-    for key, validator in config_validators.items():
-        valid = validator(config_data.get(key), config_data)
-
-        if not valid:
-            problems.append(key)
-
-    for key, validator in relation_validators.items():
-        valid = validator(relation_data.get(key), relation_data)
-
-        if not valid:
-            problems.append(key)
-
-    if len(problems) > 0:
-        raise ValueError("Errors found in: {}".format(", ".join(problems)))
-
-
-def _make_pod_ports(port: int) -> List[Dict[str, Any]]:
-    """Generate pod ports details.
-
-    Args:
-        port (int): port to expose.
-
-    Returns:
-        List[Dict[str, Any]]: pod port details.
-    """
-    return [{"name": "pol", "containerPort": port, "protocol": "TCP"}]
-
-
-def _make_pod_envconfig(
-    config: Dict[str, Any], relation_state: Dict[str, Any]
-) -> Dict[str, Any]:
-    """Generate pod environment configuration.
-
-    Args:
-        config (Dict[str, Any]): configuration information.
-        relation_state (Dict[str, Any]): relation state information.
-
-    Returns:
-        Dict[str, Any]: pod environment configuration.
-    """
-    envconfig = {
-        # General configuration
-        "ALLOW_ANONYMOUS_LOGIN": "yes",
-        "OSMPOL_GLOBAL_LOGLEVEL": config["log_level"],
-        # Kafka configuration
-        "OSMPOL_MESSAGE_HOST": relation_state["message_host"],
-        "OSMPOL_MESSAGE_DRIVER": "kafka",
-        "OSMPOL_MESSAGE_PORT": relation_state["message_port"],
-        # Database configuration
-        "OSMPOL_DATABASE_DRIVER": "mongo",
-        "OSMPOL_DATABASE_URI": relation_state["database_uri"],
-    }
-
-    return envconfig
-
-
-def _make_startup_probe() -> Dict[str, Any]:
-    """Generate startup probe.
-
-    Returns:
-        Dict[str, Any]: startup probe.
-    """
-    return {
-        "exec": {"command": ["/usr/bin/pgrep", "python3"]},
-        "initialDelaySeconds": 60,
-        "timeoutSeconds": 5,
-    }
-
-
-def _make_readiness_probe() -> Dict[str, Any]:
-    """Generate readiness probe.
-
-    Returns:
-        Dict[str, Any]: readiness probe.
-    """
-    return {
-        "exec": {
-            "command": ["sh", "-c", "osm-pol-healthcheck || exit 1"],
-        },
-        "periodSeconds": 10,
-        "timeoutSeconds": 5,
-        "successThreshold": 1,
-        "failureThreshold": 3,
-    }
-
-
-def _make_liveness_probe() -> Dict[str, Any]:
-    """Generate liveness probe.
-
-    Returns:
-        Dict[str, Any]: liveness probe.
-    """
-    return {
-        "exec": {
-            "command": ["sh", "-c", "osm-pol-healthcheck || exit 1"],
-        },
-        "initialDelaySeconds": 45,
-        "periodSeconds": 10,
-        "timeoutSeconds": 5,
-        "successThreshold": 1,
-        "failureThreshold": 3,
-    }
-
-
-def make_pod_spec(
-    image_info: Dict[str, str],
-    config: Dict[str, Any],
-    relation_state: Dict[str, Any],
-    app_name: str = "pol",
-    port: int = 80,
-) -> Dict[str, Any]:
-    """Generate the pod spec information.
-
-    Args:
-        image_info (Dict[str, str]): Object provided by
-                                     OCIImageResource("image").fetch().
-        config (Dict[str, Any]): Configuration information.
-        relation_state (Dict[str, Any]): Relation state information.
-        app_name (str, optional): Application name. Defaults to "pol".
-        port (int, optional): Port for the container. Defaults to 80.
-
-    Returns:
-        Dict[str, Any]: Pod spec dictionary for the charm.
-    """
-    if not image_info:
-        return None
-
-    _validate_data(config, relation_state)
-
-    ports = _make_pod_ports(port)
-    env_config = _make_pod_envconfig(config, relation_state)
-
-    return {
-        "version": 3,
-        "containers": [
-            {
-                "name": app_name,
-                "imageDetails": image_info,
-                "imagePullPolicy": "Always",
-                "ports": ports,
-                "envConfig": env_config,
-            }
-        ],
-        "kubernetesResources": {
-            "ingressResources": [],
-        },
-    }
diff --git a/installers/charm/pol/tests/__init__.py b/installers/charm/pol/tests/__init__.py
deleted file mode 100644 (file)
index 446d5ce..0000000
+++ /dev/null
@@ -1,40 +0,0 @@
-#!/usr/bin/env python3
-# Copyright 2020 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-"""Init mocking for unit tests."""
-
-import sys
-
-
-import mock
-
-
-class OCIImageResourceErrorMock(Exception):
-    pass
-
-
-sys.path.append("src")
-
-oci_image = mock.MagicMock()
-oci_image.OCIImageResourceError = OCIImageResourceErrorMock
-sys.modules["oci_image"] = oci_image
-sys.modules["oci_image"].OCIImageResource().fetch.return_value = {}
diff --git a/installers/charm/pol/tests/test_charm.py b/installers/charm/pol/tests/test_charm.py
deleted file mode 100644 (file)
index 6cf435d..0000000
+++ /dev/null
@@ -1,326 +0,0 @@
-#!/usr/bin/env python3
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-import sys
-from typing import NoReturn
-import unittest
-
-from charm import PolCharm
-from ops.model import ActiveStatus, BlockedStatus
-from ops.testing import Harness
-
-
-class TestCharm(unittest.TestCase):
-    """Pol Charm unit tests."""
-
-    def setUp(self) -> NoReturn:
-        """Test setup"""
-        self.image_info = sys.modules["oci_image"].OCIImageResource().fetch()
-        self.harness = Harness(PolCharm)
-        self.harness.set_leader(is_leader=True)
-        self.harness.begin()
-        self.config = {
-            "log_level": "INFO",
-            "mongodb_uri": "",
-        }
-        self.harness.update_config(self.config)
-
-    def test_config_changed_no_relations(
-        self,
-    ) -> NoReturn:
-        """Test ingress resources without HTTP."""
-
-        self.harness.charm.on.config_changed.emit()
-
-        # Assertions
-        self.assertIsInstance(self.harness.charm.unit.status, BlockedStatus)
-        self.assertTrue(
-            all(
-                relation in self.harness.charm.unit.status.message
-                for relation in ["mongodb", "kafka"]
-            )
-        )
-
-    def test_config_changed_non_leader(
-        self,
-    ) -> NoReturn:
-        """Test ingress resources without HTTP."""
-        self.harness.set_leader(is_leader=False)
-        self.harness.charm.on.config_changed.emit()
-
-        # Assertions
-        self.assertIsInstance(self.harness.charm.unit.status, ActiveStatus)
-
-    def test_with_relations_and_mongodb_config(
-        self,
-    ) -> NoReturn:
-        "Test with relations and mongodb config (internal)"
-        self.initialize_mysql_relation()
-        self.initialize_kafka_relation()
-        self.initialize_mongo_config()
-        # Verifying status
-        self.assertNotIsInstance(self.harness.charm.unit.status, BlockedStatus)
-
-    def test_with_relations(
-        self,
-    ) -> NoReturn:
-        "Test with relations (internal)"
-        self.initialize_kafka_relation()
-        self.initialize_mongo_relation()
-        self.initialize_mysql_relation()
-        # Verifying status
-        self.assertNotIsInstance(self.harness.charm.unit.status, BlockedStatus)
-
-    def test_exception_mongodb_relation_and_config(
-        self,
-    ) -> NoReturn:
-        "Test with relation and config for Mongodb. Must fail"
-        self.initialize_mongo_relation()
-        self.initialize_mongo_config()
-        # Verifying status
-        self.assertIsInstance(self.harness.charm.unit.status, BlockedStatus)
-
-    def test_mysql_config_success(self):
-        self.initialize_kafka_relation()
-        self.initialize_mongo_relation()
-        self.initialize_mysql_config()
-        # Verifying status
-        self.assertNotIsInstance(self.harness.charm.unit.status, BlockedStatus)
-
-    def test_mysql_config_wrong_value(self):
-        self.initialize_kafka_relation()
-        self.initialize_mongo_relation()
-        self.initialize_mysql_config(uri="wrong_uri")
-        # Verifying status
-        self.assertIsInstance(self.harness.charm.unit.status, BlockedStatus)
-        self.assertIn(
-            "mysql_uri is not properly formed",
-            self.harness.charm.unit.status.message,
-        )
-
-    def test_mysql_config_and_relation(self):
-        self.initialize_mysql_relation()
-        self.initialize_mysql_config()
-        # Verifying status
-        self.assertIsInstance(self.harness.charm.unit.status, BlockedStatus)
-        self.assertIn(
-            "Mysql data cannot be provided via config and relation",
-            self.harness.charm.unit.status.message,
-        )
-
-    def initialize_kafka_relation(self):
-        kafka_relation_id = self.harness.add_relation("kafka", "kafka")
-        self.harness.add_relation_unit(kafka_relation_id, "kafka/0")
-        self.harness.update_relation_data(
-            kafka_relation_id, "kafka", {"host": "kafka", "port": 9092}
-        )
-
-    def initialize_mongo_config(self):
-        self.harness.update_config({"mongodb_uri": "mongodb://mongo:27017"})
-
-    def initialize_mongo_relation(self):
-        mongodb_relation_id = self.harness.add_relation("mongodb", "mongodb")
-        self.harness.add_relation_unit(mongodb_relation_id, "mongodb/0")
-        self.harness.update_relation_data(
-            mongodb_relation_id,
-            "mongodb/0",
-            {"connection_string": "mongodb://mongo:27017"},
-        )
-
-    def initialize_mysql_config(self, uri=None):
-        self.harness.update_config(
-            {"mysql_uri": uri or "mysql://user:pass@mysql-host:3306/database"}
-        )
-
-    def initialize_mysql_relation(self):
-        mysql_relation_id = self.harness.add_relation("mysql", "mysql")
-        self.harness.add_relation_unit(mysql_relation_id, "mysql/0")
-        self.harness.update_relation_data(
-            mysql_relation_id,
-            "mysql/0",
-            {
-                "user": "user",
-                "password": "pass",
-                "host": "host",
-                "port": "1234",
-                "database": "pol",
-                "root_password": "root_password",
-            },
-        )
-
-
-if __name__ == "__main__":
-    unittest.main()
-
-
-# class TestCharm(unittest.TestCase):
-#     """POL Charm unit tests."""
-
-#     def setUp(self) -> NoReturn:
-#         """Test setup"""
-#         self.harness = Harness(PolCharm)
-#         self.harness.set_leader(is_leader=True)
-#         self.harness.begin()
-
-#     def test_on_start_without_relations(self) -> NoReturn:
-#         """Test installation without any relation."""
-#         self.harness.charm.on.start.emit()
-
-#         # Verifying status
-#         self.assertIsInstance(self.harness.charm.unit.status, BlockedStatus)
-
-#         # Verifying status message
-#         self.assertGreater(len(self.harness.charm.unit.status.message), 0)
-#         self.assertTrue(
-#             self.harness.charm.unit.status.message.startswith("Waiting for ")
-#         )
-#         self.assertIn("kafka", self.harness.charm.unit.status.message)
-#         self.assertIn("mongodb", self.harness.charm.unit.status.message)
-#         self.assertTrue(self.harness.charm.unit.status.message.endswith(" relations"))
-
-#     def test_on_start_with_relations(self) -> NoReturn:
-#         """Test deployment without keystone."""
-#         expected_result = {
-#             "version": 3,
-#             "containers": [
-#                 {
-#                     "name": "pol",
-#                     "imageDetails": self.harness.charm.image.fetch(),
-#                     "imagePullPolicy": "Always",
-#                     "ports": [
-#                         {
-#                             "name": "pol",
-#                             "containerPort": 80,
-#                             "protocol": "TCP",
-#                         }
-#                     ],
-#                     "envConfig": {
-#                         "ALLOW_ANONYMOUS_LOGIN": "yes",
-#                         "OSMPOL_GLOBAL_LOGLEVEL": "INFO",
-#                         "OSMPOL_MESSAGE_HOST": "kafka",
-#                         "OSMPOL_MESSAGE_DRIVER": "kafka",
-#                         "OSMPOL_MESSAGE_PORT": 9092,
-#                         "OSMPOL_DATABASE_DRIVER": "mongo",
-#                         "OSMPOL_DATABASE_URI": "mongodb://mongo:27017",
-#                     },
-#                 }
-#             ],
-#             "kubernetesResources": {"ingressResources": []},
-#         }
-
-#         self.harness.charm.on.start.emit()
-
-#         # Check if kafka datastore is initialized
-#         self.assertIsNone(self.harness.charm.state.message_host)
-#         self.assertIsNone(self.harness.charm.state.message_port)
-
-#         # Check if mongodb datastore is initialized
-#         self.assertIsNone(self.harness.charm.state.database_uri)
-
-#         # Initializing the kafka relation
-#         kafka_relation_id = self.harness.add_relation("kafka", "kafka")
-#         self.harness.add_relation_unit(kafka_relation_id, "kafka/0")
-#         self.harness.update_relation_data(
-#             kafka_relation_id, "kafka/0", {"host": "kafka", "port": 9092}
-#         )
-
-#         # Initializing the mongo relation
-#         mongodb_relation_id = self.harness.add_relation("mongodb", "mongodb")
-#         self.harness.add_relation_unit(mongodb_relation_id, "mongodb/0")
-#         self.harness.update_relation_data(
-#             mongodb_relation_id,
-#             "mongodb/0",
-#             {"connection_string": "mongodb://mongo:27017"},
-#         )
-
-#         # Checking if kafka data is stored
-#         self.assertEqual(self.harness.charm.state.message_host, "kafka")
-#         self.assertEqual(self.harness.charm.state.message_port, 9092)
-
-#         # Checking if mongodb data is stored
-#         self.assertEqual(self.harness.charm.state.database_uri, "mongodb://mongo:27017")
-
-#         # Verifying status
-#         self.assertNotIsInstance(self.harness.charm.unit.status, BlockedStatus)
-
-#         pod_spec, _ = self.harness.get_pod_spec()
-
-#         self.assertDictEqual(expected_result, pod_spec)
-
-#     def test_on_kafka_unit_relation_changed(self) -> NoReturn:
-#         """Test to see if kafka relation is updated."""
-#         self.harness.charm.on.start.emit()
-
-#         self.assertIsNone(self.harness.charm.state.message_host)
-#         self.assertIsNone(self.harness.charm.state.message_port)
-
-#         relation_id = self.harness.add_relation("kafka", "kafka")
-#         self.harness.add_relation_unit(relation_id, "kafka/0")
-#         self.harness.update_relation_data(
-#             relation_id, "kafka/0", {"host": "kafka", "port": 9092}
-#         )
-
-#         self.assertEqual(self.harness.charm.state.message_host, "kafka")
-#         self.assertEqual(self.harness.charm.state.message_port, 9092)
-
-#         # Verifying status
-#         self.assertIsInstance(self.harness.charm.unit.status, BlockedStatus)
-
-#         # Verifying status message
-#         self.assertGreater(len(self.harness.charm.unit.status.message), 0)
-#         self.assertTrue(
-#             self.harness.charm.unit.status.message.startswith("Waiting for ")
-#         )
-#         self.assertNotIn("kafka", self.harness.charm.unit.status.message)
-#         self.assertIn("mongodb", self.harness.charm.unit.status.message)
-#         self.assertTrue(self.harness.charm.unit.status.message.endswith(" relation"))
-
-#     def test_on_mongodb_unit_relation_changed(self) -> NoReturn:
-#         """Test to see if mongodb relation is updated."""
-#         self.harness.charm.on.start.emit()
-
-#         self.assertIsNone(self.harness.charm.state.database_uri)
-
-#         relation_id = self.harness.add_relation("mongodb", "mongodb")
-#         self.harness.add_relation_unit(relation_id, "mongodb/0")
-#         self.harness.update_relation_data(
-#             relation_id, "mongodb/0", {"connection_string": "mongodb://mongo:27017"}
-#         )
-
-#         self.assertEqual(self.harness.charm.state.database_uri, "mongodb://mongo:27017")
-
-#         # Verifying status
-#         self.assertIsInstance(self.harness.charm.unit.status, BlockedStatus)
-
-#         # Verifying status message
-#         self.assertGreater(len(self.harness.charm.unit.status.message), 0)
-#         self.assertTrue(
-#             self.harness.charm.unit.status.message.startswith("Waiting for ")
-#         )
-#         self.assertIn("kafka", self.harness.charm.unit.status.message)
-#         self.assertNotIn("mongodb", self.harness.charm.unit.status.message)
-#         self.assertTrue(self.harness.charm.unit.status.message.endswith(" relation"))
-
-
-# if __name__ == "__main__":
-#     unittest.main()
diff --git a/installers/charm/pol/tests/test_pod_spec.py b/installers/charm/pol/tests/test_pod_spec.py
deleted file mode 100644 (file)
index eb5f5cf..0000000
+++ /dev/null
@@ -1,216 +0,0 @@
-#!/usr/bin/env python3
-# Copyright 2020 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-from typing import NoReturn
-import unittest
-
-import pod_spec
-
-
-class TestPodSpec(unittest.TestCase):
-    """Pod spec unit tests."""
-
-    def test_make_pod_ports(self) -> NoReturn:
-        """Testing make pod ports."""
-        port = 80
-
-        expected_result = [
-            {
-                "name": "pol",
-                "containerPort": port,
-                "protocol": "TCP",
-            }
-        ]
-
-        pod_ports = pod_spec._make_pod_ports(port)
-
-        self.assertListEqual(expected_result, pod_ports)
-
-    def test_make_pod_envconfig(self) -> NoReturn:
-        """Teting make pod envconfig."""
-        config = {
-            "log_level": "INFO",
-        }
-        relation_state = {
-            "message_host": "kafka",
-            "message_port": 9090,
-            "database_uri": "mongodb://mongo",
-        }
-
-        expected_result = {
-            "ALLOW_ANONYMOUS_LOGIN": "yes",
-            "OSMPOL_GLOBAL_LOGLEVEL": config["log_level"],
-            "OSMPOL_MESSAGE_HOST": relation_state["message_host"],
-            "OSMPOL_MESSAGE_DRIVER": "kafka",
-            "OSMPOL_MESSAGE_PORT": relation_state["message_port"],
-            "OSMPOL_DATABASE_DRIVER": "mongo",
-            "OSMPOL_DATABASE_URI": relation_state["database_uri"],
-        }
-
-        pod_envconfig = pod_spec._make_pod_envconfig(config, relation_state)
-
-        self.assertDictEqual(expected_result, pod_envconfig)
-
-    def test_make_startup_probe(self) -> NoReturn:
-        """Testing make startup probe."""
-        expected_result = {
-            "exec": {"command": ["/usr/bin/pgrep", "python3"]},
-            "initialDelaySeconds": 60,
-            "timeoutSeconds": 5,
-        }
-
-        startup_probe = pod_spec._make_startup_probe()
-
-        self.assertDictEqual(expected_result, startup_probe)
-
-    def test_make_readiness_probe(self) -> NoReturn:
-        """Testing make readiness probe."""
-        expected_result = {
-            "exec": {
-                "command": ["sh", "-c", "osm-pol-healthcheck || exit 1"],
-            },
-            "periodSeconds": 10,
-            "timeoutSeconds": 5,
-            "successThreshold": 1,
-            "failureThreshold": 3,
-        }
-
-        readiness_probe = pod_spec._make_readiness_probe()
-
-        self.assertDictEqual(expected_result, readiness_probe)
-
-    def test_make_liveness_probe(self) -> NoReturn:
-        """Testing make liveness probe."""
-        expected_result = {
-            "exec": {
-                "command": ["sh", "-c", "osm-pol-healthcheck || exit 1"],
-            },
-            "initialDelaySeconds": 45,
-            "periodSeconds": 10,
-            "timeoutSeconds": 5,
-            "successThreshold": 1,
-            "failureThreshold": 3,
-        }
-
-        liveness_probe = pod_spec._make_liveness_probe()
-
-        self.assertDictEqual(expected_result, liveness_probe)
-
-    def test_make_pod_spec(self) -> NoReturn:
-        """Testing make pod spec."""
-        image_info = {"upstream-source": "opensourcemano/pol:8"}
-        config = {
-            "log_level": "INFO",
-        }
-        relation_state = {
-            "message_host": "kafka",
-            "message_port": 9090,
-            "database_uri": "mongodb://mongo",
-        }
-        app_name = "pol"
-        port = 80
-
-        expected_result = {
-            "version": 3,
-            "containers": [
-                {
-                    "name": app_name,
-                    "imageDetails": image_info,
-                    "imagePullPolicy": "Always",
-                    "ports": [
-                        {
-                            "name": app_name,
-                            "containerPort": port,
-                            "protocol": "TCP",
-                        }
-                    ],
-                    "envConfig": {
-                        "ALLOW_ANONYMOUS_LOGIN": "yes",
-                        "OSMPOL_GLOBAL_LOGLEVEL": config["log_level"],
-                        "OSMPOL_MESSAGE_HOST": relation_state["message_host"],
-                        "OSMPOL_MESSAGE_DRIVER": "kafka",
-                        "OSMPOL_MESSAGE_PORT": relation_state["message_port"],
-                        "OSMPOL_DATABASE_DRIVER": "mongo",
-                        "OSMPOL_DATABASE_URI": relation_state["database_uri"],
-                    },
-                }
-            ],
-            "kubernetesResources": {"ingressResources": []},
-        }
-
-        spec = pod_spec.make_pod_spec(
-            image_info, config, relation_state, app_name, port
-        )
-
-        self.assertDictEqual(expected_result, spec)
-
-    def test_make_pod_spec_without_image_info(self) -> NoReturn:
-        """Testing make pod spec without image_info."""
-        image_info = None
-        config = {
-            "log_level": "INFO",
-        }
-        relation_state = {
-            "message_host": "kafka",
-            "message_port": 9090,
-            "database_uri": "mongodb://mongo",
-        }
-        app_name = "pol"
-        port = 80
-
-        spec = pod_spec.make_pod_spec(
-            image_info, config, relation_state, app_name, port
-        )
-
-        self.assertIsNone(spec)
-
-    def test_make_pod_spec_without_config(self) -> NoReturn:
-        """Testing make pod spec without config."""
-        image_info = {"upstream-source": "opensourcemano/pol:8"}
-        config = {}
-        relation_state = {
-            "message_host": "kafka",
-            "message_port": 9090,
-            "database_uri": "mongodb://mongo",
-        }
-        app_name = "pol"
-        port = 80
-
-        with self.assertRaises(ValueError):
-            pod_spec.make_pod_spec(image_info, config, relation_state, app_name, port)
-
-    def test_make_pod_spec_without_relation_state(self) -> NoReturn:
-        """Testing make pod spec without relation_state."""
-        image_info = {"upstream-source": "opensourcemano/pol:8"}
-        config = {
-            "log_level": "INFO",
-        }
-        relation_state = {}
-        app_name = "pol"
-        port = 80
-
-        with self.assertRaises(ValueError):
-            pod_spec.make_pod_spec(image_info, config, relation_state, app_name, port)
-
-
-if __name__ == "__main__":
-    unittest.main()
diff --git a/installers/charm/pol/tox.ini b/installers/charm/pol/tox.ini
deleted file mode 100644 (file)
index f3c9144..0000000
+++ /dev/null
@@ -1,128 +0,0 @@
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-#######################################################################################
-
-[tox]
-envlist = black, cover, flake8, pylint, yamllint, safety
-skipsdist = true
-
-[tox:jenkins]
-toxworkdir = /tmp/.tox
-
-[testenv]
-basepython = python3.8
-setenv =
-  VIRTUAL_ENV={envdir}
-  PYTHONPATH = {toxinidir}:{toxinidir}/lib:{toxinidir}/src
-  PYTHONDONTWRITEBYTECODE = 1
-deps =  -r{toxinidir}/requirements.txt
-
-
-#######################################################################################
-[testenv:black]
-deps = black
-commands =
-        black --check --diff src/ tests/
-
-
-#######################################################################################
-[testenv:cover]
-deps =  {[testenv]deps}
-        -r{toxinidir}/requirements-test.txt
-        coverage
-        nose2
-commands =
-        sh -c 'rm -f nosetests.xml'
-        coverage erase
-        nose2 -C --coverage src
-        coverage report --omit='*tests*'
-        coverage html -d ./cover --omit='*tests*'
-        coverage xml -o coverage.xml --omit=*tests*
-whitelist_externals = sh
-
-
-#######################################################################################
-[testenv:flake8]
-deps =  flake8
-        flake8-import-order
-commands =
-        flake8 src/ tests/
-
-
-#######################################################################################
-[testenv:pylint]
-deps =  {[testenv]deps}
-        -r{toxinidir}/requirements-test.txt
-        pylint==2.10.2
-commands =
-    pylint -E src/ tests/
-
-
-#######################################################################################
-[testenv:safety]
-setenv =
-        LC_ALL=C.UTF-8
-        LANG=C.UTF-8
-deps =  {[testenv]deps}
-        safety
-commands =
-        - safety check --full-report
-
-
-#######################################################################################
-[testenv:yamllint]
-deps =  {[testenv]deps}
-        -r{toxinidir}/requirements-test.txt
-        yamllint
-commands = yamllint .
-
-#######################################################################################
-[testenv:build]
-passenv=HTTP_PROXY HTTPS_PROXY NO_PROXY
-whitelist_externals =
-  charmcraft
-  sh
-commands =
-  charmcraft pack
-  sh -c 'ubuntu_version=20.04; \
-        architectures="amd64-aarch64-arm64"; \
-        charm_name=`cat metadata.yaml | grep -E "^name: " | cut -f 2 -d " "`; \
-        mv $charm_name"_ubuntu-"$ubuntu_version-$architectures.charm $charm_name.charm'
-
-#######################################################################################
-[flake8]
-ignore =
-        W291,
-        W293,
-        W503,
-        E123,
-        E125,
-        E226,
-        E241,
-exclude =
-        .git,
-        __pycache__,
-        .tox,
-max-line-length = 120
-show-source = True
-builtins = _
-max-complexity = 10
-import-order-style = google
diff --git a/installers/charm/release_edge.sh b/installers/charm/release_edge.sh
deleted file mode 100755 (executable)
index 67d0b31..0000000
+++ /dev/null
@@ -1,94 +0,0 @@
-#!/bin/bash
-# Copyright 2020 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-#     Unless required by applicable law or agreed to in writing, software
-#     distributed under the License is distributed on an "AS IS" BASIS,
-#     WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-#     See the License for the specific language governing permissions and
-#     limitations under the License.
-set -eux
-
-channel=edge
-tag=testing-daily
-
-# 1. Build charms
-./build.sh
-
-
-# New charms (with resources)
-charms="ng-ui nbi pla keystone ro lcm mon pol"
-for charm in $charms; do
-    echo "Releasing $charm charm"
-    cs_revision=$(charm push $charm/$charm.charm cs:~charmed-osm/$charm | tail -n +1 | head -1 | awk '{print $2}')
-    resource_revision=$(charm attach $cs_revision image=external::opensourcemano/$charm:$tag | tail -n +1 | sed 's/[^0-9]*//g')
-    image_revision_num=$(echo $resource_revision  | awk '{print $NF}')
-    resources_string="--resource image-$image_revision_num"
-    charm release --channel $channel $cs_revision $resources_string
-    echo "$charm charm released!"
-done
-
-charms="mongodb-exporter kafka-exporter mysqld-exporter"
-for charm in $charms; do
-    echo "Releasing $charm charm"
-    cs_revision=$(charm push $charm/$charm.charm cs:~charmed-osm/$charm | tail -n +1 | head -1 | awk '{print $2}')
-    resource_revision=$(charm attach $cs_revision image=external::bitnami/$charm:latest | tail -n +1 | sed 's/[^0-9]*//g')
-    image_revision_num=$(echo $resource_revision  | awk '{print $NF}')
-    resources_string="--resource image-$image_revision_num"
-    charm release --channel $channel $cs_revision $resources_string
-    echo "$charm charm released!"
-done
-
-charm="prometheus"
-echo "Releasing $charm charm"
-cs_revision=$(charm push $charm/$charm.charm cs:~charmed-osm/$charm | tail -n +1 | head -1 | awk '{print $2}')
-resource_revision=$(charm attach $cs_revision image=external::ubuntu/$charm:latest | tail -n +1 | sed 's/[^0-9]*//g')
-image_revision_num=$(echo $resource_revision  | awk '{print $NF}')
-backup_resource_revision=$(charm attach $cs_revision backup-image=external::ed1000/prometheus-backup:latest | tail -n +1 | sed 's/[^0-9]*//g')
-backup_image_revision_num=$(echo $backup_resource_revision  | awk '{print $NF}')
-resources_string="--resource image-$image_revision_num --resource backup-image-$backup_image_revision_num"
-charm release --channel $channel $cs_revision $resources_string
-echo "$charm charm released!"
-
-
-charm="grafana"
-echo "Releasing $charm charm"
-cs_revision=$(charm push $charm/$charm.charm cs:~charmed-osm/$charm | tail -n +1 | head -1 | awk '{print $2}')
-resource_revision=$(charm attach $cs_revision image=external::ubuntu/$charm:latest | tail -n +1 | sed 's/[^0-9]*//g')
-image_revision_num=$(echo $resource_revision  | awk '{print $NF}')
-resources_string="--resource image-$image_revision_num"
-charm release --channel $channel $cs_revision $resources_string
-echo "$charm charm released!"
-
-
-charm="zookeeper"
-echo "Releasing $charm charm"
-cs_revision=$(charm push $charm/$charm.charm cs:~charmed-osm/$charm | tail -n +1 | head -1 | awk '{print $2}')
-resource_revision=$(charm attach $cs_revision image=external::rocks.canonical.com:443/k8s.gcr.io/kubernetes-zookeeper:1.0-3.4.10 | tail -n +1 | sed 's/[^0-9]*//g')
-image_revision_num=$(echo $resource_revision  | awk '{print $NF}')
-resources_string="--resource image-$image_revision_num"
-charm release --channel $channel $cs_revision $resources_string
-echo "$charm charm released!"
-
-
-charm="kafka"
-echo "Releasing $charm charm"
-cs_revision=$(charm push $charm/$charm.charm cs:~charmed-osm/$charm | tail -n +1 | head -1 | awk '{print $2}')
-resource_revision=$(charm attach $cs_revision image=external::rocks.canonical.com:443/wurstmeister/kafka:2.12-2.2.1 | tail -n +1 | sed 's/[^0-9]*//g')
-image_revision_num=$(echo $resource_revision  | awk '{print $NF}')
-resources_string="--resource image-$image_revision_num"
-charm release --channel $channel $cs_revision $resources_string
-echo "$charm charm released!"
-
-
-# 3. Grant permissions
-all_charms="ng-ui nbi pla keystone ro lcm mon pol grafana prometheus mongodb-exporter kafka-exporter mysqld-exporter zookeeper kafka"
-for charm in $all_charms; do
-    echo "Granting permission for $charm charm"
-    charm grant cs:~charmed-osm/$charm --channel $channel --acl read everyone
-done
diff --git a/installers/charm/ro/.gitignore b/installers/charm/ro/.gitignore
deleted file mode 100644 (file)
index 2885df2..0000000
+++ /dev/null
@@ -1,30 +0,0 @@
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-venv
-.vscode
-build
-*.charm
-.coverage
-coverage.xml
-.stestr
-cover
-release
\ No newline at end of file
diff --git a/installers/charm/ro/.jujuignore b/installers/charm/ro/.jujuignore
deleted file mode 100644 (file)
index 3ae3e7d..0000000
+++ /dev/null
@@ -1,34 +0,0 @@
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-venv
-.vscode
-build
-*.charm
-.coverage
-coverage.xml
-.gitignore
-.stestr
-cover
-release
-tests/
-requirements*
-tox.ini
diff --git a/installers/charm/ro/.yamllint.yaml b/installers/charm/ro/.yamllint.yaml
deleted file mode 100644 (file)
index d71fb69..0000000
+++ /dev/null
@@ -1,34 +0,0 @@
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
----
-extends: default
-
-yaml-files:
-  - "*.yaml"
-  - "*.yml"
-  - ".yamllint"
-ignore: |
-  .tox
-  cover/
-  build/
-  venv
-  release/
diff --git a/installers/charm/ro/README.md b/installers/charm/ro/README.md
deleted file mode 100644 (file)
index 9cf4200..0000000
+++ /dev/null
@@ -1,23 +0,0 @@
-<!-- Copyright 2020 Canonical Ltd.
-
-Licensed under the Apache License, Version 2.0 (the "License"); you may
-not use this file except in compliance with the License. You may obtain
-a copy of the License at
-
-        http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-License for the specific language governing permissions and limitations
-under the License.
-
-For those usages not covered by the Apache License, Version 2.0 please
-contact: legal@canonical.com
-
-To get in touch with the maintainers, please contact:
-osm-charmers@lists.launchpad.net -->
-
-# RO operator Charm for Kubernetes
-
-## Requirements
diff --git a/installers/charm/ro/charmcraft.yaml b/installers/charm/ro/charmcraft.yaml
deleted file mode 100644 (file)
index 0a285a9..0000000
+++ /dev/null
@@ -1,37 +0,0 @@
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-type: charm
-bases:
-  - build-on:
-      - name: ubuntu
-        channel: "20.04"
-        architectures: ["amd64"]
-    run-on:
-      - name: ubuntu
-        channel: "20.04"
-        architectures:
-          - amd64
-          - aarch64
-          - arm64
-parts:
-  charm:
-    build-packages: [git]
diff --git a/installers/charm/ro/config.yaml b/installers/charm/ro/config.yaml
deleted file mode 100644 (file)
index 31bf8cb..0000000
+++ /dev/null
@@ -1,116 +0,0 @@
-# Copyright 2020 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-options:
-  enable_ng_ro:
-    description: Enable NG-RO
-    type: boolean
-    default: true
-  database_commonkey:
-    description: Database COMMON KEY
-    type: string
-    default: osm
-  mongodb_uri:
-    type: string
-    description: MongoDB URI (external database)
-  log_level:
-    description: "Log Level"
-    type: string
-    default: "INFO"
-  period_refresh_active:
-    type: int
-    description: |
-      Updates the VNF status from the VIM every given period of time, in seconds.
-      Values equal to or greater than 60 are allowed.
-      Disable the updates from the VIM by setting -1.
-      Example:
-        $ juju config ro period_refresh_active=-1
-        $ juju config ro period_refresh_active=100
-  mysql_host:
-    type: string
-    description: MySQL Host (external database)
-  mysql_port:
-    type: int
-    description: MySQL Port (external database)
-  mysql_user:
-    type: string
-    description: MySQL User (external database)
-  mysql_password:
-    type: string
-    description: MySQL Password (external database)
-  mysql_root_password:
-    type: string
-    description: MySQL Root Password (external database)
-  vim_database:
-    type: string
-    description: "The database name."
-    default: "mano_vim_db"
-  ro_database:
-    type: string
-    description: "The database name."
-    default: "mano_db"
-  openmano_tenant:
-    type: string
-    description: "Openmano Tenant"
-    default: "osm"
-  certificates:
-    type: string
-    description: |
-      comma-separated list of <name>:<content> certificates.
-      Where:
-        name: name of the file for the certificate
-        content: base64 content of the certificate
-      The path for the files is /certs.
-  image_pull_policy:
-    type: string
-    description: |
-      ImagePullPolicy configuration for the pod.
-      Possible values: always, ifnotpresent, never
-    default: always
-  debug_mode:
-    description: |
-      If true, debug mode is activated. It means that the service will not run,
-      and instead, the command for the container will be a `sleep infinity`.
-      Note: If enabled, security_context will be disabled.
-    type: boolean
-    default: false
-  debug_pubkey:
-    description: |
-      Public SSH key that will be injected to the application pod.
-    type: string
-  debug_ro_local_path:
-    description: |
-      Local full path to the RO project.
-
-      The path will be mounted to the docker image,
-      which means changes during the debugging will be saved in your local path.
-    type: string
-  debug_common_local_path:
-    description: |
-      Local full path to the COMMON project.
-
-      The path will be mounted to the docker image,
-      which means changes during the debugging will be saved in your local path.
-    type: string
-  security_context:
-    description: Enables the security context of the pods
-    type: boolean
-    default: false
diff --git a/installers/charm/ro/lib/charms/kafka_k8s/v0/kafka.py b/installers/charm/ro/lib/charms/kafka_k8s/v0/kafka.py
deleted file mode 100644 (file)
index 1baf9a8..0000000
+++ /dev/null
@@ -1,207 +0,0 @@
-# Copyright 2022 Canonical Ltd.
-# See LICENSE file for licensing details.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-# implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""Kafka library.
-
-This [library](https://juju.is/docs/sdk/libraries) implements both sides of the
-`kafka` [interface](https://juju.is/docs/sdk/relations).
-
-The *provider* side of this interface is implemented by the
-[kafka-k8s Charmed Operator](https://charmhub.io/kafka-k8s).
-
-Any Charmed Operator that *requires* Kafka for providing its
-service should implement the *requirer* side of this interface.
-
-In a nutshell, using this library to implement a Charmed Operator *requiring*
-Kafka would look like:
-
-```
-$ charmcraft fetch-lib charms.kafka_k8s.v0.kafka
-```
-
-`metadata.yaml`:
-
-```
-requires:
-  kafka:
-    interface: kafka
-    limit: 1
-```
-
-`src/charm.py`:
-
-```
-from charms.kafka_k8s.v0.kafka import KafkaEvents, KafkaRequires
-from ops.charm import CharmBase
-
-
-class MyCharm(CharmBase):
-
-    on = KafkaEvents()
-
-    def __init__(self, *args):
-        super().__init__(*args)
-        self.kafka = KafkaRequires(self)
-        self.framework.observe(
-            self.on.kafka_available,
-            self._on_kafka_available,
-        )
-        self.framework.observe(
-            self.on.kafka_broken,
-            self._on_kafka_broken,
-        )
-
-    def _on_kafka_available(self, event):
-        # Get Kafka host and port
-        host: str = self.kafka.host
-        port: int = self.kafka.port
-        # host => "kafka-k8s"
-        # port => 9092
-
-    def _on_kafka_broken(self, event):
-        # Stop service
-        # ...
-        self.unit.status = BlockedStatus("need kafka relation")
-```
-
-You can file bugs
-[here](https://github.com/charmed-osm/kafka-k8s-operator/issues)!
-"""
-
-from typing import Optional
-
-from ops.charm import CharmBase, CharmEvents
-from ops.framework import EventBase, EventSource, Object
-
-# The unique Charmhub library identifier, never change it
-from ops.model import Relation
-
-LIBID = "eacc8c85082347c9aae740e0220b8376"
-
-# Increment this major API version when introducing breaking changes
-LIBAPI = 0
-
-# Increment this PATCH version before using `charmcraft publish-lib` or reset
-# to 0 if you are raising the major API version
-LIBPATCH = 3
-
-
-KAFKA_HOST_APP_KEY = "host"
-KAFKA_PORT_APP_KEY = "port"
-
-
-class _KafkaAvailableEvent(EventBase):
-    """Event emitted when Kafka is available."""
-
-
-class _KafkaBrokenEvent(EventBase):
-    """Event emitted when Kafka relation is broken."""
-
-
-class KafkaEvents(CharmEvents):
-    """Kafka events.
-
-    This class defines the events that Kafka can emit.
-
-    Events:
-        kafka_available (_KafkaAvailableEvent)
-        kafka_broken (_KafkaBrokenEvent)
-    """
-
-    kafka_available = EventSource(_KafkaAvailableEvent)
-    kafka_broken = EventSource(_KafkaBrokenEvent)
-
-
-class KafkaRequires(Object):
-    """Requires-side of the Kafka relation."""
-
-    def __init__(self, charm: CharmBase, endpoint_name: str = "kafka") -> None:
-        super().__init__(charm, endpoint_name)
-        self.charm = charm
-        self._endpoint_name = endpoint_name
-
-        # Observe relation events
-        event_observe_mapping = {
-            charm.on[self._endpoint_name].relation_changed: self._on_relation_changed,
-            charm.on[self._endpoint_name].relation_broken: self._on_relation_broken,
-        }
-        for event, observer in event_observe_mapping.items():
-            self.framework.observe(event, observer)
-
-    def _on_relation_changed(self, event) -> None:
-        if event.relation.app and all(
-            key in event.relation.data[event.relation.app]
-            for key in (KAFKA_HOST_APP_KEY, KAFKA_PORT_APP_KEY)
-        ):
-            self.charm.on.kafka_available.emit()
-
-    def _on_relation_broken(self, _) -> None:
-        self.charm.on.kafka_broken.emit()
-
-    @property
-    def host(self) -> str:
-        relation: Relation = self.model.get_relation(self._endpoint_name)
-        return (
-            relation.data[relation.app].get(KAFKA_HOST_APP_KEY)
-            if relation and relation.app
-            else None
-        )
-
-    @property
-    def port(self) -> int:
-        relation: Relation = self.model.get_relation(self._endpoint_name)
-        return (
-            int(relation.data[relation.app].get(KAFKA_PORT_APP_KEY))
-            if relation and relation.app
-            else None
-        )
-
-
-class KafkaProvides(Object):
-    """Provides-side of the Kafka relation."""
-
-    def __init__(self, charm: CharmBase, endpoint_name: str = "kafka") -> None:
-        super().__init__(charm, endpoint_name)
-        self._endpoint_name = endpoint_name
-
-    def set_host_info(self, host: str, port: int, relation: Optional[Relation] = None) -> None:
-        """Set Kafka host and port.
-
-        This function writes in the application data of the relation, therefore,
-        only the unit leader can call it.
-
-        Args:
-            host (str): Kafka hostname or IP address.
-            port (int): Kafka port.
-            relation (Optional[Relation]): Relation to update.
-                                           If not specified, all relations will be updated.
-
-        Raises:
-            Exception: if a non-leader unit calls this function.
-        """
-        if not self.model.unit.is_leader():
-            raise Exception("only the leader can set host information.")
-
-        if relation:
-            self._update_relation_data(host, port, relation)
-            return
-
-        for relation in self.model.relations[self._endpoint_name]:
-            self._update_relation_data(host, port, relation)
-
-    def _update_relation_data(self, host: str, port: int, relation: Relation) -> None:
-        """Update data in relation if needed."""
-        relation.data[self.model.app][KAFKA_HOST_APP_KEY] = host
-        relation.data[self.model.app][KAFKA_PORT_APP_KEY] = str(port)
diff --git a/installers/charm/ro/metadata.yaml b/installers/charm/ro/metadata.yaml
deleted file mode 100644 (file)
index 6e82e8c..0000000
+++ /dev/null
@@ -1,53 +0,0 @@
-# Copyright 2020 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-name: osm-ro
-summary: OSM Resource Orchestrator (RO)
-description: |
-  A CAAS charm to deploy OSM's Resource Orchestrator (RO).
-series:
-  - kubernetes
-tags:
-  - kubernetes
-  - osm
-  - ro
-min-juju-version: 2.8.0
-deployment:
-  type: stateless
-  service: cluster
-resources:
-  image:
-    type: oci-image
-    description: OSM docker image for RO
-    upstream-source: "opensourcemano/ro:8"
-provides:
-  ro:
-    interface: http
-requires:
-  kafka:
-    interface: kafka
-    limit: 1
-  mongodb:
-    interface: mongodb
-    limit: 1
-  mysql:
-    interface: mysql
-    limit: 1
diff --git a/installers/charm/ro/requirements-test.txt b/installers/charm/ro/requirements-test.txt
deleted file mode 100644 (file)
index cf61dd4..0000000
+++ /dev/null
@@ -1,20 +0,0 @@
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-mock==4.0.3
diff --git a/installers/charm/ro/requirements.txt b/installers/charm/ro/requirements.txt
deleted file mode 100644 (file)
index 1a8928c..0000000
+++ /dev/null
@@ -1,22 +0,0 @@
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-git+https://github.com/charmed-osm/ops-lib-charmed-osm/@master
\ No newline at end of file
diff --git a/installers/charm/ro/src/charm.py b/installers/charm/ro/src/charm.py
deleted file mode 100755 (executable)
index 028dc0a..0000000
+++ /dev/null
@@ -1,465 +0,0 @@
-#!/usr/bin/env python3
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-# pylint: disable=E0213
-
-import base64
-import logging
-from typing import Dict, NoReturn, Optional
-
-from charms.kafka_k8s.v0.kafka import KafkaEvents, KafkaRequires
-from ops.main import main
-from opslib.osm.charm import CharmedOsmBase, RelationsMissing
-from opslib.osm.interfaces.mongo import MongoClient
-from opslib.osm.interfaces.mysql import MysqlClient
-from opslib.osm.pod import (
-    ContainerV3Builder,
-    FilesV3Builder,
-    PodRestartPolicy,
-    PodSpecV3Builder,
-)
-from opslib.osm.validator import ModelValidator, validator
-
-logger = logging.getLogger(__name__)
-
-PORT = 9090
-
-
-def _check_certificate_data(name: str, content: str):
-    if not name or not content:
-        raise ValueError("certificate name and content must be non-empty strings")
-
-
-def _extract_certificates(certs_config: str):
-    certificates = {}
-    if certs_config:
-        cert_list = certs_config.split(",")
-        for cert in cert_list:
-            name, content = cert.split(":")
-            _check_certificate_data(name, content)
-            certificates[name] = content
-    return certificates
-
-
-def decode(content: str):
-    return base64.b64decode(content.encode("utf-8")).decode("utf-8")
-
-
-class ConfigModel(ModelValidator):
-    enable_ng_ro: bool
-    database_commonkey: str
-    mongodb_uri: Optional[str]
-    log_level: str
-    mysql_host: Optional[str]
-    mysql_port: Optional[int]
-    mysql_user: Optional[str]
-    mysql_password: Optional[str]
-    mysql_root_password: Optional[str]
-    vim_database: str
-    ro_database: str
-    openmano_tenant: str
-    certificates: Optional[str]
-    image_pull_policy: str
-    debug_mode: bool
-    security_context: bool
-    period_refresh_active: Optional[int]
-
-    @validator("log_level")
-    def validate_log_level(cls, v):
-        if v not in {"INFO", "DEBUG"}:
-            raise ValueError("value must be INFO or DEBUG")
-        return v
-
-    @validator("certificates")
-    def validate_certificates(cls, v):
-        # Raises an exception if it cannot extract the certificates
-        _extract_certificates(v)
-        return v
-
-    @validator("mongodb_uri")
-    def validate_mongodb_uri(cls, v):
-        if v and not v.startswith("mongodb://"):
-            raise ValueError("mongodb_uri is not properly formed")
-        return v
-
-    @validator("mysql_port")
-    def validate_mysql_port(cls, v):
-        if v and (v <= 0 or v > 65535):
-            raise ValueError("Mysql port out of range")
-        return v
-
-    @validator("image_pull_policy")
-    def validate_image_pull_policy(cls, v):
-        values = {
-            "always": "Always",
-            "ifnotpresent": "IfNotPresent",
-            "never": "Never",
-        }
-        v = v.lower()
-        if v not in values.keys():
-            raise ValueError("value must be always, ifnotpresent or never")
-        return values[v]
-
-    @property
-    def certificates_dict(cls):
-        return _extract_certificates(cls.certificates) if cls.certificates else {}
-
-    @validator("period_refresh_active")
-    def validate_vim_refresh_period(cls, v):
-        if v and v < 60 and v != -1:
-            raise ValueError(
-                "Refresh Period is too tight, insert >= 60 seconds or disable using -1"
-            )
-        return v
-
-
-class RoCharm(CharmedOsmBase):
-    """RO Charm."""
-
-    on = KafkaEvents()
-
-    def __init__(self, *args) -> NoReturn:
-        """RO Charm constructor."""
-        super().__init__(
-            *args,
-            oci_image="image",
-            vscode_workspace=VSCODE_WORKSPACE,
-        )
-        if self.config.get("debug_mode"):
-            self.enable_debug_mode(
-                pubkey=self.config.get("debug_pubkey"),
-                hostpaths={
-                    "osm_common": {
-                        "hostpath": self.config.get("debug_common_local_path"),
-                        "container-path": "/usr/lib/python3/dist-packages/osm_common",
-                    },
-                    **_get_ro_host_paths(self.config.get("debug_ro_local_path")),
-                },
-            )
-        self.kafka = KafkaRequires(self)
-        self.framework.observe(self.on.kafka_available, self.configure_pod)
-        self.framework.observe(self.on.kafka_broken, self.configure_pod)
-
-        self.mysql_client = MysqlClient(self, "mysql")
-        self.framework.observe(self.on["mysql"].relation_changed, self.configure_pod)
-        self.framework.observe(self.on["mysql"].relation_broken, self.configure_pod)
-
-        self.mongodb_client = MongoClient(self, "mongodb")
-        self.framework.observe(self.on["mongodb"].relation_changed, self.configure_pod)
-        self.framework.observe(self.on["mongodb"].relation_broken, self.configure_pod)
-
-        self.framework.observe(self.on["ro"].relation_joined, self._publish_ro_info)
-
-    def _publish_ro_info(self, event):
-        """Publishes RO information.
-
-        Args:
-            event (EventBase): RO relation event.
-        """
-        if self.unit.is_leader():
-            rel_data = {
-                "host": self.model.app.name,
-                "port": str(PORT),
-            }
-            for k, v in rel_data.items():
-                event.relation.data[self.app][k] = v
-
-    def _check_missing_dependencies(self, config: ConfigModel):
-        missing_relations = []
-
-        if config.enable_ng_ro:
-            if not self.kafka.host or not self.kafka.port:
-                missing_relations.append("kafka")
-            if not config.mongodb_uri and self.mongodb_client.is_missing_data_in_unit():
-                missing_relations.append("mongodb")
-        else:
-            if not config.mysql_host and self.mysql_client.is_missing_data_in_unit():
-                missing_relations.append("mysql")
-        if missing_relations:
-            raise RelationsMissing(missing_relations)
-
-    def _validate_mysql_config(self, config: ConfigModel):
-        invalid_values = []
-        if not config.mysql_user:
-            invalid_values.append("Mysql user is empty")
-        if not config.mysql_password:
-            invalid_values.append("Mysql password is empty")
-        if not config.mysql_root_password:
-            invalid_values.append("Mysql root password empty")
-
-        if invalid_values:
-            raise ValueError("Invalid values: " + ", ".join(invalid_values))
-
-    def _build_cert_files(
-        self,
-        config: ConfigModel,
-    ):
-        cert_files_builder = FilesV3Builder()
-        for name, content in config.certificates_dict.items():
-            cert_files_builder.add_file(name, decode(content), mode=0o600)
-        return cert_files_builder.build()
-
-    def build_pod_spec(self, image_info):
-        # Validate config
-        config = ConfigModel(**dict(self.config))
-
-        if config.enable_ng_ro:
-            if config.mongodb_uri and not self.mongodb_client.is_missing_data_in_unit():
-                raise Exception(
-                    "Mongodb data cannot be provided via config and relation"
-                )
-        else:
-            if config.mysql_host and not self.mysql_client.is_missing_data_in_unit():
-                raise Exception("Mysql data cannot be provided via config and relation")
-
-            if config.mysql_host:
-                self._validate_mysql_config(config)
-
-        # Check relations
-        self._check_missing_dependencies(config)
-
-        security_context_enabled = (
-            config.security_context if not config.debug_mode else False
-        )
-
-        # Create Builder for the PodSpec
-        pod_spec_builder = PodSpecV3Builder(
-            enable_security_context=security_context_enabled
-        )
-
-        # Build Container
-        container_builder = ContainerV3Builder(
-            self.app.name,
-            image_info,
-            config.image_pull_policy,
-            run_as_non_root=security_context_enabled,
-        )
-        certs_files = self._build_cert_files(config)
-
-        if certs_files:
-            container_builder.add_volume_config("certs", "/certs", certs_files)
-
-        container_builder.add_port(name=self.app.name, port=PORT)
-        container_builder.add_http_readiness_probe(
-            "/ro/" if config.enable_ng_ro else "/openmano/tenants",
-            PORT,
-            initial_delay_seconds=10,
-            period_seconds=10,
-            timeout_seconds=5,
-            failure_threshold=3,
-        )
-        container_builder.add_http_liveness_probe(
-            "/ro/" if config.enable_ng_ro else "/openmano/tenants",
-            PORT,
-            initial_delay_seconds=600,
-            period_seconds=10,
-            timeout_seconds=5,
-            failure_threshold=3,
-        )
-        container_builder.add_envs(
-            {
-                "OSMRO_LOG_LEVEL": config.log_level,
-            }
-        )
-        if config.period_refresh_active:
-            container_builder.add_envs(
-                {
-                    "OSMRO_PERIOD_REFRESH_ACTIVE": config.period_refresh_active,
-                }
-            )
-        if config.enable_ng_ro:
-            # Add secrets to the pod
-            mongodb_secret_name = f"{self.app.name}-mongodb-secret"
-            pod_spec_builder.add_secret(
-                mongodb_secret_name,
-                {
-                    "uri": config.mongodb_uri or self.mongodb_client.connection_string,
-                    "commonkey": config.database_commonkey,
-                },
-            )
-            container_builder.add_envs(
-                {
-                    "OSMRO_MESSAGE_DRIVER": "kafka",
-                    "OSMRO_MESSAGE_HOST": self.kafka.host,
-                    "OSMRO_MESSAGE_PORT": self.kafka.port,
-                    # MongoDB configuration
-                    "OSMRO_DATABASE_DRIVER": "mongo",
-                }
-            )
-            container_builder.add_secret_envs(
-                secret_name=mongodb_secret_name,
-                envs={
-                    "OSMRO_DATABASE_URI": "uri",
-                    "OSMRO_DATABASE_COMMONKEY": "commonkey",
-                },
-            )
-            restart_policy = PodRestartPolicy()
-            restart_policy.add_secrets(secret_names=(mongodb_secret_name,))
-            pod_spec_builder.set_restart_policy(restart_policy)
-
-        else:
-            container_builder.add_envs(
-                {
-                    "RO_DB_HOST": config.mysql_host or self.mysql_client.host,
-                    "RO_DB_OVIM_HOST": config.mysql_host or self.mysql_client.host,
-                    "RO_DB_PORT": config.mysql_port or self.mysql_client.port,
-                    "RO_DB_OVIM_PORT": config.mysql_port or self.mysql_client.port,
-                    "RO_DB_USER": config.mysql_user or self.mysql_client.user,
-                    "RO_DB_OVIM_USER": config.mysql_user or self.mysql_client.user,
-                    "RO_DB_PASSWORD": config.mysql_password
-                    or self.mysql_client.password,
-                    "RO_DB_OVIM_PASSWORD": config.mysql_password
-                    or self.mysql_client.password,
-                    "RO_DB_ROOT_PASSWORD": config.mysql_root_password
-                    or self.mysql_client.root_password,
-                    "RO_DB_OVIM_ROOT_PASSWORD": config.mysql_root_password
-                    or self.mysql_client.root_password,
-                    "RO_DB_NAME": config.ro_database,
-                    "RO_DB_OVIM_NAME": config.vim_database,
-                    "OPENMANO_TENANT": config.openmano_tenant,
-                }
-            )
-        container = container_builder.build()
-
-        # Add container to pod spec
-        pod_spec_builder.add_container(container)
-
-        return pod_spec_builder.build()
-
-
-VSCODE_WORKSPACE = {
-    "folders": [
-        {"path": "/usr/lib/python3/dist-packages/osm_ng_ro"},
-        {"path": "/usr/lib/python3/dist-packages/osm_common"},
-        {"path": "/usr/lib/python3/dist-packages/osm_ro_plugin"},
-        {"path": "/usr/lib/python3/dist-packages/osm_rosdn_arista_cloudvision"},
-        {"path": "/usr/lib/python3/dist-packages/osm_rosdn_dpb"},
-        {"path": "/usr/lib/python3/dist-packages/osm_rosdn_dynpac"},
-        {"path": "/usr/lib/python3/dist-packages/osm_rosdn_floodlightof"},
-        {"path": "/usr/lib/python3/dist-packages/osm_rosdn_ietfl2vpn"},
-        {"path": "/usr/lib/python3/dist-packages/osm_rosdn_juniper_contrail"},
-        {"path": "/usr/lib/python3/dist-packages/osm_rosdn_odlof"},
-        {"path": "/usr/lib/python3/dist-packages/osm_rosdn_onos_vpls"},
-        {"path": "/usr/lib/python3/dist-packages/osm_rosdn_onosof"},
-        {"path": "/usr/lib/python3/dist-packages/osm_rovim_aws"},
-        {"path": "/usr/lib/python3/dist-packages/osm_rovim_azure"},
-        {"path": "/usr/lib/python3/dist-packages/osm_rovim_gcp"},
-        {"path": "/usr/lib/python3/dist-packages/osm_rovim_openstack"},
-        {"path": "/usr/lib/python3/dist-packages/osm_rovim_openvim"},
-        {"path": "/usr/lib/python3/dist-packages/osm_rovim_vmware"},
-    ],
-    "launch": {
-        "configurations": [
-            {
-                "module": "osm_ng_ro.ro_main",
-                "name": "NG RO",
-                "request": "launch",
-                "type": "python",
-                "justMyCode": False,
-            }
-        ],
-        "version": "0.2.0",
-    },
-    "settings": {},
-}
-
-
-def _get_ro_host_paths(ro_host_path: str) -> Dict:
-    """Get RO host paths"""
-    return (
-        {
-            "NG-RO": {
-                "hostpath": f"{ro_host_path}/NG-RO",
-                "container-path": "/usr/lib/python3/dist-packages/osm_ng_ro",
-            },
-            "RO-plugin": {
-                "hostpath": f"{ro_host_path}/RO-plugin",
-                "container-path": "/usr/lib/python3/dist-packages/osm_ro_plugin",
-            },
-            "RO-SDN-arista_cloudvision": {
-                "hostpath": f"{ro_host_path}/RO-SDN-arista_cloudvision",
-                "container-path": "/usr/lib/python3/dist-packages/osm_rosdn_arista_cloudvision",
-            },
-            "RO-SDN-dpb": {
-                "hostpath": f"{ro_host_path}/RO-SDN-dpb",
-                "container-path": "/usr/lib/python3/dist-packages/osm_rosdn_dpb",
-            },
-            "RO-SDN-dynpac": {
-                "hostpath": f"{ro_host_path}/RO-SDN-dynpac",
-                "container-path": "/usr/lib/python3/dist-packages/osm_rosdn_dynpac",
-            },
-            "RO-SDN-floodlight_openflow": {
-                "hostpath": f"{ro_host_path}/RO-SDN-floodlight_openflow",
-                "container-path": "/usr/lib/python3/dist-packages/osm_rosdn_floodlightof",
-            },
-            "RO-SDN-ietfl2vpn": {
-                "hostpath": f"{ro_host_path}/RO-SDN-ietfl2vpn",
-                "container-path": "/usr/lib/python3/dist-packages/osm_rosdn_ietfl2vpn",
-            },
-            "RO-SDN-juniper_contrail": {
-                "hostpath": f"{ro_host_path}/RO-SDN-juniper_contrail",
-                "container-path": "/usr/lib/python3/dist-packages/osm_rosdn_juniper_contrail",
-            },
-            "RO-SDN-odl_openflow": {
-                "hostpath": f"{ro_host_path}/RO-SDN-odl_openflow",
-                "container-path": "/usr/lib/python3/dist-packages/osm_rosdn_odlof",
-            },
-            "RO-SDN-onos_openflow": {
-                "hostpath": f"{ro_host_path}/RO-SDN-onos_openflow",
-                "container-path": "/usr/lib/python3/dist-packages/osm_rosdn_onosof",
-            },
-            "RO-SDN-onos_vpls": {
-                "hostpath": f"{ro_host_path}/RO-SDN-onos_vpls",
-                "container-path": "/usr/lib/python3/dist-packages/osm_rosdn_onos_vpls",
-            },
-            "RO-VIM-aws": {
-                "hostpath": f"{ro_host_path}/RO-VIM-aws",
-                "container-path": "/usr/lib/python3/dist-packages/osm_rovim_aws",
-            },
-            "RO-VIM-azure": {
-                "hostpath": f"{ro_host_path}/RO-VIM-azure",
-                "container-path": "/usr/lib/python3/dist-packages/osm_rovim_azure",
-            },
-            "RO-VIM-gcp": {
-                "hostpath": f"{ro_host_path}/RO-VIM-gcp",
-                "container-path": "/usr/lib/python3/dist-packages/osm_rovim_gcp",
-            },
-            "RO-VIM-openstack": {
-                "hostpath": f"{ro_host_path}/RO-VIM-openstack",
-                "container-path": "/usr/lib/python3/dist-packages/osm_rovim_openstack",
-            },
-            "RO-VIM-openvim": {
-                "hostpath": f"{ro_host_path}/RO-VIM-openvim",
-                "container-path": "/usr/lib/python3/dist-packages/osm_rovim_openvim",
-            },
-            "RO-VIM-vmware": {
-                "hostpath": f"{ro_host_path}/RO-VIM-vmware",
-                "container-path": "/usr/lib/python3/dist-packages/osm_rovim_vmware",
-            },
-        }
-        if ro_host_path
-        else {}
-    )
-
-
-if __name__ == "__main__":
-    main(RoCharm)
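The deleted charm above builds its container environment differently depending on `enable_ng_ro`, with config values taking precedence over relation-provided values via `or`. A minimal standalone sketch of that selection logic (`build_env` and its parameters are illustrative, not the charm's API):

```python
# Hypothetical helper mirroring the deleted pod-spec code: config values
# take precedence over relation data ("" is falsy, so `or` falls through).
def build_env(config: dict, relation: dict) -> dict:
    env = {"OSMRO_LOG_LEVEL": config["log_level"]}
    if config.get("enable_ng_ro", True):
        # NG-RO path: kafka messaging plus MongoDB
        env.update(
            OSMRO_MESSAGE_DRIVER="kafka",
            OSMRO_DATABASE_DRIVER="mongo",
            OSMRO_DATABASE_URI=config.get("mongodb_uri")
            or relation["mongodb_uri"],
        )
    else:
        # Legacy path: MySQL, with the same value mirrored into RO_DB_OVIM_*
        host = config.get("mysql_host") or relation["mysql_host"]
        env.update(RO_DB_HOST=host, RO_DB_OVIM_HOST=host)
    return env
```

An empty `mongodb_uri` config therefore falls back to the relation's connection string, which is why the charm treats "relation plus explicit config" as a conflict elsewhere.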
diff --git a/installers/charm/ro/src/pod_spec.py b/installers/charm/ro/src/pod_spec.py
deleted file mode 100644 (file)
index 1beba17..0000000
+++ /dev/null
@@ -1,276 +0,0 @@
-#!/usr/bin/env python3
-# Copyright 2020 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-import logging
-from typing import Any, Dict, List, NoReturn
-
-logger = logging.getLogger(__name__)
-
-
-def _validate_data(
-    config_data: Dict[str, Any], relation_data: Dict[str, Any]
-) -> NoReturn:
-    """Validates passed information.
-
-    Args:
-        config_data (Dict[str, Any]): configuration information.
-        relation_data (Dict[str, Any]): relation information
-
-    Raises:
-        ValueError: when config and/or relation data is not valid.
-    """
-    config_validators = {
-        "enable_ng_ro": lambda value, _: isinstance(value, bool),
-        "database_commonkey": lambda value, values: (
-            isinstance(value, str) and len(value) > 0
-        )
-        if values.get("enable_ng_ro", True)
-        else True,
-        "log_level": lambda value, _: (
-            isinstance(value, str) and value in ("INFO", "DEBUG")
-        ),
-        "vim_database": lambda value, values: (
-            isinstance(value, str) and len(value) > 0
-        )
-        if not values.get("enable_ng_ro", True)
-        else True,
-        "ro_database": lambda value, values: (isinstance(value, str) and len(value) > 0)
-        if not values.get("enable_ng_ro", True)
-        else True,
-        "openmano_tenant": lambda value, values: (
-            isinstance(value, str) and len(value) > 0
-        )
-        if not values.get("enable_ng_ro", True)
-        else True,
-    }
-    relation_validators = {
-        "kafka_host": lambda value, _: (isinstance(value, str) and len(value) > 0)
-        if config_data.get("enable_ng_ro", True)
-        else True,
-        "kafka_port": lambda value, _: (isinstance(value, str) and len(value) > 0)
-        if config_data.get("enable_ng_ro", True)
-        else True,
-        "mongodb_connection_string": lambda value, _: (
-            isinstance(value, str) and value.startswith("mongodb://")
-        )
-        if config_data.get("enable_ng_ro", True)
-        else True,
-        "mysql_host": lambda value, _: (isinstance(value, str) and len(value) > 0)
-        if not config_data.get("enable_ng_ro", True)
-        else True,
-        "mysql_port": lambda value, _: (isinstance(value, int) and value > 0)
-        if not config_data.get("enable_ng_ro", True)
-        else True,
-        "mysql_user": lambda value, _: (isinstance(value, str) and len(value) > 0)
-        if not config_data.get("enable_ng_ro", True)
-        else True,
-        "mysql_password": lambda value, _: (isinstance(value, str) and len(value) > 0)
-        if not config_data.get("enable_ng_ro", True)
-        else True,
-        "mysql_root_password": lambda value, _: (
-            isinstance(value, str) and len(value) > 0
-        )
-        if not config_data.get("enable_ng_ro", True)
-        else True,
-    }
-    problems = []
-
-    for key, validator in config_validators.items():
-        valid = validator(config_data.get(key), config_data)
-
-        if not valid:
-            problems.append(key)
-
-    for key, validator in relation_validators.items():
-        valid = validator(relation_data.get(key), relation_data)
-
-        if not valid:
-            problems.append(key)
-
-    if len(problems) > 0:
-        raise ValueError("Errors found in: {}".format(", ".join(problems)))
-
-
-def _make_pod_ports(port: int) -> List[Dict[str, Any]]:
-    """Generate pod ports details.
-
-    Args:
-        port (int): port to expose.
-
-    Returns:
-        List[Dict[str, Any]]: pod port details.
-    """
-    return [{"name": "ro", "containerPort": port, "protocol": "TCP"}]
-
-
-def _make_pod_envconfig(
-    config: Dict[str, Any], relation_state: Dict[str, Any]
-) -> Dict[str, Any]:
-    """Generate pod environment configuration.
-
-    Args:
-        config (Dict[str, Any]): configuration information.
-        relation_state (Dict[str, Any]): relation state information.
-
-    Returns:
-        Dict[str, Any]: pod environment configuration.
-    """
-    envconfig = {
-        # General configuration
-        "OSMRO_LOG_LEVEL": config["log_level"],
-    }
-
-    if config.get("enable_ng_ro", True):
-        # Kafka configuration
-        envconfig["OSMRO_MESSAGE_DRIVER"] = "kafka"
-        envconfig["OSMRO_MESSAGE_HOST"] = relation_state["kafka_host"]
-        envconfig["OSMRO_MESSAGE_PORT"] = relation_state["kafka_port"]
-
-        # MongoDB configuration
-        envconfig["OSMRO_DATABASE_DRIVER"] = "mongo"
-        envconfig["OSMRO_DATABASE_URI"] = relation_state["mongodb_connection_string"]
-        envconfig["OSMRO_DATABASE_COMMONKEY"] = config["database_commonkey"]
-    else:
-        envconfig["RO_DB_HOST"] = relation_state["mysql_host"]
-        envconfig["RO_DB_OVIM_HOST"] = relation_state["mysql_host"]
-        envconfig["RO_DB_PORT"] = relation_state["mysql_port"]
-        envconfig["RO_DB_OVIM_PORT"] = relation_state["mysql_port"]
-        envconfig["RO_DB_USER"] = relation_state["mysql_user"]
-        envconfig["RO_DB_OVIM_USER"] = relation_state["mysql_user"]
-        envconfig["RO_DB_PASSWORD"] = relation_state["mysql_password"]
-        envconfig["RO_DB_OVIM_PASSWORD"] = relation_state["mysql_password"]
-        envconfig["RO_DB_ROOT_PASSWORD"] = relation_state["mysql_root_password"]
-        envconfig["RO_DB_OVIM_ROOT_PASSWORD"] = relation_state["mysql_root_password"]
-        envconfig["RO_DB_NAME"] = config["ro_database"]
-        envconfig["RO_DB_OVIM_NAME"] = config["vim_database"]
-        envconfig["OPENMANO_TENANT"] = config["openmano_tenant"]
-
-    return envconfig
-
-
-def _make_startup_probe() -> Dict[str, Any]:
-    """Generate startup probe.
-
-    Returns:
-        Dict[str, Any]: startup probe.
-    """
-    return {
-        "exec": {"command": ["/usr/bin/pgrep", "python3"]},
-        "initialDelaySeconds": 60,
-        "timeoutSeconds": 5,
-    }
-
-
-def _make_readiness_probe(port: int) -> Dict[str, Any]:
-    """Generate readiness probe.
-
-    Args:
-        port (int): service port.
-
-    Returns:
-        Dict[str, Any]: readiness probe.
-    """
-    return {
-        "httpGet": {
-            "path": "/openmano/tenants",
-            "port": port,
-        },
-        "periodSeconds": 10,
-        "timeoutSeconds": 5,
-        "successThreshold": 1,
-        "failureThreshold": 3,
-    }
-
-
-def _make_liveness_probe(port: int) -> Dict[str, Any]:
-    """Generate liveness probe.
-
-    Args:
-        port (int): service port.
-
-    Returns:
-        Dict[str, Any]: liveness probe.
-    """
-    return {
-        "httpGet": {
-            "path": "/openmano/tenants",
-            "port": port,
-        },
-        "initialDelaySeconds": 600,
-        "periodSeconds": 10,
-        "timeoutSeconds": 5,
-        "successThreshold": 1,
-        "failureThreshold": 3,
-    }
-
-
-def make_pod_spec(
-    image_info: Dict[str, str],
-    config: Dict[str, Any],
-    relation_state: Dict[str, Any],
-    app_name: str = "ro",
-    port: int = 9090,
-) -> Dict[str, Any]:
-    """Generate the pod spec information.
-
-    Args:
-        image_info (Dict[str, str]): Object provided by
-                                     OCIImageResource("image").fetch().
-        config (Dict[str, Any]): Configuration information.
-        relation_state (Dict[str, Any]): Relation state information.
-        app_name (str, optional): Application name. Defaults to "ro".
-        port (int, optional): Port for the container. Defaults to 9090.
-
-    Returns:
-        Dict[str, Any]: Pod spec dictionary for the charm.
-    """
-    if not image_info:
-        return None
-
-    _validate_data(config, relation_state)
-
-    ports = _make_pod_ports(port)
-    env_config = _make_pod_envconfig(config, relation_state)
-    startup_probe = _make_startup_probe()
-    readiness_probe = _make_readiness_probe(port)
-    liveness_probe = _make_liveness_probe(port)
-
-    return {
-        "version": 3,
-        "containers": [
-            {
-                "name": app_name,
-                "imageDetails": image_info,
-                "imagePullPolicy": "Always",
-                "ports": ports,
-                "envConfig": env_config,
-                "kubernetes": {
-                    "startupProbe": startup_probe,
-                    "readinessProbe": readiness_probe,
-                    "livenessProbe": liveness_probe,
-                },
-            }
-        ],
-        "kubernetesResources": {
-            "ingressResources": [],
-        },
-    }
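The deleted `_validate_data` above uses a dictionary of per-key predicates and reports all failing keys at once rather than stopping at the first error. A minimal sketch of that pattern (names here are illustrative, not the charm's API):

```python
from typing import Any, Callable, Dict

# Each key maps to a predicate taking (value, full_data); failing keys are
# collected and reported together, as in the deleted _validate_data.
def validate(data: Dict[str, Any], validators: Dict[str, Callable]) -> None:
    problems = [k for k, check in validators.items() if not check(data.get(k), data)]
    if problems:
        raise ValueError("Errors found in: {}".format(", ".join(problems)))

validators = {
    "log_level": lambda v, _: isinstance(v, str) and v in ("INFO", "DEBUG"),
    # Conditionally required: only checked when enable_ng_ro is False.
    "ro_database": lambda v, d: bool(v) if not d.get("enable_ng_ro", True) else True,
}
```

Collecting every failure into one `ValueError` gives the operator a single actionable status message instead of a fix-one-rerun loop.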
diff --git a/installers/charm/ro/tests/__init__.py b/installers/charm/ro/tests/__init__.py
deleted file mode 100644 (file)
index 446d5ce..0000000
+++ /dev/null
@@ -1,40 +0,0 @@
-#!/usr/bin/env python3
-# Copyright 2020 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-"""Init mocking for unit tests."""
-
-import sys
-
-
-import mock
-
-
-class OCIImageResourceErrorMock(Exception):
-    pass
-
-
-sys.path.append("src")
-
-oci_image = mock.MagicMock()
-oci_image.OCIImageResourceError = OCIImageResourceErrorMock
-sys.modules["oci_image"] = oci_image
-sys.modules["oci_image"].OCIImageResource().fetch.return_value = {}
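The deleted `tests/__init__.py` above relies on registering a mock in `sys.modules` before the code under test imports the real (unavailable) dependency. A self-contained sketch of the technique, using a made-up module name (`some_sdk` does not exist as a real package):

```python
import sys
from unittest import mock

# Register the mock under the dependency's import name *before* anything
# imports it; Python's import system then resolves to the mock.
fake = mock.MagicMock()
fake.Client.return_value.fetch.return_value = {}
sys.modules["some_sdk"] = fake

import some_sdk  # resolves to the MagicMock, not a real package

assert some_sdk.Client().fetch() == {}
```

This is why the mocking lives in the test package's `__init__.py`: it runs before any test module imports `charm`, which in turn imports the mocked dependency.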
diff --git a/installers/charm/ro/tests/test_charm.py b/installers/charm/ro/tests/test_charm.py
deleted file mode 100644 (file)
index f18e768..0000000
+++ /dev/null
@@ -1,505 +0,0 @@
-#!/usr/bin/env python3
-# Copyright 2020 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-import base64
-from typing import NoReturn
-import unittest
-
-from charm import RoCharm
-from ops.model import ActiveStatus, BlockedStatus
-from ops.testing import Harness
-
-
-def encode(content: str):
-    return base64.b64encode(content.encode("ascii")).decode("utf-8")
-
-
-certificate_pem = encode(
-    """
------BEGIN CERTIFICATE-----
-MIIDazCCAlOgAwIBAgIUf1b0s3UKtrxHXH2rge7UaQyfJAMwDQYJKoZIhvcNAQEL
-BQAwRTELMAkGA1UEBhMCQVUxEzARBgNVBAgMClNvbWUtU3RhdGUxITAfBgNVBAoM
-GEludGVybmV0IFdpZGdpdHMgUHR5IEx0ZDAeFw0yMTAzMjIxNzEyMjdaFw0zMTAz
-MjAxNzEyMjdaMEUxCzAJBgNVBAYTAkFVMRMwEQYDVQQIDApTb21lLVN0YXRlMSEw
-HwYDVQQKDBhJbnRlcm5ldCBXaWRnaXRzIFB0eSBMdGQwggEiMA0GCSqGSIb3DQEB
-AQUAA4IBDwAwggEKAoIBAQCgCfCBgYAN6ON0yHDXuW407rFtJVRf0u46Jrp0Dk7J
-kkSZ1e7Kq14r7yFHazEBWv78oOdwBocvWrd8leLuf3bYGcHR65hRy6A/fbYm5Aje
-cKpwlFwaqfR4BLelwJl79jZ2rJX738cCBVrIk1nAVdOxGrXV4MTWUaKR2c+uKKvc
-OKRT+5VqCeP4N5FWeATZ/KqGu8uV9E9WhFgwIZyStemLyLaDbn5PmAQ6S9oeR5jJ
-o2gEEp/lDKvsqOWs76KFumSKa9hQs5Dw2lj0mb1UoyYK1gYc4ubzVChJadv44AU8
-MYtIjlFn1X1P+RjaKZNUIAGXkoLwYn6SizF6y6LiuFS9AgMBAAGjUzBRMB0GA1Ud
-DgQWBBRl+/23CB+FXczeAZRQyYcfOdy9YDAfBgNVHSMEGDAWgBRl+/23CB+FXcze
-AZRQyYcfOdy9YDAPBgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3DQEBCwUAA4IBAQAd
-dkeDym6lRN8kWFtfu3IyiLF8G8sn91qNbH3Yr4TuTBhgcjYyW6PgisSbrNgA9ysE
-GoaF7ohb8GeVfCsQdK23+NpAlj/+DZ3OnGcxwXj1RUAz4yr9kanV1yuEtr1q2xJI
-UaECWr8HZlwGBAKNTGx2EXT2/2aFzgULpDcxzTKD+MRpKpMUrWhf9ULvVrclvHWe
-POLYhobUFuBHuo6rt5Rcq16j67zCX9EVTlAE3o2OECIWByK22sXdeOidYMpTkl4q
-8FrOqjNsx5d+SBPJBv/pqtBm4bA47Vx1P8tbWOQ4bXS0UmXgwpeBOU/O/ot30+KS
-JnKEy+dYyvVBKg77sRHw
------END CERTIFICATE-----
-"""
-)
-
-
-class TestCharm(unittest.TestCase):
-    """Prometheus Charm unit tests."""
-
-    def setUp(self) -> NoReturn:
-        """Test setup"""
-        self.harness = Harness(RoCharm)
-        self.harness.set_leader(is_leader=True)
-        self.harness.begin()
-        self.config = {
-            "enable_ng_ro": True,
-            "database_commonkey": "commonkey",
-            "mongodb_uri": "",
-            "log_level": "INFO",
-            "vim_database": "db_name",
-            "ro_database": "ro_db_name",
-            "openmano_tenant": "mano",
-            "certificates": f"cert1:{certificate_pem}",
-        }
-        self.harness.update_config(self.config)
-
-    def test_config_changed_no_relations(
-        self,
-    ) -> NoReturn:
-        """Test ingress resources without HTTP."""
-
-        self.harness.charm.on.config_changed.emit()
-
-        # Assertions
-        self.assertIsInstance(self.harness.charm.unit.status, BlockedStatus)
-        self.assertTrue(
-            all(
-                relation in self.harness.charm.unit.status.message
-                for relation in ["mongodb", "kafka"]
-            )
-        )
-
-        # Disable ng-ro
-        self.harness.update_config({"enable_ng_ro": False})
-        self.assertIsInstance(self.harness.charm.unit.status, BlockedStatus)
-        self.assertTrue(
-            all(
-                relation in self.harness.charm.unit.status.message
-                for relation in ["mysql"]
-            )
-        )
-
-    def test_config_changed_non_leader(
-        self,
-    ) -> NoReturn:
-        """Test ingress resources without HTTP."""
-        self.harness.set_leader(is_leader=False)
-        self.harness.charm.on.config_changed.emit()
-
-        # Assertions
-        self.assertIsInstance(self.harness.charm.unit.status, ActiveStatus)
-
-    def test_with_relations_and_mongodb_config_ng(
-        self,
-    ) -> NoReturn:
-        "Test with relations (ng-ro)"
-
-        # Initializing the kafka relation
-        kafka_relation_id = self.harness.add_relation("kafka", "kafka")
-        self.harness.add_relation_unit(kafka_relation_id, "kafka/0")
-        self.harness.update_relation_data(
-            kafka_relation_id, "kafka", {"host": "kafka", "port": 9092}
-        )
-
-        # Initializing the mongodb config
-        self.harness.update_config({"mongodb_uri": "mongodb://mongo:27017"})
-
-        # Verifying status
-        self.assertNotIsInstance(self.harness.charm.unit.status, BlockedStatus)
-
-    def test_with_relations_ng(
-        self,
-    ) -> NoReturn:
-        "Test with relations (ng-ro)"
-
-        # Initializing the kafka relation
-        kafka_relation_id = self.harness.add_relation("kafka", "kafka")
-        self.harness.add_relation_unit(kafka_relation_id, "kafka/0")
-        self.harness.update_relation_data(
-            kafka_relation_id, "kafka", {"host": "kafka", "port": 9092}
-        )
-
-        # Initializing the mongo relation
-        mongodb_relation_id = self.harness.add_relation("mongodb", "mongodb")
-        self.harness.add_relation_unit(mongodb_relation_id, "mongodb/0")
-        self.harness.update_relation_data(
-            mongodb_relation_id,
-            "mongodb/0",
-            {"connection_string": "mongodb://mongo:27017"},
-        )
-
-        # Verifying status
-        self.assertNotIsInstance(self.harness.charm.unit.status, BlockedStatus)
-
-    def test_ng_exception_mongodb_relation_and_config(
-        self,
-    ) -> NoReturn:
-        "Test NG-RO mongodb relation and config. Must fail"
-        # Initializing the mongo relation
-        mongodb_relation_id = self.harness.add_relation("mongodb", "mongodb")
-        self.harness.add_relation_unit(mongodb_relation_id, "mongodb/0")
-        self.harness.update_relation_data(
-            mongodb_relation_id,
-            "mongodb/0",
-            {"connection_string": "mongodb://mongo:27017"},
-        )
-
-        # Initializing the mongodb config
-        self.harness.update_config({"mongodb_uri": "mongodb://mongo:27017"})
-
-        # Verifying status
-        self.assertIsInstance(self.harness.charm.unit.status, BlockedStatus)
-
-
-if __name__ == "__main__":
-    unittest.main()
-
-# class TestCharm(unittest.TestCase):
-#     """RO Charm unit tests."""
-
-#     def setUp(self) -> NoReturn:
-#         """Test setup"""
-#         self.harness = Harness(RoCharm)
-#         self.harness.set_leader(is_leader=True)
-#         self.harness.begin()
-
-#     def test_on_start_without_relations_ng_ro(self) -> NoReturn:
-#         """Test installation without any relation."""
-#         self.harness.charm.on.start.emit()
-
-#         # Verifying status
-#         self.assertIsInstance(self.harness.charm.unit.status, BlockedStatus)
-
-#         # Verifying status message
-#         self.assertGreater(len(self.harness.charm.unit.status.message), 0)
-#         self.assertTrue(
-#             self.harness.charm.unit.status.message.startswith("Waiting for ")
-#         )
-#         self.assertIn("kafka", self.harness.charm.unit.status.message)
-#         self.assertIn("mongodb", self.harness.charm.unit.status.message)
-#         self.assertTrue(self.harness.charm.unit.status.message.endswith(" relations"))
-
-#     def test_on_start_without_relations_no_ng_ro(self) -> NoReturn:
-#         """Test installation without any relation."""
-#         self.harness.update_config({"enable_ng_ro": False})
-
-#         self.harness.charm.on.start.emit()
-
-#         # Verifying status
-#         self.assertIsInstance(self.harness.charm.unit.status, BlockedStatus)
-
-#         # Verifying status message
-#         self.assertGreater(len(self.harness.charm.unit.status.message), 0)
-#         self.assertTrue(
-#             self.harness.charm.unit.status.message.startswith("Waiting for ")
-#         )
-#         self.assertIn("mysql", self.harness.charm.unit.status.message)
-#         self.assertTrue(self.harness.charm.unit.status.message.endswith(" relation"))
-
-#     def test_on_start_with_relations_ng_ro(self) -> NoReturn:
-#         """Test deployment with NG-RO."""
-#         expected_result = {
-#             "version": 3,
-#             "containers": [
-#                 {
-#                     "name": "ro",
-#                     "imageDetails": self.harness.charm.image.fetch(),
-#                     "imagePullPolicy": "Always",
-#                     "ports": [
-#                         {
-#                             "name": "ro",
-#                             "containerPort": 9090,
-#                             "protocol": "TCP",
-#                         }
-#                     ],
-#                     "envConfig": {
-#                         "OSMRO_LOG_LEVEL": "INFO",
-#                         "OSMRO_MESSAGE_DRIVER": "kafka",
-#                         "OSMRO_MESSAGE_HOST": "kafka",
-#                         "OSMRO_MESSAGE_PORT": "9090",
-#                         "OSMRO_DATABASE_DRIVER": "mongo",
-#                         "OSMRO_DATABASE_URI": "mongodb://mongo",
-#                         "OSMRO_DATABASE_COMMONKEY": "osm",
-#                     },
-#                     "kubernetes": {
-#                         "startupProbe": {
-#                             "exec": {"command": ["/usr/bin/pgrep", "python3"]},
-#                             "initialDelaySeconds": 60,
-#                             "timeoutSeconds": 5,
-#                         },
-#                         "readinessProbe": {
-#                             "httpGet": {
-#                                 "path": "/openmano/tenants",
-#                                 "port": 9090,
-#                             },
-#                             "periodSeconds": 10,
-#                             "timeoutSeconds": 5,
-#                             "successThreshold": 1,
-#                             "failureThreshold": 3,
-#                         },
-#                         "livenessProbe": {
-#                             "httpGet": {
-#                                 "path": "/openmano/tenants",
-#                                 "port": 9090,
-#                             },
-#                             "initialDelaySeconds": 600,
-#                             "periodSeconds": 10,
-#                             "timeoutSeconds": 5,
-#                             "successThreshold": 1,
-#                             "failureThreshold": 3,
-#                         },
-#                     },
-#                 }
-#             ],
-#             "kubernetesResources": {"ingressResources": []},
-#         }
-
-#         self.harness.charm.on.start.emit()
-
-#         # Initializing the kafka relation
-#         relation_id = self.harness.add_relation("kafka", "kafka")
-#         self.harness.add_relation_unit(relation_id, "kafka/0")
-#         self.harness.update_relation_data(
-#             relation_id,
-#             "kafka/0",
-#             {
-#                 "host": "kafka",
-#                 "port": "9090",
-#             },
-#         )
-
-#         # Initializing the mongodb relation
-#         relation_id = self.harness.add_relation("mongodb", "mongodb")
-#         self.harness.add_relation_unit(relation_id, "mongodb/0")
-#         self.harness.update_relation_data(
-#             relation_id,
-#             "mongodb/0",
-#             {
-#                 "connection_string": "mongodb://mongo",
-#             },
-#         )
-
-#         # Verifying status
-#         self.assertNotIsInstance(self.harness.charm.unit.status, BlockedStatus)
-
-#         pod_spec, _ = self.harness.get_pod_spec()
-
-#         self.assertDictEqual(expected_result, pod_spec)
-
-#     def test_on_start_with_relations_no_ng_ro(self) -> NoReturn:
-#         """Test deployment with old RO."""
-#         self.harness.update_config({"enable_ng_ro": False})
-
-#         expected_result = {
-#             "version": 3,
-#             "containers": [
-#                 {
-#                     "name": "ro",
-#                     "imageDetails": self.harness.charm.image.fetch(),
-#                     "imagePullPolicy": "Always",
-#                     "ports": [
-#                         {
-#                             "name": "ro",
-#                             "containerPort": 9090,
-#                             "protocol": "TCP",
-#                         }
-#                     ],
-#                     "envConfig": {
-#                         "OSMRO_LOG_LEVEL": "INFO",
-#                         "RO_DB_HOST": "mysql",
-#                         "RO_DB_OVIM_HOST": "mysql",
-#                         "RO_DB_PORT": 3306,
-#                         "RO_DB_OVIM_PORT": 3306,
-#                         "RO_DB_USER": "mano",
-#                         "RO_DB_OVIM_USER": "mano",
-#                         "RO_DB_PASSWORD": "manopw",
-#                         "RO_DB_OVIM_PASSWORD": "manopw",
-#                         "RO_DB_ROOT_PASSWORD": "rootmanopw",
-#                         "RO_DB_OVIM_ROOT_PASSWORD": "rootmanopw",
-#                         "RO_DB_NAME": "mano_db",
-#                         "RO_DB_OVIM_NAME": "mano_vim_db",
-#                         "OPENMANO_TENANT": "osm",
-#                     },
-#                     "kubernetes": {
-#                         "startupProbe": {
-#                             "exec": {"command": ["/usr/bin/pgrep", "python3"]},
-#                             "initialDelaySeconds": 60,
-#                             "timeoutSeconds": 5,
-#                         },
-#                         "readinessProbe": {
-#                             "httpGet": {
-#                                 "path": "/openmano/tenants",
-#                                 "port": 9090,
-#                             },
-#                             "periodSeconds": 10,
-#                             "timeoutSeconds": 5,
-#                             "successThreshold": 1,
-#                             "failureThreshold": 3,
-#                         },
-#                         "livenessProbe": {
-#                             "httpGet": {
-#                                 "path": "/openmano/tenants",
-#                                 "port": 9090,
-#                             },
-#                             "initialDelaySeconds": 600,
-#                             "periodSeconds": 10,
-#                             "timeoutSeconds": 5,
-#                             "successThreshold": 1,
-#                             "failureThreshold": 3,
-#                         },
-#                     },
-#                 }
-#             ],
-#             "kubernetesResources": {"ingressResources": []},
-#         }
-
-#         self.harness.charm.on.start.emit()
-
-#         # Initializing the mysql relation
-#         relation_id = self.harness.add_relation("mysql", "mysql")
-#         self.harness.add_relation_unit(relation_id, "mysql/0")
-#         self.harness.update_relation_data(
-#             relation_id,
-#             "mysql/0",
-#             {
-#                 "host": "mysql",
-#                 "port": 3306,
-#                 "user": "mano",
-#                 "password": "manopw",
-#                 "root_password": "rootmanopw",
-#             },
-#         )
-
-#         # Verifying status
-#         self.assertNotIsInstance(self.harness.charm.unit.status, BlockedStatus)
-
-#         pod_spec, _ = self.harness.get_pod_spec()
-
-#         self.assertDictEqual(expected_result, pod_spec)
-
-#     def test_on_kafka_unit_relation_changed(self) -> NoReturn:
-#         """Test to see if kafka relation is updated."""
-#         self.harness.charm.on.start.emit()
-
-#         relation_id = self.harness.add_relation("kafka", "kafka")
-#         self.harness.add_relation_unit(relation_id, "kafka/0")
-#         self.harness.update_relation_data(
-#             relation_id,
-#             "kafka/0",
-#             {
-#                 "host": "kafka",
-#                 "port": 9090,
-#             },
-#         )
-
-#         # Verifying status
-#         self.assertIsInstance(self.harness.charm.unit.status, BlockedStatus)
-
-#         # Verifying status message
-#         self.assertGreater(len(self.harness.charm.unit.status.message), 0)
-#         self.assertTrue(
-#             self.harness.charm.unit.status.message.startswith("Waiting for ")
-#         )
-#         self.assertIn("mongodb", self.harness.charm.unit.status.message)
-#         self.assertTrue(self.harness.charm.unit.status.message.endswith(" relation"))
-
-#     def test_on_mongodb_unit_relation_changed(self) -> NoReturn:
-#         """Test to see if mongodb relation is updated."""
-#         self.harness.charm.on.start.emit()
-
-#         relation_id = self.harness.add_relation("mongodb", "mongodb")
-#         self.harness.add_relation_unit(relation_id, "mongodb/0")
-#         self.harness.update_relation_data(
-#             relation_id,
-#             "mongodb/0",
-#             {
-#                 "connection_string": "mongodb://mongo",
-#             },
-#         )
-
-#         # Verifying status
-#         self.assertIsInstance(self.harness.charm.unit.status, BlockedStatus)
-
-#         # Verifying status message
-#         self.assertGreater(len(self.harness.charm.unit.status.message), 0)
-#         self.assertTrue(
-#             self.harness.charm.unit.status.message.startswith("Waiting for ")
-#         )
-#         self.assertIn("kafka", self.harness.charm.unit.status.message)
-#         self.assertTrue(self.harness.charm.unit.status.message.endswith(" relation"))
-
-#     def test_on_mysql_unit_relation_changed(self) -> NoReturn:
-#         """Test to see if mysql relation is updated."""
-#         self.harness.charm.on.start.emit()
-
-#         relation_id = self.harness.add_relation("mysql", "mysql")
-#         self.harness.add_relation_unit(relation_id, "mysql/0")
-#         self.harness.update_relation_data(
-#             relation_id,
-#             "mysql/0",
-#             {
-#                 "host": "mysql",
-#                 "port": 3306,
-#                 "user": "mano",
-#                 "password": "manopw",
-#                 "root_password": "rootmanopw",
-#             },
-#         )
-
-#         # Verifying status
-#         self.assertIsInstance(self.harness.charm.unit.status, BlockedStatus)
-
-#         # Verifying status message
-#         self.assertGreater(len(self.harness.charm.unit.status.message), 0)
-#         self.assertTrue(
-#             self.harness.charm.unit.status.message.startswith("Waiting for ")
-#         )
-#         self.assertIn("kafka", self.harness.charm.unit.status.message)
-#         self.assertIn("mongodb", self.harness.charm.unit.status.message)
-#         self.assertTrue(self.harness.charm.unit.status.message.endswith(" relations"))
-
-#     def test_publish_ro_info(self) -> NoReturn:
-#         """Test to see if ro relation is updated."""
-#         expected_result = {
-#             "host": "ro",
-#             "port": "9090",
-#         }
-
-#         self.harness.charm.on.start.emit()
-
-#         relation_id = self.harness.add_relation("ro", "lcm")
-#         self.harness.add_relation_unit(relation_id, "lcm/0")
-#         relation_data = self.harness.get_relation_data(relation_id, "ro")
-
-#         self.assertDictEqual(expected_result, relation_data)
-
-
-if __name__ == "__main__":
-    unittest.main()
diff --git a/installers/charm/ro/tests/test_pod_spec.py b/installers/charm/ro/tests/test_pod_spec.py
deleted file mode 100644 (file)
index e616242..0000000
+++ /dev/null
@@ -1,389 +0,0 @@
-#!/usr/bin/env python3
-# Copyright 2020 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-
-from typing import NoReturn
-import unittest
-
-import pod_spec
-
-
-class TestPodSpec(unittest.TestCase):
-    """Pod spec unit tests."""
-
-    def test_make_pod_ports(self) -> NoReturn:
-        """Testing make pod ports."""
-        port = 9090
-
-        expected_result = [
-            {
-                "name": "ro",
-                "containerPort": port,
-                "protocol": "TCP",
-            }
-        ]
-
-        pod_ports = pod_spec._make_pod_ports(port)
-
-        self.assertListEqual(expected_result, pod_ports)
-
-    def test_make_pod_envconfig_ng_ro(self) -> NoReturn:
-        """Testing make pod envconfig."""
-        config = {
-            "enable_ng_ro": True,
-            "database_commonkey": "osm",
-            "log_level": "INFO",
-        }
-        relation_state = {
-            "kafka_host": "kafka",
-            "kafka_port": "9090",
-            "mongodb_connection_string": "mongodb://mongo",
-        }
-
-        expected_result = {
-            "OSMRO_LOG_LEVEL": config["log_level"],
-            "OSMRO_MESSAGE_DRIVER": "kafka",
-            "OSMRO_MESSAGE_HOST": relation_state["kafka_host"],
-            "OSMRO_MESSAGE_PORT": relation_state["kafka_port"],
-            "OSMRO_DATABASE_DRIVER": "mongo",
-            "OSMRO_DATABASE_URI": relation_state["mongodb_connection_string"],
-            "OSMRO_DATABASE_COMMONKEY": config["database_commonkey"],
-        }
-
-        pod_envconfig = pod_spec._make_pod_envconfig(config, relation_state)
-
-        self.assertDictEqual(expected_result, pod_envconfig)
-
-    def test_make_pod_envconfig_no_ng_ro(self) -> NoReturn:
-        """Testing make pod envconfig."""
-        config = {
-            "log_level": "INFO",
-            "enable_ng_ro": False,
-            "vim_database": "mano_vim_db",
-            "ro_database": "mano_db",
-            "openmano_tenant": "osm",
-        }
-        relation_state = {
-            "mysql_host": "mysql",
-            "mysql_port": 3306,
-            "mysql_user": "mano",
-            "mysql_password": "manopw",
-            "mysql_root_password": "rootmanopw",
-        }
-
-        expected_result = {
-            "OSMRO_LOG_LEVEL": config["log_level"],
-            "RO_DB_HOST": relation_state["mysql_host"],
-            "RO_DB_OVIM_HOST": relation_state["mysql_host"],
-            "RO_DB_PORT": relation_state["mysql_port"],
-            "RO_DB_OVIM_PORT": relation_state["mysql_port"],
-            "RO_DB_USER": relation_state["mysql_user"],
-            "RO_DB_OVIM_USER": relation_state["mysql_user"],
-            "RO_DB_PASSWORD": relation_state["mysql_password"],
-            "RO_DB_OVIM_PASSWORD": relation_state["mysql_password"],
-            "RO_DB_ROOT_PASSWORD": relation_state["mysql_root_password"],
-            "RO_DB_OVIM_ROOT_PASSWORD": relation_state["mysql_root_password"],
-            "RO_DB_NAME": config["ro_database"],
-            "RO_DB_OVIM_NAME": config["vim_database"],
-            "OPENMANO_TENANT": config["openmano_tenant"],
-        }
-
-        pod_envconfig = pod_spec._make_pod_envconfig(config, relation_state)
-
-        self.assertDictEqual(expected_result, pod_envconfig)
-
-    def test_make_startup_probe(self) -> NoReturn:
-        """Testing make startup probe."""
-        expected_result = {
-            "exec": {"command": ["/usr/bin/pgrep", "python3"]},
-            "initialDelaySeconds": 60,
-            "timeoutSeconds": 5,
-        }
-
-        startup_probe = pod_spec._make_startup_probe()
-
-        self.assertDictEqual(expected_result, startup_probe)
-
-    def test_make_readiness_probe(self) -> NoReturn:
-        """Testing make readiness probe."""
-        port = 9090
-
-        expected_result = {
-            "httpGet": {
-                "path": "/openmano/tenants",
-                "port": port,
-            },
-            "periodSeconds": 10,
-            "timeoutSeconds": 5,
-            "successThreshold": 1,
-            "failureThreshold": 3,
-        }
-
-        readiness_probe = pod_spec._make_readiness_probe(port)
-
-        self.assertDictEqual(expected_result, readiness_probe)
-
-    def test_make_liveness_probe(self) -> NoReturn:
-        """Testing make liveness probe."""
-        port = 9090
-
-        expected_result = {
-            "httpGet": {
-                "path": "/openmano/tenants",
-                "port": port,
-            },
-            "initialDelaySeconds": 600,
-            "periodSeconds": 10,
-            "timeoutSeconds": 5,
-            "successThreshold": 1,
-            "failureThreshold": 3,
-        }
-
-        liveness_probe = pod_spec._make_liveness_probe(port)
-
-        self.assertDictEqual(expected_result, liveness_probe)
-
-    def test_make_pod_spec_ng_ro(self) -> NoReturn:
-        """Testing make pod spec."""
-        image_info = {"upstream-source": "opensourcemano/ro:8"}
-        config = {
-            "database_commonkey": "osm",
-            "log_level": "INFO",
-            "enable_ng_ro": True,
-        }
-        relation_state = {
-            "kafka_host": "kafka",
-            "kafka_port": "9090",
-            "mongodb_connection_string": "mongodb://mongo",
-        }
-        app_name = "ro"
-        port = 9090
-
-        expected_result = {
-            "version": 3,
-            "containers": [
-                {
-                    "name": app_name,
-                    "imageDetails": image_info,
-                    "imagePullPolicy": "Always",
-                    "ports": [
-                        {
-                            "name": app_name,
-                            "containerPort": port,
-                            "protocol": "TCP",
-                        }
-                    ],
-                    "envConfig": {
-                        "OSMRO_LOG_LEVEL": config["log_level"],
-                        "OSMRO_MESSAGE_DRIVER": "kafka",
-                        "OSMRO_MESSAGE_HOST": relation_state["kafka_host"],
-                        "OSMRO_MESSAGE_PORT": relation_state["kafka_port"],
-                        "OSMRO_DATABASE_DRIVER": "mongo",
-                        "OSMRO_DATABASE_URI": relation_state[
-                            "mongodb_connection_string"
-                        ],
-                        "OSMRO_DATABASE_COMMONKEY": config["database_commonkey"],
-                    },
-                    "kubernetes": {
-                        "startupProbe": {
-                            "exec": {"command": ["/usr/bin/pgrep", "python3"]},
-                            "initialDelaySeconds": 60,
-                            "timeoutSeconds": 5,
-                        },
-                        "readinessProbe": {
-                            "httpGet": {
-                                "path": "/openmano/tenants",
-                                "port": port,
-                            },
-                            "periodSeconds": 10,
-                            "timeoutSeconds": 5,
-                            "successThreshold": 1,
-                            "failureThreshold": 3,
-                        },
-                        "livenessProbe": {
-                            "httpGet": {
-                                "path": "/openmano/tenants",
-                                "port": port,
-                            },
-                            "initialDelaySeconds": 600,
-                            "periodSeconds": 10,
-                            "timeoutSeconds": 5,
-                            "successThreshold": 1,
-                            "failureThreshold": 3,
-                        },
-                    },
-                }
-            ],
-            "kubernetesResources": {"ingressResources": []},
-        }
-
-        spec = pod_spec.make_pod_spec(
-            image_info, config, relation_state, app_name, port
-        )
-
-        self.assertDictEqual(expected_result, spec)
-
-    def test_make_pod_spec_no_ng_ro(self) -> NoReturn:
-        """Testing make pod spec."""
-        image_info = {"upstream-source": "opensourcemano/ro:8"}
-        config = {
-            "log_level": "INFO",
-            "enable_ng_ro": False,
-            "vim_database": "mano_vim_db",
-            "ro_database": "mano_db",
-            "openmano_tenant": "osm",
-        }
-        relation_state = {
-            "mysql_host": "mysql",
-            "mysql_port": 3306,
-            "mysql_user": "mano",
-            "mysql_password": "manopw",
-            "mysql_root_password": "rootmanopw",
-        }
-        app_name = "ro"
-        port = 9090
-
-        expected_result = {
-            "version": 3,
-            "containers": [
-                {
-                    "name": app_name,
-                    "imageDetails": image_info,
-                    "imagePullPolicy": "Always",
-                    "ports": [
-                        {
-                            "name": app_name,
-                            "containerPort": port,
-                            "protocol": "TCP",
-                        }
-                    ],
-                    "envConfig": {
-                        "OSMRO_LOG_LEVEL": config["log_level"],
-                        "RO_DB_HOST": relation_state["mysql_host"],
-                        "RO_DB_OVIM_HOST": relation_state["mysql_host"],
-                        "RO_DB_PORT": relation_state["mysql_port"],
-                        "RO_DB_OVIM_PORT": relation_state["mysql_port"],
-                        "RO_DB_USER": relation_state["mysql_user"],
-                        "RO_DB_OVIM_USER": relation_state["mysql_user"],
-                        "RO_DB_PASSWORD": relation_state["mysql_password"],
-                        "RO_DB_OVIM_PASSWORD": relation_state["mysql_password"],
-                        "RO_DB_ROOT_PASSWORD": relation_state["mysql_root_password"],
-                        "RO_DB_OVIM_ROOT_PASSWORD": relation_state[
-                            "mysql_root_password"
-                        ],
-                        "RO_DB_NAME": config["ro_database"],
-                        "RO_DB_OVIM_NAME": config["vim_database"],
-                        "OPENMANO_TENANT": config["openmano_tenant"],
-                    },
-                    "kubernetes": {
-                        "startupProbe": {
-                            "exec": {"command": ["/usr/bin/pgrep", "python3"]},
-                            "initialDelaySeconds": 60,
-                            "timeoutSeconds": 5,
-                        },
-                        "readinessProbe": {
-                            "httpGet": {
-                                "path": "/openmano/tenants",
-                                "port": port,
-                            },
-                            "periodSeconds": 10,
-                            "timeoutSeconds": 5,
-                            "successThreshold": 1,
-                            "failureThreshold": 3,
-                        },
-                        "livenessProbe": {
-                            "httpGet": {
-                                "path": "/openmano/tenants",
-                                "port": port,
-                            },
-                            "initialDelaySeconds": 600,
-                            "periodSeconds": 10,
-                            "timeoutSeconds": 5,
-                            "successThreshold": 1,
-                            "failureThreshold": 3,
-                        },
-                    },
-                }
-            ],
-            "kubernetesResources": {"ingressResources": []},
-        }
-
-        spec = pod_spec.make_pod_spec(
-            image_info, config, relation_state, app_name, port
-        )
-
-        self.assertDictEqual(expected_result, spec)
-
-    def test_make_pod_spec_without_image_info(self) -> NoReturn:
-        """Testing make pod spec without image_info."""
-        image_info = None
-        config = {
-            "enable_ng_ro": True,
-            "database_commonkey": "osm",
-            "log_level": "INFO",
-        }
-        relation_state = {
-            "kafka_host": "kafka",
-            "kafka_port": 9090,
-            "mongodb_connection_string": "mongodb://mongo",
-        }
-        app_name = "ro"
-        port = 9090
-
-        spec = pod_spec.make_pod_spec(
-            image_info, config, relation_state, app_name, port
-        )
-
-        self.assertIsNone(spec)
-
-    def test_make_pod_spec_without_config(self) -> NoReturn:
-        """Testing make pod spec without config."""
-        image_info = {"upstream-source": "opensourcemano/ro:8"}
-        config = {}
-        relation_state = {
-            "kafka_host": "kafka",
-            "kafka_port": 9090,
-            "mongodb_connection_string": "mongodb://mongo",
-        }
-        app_name = "ro"
-        port = 9090
-
-        with self.assertRaises(ValueError):
-            pod_spec.make_pod_spec(image_info, config, relation_state, app_name, port)
-
-    def test_make_pod_spec_without_relation_state(self) -> NoReturn:
-        """Testing make pod spec without relation_state."""
-        image_info = {"upstream-source": "opensourcemano/ro:8"}
-        config = {
-            "enable_ng_ro": True,
-            "database_commonkey": "osm",
-            "log_level": "INFO",
-        }
-        relation_state = {}
-        app_name = "ro"
-        port = 9090
-
-        with self.assertRaises(ValueError):
-            pod_spec.make_pod_spec(image_info, config, relation_state, app_name, port)
-
-
-if __name__ == "__main__":
-    unittest.main()
diff --git a/installers/charm/ro/tox.ini b/installers/charm/ro/tox.ini
deleted file mode 100644 (file)
index f3c9144..0000000
+++ /dev/null
@@ -1,128 +0,0 @@
-# Copyright 2021 Canonical Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# For those usages not covered by the Apache License, Version 2.0 please
-# contact: legal@canonical.com
-#
-# To get in touch with the maintainers, please contact:
-# osm-charmers@lists.launchpad.net
-##
-#######################################################################################
-
-[tox]
-envlist = black, cover, flake8, pylint, yamllint, safety
-skipsdist = true
-
-[tox:jenkins]
-toxworkdir = /tmp/.tox
-
-[testenv]
-basepython = python3.8
-setenv =
-  VIRTUAL_ENV={envdir}
-  PYTHONPATH = {toxinidir}:{toxinidir}/lib:{toxinidir}/src
-  PYTHONDONTWRITEBYTECODE = 1
-deps =  -r{toxinidir}/requirements.txt
-
-
-#######################################################################################
-[testenv:black]
-deps = black
-commands =
-        black --check --diff src/ tests/
-
-
-#######################################################################################
-[testenv:cover]
-deps =  {[testenv]deps}
-        -r{toxinidir}/requirements-test.txt
-        coverage
-        nose2
-commands =
-        sh -c 'rm -f nosetests.xml'
-        coverage erase
-        nose2 -C --coverage src
-        coverage report --omit='*tests*'
-        coverage html -d ./cover --omit='*tests*'
-        coverage xml -o coverage.xml --omit=*tests*
-whitelist_externals = sh
-
-
-#######################################################################################
-[testenv:flake8]
-deps =  flake8
-        flake8-import-order
-commands =
-        flake8 src/ tests/
-
-
-#######################################################################################
-[testenv:pylint]
-deps =  {[testenv]deps}
-        -r{toxinidir}/requirements-test.txt
-        pylint==2.10.2
-commands =
-    pylint -E src/ tests/
-
-
-#######################################################################################
-[testenv:safety]
-setenv =
-        LC_ALL=C.UTF-8
-        LANG=C.UTF-8
-deps =  {[testenv]deps}
-        safety
-commands =
-        - safety check --full-report
-
-
-#######################################################################################
-[testenv:yamllint]
-deps =  {[testenv]deps}
-        -r{toxinidir}/requirements-test.txt
-        yamllint
-commands = yamllint .
-
-#######################################################################################
-[testenv:build]
-passenv=HTTP_PROXY HTTPS_PROXY NO_PROXY
-whitelist_externals =
-  charmcraft
-  sh
-commands =
-  charmcraft pack
-  sh -c 'ubuntu_version=20.04; \
-        architectures="amd64-aarch64-arm64"; \
-        charm_name=`cat metadata.yaml | grep -E "^name: " | cut -f 2 -d " "`; \
-        mv $charm_name"_ubuntu-"$ubuntu_version-$architectures.charm $charm_name.charm'
-
-#######################################################################################
-[flake8]
-ignore =
-        W291,
-        W293,
-        W503,
-        E123,
-        E125,
-        E226,
-        E241,
-exclude =
-        .git,
-        __pycache__,
-        .tox,
-max-line-length = 120
-show-source = True
-builtins = _
-max-complexity = 10
-import-order-style = google
diff --git a/installers/charm/update-bundle-revisions.sh b/installers/charm/update-bundle-revisions.sh
deleted file mode 100755 (executable)
index 1a8d8cb..0000000
+++ /dev/null
@@ -1,35 +0,0 @@
-##
-# Copyright 2019 ETSI
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-##
-
-charms=`cat bundles/osm/bundle.yaml  | grep cs | grep -v k8s | awk '{print $2}' | tr -d \"`
-for charm_uri in $charms; do
-    charm_without_rev=`echo $charm_uri| rev | cut -d "-" -f 2-5 | rev`
-    latest_revision=`charm show --channel edge $charm_without_rev | grep Revision | awk '{print $2}'`
-    new_charm_uri=$charm_without_rev-$latest_revision
-    old_uri=`echo $charm_uri | sed 's/\//\\\\\//g'`
-    new_uri=`echo $new_charm_uri | sed 's/\//\\\\\//g'`
-    sed -i "s/"$old_uri"/"$new_uri"/g" bundles/osm/bundle.yaml
-done
-
-charms=`cat bundles/osm-ha/bundle.yaml  | grep cs | grep -v k8s | awk '{print $2}' | tr -d \"`
-for charm_uri in $charms; do
-    charm_without_rev=`echo $charm_uri| rev | cut -d "-" -f 2-5 | rev`
-    latest_revision=`charm show --channel edge $charm_without_rev | grep Revision | awk '{print $2}'`
-    new_charm_uri=$charm_without_rev-$latest_revision
-    old_uri=`echo $charm_uri | sed 's/\//\\\\\//g'`
-    new_uri=`echo $new_charm_uri | sed 's/\//\\\\\//g'`
-    sed -i "s/"$old_uri"/"$new_uri"/g" bundles/osm-ha/bundle.yaml
-done
\ No newline at end of file
index 0d7d5eb..95d0e96 100644 (file)
@@ -25,11 +25,6 @@ bases:
         channel: "20.04"
 parts:
   charm:
-    build-environment:
-    - CRYPTOGRAPHY_DONT_BUILD_RUST: 1
+    charm-binary-python-packages: [cryptography, bcrypt]
     build-packages:
-      - build-essential
-      - libssl-dev
-      - libffi-dev
-      - python3-dev
-      - cargo
+      - libffi-dev
\ No newline at end of file
index 2e1a6dd..7f5495b 100644 (file)
@@ -47,10 +47,6 @@ ignore = ["W503", "E402", "E501", "D107"]
 # D100, D101, D102, D103: Ignore missing docstrings in tests
 per-file-ignores = ["tests/*:D100,D101,D102,D103,D104"]
 docstring-convention = "google"
-# Check for properly formatted copyright header in each file
-copyright-check = "True"
-copyright-author = "Canonical Ltd."
-copyright-regexp = "Copyright\\s\\d{4}([-,]\\d{4})*\\s+%(author)s"
 
 [tool.bandit]
 tests = ["B201", "B301"]
index 66e845a..387a2e0 100644 (file)
@@ -14,6 +14,6 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 #######################################################################################
-ops >= 1.2.0
-juju
-pyyaml
\ No newline at end of file
+ops < 2.2
+juju < 3
+pyyaml
index ebd43f5..394386e 100644 (file)
@@ -16,7 +16,9 @@
 # limitations under the License.
 #######################################################################################
 
+import asyncio
 import logging
+import shlex
 from pathlib import Path
 
 import pytest
@@ -26,6 +28,36 @@ from pytest_operator.plugin import OpsTest
 logger = logging.getLogger(__name__)
 
 METADATA = yaml.safe_load(Path("./metadata.yaml").read_text())
+VCA_APP = "osm-vca"
+
+LCM_CHARM = "osm-lcm"
+LCM_APP = "lcm"
+KAFKA_CHARM = "kafka-k8s"
+KAFKA_APP = "kafka"
+MONGO_DB_CHARM = "mongodb-k8s"
+MONGO_DB_APP = "mongodb"
+RO_CHARM = "osm-ro"
+RO_APP = "ro"
+ZOOKEEPER_CHARM = "zookeeper-k8s"
+ZOOKEEPER_APP = "zookeeper"
+LCM_APPS = [KAFKA_APP, MONGO_DB_APP, ZOOKEEPER_APP, RO_APP, LCM_APP]
+MON_CHARM = "osm-mon"
+MON_APP = "mon"
+KEYSTONE_CHARM = "osm-keystone"
+KEYSTONE_APP = "keystone"
+MARIADB_CHARM = "charmed-osm-mariadb-k8s"
+MARIADB_APP = "mariadb"
+PROMETHEUS_CHARM = "osm-prometheus"
+PROMETHEUS_APP = "prometheus"
+MON_APPS = [
+    KAFKA_APP,
+    ZOOKEEPER_APP,
+    KEYSTONE_APP,
+    MONGO_DB_APP,
+    MARIADB_APP,
+    PROMETHEUS_APP,
+    MON_APP,
+]
 
 
 @pytest.mark.abort_on_fail
@@ -34,16 +66,121 @@ async def test_build_and_deploy(ops_test: OpsTest):
 
     Assert on the unit status before any relations/configurations take place.
     """
-    await ops_test.model.set_config({"update-status-hook-interval": "10s"})
-
     charm = await ops_test.build_charm(".")
-    await ops_test.model.deploy(charm, application_name="osm-vca-integrator-k8s")
-    await ops_test.model.wait_for_idle(
-        apps=["osm-vca-integrator-k8s"], status="blocked", timeout=1000
+    await ops_test.model.deploy(charm, application_name=VCA_APP)
+    async with ops_test.fast_forward():
+        await ops_test.model.wait_for_idle(
+            apps=[VCA_APP],
+            status="blocked",
+        )
+    assert ops_test.model.applications[VCA_APP].units[0].workload_status == "blocked"
+
+
+@pytest.mark.abort_on_fail
+async def test_vca_configuration(ops_test: OpsTest):
+    controllers = (Path.home() / ".local/share/juju/controllers.yaml").read_text()
+    accounts = (Path.home() / ".local/share/juju/accounts.yaml").read_text()
+    public_key = (Path.home() / ".local/share/juju/ssh/juju_id_rsa.pub").read_text()
+    await ops_test.model.applications[VCA_APP].set_config(
+        {
+            "controllers": controllers,
+            "accounts": accounts,
+            "public-key": public_key,
+            "k8s-cloud": "microk8s",
+        }
     )
-    assert (
-        ops_test.model.applications["osm-vca-integrator-k8s"].units[0].workload_status == "blocked"
+    async with ops_test.fast_forward():
+        await ops_test.model.wait_for_idle(
+            apps=[VCA_APP],
+            status="active",
+        )
+
+
+@pytest.mark.abort_on_fail
+async def test_vca_integration_lcm(ops_test: OpsTest):
+    lcm_deploy_cmd = f"juju deploy {LCM_CHARM} {LCM_APP} --resource lcm-image=opensourcemano/lcm:testing-daily --channel=latest/beta --series=focal"
+    ro_deploy_cmd = f"juju deploy {RO_CHARM} {RO_APP} --resource ro-image=opensourcemano/ro:testing-daily --channel=latest/beta --series=focal"
+
+    await asyncio.gather(
+        # The LCM and RO charms have to be deployed through the Juju CLI because
+        # python-libjuju fails to parse the "assumes" field in their metadata
+        # (https://github.com/juju/python-libjuju/issues/820)
+        ops_test.run(*shlex.split(lcm_deploy_cmd), check=True),
+        ops_test.run(*shlex.split(ro_deploy_cmd), check=True),
+        ops_test.model.deploy(KAFKA_CHARM, application_name=KAFKA_APP, channel="stable"),
+        ops_test.model.deploy(MONGO_DB_CHARM, application_name=MONGO_DB_APP, channel="edge"),
+        ops_test.model.deploy(ZOOKEEPER_CHARM, application_name=ZOOKEEPER_APP, channel="stable"),
     )
+    async with ops_test.fast_forward():
+        await ops_test.model.wait_for_idle(
+            apps=LCM_APPS,
+        )
+    # wait for MongoDB to be active before relating RO to it
+    async with ops_test.fast_forward():
+        await ops_test.model.wait_for_idle(apps=[MONGO_DB_APP], status="active")
+    logger.info("Adding relations")
+    await ops_test.model.add_relation(KAFKA_APP, ZOOKEEPER_APP)
+    await ops_test.model.add_relation(RO_APP, MONGO_DB_APP)
+    await ops_test.model.add_relation(RO_APP, KAFKA_APP)
+    # LCM specific
+    await ops_test.model.add_relation(LCM_APP, MONGO_DB_APP)
+    await ops_test.model.add_relation(LCM_APP, KAFKA_APP)
+    await ops_test.model.add_relation(LCM_APP, RO_APP)
+
+    async with ops_test.fast_forward():
+        await ops_test.model.wait_for_idle(
+            apps=LCM_APPS,
+            status="active",
+        )
+
+    logger.info("Adding VCA-LCM relation")
+    await ops_test.model.add_relation(VCA_APP, LCM_APP)
+    async with ops_test.fast_forward():
+        await ops_test.model.wait_for_idle(
+            apps=[VCA_APP, LCM_APP],
+            status="active",
+        )
+
+
+@pytest.mark.abort_on_fail
+async def test_vca_integration_mon(ops_test: OpsTest):
+    keystone_deploy_cmd = f"juju deploy {KEYSTONE_CHARM} {KEYSTONE_APP} --resource keystone-image=opensourcemano/keystone:testing-daily"
+    mon_deploy_cmd = f"juju deploy {MON_CHARM} {MON_APP} --resource mon-image=opensourcemano/mon:testing-daily --channel=latest/beta --series=focal"
+    await asyncio.gather(
+        # The MON charm is deployed via the Juju CLI because
+        # python-libjuju fails to parse the charm's "assumes" field
+        # (https://github.com/juju/python-libjuju/issues/820)
+        ops_test.run(*shlex.split(mon_deploy_cmd), check=True),
+        ops_test.model.deploy(MARIADB_CHARM, application_name=MARIADB_APP, channel="stable"),
+        ops_test.model.deploy(PROMETHEUS_CHARM, application_name=PROMETHEUS_APP, channel="stable"),
+        # The Keystone charm is deployed via the Juju CLI because
+        # bug https://github.com/juju/python-libjuju/issues/766
+        # prevents the resources from being set correctly
+        ops_test.run(*shlex.split(keystone_deploy_cmd), check=True),
+    )
+    async with ops_test.fast_forward():
+        await ops_test.model.wait_for_idle(
+            apps=MON_APPS,
+        )
+
+    logger.info("Adding relations")
+    await ops_test.model.add_relation(MARIADB_APP, KEYSTONE_APP)
+    # MON specific
+    await ops_test.model.add_relation(MON_APP, MONGO_DB_APP)
+    await ops_test.model.add_relation(MON_APP, KAFKA_APP)
+    await ops_test.model.add_relation(MON_APP, KEYSTONE_APP)
+    await ops_test.model.add_relation(MON_APP, PROMETHEUS_APP)
+
+    async with ops_test.fast_forward():
+        await ops_test.model.wait_for_idle(
+            apps=MON_APPS,
+            status="active",
+        )
 
-    logger.debug("Setting update-status-hook-interval to 60m")
-    await ops_test.model.set_config({"update-status-hook-interval": "60m"})
+    logger.info("Adding VCA-MON relation")
+    await ops_test.model.add_relation(VCA_APP, MON_APP)
+    async with ops_test.fast_forward():
+        await ops_test.model.wait_for_idle(
+            apps=[VCA_APP, MON_APP],
+            status="active",
+        )
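The integration tests above shell out to the Juju CLI (via `ops_test.run` and `shlex.split`) instead of calling `model.deploy()`, because python-libjuju cannot parse the charms' `assumes` field. A minimal, standalone sketch of how those deploy commands are built and split into an argv list (the helper name is illustrative, not part of the test suite):

```python
import shlex


def juju_deploy_cmd(charm: str, app: str, resource: str,
                    channel: str = "latest/beta", series: str = "focal") -> list:
    """Build the argv for a 'juju deploy' CLI call, mirroring the
    deploy command strings used in the integration tests above."""
    cmd = (
        f"juju deploy {charm} {app} "
        f"--resource {resource} --channel={channel} --series={series}"
    )
    # shlex.split turns the command string into an argv list suitable
    # for ops_test.run(*argv, check=True)
    return shlex.split(cmd)


argv = juju_deploy_cmd("osm-lcm", "lcm",
                       "lcm-image=opensourcemano/lcm:testing-daily")
print(argv[:2])  # → ['juju', 'deploy']
```

Passing the pre-split argv keeps the command free of shell quoting surprises while still exercising the same code path the CLI does.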
index 1893353..a8eb8bc 100644 (file)
@@ -27,6 +27,7 @@ lib_path = {toxinidir}/lib/charms/osm_vca_integrator
 all_path = {[vars]src_path} {[vars]tst_path} {[vars]lib_path}
 
 [testenv]
+basepython = python3.8
 setenv =
   PYTHONPATH = {toxinidir}:{toxinidir}/lib:{[vars]src_path}
   PYTHONBREAKPOINT=ipdb.set_trace
@@ -51,7 +52,6 @@ deps =
     black
     flake8
     flake8-docstrings
-    flake8-copyright
     flake8-builtins
     pylint
     pyproject-flake8
@@ -62,7 +62,7 @@ deps =
     -r{toxinidir}/requirements.txt
 commands =
     codespell {[vars]lib_path}
-    codespell {toxinidir}/. --skip {toxinidir}/.git --skip {toxinidir}/.tox \
+    codespell {toxinidir} --skip {toxinidir}/.git --skip {toxinidir}/.tox \
       --skip {toxinidir}/build --skip {toxinidir}/lib --skip {toxinidir}/venv \
       --skip {toxinidir}/.mypy_cache --skip {toxinidir}/icon.svg
     pylint -E {[vars]src_path}
@@ -98,9 +98,9 @@ commands =
 description = Run integration tests
 deps =
     pytest
-    juju
+    juju<3
     pytest-operator
     -r{toxinidir}/requirements.txt
     -r{toxinidir}/requirements-dev.txt
 commands =
-    pytest -v --tb native --ignore={[vars]tst_path}unit --log-cli-level=INFO -s {posargs}
+    pytest -v --tb native --ignore={[vars]tst_path}unit --log-cli-level=INFO -s {posargs} --cloud microk8s
index 8878689..9172ac3 100755 (executable)
@@ -20,7 +20,7 @@ JUJU_VERSION=2.9
 JUJU_AGENT_VERSION=2.9.34
 K8S_CLOUD_NAME="k8s-cloud"
 KUBECTL="microk8s.kubectl"
-MICROK8S_VERSION=1.23
+MICROK8S_VERSION=1.26
 OSMCLIENT_VERSION=latest
 IMAGES_OVERLAY_FILE=~/.osm/images-overlay.yaml
 PASSWORD_OVERLAY_FILE=~/.osm/password-overlay.yaml
@@ -137,7 +137,7 @@ EOF
     else
         sg ${KUBEGRP} -c "echo ${DEFAULT_IP}-${DEFAULT_IP} | microk8s.enable metallb"
         sg ${KUBEGRP} -c "microk8s.enable ingress"
-        sg ${KUBEGRP} -c "microk8s.enable storage dns"
+        sg ${KUBEGRP} -c "microk8s.enable hostpath-storage dns"
         TIME_TO_WAIT=30
         start_time="$(date -u +%s)"
         while true
index 24e2004..69e0516 100644 (file)
@@ -63,4 +63,4 @@ spec:
           value: mongodb://mongodb-k8s:27017/?replicaSet=rs0
         envFrom:
         - secretRef:
-           name: mon-secret
+            name: mon-secret
diff --git a/installers/docker/osm_pods/ng-mon.yaml b/installers/docker/osm_pods/ng-mon.yaml
new file mode 100644 (file)
index 0000000..121c0c5
--- /dev/null
@@ -0,0 +1,68 @@
+#######################################################################################
+# Copyright ETSI Contributors and Others.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+# implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#######################################################################################
+apiVersion: v1
+kind: Service
+metadata:
+  name: mon
+spec:
+  clusterIP: None
+  ports:
+  - port: 8662
+    protocol: TCP
+    targetPort: 8662
+  selector:
+    app: mon
+  type: ClusterIP
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: mon
+  labels:
+    app: mon
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: mon
+  template:
+    metadata:
+      labels:
+        app: mon
+    spec:
+      initContainers:
+      - name: kafka-mongo-test
+        image: alpine:latest
+        command: ["sh", "-c", "until (nc -zvw1 kafka 9092 && nc -zvw1 mongodb-k8s 27017); do sleep 3; done; exit 0"]
+      containers:
+      - name: mon
+        command: ["/bin/bash"]
+        args: ["scripts/dashboarder-start.sh"]
+        image: opensourcemano/mon:13
+        ports:
+        - containerPort: 8662
+          protocol: TCP
+        env:
+        - name: OSMMON_MESSAGE_HOST
+          value: kafka
+        - name: OSMMON_MESSAGE_PORT
+          value: "9092"
+        - name: OSMMON_DATABASE_URI
+          value: mongodb://mongodb-k8s:27017/?replicaSet=rs0
+        envFrom:
+        - secretRef:
+            name: mon-secret
index 0172aaf..77ccbd1 100644 (file)
@@ -30,10 +30,13 @@ spec:
   type: NodePort
 ---
 apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: prom
 data:
-  osm_rules.yml: |
+  osm_metric_rules.yml: |
     groups:
-      - name: osm_rules
+      - name: osm_metric_rules
         rules:
         - record: vm_status_extended
           expr: (last_over_time(vm_status[1m]) * on (vm_id, vim_id) group_left(ns_id, vnf_id, vdu_id, project_id, job, vdu_name, vnf_member_index) last_over_time(ns_topology[1m])) or (last_over_time(ns_topology[1m]) * -1)
@@ -47,6 +50,16 @@ data:
           expr: (0 * (count (vm_status_extended==0) by (ns_id)>=0)) or (min by (ns_id) (vm_status_extended))
           labels:
             job: osm_prometheus
+  osm_alert_rules.yml: |
+    groups:
+      - name: osm_alert_rules
+        rules:
+        - alert: vdu_down
+          expr: vm_status_extended != 1
+          for: 3m
+          annotations:
+            summary: "VDU {{ $labels.vm_id }} in VIM {{ $labels.vim_id }} is down"
+            description: "VDU {{ $labels.vm_id }} in VIM {{ $labels.vim_id }} has been down for more than 3 minutes. NS instance id is {{ $labels.ns_id }}"
   prometheus.yml: |
     # Copyright 2018 The Prometheus Authors
     # Copyright 2018 Whitestack
@@ -75,12 +88,12 @@ data:
       alertmanagers:
       - static_configs:
         - targets:
-          - alertmanager:9093
+          - alertmanager:9093
 
     # Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
     rule_files:
-      - "osm_rules.yml"
-      # - "second_rules.yml"
+      - "osm_metric_rules.yml"
+      - "osm_alert_rules.yml"
 
     # A scrape configuration containing exactly one endpoint to scrape:
     # Here it's Prometheus itself.
@@ -94,9 +107,6 @@ data:
         static_configs:
         - targets:
           - pushgateway-prometheus-pushgateway:9091
-kind: ConfigMap
-metadata:
-  name: prom
 ---
 apiVersion: apps/v1
 kind: StatefulSet
@@ -119,7 +129,7 @@ spec:
       - name: prometheus-init-config
         image: busybox
         command: ["/bin/sh", "-c"]
-        args: ['if [ ! -f "/etc/prometheus/prometheus.yml" ]; then cp /config/prometheus.yml /etc/prometheus; fi; cp /config/osm_rules.yml /etc/prometheus']
+        args: ['if [ ! -f "/etc/prometheus/prometheus.yml" ]; then cp /config/prometheus.yml /etc/prometheus; fi; cp /config/osm_metric_rules.yml /config/osm_alert_rules.yml /etc/prometheus']
         volumeMounts:
           - name: prom-config
             mountPath: /etc/prometheus
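The new `osm_alert_rules.yml` above fires `vdu_down` whenever `vm_status_extended != 1` holds for 3 minutes. A rough Python sketch of that alert predicate over a sample of metric values (the sample data is invented for illustration; Prometheus additionally applies the 3-minute `for:` hold-off):

```python
def vdus_down(samples: dict) -> list:
    """Return the vm_ids whose vm_status_extended value is not 1,
    i.e. the VDUs that would match the 'vdu_down' alert expression."""
    return sorted(vm_id for vm_id, status in samples.items() if status != 1)


# Hypothetical scrape: 1 = running, 0 = down, -1 = topology-only entry
samples = {"vm-a": 1, "vm-b": 0, "vm-c": -1}
print(vdus_down(samples))  # → ['vm-b', 'vm-c']
```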
index 9b3754a..4871be4 100644 (file)
@@ -50,4 +50,4 @@ spec:
            value: mongodb://mongodb-k8s:27017/?replicaSet=rs0
         envFrom:
         - secretRef:
-             name: pol-secret
+            name: pol-secret
diff --git a/installers/docker/osm_pods/webhook-translator.yaml b/installers/docker/osm_pods/webhook-translator.yaml
new file mode 100644 (file)
index 0000000..eb41f58
--- /dev/null
@@ -0,0 +1,55 @@
+#######################################################################################
+# Copyright ETSI Contributors and Others.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+# implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#######################################################################################
+
+apiVersion: v1
+kind: Service
+metadata:
+  name: webhook-translator
+spec:
+  ports:
+  - nodePort: 9998
+    port: 80
+    targetPort: 80
+  selector:
+    app: webhook-translator
+  type: NodePort
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: webhook-translator
+  labels:
+    app: webhook-translator
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: webhook-translator
+  template:
+    metadata:
+      labels:
+        app: webhook-translator
+    spec:
+      containers:
+      - name: webhook-translator
+        image: gerardogarcia/webhook-translator:13
+        ports:
+        - containerPort: 80
+          protocol: TCP
+        envFrom:
+        - secretRef:
+            name: webhook-translator-secret
index 8c651e2..03be9a6 100755 (executable)
@@ -30,7 +30,7 @@ function usage(){
     echo -e "                     -b tags/v1.1.0     (a specific tag)"
     echo -e "                     ..."
     echo -e "     -a <apt proxy url>: use this apt proxy url when downloading apt packages (air-gapped installation)"
-    echo -e "     -s <stack name> or <namespace>  user defined stack name when installed using swarm or namespace when installed using k8s, default is osm"
+    echo -e "     -s <namespace>  user-defined namespace when installed using k8s; default is osm"
     echo -e "     -H <VCA host>   use specific juju host controller IP"
     echo -e "     -S <VCA secret> use VCA/juju secret key"
     echo -e "     -P <VCA pubkey> use VCA/juju public key file"
@@ -112,12 +112,12 @@ function set_vca_variables() {
     OSM_VCA_CLOUDNAME="lxd-cloud"
     [ -n "$OSM_VCA_HOST" ] && OSM_VCA_CLOUDNAME="localhost"
     if [ -z "$OSM_VCA_HOST" ]; then
-        [ -z "$CONTROLLER_NAME" ] && OSM_VCA_HOST=`sg lxd -c "juju show-controller $OSM_STACK_NAME"|grep api-endpoints|awk -F\' '{print $2}'|awk -F\: '{print $1}'`
+        [ -z "$CONTROLLER_NAME" ] && OSM_VCA_HOST=`sg lxd -c "juju show-controller $OSM_NAMESPACE"|grep api-endpoints|awk -F\' '{print $2}'|awk -F\: '{print $1}'`
         [ -n "$CONTROLLER_NAME" ] && OSM_VCA_HOST=`juju show-controller $CONTROLLER_NAME |grep api-endpoints|awk -F\' '{print $2}'|awk -F\: '{print $1}'`
         [ -z "$OSM_VCA_HOST" ] && FATAL "Cannot obtain juju controller IP address"
     fi
     if [ -z "$OSM_VCA_SECRET" ]; then
-        [ -z "$CONTROLLER_NAME" ] && OSM_VCA_SECRET=$(parse_juju_password $OSM_STACK_NAME)
+        [ -z "$CONTROLLER_NAME" ] && OSM_VCA_SECRET=$(parse_juju_password $OSM_NAMESPACE)
         [ -n "$CONTROLLER_NAME" ] && OSM_VCA_SECRET=$(parse_juju_password $CONTROLLER_NAME)
         [ -z "$OSM_VCA_SECRET" ] && FATAL "Cannot obtain juju secret"
     fi
@@ -126,7 +126,7 @@ function set_vca_variables() {
         [ -z "$OSM_VCA_PUBKEY" ] && FATAL "Cannot obtain juju public key"
     fi
     if [ -z "$OSM_VCA_CACERT" ]; then
-        [ -z "$CONTROLLER_NAME" ] && OSM_VCA_CACERT=$(juju controllers --format json | jq -r --arg controller $OSM_STACK_NAME '.controllers[$controller]["ca-cert"]' | base64 | tr -d \\n)
+        [ -z "$CONTROLLER_NAME" ] && OSM_VCA_CACERT=$(juju controllers --format json | jq -r --arg controller $OSM_NAMESPACE '.controllers[$controller]["ca-cert"]' | base64 | tr -d \\n)
         [ -n "$CONTROLLER_NAME" ] && OSM_VCA_CACERT=$(juju controllers --format json | jq -r --arg controller $CONTROLLER_NAME '.controllers[$controller]["ca-cert"]' | base64 | tr -d \\n)
         [ -z "$OSM_VCA_CACERT" ] && FATAL "Cannot obtain juju CA certificate"
     fi
@@ -327,6 +327,7 @@ function generate_docker_env_files() {
     sudo cp $OSM_DOCKER_WORK_DIR/ro.env{,~}
     if [ -n "${INSTALL_NGSA}" ]; then
         sudo cp $OSM_DOCKER_WORK_DIR/ngsa.env{,~}
+        sudo cp $OSM_DOCKER_WORK_DIR/webhook-translator.env{,~}
     fi
 
     echo "Generating docker env files"
@@ -475,6 +476,14 @@ function generate_docker_env_files() {
         echo "OSMMON_DATABASE_COMMONKEY=${OSM_DATABASE_COMMONKEY}" | sudo tee -a $OSM_DOCKER_WORK_DIR/ngsa.env
     fi
 
+    # Webhook-translator
+    if [ -n "${INSTALL_NGSA}" ] && [ ! -f $OSM_DOCKER_WORK_DIR/webhook-translator.env ]; then
+        echo "AIRFLOW_HOST=airflow-webserver" | sudo tee -a $OSM_DOCKER_WORK_DIR/webhook-translator.env
+        echo "AIRFLOW_PORT=8080" | sudo tee -a $OSM_DOCKER_WORK_DIR/webhook-translator.env
+        echo "AIRFLOW_USER=admin" | sudo tee -a $OSM_DOCKER_WORK_DIR/webhook-translator.env
+        echo "AIRFLOW_PASS=admin" | sudo tee -a $OSM_DOCKER_WORK_DIR/webhook-translator.env
+    fi
+
     echo "Finished generation of docker env files"
     [ -z "${DEBUG_INSTALL}" ] || DEBUG end of function
 }
@@ -482,16 +491,17 @@ function generate_docker_env_files() {
 #creates secrets from env files which will be used by containers
 function kube_secrets(){
     [ -z "${DEBUG_INSTALL}" ] || DEBUG beginning of function
-    kubectl create ns $OSM_STACK_NAME
-    kubectl create secret generic lcm-secret -n $OSM_STACK_NAME --from-env-file=$OSM_DOCKER_WORK_DIR/lcm.env
-    kubectl create secret generic mon-secret -n $OSM_STACK_NAME --from-env-file=$OSM_DOCKER_WORK_DIR/mon.env
-    kubectl create secret generic nbi-secret -n $OSM_STACK_NAME --from-env-file=$OSM_DOCKER_WORK_DIR/nbi.env
-    kubectl create secret generic ro-db-secret -n $OSM_STACK_NAME --from-env-file=$OSM_DOCKER_WORK_DIR/ro-db.env
-    kubectl create secret generic ro-secret -n $OSM_STACK_NAME --from-env-file=$OSM_DOCKER_WORK_DIR/ro.env
-    kubectl create secret generic keystone-secret -n $OSM_STACK_NAME --from-env-file=$OSM_DOCKER_WORK_DIR/keystone.env
-    kubectl create secret generic pol-secret -n $OSM_STACK_NAME --from-env-file=$OSM_DOCKER_WORK_DIR/pol.env
+    kubectl create ns $OSM_NAMESPACE
+    kubectl create secret generic lcm-secret -n $OSM_NAMESPACE --from-env-file=$OSM_DOCKER_WORK_DIR/lcm.env
+    kubectl create secret generic mon-secret -n $OSM_NAMESPACE --from-env-file=$OSM_DOCKER_WORK_DIR/mon.env
+    kubectl create secret generic nbi-secret -n $OSM_NAMESPACE --from-env-file=$OSM_DOCKER_WORK_DIR/nbi.env
+    kubectl create secret generic ro-db-secret -n $OSM_NAMESPACE --from-env-file=$OSM_DOCKER_WORK_DIR/ro-db.env
+    kubectl create secret generic ro-secret -n $OSM_NAMESPACE --from-env-file=$OSM_DOCKER_WORK_DIR/ro.env
+    kubectl create secret generic keystone-secret -n $OSM_NAMESPACE --from-env-file=$OSM_DOCKER_WORK_DIR/keystone.env
+    kubectl create secret generic pol-secret -n $OSM_NAMESPACE --from-env-file=$OSM_DOCKER_WORK_DIR/pol.env
     if [ -n "${INSTALL_NGSA}" ]; then
-        kubectl create secret generic ngsa-secret -n $OSM_STACK_NAME --from-env-file=$OSM_DOCKER_WORK_DIR/ngsa.env
+        kubectl create secret generic ngsa-secret -n $OSM_NAMESPACE --from-env-file=$OSM_DOCKER_WORK_DIR/ngsa.env
+        kubectl create secret generic webhook-translator-secret -n $OSM_NAMESPACE --from-env-file=$OSM_DOCKER_WORK_DIR/webhook-translator.env
     fi
     [ -z "${DEBUG_INSTALL}" ] || DEBUG end of function
 }
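Each `kubectl create secret generic … --from-env-file` call above reads `KEY=VALUE` lines from an env file into the secret's data. A small sketch of that parsing (skipping blank lines and `#` comments, splitting on the first `=`), under the assumption that the generated env files follow the usual dotenv shape:

```python
def parse_env_file(text: str) -> dict:
    """Parse KEY=VALUE lines roughly the way kubectl --from-env-file does:
    skip blanks and comment lines, split each line on the first '='."""
    data = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        data[key] = value
    return data


env = "AIRFLOW_HOST=airflow-webserver\nAIRFLOW_PORT=8080\n# comment\n"
print(parse_env_file(env))
```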
@@ -499,22 +509,22 @@ function kube_secrets(){
 #deploys osm pods and services
 function deploy_osm_services() {
     [ -z "${DEBUG_INSTALL}" ] || DEBUG beginning of function
-    kubectl apply -n $OSM_STACK_NAME -f $OSM_K8S_WORK_DIR
+    kubectl apply -n $OSM_NAMESPACE -f $OSM_K8S_WORK_DIR
     [ -z "${DEBUG_INSTALL}" ] || DEBUG end of function
 }
 
 #deploy charmed services
 function deploy_charmed_services() {
     [ -z "${DEBUG_INSTALL}" ] || DEBUG beginning of function
-    juju add-model $OSM_STACK_NAME $OSM_VCA_K8S_CLOUDNAME
-    juju deploy ch:mongodb-k8s -m $OSM_STACK_NAME
+    juju add-model $OSM_NAMESPACE $OSM_VCA_K8S_CLOUDNAME
+    juju deploy ch:mongodb-k8s -m $OSM_NAMESPACE
     [ -z "${DEBUG_INSTALL}" ] || DEBUG end of function
 }
 
 function deploy_osm_pla_service() {
     [ -z "${DEBUG_INSTALL}" ] || DEBUG beginning of function
     # corresponding to deploy_osm_services
-    kubectl apply -n $OSM_STACK_NAME -f $OSM_DOCKER_WORK_DIR/osm_pla
+    kubectl apply -n $OSM_NAMESPACE -f $OSM_DOCKER_WORK_DIR/osm_pla
     [ -z "${DEBUG_INSTALL}" ] || DEBUG end of function
 }
 
@@ -540,6 +550,8 @@ function parse_yaml() {
             image=${module}
             if [ "$module" == "ng-prometheus" ]; then
                 image="prometheus"
+            elif [ "$module" == "ng-mon" ]; then
+                image="mon"
             fi
             echo "Updating K8s manifest file from opensourcemano\/${image}:.* to ${DOCKER_REGISTRY_URL}${DOCKER_USER}\/${image}:${TAG}"
             sudo sed -i "s#opensourcemano/${image}:.*#${DOCKER_REGISTRY_URL}${DOCKER_USER}/${image}:${TAG}#g" ${OSM_K8S_WORK_DIR}/${module}.yaml
@@ -549,7 +561,7 @@ function parse_yaml() {
 }
 
 function update_manifest_files() {
-    osm_services="nbi lcm ro pol mon ng-ui keystone pla prometheus ng-prometheus"
+    osm_services="nbi lcm ro pol mon ng-mon ng-ui keystone pla prometheus ng-prometheus"
     list_of_services=""
     for module in $osm_services; do
         module_upper="${module^^}"
@@ -564,10 +576,15 @@ function update_manifest_files() {
         parse_yaml $MODULE_DOCKER_TAG $list_of_services_to_rebuild
     fi
     # The manifest for prometheus is prometheus.yaml or ng-prometheus.yaml, depending on the installation option
+    # With NG-SA, the installation includes ng-mon (mon-dashboarder only), ng-prometheus and the webhook translator, and excludes pol, mon and prometheus
     if [ -n "$INSTALL_NGSA" ]; then
         sudo rm -f ${OSM_K8S_WORK_DIR}/prometheus.yaml
+        sudo rm -f ${OSM_K8S_WORK_DIR}/mon.yaml
+        sudo rm -f ${OSM_K8S_WORK_DIR}/pol.yaml
     else
+        sudo rm -f ${OSM_K8S_WORK_DIR}/ng-mon.yaml
         sudo rm -f ${OSM_K8S_WORK_DIR}/ng-prometheus.yaml
+        sudo rm -f ${OSM_K8S_WORK_DIR}/webhook-translator.yaml
     fi
     [ -z "${DEBUG_INSTALL}" ] || DEBUG end of function
 }
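The NG-SA branch above keeps the `ng-mon`, `ng-prometheus` and `webhook-translator` manifests and deletes `prometheus`, `mon` and `pol` (and vice versa without NG-SA). That selection can be sketched as a pure function, with names mirroring the manifest basenames in the hunk:

```python
def manifests_to_remove(install_ngsa: bool) -> set:
    """Return the manifest basenames update_manifest_files deletes,
    depending on whether NG-SA is being installed."""
    if install_ngsa:
        # NG-SA replaces the classic monitoring stack
        return {"prometheus.yaml", "mon.yaml", "pol.yaml"}
    # Without NG-SA, drop the NG-SA-only manifests instead
    return {"ng-mon.yaml", "ng-prometheus.yaml", "webhook-translator.yaml"}


print(sorted(manifests_to_remove(True)))
```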
@@ -709,9 +726,6 @@ function install_osm() {
 
     find_devops_folder
 
-    # TODO: the use of stacks come from docker-compose. We should probably remove
-    [ "${OSM_STACK_NAME}" == "osm" ] || OSM_DOCKER_WORK_DIR="$OSM_WORK_DIR/stack/$OSM_STACK_NAME"
-
     track start release $RELEASE none none docker_tag $OSM_DOCKER_TAG none none installation_type $OSM_INSTALLATION_TYPE none none
 
     track checks checkingroot_ok
@@ -760,7 +774,7 @@ function install_osm() {
     FATAL_TRACK k8scluster "install_kubeadm_cluster.sh failed"
     track k8scluster k8scluster_ok
 
-    JUJU_OPTS="-D ${OSM_DEVOPS} -s ${OSM_STACK_NAME} -i ${OSM_DEFAULT_IP} ${DEBUG_INSTALL} ${INSTALL_NOJUJU} ${INSTALL_CACHELXDIMAGES}"
+    JUJU_OPTS="-D ${OSM_DEVOPS} -s ${OSM_NAMESPACE} -i ${OSM_DEFAULT_IP} ${DEBUG_INSTALL} ${INSTALL_NOJUJU} ${INSTALL_CACHELXDIMAGES}"
     [ -n "${OSM_VCA_HOST}" ] && JUJU_OPTS="$JUJU_OPTS -H ${OSM_VCA_HOST}"
     [ -n "${LXD_CLOUD_FILE}" ] && JUJU_OPTS="$JUJU_OPTS -l ${LXD_CLOUD_FILE}"
     [ -n "${LXD_CRED_FILE}" ] && JUJU_OPTS="$JUJU_OPTS -L ${LXD_CRED_FILE}"
@@ -814,9 +828,9 @@ function install_osm() {
     track osmclient osmclient_ok
 
     echo -e "Checking OSM health state..."
-    $OSM_DEVOPS/installers/osm_health.sh -s ${OSM_STACK_NAME} -k || \
+    $OSM_DEVOPS/installers/osm_health.sh -s ${OSM_NAMESPACE} -k || \
     (echo -e "OSM is not healthy, but will probably converge to a healthy state soon." && \
-    echo -e "Check OSM status with: kubectl -n ${OSM_STACK_NAME} get all" && \
+    echo -e "Check OSM status with: kubectl -n ${OSM_NAMESPACE} get all" && \
     track healthchecks osm_unhealthy didnotconverge)
     track healthchecks after_healthcheck_ok
 
@@ -935,7 +949,7 @@ function dump_vars(){
     echo "OSM_DOCKER_WORK_DIR=$OSM_DOCKER_WORK_DIR"
     echo "OSM_HELM_WORK_DIR=$OSM_HELM_WORK_DIR"
     echo "OSM_K8S_WORK_DIR=$OSM_K8S_WORK_DIR"
-    echo "OSM_STACK_NAME=$OSM_STACK_NAME"
+    echo "OSM_NAMESPACE=$OSM_NAMESPACE"
     echo "OSM_VCA_HOST=$OSM_VCA_HOST"
     echo "OSM_VCA_PUBKEY=$OSM_VCA_PUBKEY"
     echo "OSM_VCA_SECRET=$OSM_VCA_SECRET"
@@ -1012,7 +1026,7 @@ OSM_VCA_SECRET=
 OSM_VCA_PUBKEY=
 OSM_VCA_CLOUDNAME="localhost"
 OSM_VCA_K8S_CLOUDNAME="k8scloud"
-OSM_STACK_NAME=osm
+OSM_NAMESPACE=osm
 NO_HOST_PORTS=""
 DOCKER_NOBUILD=""
 REPOSITORY_KEY="OSM%20ETSI%20Release%20Key.gpg"
@@ -1022,7 +1036,7 @@ OSM_DOCKER_WORK_DIR="${OSM_WORK_DIR}/docker"
 OSM_K8S_WORK_DIR="${OSM_DOCKER_WORK_DIR}/osm_pods"
 OSM_HELM_WORK_DIR="${OSM_WORK_DIR}/helm"
 OSM_HOST_VOL="/var/lib/osm"
-OSM_NAMESPACE_VOL="${OSM_HOST_VOL}/${OSM_STACK_NAME}"
+OSM_NAMESPACE_VOL="${OSM_HOST_VOL}/${OSM_NAMESPACE}"
 OSM_DOCKER_TAG=latest
 DOCKER_USER=opensourcemano
 PULL_IMAGES="y"
@@ -1119,7 +1133,7 @@ while getopts ":a:b:r:n:k:u:R:D:o:O:m:N:H:S:s:t:U:P:A:l:L:K:d:p:T:f:F:-: hy" o;
             OSM_VCA_SECRET="${OPTARG}"
             ;;
         s)
-            OSM_STACK_NAME="${OPTARG}" && [[ ! "${OPTARG}" =~ $RE_CHECK ]] && echo "Namespace $OPTARG is invalid. Regex used for validation is $RE_CHECK" && exit 0
+            OSM_NAMESPACE="${OPTARG}" && [[ ! "${OPTARG}" =~ $RE_CHECK ]] && echo "Namespace $OPTARG is invalid. Regex used for validation is $RE_CHECK" && exit 0
             ;;
         t)
             OSM_DOCKER_TAG="${OPTARG}"
@@ -1256,7 +1270,7 @@ fi
 [ -n "$TO_REBUILD" ] && [ "$TO_REBUILD" == " PLA" ] && [ -z "$INSTALL_PLA" ] && FATAL "Incompatible option: -m PLA cannot be used without --pla option"
 # if develop, we force master
 [ -z "$COMMIT_ID" ] && [ -n "$DEVELOP" ] && COMMIT_ID="master"
-OSM_K8S_WORK_DIR="$OSM_DOCKER_WORK_DIR/osm_pods" && OSM_NAMESPACE_VOL="${OSM_HOST_VOL}/${OSM_STACK_NAME}"
+OSM_K8S_WORK_DIR="$OSM_DOCKER_WORK_DIR/osm_pods" && OSM_NAMESPACE_VOL="${OSM_HOST_VOL}/${OSM_NAMESPACE}"
 [ -n "$INSTALL_ONLY" ] && [ -n "$INSTALL_K8S_MONITOR" ] && install_k8s_monitoring
 [ -n "$INSTALL_ONLY" ] && [ -n "$INSTALL_NGSA" ] && install_osm_ngsa_service
 [ -n "$INSTALL_ONLY" ] && echo -e "\nDONE" && exit 0
index 0a62abf..9bde121 100644 (file)
@@ -1,19 +1,16 @@
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
 #
-#   http://www.apache.org/licenses/LICENSE-2.0
+#   Licensed under the Apache License, Version 2.0 (the "License");
+#   you may not use this file except in compliance with the License.
+#   You may obtain a copy of the License at
+#
+#       http://www.apache.org/licenses/LICENSE-2.0
+#
+#   Unless required by applicable law or agreed to in writing, software
+#   distributed under the License is distributed on an "AS IS" BASIS,
+#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#   See the License for the specific language governing permissions and
+#   limitations under the License.
 #
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied.  See the License for the
-# specific language governing permissions and limitations
-# under the License.
 ---
 defaultAirflowRepository: opensourcemano/airflow
 defaultAirflowTag: "13"
diff --git a/installers/helm/values/alertmanager-values.yaml b/installers/helm/values/alertmanager-values.yaml
new file mode 100644 (file)
index 0000000..2e438cb
--- /dev/null
@@ -0,0 +1,47 @@
+#
+#   Licensed under the Apache License, Version 2.0 (the "License");
+#   you may not use this file except in compliance with the License.
+#   You may obtain a copy of the License at
+#
+#       http://www.apache.org/licenses/LICENSE-2.0
+#
+#   Unless required by applicable law or agreed to in writing, software
+#   distributed under the License is distributed on an "AS IS" BASIS,
+#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#   See the License for the specific language governing permissions and
+#   limitations under the License.
+#
+---
+extraArgs:
+  log.level: debug
+service:
+  type: NodePort
+  nodePort: 9093
+  port: 9093
+config:
+  receivers:
+    - name: default-receiver
+    - name: vdu-webhook
+      webhook_configs:
+       - url: http://webhook-translator/alert_vdu
+    - name: scaleout-webhook
+      webhook_configs:
+       - url: http://webhook-translator/scaleout_vdu
+    - name: scalein-webhook
+      webhook_configs:
+       - url: http://webhook-translator/scalein_vdu
+  route:
+    group_wait: 10s
+    group_interval: 2m
+    receiver: default-receiver
+    repeat_interval: 3h
+    routes:
+    - receiver: vdu-webhook
+      matchers:
+      - alertname = "vdu_down"
+    - receiver: 'scaleout-webhook'
+      matchers:
+      - alertname =~ "^scaleout_.*"
+    - receiver: 'scalein-webhook'
+      matchers:
+      - alertname =~ "^scalein_.*"
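In the Alertmanager config above, `=` matches the label exactly and `=~` matches it as an anchored regular expression, so a `scaleout_cpu` alert lands on `scaleout-webhook` and anything unmatched falls through to `default-receiver`. A simplified sketch of that dispatch (first matching route wins; real Alertmanager routing also supports grouping and nested routes):

```python
import re

# Matchers mirror the three routes declared in alertmanager-values.yaml
ROUTES = [
    ("vdu-webhook", lambda name: name == "vdu_down"),
    ("scaleout-webhook", lambda name: re.match(r"^scaleout_.*", name)),
    ("scalein-webhook", lambda name: re.match(r"^scalein_.*", name)),
]


def receiver_for(alertname: str) -> str:
    """Pick the first route whose matcher accepts the alert name,
    falling back to the default receiver as Alertmanager does."""
    for receiver, matches in ROUTES:
        if matches(alertname):
            return receiver
    return "default-receiver"


print(receiver_for("scaleout_cpu"))  # → scaleout-webhook
```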
index dd9eb3b..68c8628 100755 (executable)
@@ -73,11 +73,11 @@ snap-https-proxy: ${HTTPS_PROXY}
 EOF
         JUJU_BOOTSTRAP_OPTS="--model-default /tmp/.osm/model-config.yaml"
     fi
-    juju bootstrap -v --debug $OSM_VCA_K8S_CLOUDNAME $OSM_STACK_NAME  \
+    juju bootstrap -v --debug $OSM_VCA_K8S_CLOUDNAME $OSM_NAMESPACE  \
             --config controller-service-type=loadbalancer \
             --agent-version=$JUJU_AGENT_VERSION \
             ${JUJU_BOOTSTRAP_OPTS} \
-    || FATAL "Failed to bootstrap controller $OSM_STACK_NAME in cloud $OSM_VCA_K8S_CLOUDNAME"
+    || FATAL "Failed to bootstrap controller $OSM_NAMESPACE in cloud $OSM_VCA_K8S_CLOUDNAME"
     [ -z "${DEBUG_INSTALL}" ] || DEBUG end of function
 }
 
@@ -109,8 +109,8 @@ credentials:
       client-key: /tmp/.osm/client.key
 EOF
     lxc config trust add local: /tmp/.osm/client.crt
-    juju add-cloud -c $OSM_STACK_NAME $OSM_VCA_CLOUDNAME $LXD_CLOUD --force
-    juju add-credential -c $OSM_STACK_NAME $OSM_VCA_CLOUDNAME -f $LXD_CREDENTIALS
+    juju add-cloud -c $OSM_NAMESPACE $OSM_VCA_CLOUDNAME $LXD_CLOUD --force
+    juju add-credential -c $OSM_NAMESPACE $OSM_VCA_CLOUDNAME -f $LXD_CREDENTIALS
     sg lxd -c "lxd waitready"
     juju controller-config features=[k8s-operators]
     if [ -n "${OSM_BEHIND_PROXY}" ] ; then
@@ -160,7 +160,7 @@ JUJU_AGENT_VERSION=2.9.34
 JUJU_VERSION=2.9
 OSM_BEHIND_PROXY=""
 OSM_DEVOPS=
-OSM_STACK_NAME=osm
+OSM_NAMESPACE=osm
 OSM_VCA_HOST=
 OSM_VCA_CLOUDNAME="localhost"
 OSM_VCA_K8S_CLOUDNAME="k8scloud"
@@ -175,7 +175,7 @@ while getopts ":D:i:s:H:l:L:K:-: hP" o; do
             DEFAULT_IP="${OPTARG}"
             ;;
         s)
-            OSM_STACK_NAME="${OPTARG}" && [[ ! "${OPTARG}" =~ $RE_CHECK ]] && echo "Namespace $OPTARG is invalid. Regex used for validation is $RE_CHECK" && exit 0
+            OSM_NAMESPACE="${OPTARG}" && [[ ! "${OPTARG}" =~ $RE_CHECK ]] && echo "Namespace $OPTARG is invalid. Regex used for validation is $RE_CHECK" && exit 0
             ;;
         H)
             OSM_VCA_HOST="${OPTARG}"
@@ -278,7 +278,7 @@ EOF
             juju add-credential -c $CONTROLLER_NAME $OSM_VCA_CLOUDNAME -f ~/.osm/lxd-credentials.yaml || juju update-credential lxd-cloud -c $CONTROLLER_NAME -f ~/.osm/lxd-credentials.yaml
         fi
     fi
-    [ -z "$CONTROLLER_NAME" ] && OSM_VCA_HOST=`sg lxd -c "juju show-controller $OSM_STACK_NAME"|grep api-endpoints|awk -F\' '{print $2}'|awk -F\: '{print $1}'`
+    [ -z "$CONTROLLER_NAME" ] && OSM_VCA_HOST=`sg lxd -c "juju show-controller $OSM_NAMESPACE"|grep api-endpoints|awk -F\' '{print $2}'|awk -F\: '{print $1}'`
     [ -n "$CONTROLLER_NAME" ] && OSM_VCA_HOST=`juju show-controller $CONTROLLER_NAME |grep api-endpoints|awk -F\' '{print $2}'|awk -F\: '{print $1}'`
     [ -z "$OSM_VCA_HOST" ] && FATAL "Cannot obtain juju controller IP address"
 fi
index 648a1be..03b7d79 100755 (executable)
@@ -114,7 +114,7 @@ function check_and_track_k8s_ready_before_helm() {
 #Helm releases can be found here: https://github.com/helm/helm/releases
 function install_helm() {
     [ -z "${DEBUG_INSTALL}" ] || DEBUG beginning of function
-    HELM_VERSION="v3.7.2"
+    HELM_VERSION="v3.11.3"
     if ! [[ "$(helm version --short 2>/dev/null)" =~ ^v3.* ]]; then
         # Helm is not installed. Install helm
         echo "Helm3 is not installed, installing ..."
index 5d7ad68..b90c3dc 100755 (executable)
@@ -18,6 +18,7 @@ set +eux
 # Helm chart 1.6.0 corresponds to Airflow 2.3.0
 AIRFLOW_HELM_VERSION=1.6.0
 PROMPUSHGW_HELM_VERSION=1.18.2
+ALERTMANAGER_HELM_VERSION=0.22.0
 
 # Install Airflow helm chart
 function install_airflow() {
@@ -58,6 +59,22 @@ function install_prometheus_pushgateway() {
     [ -z "${DEBUG_INSTALL}" ] || DEBUG end of function
 }
 
+# Install Prometheus AlertManager helm chart
+function install_prometheus_alertmanager() {
+    [ -z "${DEBUG_INSTALL}" ] || DEBUG beginning of function
+    if ! helm -n osm status alertmanager 2> /dev/null ; then
+        # if it does not exist, install
+        helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
+        helm repo update
+        helm -n osm install alertmanager prometheus-community/alertmanager -f ${OSM_HELM_WORK_DIR}/alertmanager-values.yaml --version ${ALERTMANAGER_HELM_VERSION}
+    else
+        # if it exists, upgrade
+        helm repo update
+        helm -n osm upgrade alertmanager prometheus-community/alertmanager -f ${OSM_HELM_WORK_DIR}/alertmanager-values.yaml --version ${ALERTMANAGER_HELM_VERSION}
+    fi
+    [ -z "${DEBUG_INSTALL}" ] || DEBUG end of function
+}
+
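The new `install_prometheus_alertmanager` follows the same install-or-upgrade pattern as the pushgateway function. A minimal sketch of that control flow, with `helm` stubbed out (a hypothetical stand-in, since no cluster is available here):

```shell
# Stub `helm`: pretend the release does not exist yet (status fails).
helm() { [ "$1" = "-n" ] && shift 2; [ "$1" = "status" ] && return 1; echo "helm $*"; }

# Idempotent helper mirroring the pattern used by the installer functions.
install_or_upgrade() {
    local release=$1 chart=$2
    if helm -n osm status "$release" 2>/dev/null; then
        helm -n osm upgrade "$release" "$chart"   # release exists: upgrade in place
    else
        helm -n osm install "$release" "$chart"   # first run: install
    fi
}

ACTION=$(install_or_upgrade alertmanager prometheus-community/alertmanager)
echo "$ACTION"

# Re-stub so status succeeds, simulating an already-deployed release.
helm() { [ "$1" = "-n" ] && shift 2; [ "$1" = "status" ] && return 0; echo "helm $*"; }
ACTION2=$(install_or_upgrade alertmanager prometheus-community/alertmanager)
echo "$ACTION2"
```

Running the same script twice is therefore safe: the second pass takes the upgrade branch instead of failing on a duplicate release.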
 # main
 
 OSM_DEVOPS="/usr/share/osm-devops"
@@ -106,4 +123,6 @@ install_airflow
 track deploy_osm airflow_ok
 install_prometheus_pushgateway
 track deploy_osm pushgateway_ok
+install_prometheus_alertmanager
+track deploy_osm alertmanager_ok
 
index 3af413b..eac9b80 100755 (executable)
@@ -56,7 +56,7 @@ function uninstall_osm() {
         # uninstall OSM MONITORING
         uninstall_k8s_monitoring
     fi
-    remove_k8s_namespace $OSM_STACK_NAME
+    remove_k8s_namespace $OSM_NAMESPACE
     echo "Now osm docker images and volumes will be deleted"
     # TODO: clean-up of images should take into account if other tags were used for specific modules
     newgrp docker << EONG
@@ -67,12 +67,12 @@ EONG
 
     sg docker -c "docker image rm ${DOCKER_REGISTRY_URL}${DOCKER_USER}/ng-ui:${OSM_DOCKER_TAG}"
 
-    OSM_NAMESPACE_VOL="${OSM_HOST_VOL}/${OSM_STACK_NAME}"
+    OSM_NAMESPACE_VOL="${OSM_HOST_VOL}/${OSM_NAMESPACE}"
     remove_volumes $OSM_NAMESPACE_VOL
 
     echo "Removing $OSM_DOCKER_WORK_DIR"
     sudo rm -rf $OSM_DOCKER_WORK_DIR
-    [ -z "$CONTROLLER_NAME" ] && sg lxd -c "juju kill-controller -t 0 -y $OSM_STACK_NAME"
+    [ -z "$CONTROLLER_NAME" ] && sg lxd -c "juju kill-controller -t 0 -y $OSM_NAMESPACE"
 
     remove_crontab_job
 
@@ -147,7 +147,7 @@ OSM_VCA_SECRET=
 OSM_VCA_PUBKEY=
 OSM_VCA_CLOUDNAME="localhost"
 OSM_VCA_K8S_CLOUDNAME="k8scloud"
-OSM_STACK_NAME=osm
+OSM_NAMESPACE=osm
 NO_HOST_PORTS=""
 DOCKER_NOBUILD=""
 REPOSITORY_KEY="OSM%20ETSI%20Release%20Key.gpg"
@@ -156,7 +156,7 @@ OSM_WORK_DIR="/etc/osm"
 OSM_DOCKER_WORK_DIR="/etc/osm/docker"
 OSM_K8S_WORK_DIR="${OSM_DOCKER_WORK_DIR}/osm_pods"
 OSM_HOST_VOL="/var/lib/osm"
-OSM_NAMESPACE_VOL="${OSM_HOST_VOL}/${OSM_STACK_NAME}"
+OSM_NAMESPACE_VOL="${OSM_HOST_VOL}/${OSM_NAMESPACE}"
 OSM_DOCKER_TAG=latest
 DOCKER_USER=opensourcemano
 PULL_IMAGES="y"
@@ -250,7 +250,7 @@ while getopts ":a:b:r:n:k:u:R:D:o:O:m:N:H:S:s:t:U:P:A:l:L:K:d:p:T:f:F:-: hy" o;
             OSM_VCA_SECRET="${OPTARG}"
             ;;
         s)
-            OSM_STACK_NAME="${OPTARG}" && [[ ! "${OPTARG}" =~ $RE_CHECK ]] && echo "Namespace $OPTARG is invalid. Regex used for validation is $RE_CHECK" && exit 0
+            OSM_NAMESPACE="${OPTARG}" && [[ ! "${OPTARG}" =~ $RE_CHECK ]] && echo "Namespace $OPTARG is invalid. Regex used for validation is $RE_CHECK" && exit 0
             ;;
         t)
             OSM_DOCKER_TAG="${OPTARG}"
index 0b41169..390c769 100644 (file)
@@ -143,6 +143,12 @@ def archive(artifactory_server,mdg,branch,status) {
           "props": "${properties}",
           "flat": false
         },
+        {
+          "pattern": "dist/*.whl",
+          "target": "${repo_prefix}${mdg}/${branch}/${BUILD_NUMBER}/",
+          "props": "${properties}",
+          "flat": false
+        },
         {
           "pattern": "pool/*/*.deb",
           "target": "${repo_prefix}${mdg}/${branch}/${BUILD_NUMBER}/",
index ddee6f4..201768a 100644 (file)
@@ -141,6 +141,7 @@ def ci_pipeline(mdg,url_prefix,project,branch,refspec,revision,do_stage_3,artifa
             'installers/charm/osm-ro',
             'installers/charm/osm-temporal',
             'installers/charm/osm-temporal-ui',
+            'installers/charm/osm-update-db-operator',
             'installers/charm/prometheus',
             'installers/charm/vca-integrator-operator',
         ]
index 620faba..e0cddea 100644 (file)
@@ -638,7 +638,7 @@ EOF"""
                                 parallelSteps[module] = {
                                     dir("$module") {
                                         sh("docker pull ${INTERNAL_DOCKER_REGISTRY}opensourcemano/${moduleName}:${moduleTag}")
-                                        sh("""docker tag opensourcemano/${moduleName}:${moduleTag} \
+                                        sh("""docker tag ${INTERNAL_DOCKER_REGISTRY}opensourcemano/${moduleName}:${moduleTag} \
                                            opensourcemano/${moduleName}:${dockerTag}""")
                                         sh "docker push opensourcemano/${moduleName}:${dockerTag}"
                                     }
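The one-line fix above matters because the image was pulled under its registry-prefixed name, so `docker tag` must reference that same name. A sketch of the corrected pull/tag/push flow, with `docker` stubbed and a hypothetical registry and module so it can be traced without a daemon:

```shell
# Stub `docker` to echo its arguments instead of talking to a daemon.
docker() { echo "docker $*"; }

INTERNAL_DOCKER_REGISTRY="registry.example.com/"   # hypothetical internal registry
moduleName=lcm; moduleTag=testing; dockerTag=13.0.0  # hypothetical values

docker pull ${INTERNAL_DOCKER_REGISTRY}opensourcemano/${moduleName}:${moduleTag}
# The source of the tag must be the registry-prefixed image that was pulled,
# not the bare opensourcemano/... name (which may not exist locally).
TAG_CMD=$(docker tag ${INTERNAL_DOCKER_REGISTRY}opensourcemano/${moduleName}:${moduleTag} \
    opensourcemano/${moduleName}:${dockerTag})
docker push opensourcemano/${moduleName}:${dockerTag}
echo "$TAG_CMD"
```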
@@ -690,6 +690,7 @@ EOF"""
                                 'osm-pol',
                                 'osm-ro',
                                 'osm-prometheus',
+                                'osm-update-db-operator',
                                 'osm-vca-integrator',
                             ]
                             for (charm in charms) {
index ab4b147..4ff12f2 100755 (executable)
@@ -18,7 +18,7 @@
 
 APT_PROXY=""
 DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )"
-HTTPDDIR="$( cd "${HOME}/snap/qhttp/common" &> /dev/null && pwd )"
+HTTPDDIR="${HOME}/.osm/httpd"
 HTTPPORT=8000
 KUBECFG="~/.osm/microk8s-config.yaml"
 NO_CACHE=""
@@ -35,10 +35,11 @@ function check_arguments(){
             --apt-proxy) APT_PROXY="$2" && shift ;;
             --devel-tag) DEVEL_TAG="$2" && shift ;;
             --help | -h) show_help && exit 0 ;;
-            --httpddir) HTTPDIR="$2" && shift;;
+            --httpddir) HTTPDDIR="$2" && shift;;
             --install-local-registry) INSTALL_LOCAL_REGISTRY='install_local_registry' ;;
             --install-microstack) INSTALL_MICROSTACK='install_microstack' ;;
             --install-qhttpd) INSTALL_HTTPD='install_qhttpd' ;;
+            --run-httpserver) INSTALL_HTTPD='run_httpserver' ;;
             --kubecfg) KUBECFG="$2" && shift ;;
             --module) TARGET_MODULE="$2" && shift;;
             --no-cache) NO_CACHE="--no-cache" ;;
@@ -75,7 +76,8 @@ OPTIONS:
   --debug                       enable set -x for this script
   --install-local-registry      install and enable Microk8s local registry on port 32000
   --install-microstack          install Microstack and configure to run robot tests
-  --install-qhttpd              install QHTTPD as an HTTP server on port ${HTTPPORT}
+  --install-qhttpd              (deprecated, use --run-httpserver instead) install QHTTPD as an HTTP server on port ${HTTPPORT}
+  --run-httpserver              run HTTP server on port ${HTTPPORT}
   --kubecfg                     path to kubecfg.yaml (uses Charmed OSM by default)
   --no-cache                    do not use any cache when building docker images
   --module                      only build this comma delimited list of modules
@@ -104,9 +106,9 @@ Let's assume that we have different repos cloned in the folder workspace:
   git clone "https://osm.etsi.org/gerrit/osm/IM"
   git clone "https://osm.etsi.org/gerrit/osm/N2VC"
 
-First we install a light HTTP server to serve the artifacts:
+First we run a light HTTP server to serve the artifacts:
 
-  devops/tools/local-build.sh --install-qhttpd
+  devops/tools/local-build.sh --run-httpserver
 
 Then we generate the artifacts (debian packages) for the different repos: common, IM, N2VC, RO, LCM, NBI
 
@@ -168,6 +170,10 @@ function install_microstack() {
          --disk-format=qcow2 ubuntu20.04
 }
 
+function create_httpddir() {
+    mkdir -p ${HTTPDDIR}
+}
+
 function install_qhttpd() {
     sudo snap install qhttp
     EXISTING_PID=$(ps auxw | grep "http.server $HTTPPORT" | grep -v grep | awk '{print $2}')
@@ -177,6 +183,14 @@ function install_qhttpd() {
     nohup qhttp -p ${HTTPPORT} &
 }
 
+function run_httpserver() {
+    EXISTING_PID=$(ps auxw | grep "http.server $HTTPPORT" | grep -v grep | awk '{print $2}')
+    if [ -n "$EXISTING_PID" ] ; then
+        kill $EXISTING_PID
+    fi
+    nohup python3 -m http.server ${HTTPPORT} --directory "${HTTPDDIR}" &>/dev/null &
+}
+
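The new `run_httpserver` replaces the qhttp snap with Python's built-in `http.server`. A usage sketch, serving a scratch directory and fetching a file back (port 8123 and the probe file are arbitrary choices for illustration):

```shell
# Serve a temporary directory the same way run_httpserver does.
HTTPDDIR=$(mktemp -d)
echo "hello" > "${HTTPDDIR}/probe.txt"

python3 -m http.server 8123 --directory "${HTTPDDIR}" &>/dev/null &
SERVER_PID=$!
sleep 1   # give the server a moment to bind the port

# Fetch the file back over HTTP to confirm the server is up.
BODY=$(python3 -c "import urllib.request; print(urllib.request.urlopen('http://127.0.0.1:8123/probe.txt').read().decode().strip())")

kill ${SERVER_PID} 2>/dev/null
echo "${BODY}"
```

`--directory` requires Python 3.7+, which is available on all Ubuntu releases OSM targets.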
 function stage_2() {
     print_section "Performing Stage 2"
     MODULES="common devops IM LCM MON N2VC NBI NG-UI NG-SA osmclient PLA POL RO tests"
@@ -264,7 +278,13 @@ function stage_3() {
     fi
 
     HOSTIP=$(ip -4 addr show docker0 | grep -Po 'inet \K[\d.]+')
-    for file in ~/snap/qhttp/common/*.deb ; do
+    [ -z "$DEFAULT_IF" ] && DEFAULT_IF=$(ip route list|awk '$1=="default" {print $5; exit}')
+    [ -z "$DEFAULT_IF" ] && DEFAULT_IF=$(route -n |awk '$1~/^0.0.0.0/ {print $8; exit}')
+    DEFAULT_IP=$(ip -o -4 a s ${DEFAULT_IF} |awk '{split($4,a,"/"); print a[1]; exit}')
+    HOSTIP=${HOSTIP:=${DEFAULT_IP}}
+    echo "Using host IP ${HOSTIP} for artifact URLs"
+
+    for file in ${HTTPDDIR}/*.deb ; do
         file=`basename ${file}`
         name=`echo ${file} | cut -d_ -f1 | sed "s/-/_/g" | sed "s/.deb//"`;
         name=${name^^}_URL
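The loop above derives a `<MODULE>_URL` variable name from each Debian package filename. A sketch of that mangling with a hypothetical filename (the real files are whatever stage 2 dropped into `${HTTPDDIR}`):

```shell
# Hypothetical package filename as produced by stage 2.
file="osm-common_13.0.0_amd64.deb"

# Same mangling as the installer: keep the package name, swap '-' for '_',
# strip '.deb', then uppercase and append _URL (bash 4+ for ${name^^}).
name=$(echo ${file} | cut -d_ -f1 | sed "s/-/_/g" | sed "s/.deb//")
name=${name^^}_URL
echo "$name"
```

So `osm-common_13.0.0_amd64.deb` yields the variable name `OSM_COMMON_URL`, which stage 3 exports to point the build at the locally served package.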
@@ -436,6 +456,7 @@ if [ "$0" != "$BASH_SOURCE" ]; then
 else
     check_arguments $@
 
+    create_httpddir
     eval "${INSTALL_HTTPD}"
     eval "${INSTALL_LOCAL_REGISTRY}"
     eval "${INSTALL_MICROSTACK}"