Commit 1a16ab9d authored by lavado

Merge branch 'squid-cnf-quickstart' into 'master'

Add packages for squid KDU in vnf-onboarding quickstarts

See merge request !157
parents 207a22a5 700b4234
Pipeline #894 failed in 1 minute and 31 seconds
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURBVENDQWVtZ0F3SUJBZ0lKQU5OaFh3Z2loSjlwTUEwR0NTcUdTSWIzRFFFQkN3VUFNQmN4RlRBVEJnTlYKQkFNTURERXdMakUxTWk0eE9ETXVNVEFlRncweU1UQTFNRFV4T0RBMU5UUmFGdzB6TVRBMU1ETXhPREExTlRSYQpNQmN4RlRBVEJnTlZCQU1NRERFd0xqRTFNaTR4T0RNdU1UQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQCkFEQ0NBUW9DZ2dFQkFNTFgyMmkxYW5BaGlUb2ZOZ3BDOGF3M3JvZ0xZMjZRTEU2VHpoN0ZuRHA1OVZzbHRUSWoKZkVLNmp5UGQwekNGczZ0VFVzaHRjOEdjcEJnaVI2eVUvVnlqRER5OUZQdHc0SW9tQW5ndXRpbFdKLzBpQitucgoyaEhndWk0bCs3RkFSTVJWWkNyYXp3aHZTS3JTY2xkcHNSNzJCV1RueEhOb3d5ZzUxc3I5UDh4VjdSY1lKMVlNClZTTEdHcmVQN0dPOXRsVTk0b1llaGdRM1lDQkwwam1aVFRXSHcxYzlzdTJnMXA0d0E1TVpTSGl0WDQ5YkNrd1oKS3piclVhYndEaERPT0FWQ2hIYjRjeEk4U2VON1pVbTFJcGMwTlZiSTlVRlg1Y1dzQjJOZlowaVI2aXdtcnREbgpCQ1d1TkU4b3J4Wm94V3NYOTBmZmdKUG02TUVtc3V3MFZIMENBd0VBQWFOUU1FNHdIUVlEVlIwT0JCWUVGQis5CmlZeEM1djF1STl4VStpQytmR0F0MGtURU1COEdBMVVkSXdRWU1CYUFGQis5aVl4QzV2MXVJOXhVK2lDK2ZHQXQKMGtURU1Bd0dBMVVkRXdRRk1BTUJBZjh3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQUNaTXQ2bERSS0RvU1MxUQo5akNXUmxWKzZXa013WUJmN0phQ0xpbFYrd1poSXVaMCtrWm9DUjk3K1ZjdWJ1RkJJTGVoQVZwajFUcTNTSVNsCjgydzhYNGgrMzBsSm5YZkcyVDA5MW1tbVdaeXNxM2RQN3RLa0Vqalk3UXlaL1ArbW9kMDIvYWxDdlNEcGhFdjcKL0F0RndNNFM5TXdoSGFNeVc0b0N1UTlPTU5nYUlva1dIK1F5RzJyVzUrS1JhcHN4Ri80TDNnOTZqaFROZlJCbApEcnlOc2VEWkhocHBJeWJFZ1R6Wlo5amU5V1MzYUVDRnRMNllLWVdGTUV2UFhXNUJ4cXoxY0tRVWdLbTZwRCt5CnJ0eStCRm0wOXJLZzFSL3A2RFhROS9INmlnMGUvRXY5Uk1HM1E1dENDWlJFeXdhMU5JVlpMZzFRWmFiM3FqS04KNVNMaWhFbz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
server: https://192.168.0.12:16443
name: microk8s-cluster
contexts:
- context:
cluster: microk8s-cluster
user: admin
name: microk8s
current-context: microk8s
kind: Config
preferences: {}
users:
- name: admin
user:
token: Nk5McmtVRFo2ZjdoQU9oVmdSOHcxVUJ4U0FycDBmSzVDZHJSaTF2L29vMD0K
description: Squid Bundle
bundle: kubernetes
applications:
  squid:
    charm: ./charms/squid-operator
    scale: 1
    options:
      enable-exporter: true
#  prometheus:
#    charm: ./charms/prometheus-operator
#    scale: 1
#  grafana:
#    charm: ./charms/grafana-operator
#    scale: 1
#relations:
#  - - prometheus:target
#    - squid:prometheus-target
#  - - grafana:grafana-source
#    - prometheus:grafana-source
[flake8]
max-line-length = 99
select: E,W,F,C,N
exclude:
    venv
    .git
    build
    dist
    *.egg_info
# Grafana Charm
## Description
This is the Grafana charm for Kubernetes using the Operator Framework.
## Usage
Initial setup (ensure microk8s is a clean slate with `microk8s.reset` or a fresh install with `snap install microk8s --classic`):
```bash
microk8s.enable dns storage registry dashboard
juju bootstrap microk8s mk8s
juju add-model lma
juju create-storage-pool operator-storage kubernetes storage-class=microk8s-hostpath
```
Deploy Grafana on its own:
```bash
git clone git@github.com:canonical/grafana-operator.git
cd grafana-operator
charmcraft build
juju deploy ./grafana.charm --resource grafana-image=grafana/grafana:7.2.1
```
View the dashboard in a browser:
1. `juju status` to check the IP of the running Grafana application
2. Navigate to `http://IP_ADDRESS:3000`
3. Log in with the default credentials username=admin, password=admin.
Add Prometheus as a datasource:
```bash
git clone git@github.com:canonical/prometheus-operator.git
cd prometheus-operator
charmcraft build
juju deploy ./prometheus.charm
juju add-relation grafana prometheus
watch -c juju status --color # wait for things to settle down
```
> Once the deployed charm and relation settles, you should be able to see Prometheus data propagating to the Grafana dashboard.
### High Availability Grafana
This charm is written to support a high-availability Grafana cluster, but a database relation is required (MySQL or PostgreSQL).
If HA is not required, there is no need to add a database relation.
> NOTE: HA should not be considered for production use.
...
## Developing
Create and activate a virtualenv, and install the development requirements:
```bash
virtualenv -p python3 venv
source venv/bin/activate
pip install -r requirements-dev.txt
```
## Testing
Just run `run_tests`:
```bash
./run_tests
```
options:
  port:
    description: The port grafana will be listening on
    type: int
    default: 3000
  grafana_log_level:
    type: string
    description: |
      Logging level for Grafana. Options are "debug", "info",
      "warn", "error", and "critical".
    default: info
#!/bin/sh
JUJU_DISPATCH_PATH="${JUJU_DISPATCH_PATH:-$0}" PYTHONPATH=lib:venv ./src/charm.py
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!-- Generator: Adobe Illustrator 23.0.4, SVG Export Plug-In . SVG Version: 6.00 Build 0) -->
<svg id="Layer_1" style="enable-background:new 0 0 85.12 92.46" xmlns="http://www.w3.org/2000/svg" xml:space="preserve" height="250px" viewBox="0 0 85.12 92.46" width="250px" version="1.1" y="0px" x="0px" xmlns:xlink="http://www.w3.org/1999/xlink">
<style type="text/css">
.st0{fill:url(#SVGID_1_);}
</style>
<linearGradient id="SVGID_1_" y2="28.783" gradientUnits="userSpaceOnUse" x2="42.562" y1="113.26" x1="42.562">
<stop stop-color="#FFF200" offset="0"/>
<stop stop-color="#F15A29" offset="1"/>
</linearGradient>
<path class="st0" d="m85.01 40.8c-0.14-1.55-0.41-3.35-0.93-5.32-0.51-1.97-1.28-4.13-2.39-6.37-1.12-2.24-2.57-4.57-4.47-6.82-0.74-0.88-1.54-1.76-2.42-2.6 1.3-5.17-1.59-9.65-1.59-9.65-4.98-0.31-8.14 1.54-9.31 2.39-0.2-0.08-0.39-0.17-0.59-0.25-0.85-0.34-1.72-0.66-2.61-0.95-0.89-0.28-1.81-0.54-2.74-0.76-0.94-0.22-1.89-0.4-2.86-0.55-0.17-0.03-0.34-0.05-0.51-0.07-2.18-6.95-8.41-9.85-8.41-9.85-6.95 4.41-8.27 10.57-8.27 10.57s-0.03 0.14-0.07 0.36c-0.38 0.11-0.77 0.22-1.15 0.34-0.53 0.16-1.06 0.36-1.59 0.55-0.53 0.21-1.06 0.41-1.58 0.64-1.05 0.45-2.09 0.96-3.1 1.53-0.99 0.55-1.95 1.16-2.9 1.82-0.14-0.06-0.24-0.11-0.24-0.11-9.62-3.68-18.17 0.75-18.17 0.75-0.78 10.24 3.84 16.68 4.76 17.86-0.23 0.63-0.44 1.27-0.64 1.92-0.71 2.32-1.24 4.7-1.57 7.16-0.05 0.35-0.09 0.71-0.13 1.07-8.9 4.38-11.53 13.38-11.53 13.38 7.42 8.53 16.07 9.06 16.07 9.06 0.01-0.01 0.02-0.01 0.02-0.02 1.1 1.96 2.37 3.83 3.8 5.57 0.6 0.73 1.23 1.43 1.88 2.11-2.71 7.74 0.38 14.18 0.38 14.18 8.26 0.31 13.69-3.61 14.83-4.52 0.82 0.28 1.66 0.53 2.5 0.74 2.54 0.65 5.14 1.04 7.74 1.15 0.65 0.03 1.3 0.04 1.95 0.04h0.31l0.21-0.01 0.41-0.01 0.4-0.02 0.01 0.01c3.89 5.55 10.74 6.34 10.74 6.34 4.87-5.13 5.15-10.22 5.15-11.33v-0.07-0.15s0 0 0 0c0-0.08-0.01-0.15-0.01-0.23 1.02-0.72 2-1.49 2.92-2.31 1.95-1.76 3.65-3.77 5.06-5.93 0.13-0.2 0.26-0.41 0.39-0.62 5.51 0.32 9.39-3.41 9.39-3.41-0.91-5.74-4.18-8.54-4.87-9.07 0 0-0.03-0.02-0.07-0.05s-0.06-0.05-0.06-0.05c-0.04-0.02-0.08-0.05-0.12-0.08 0.03-0.35 0.06-0.69 0.08-1.04 0.04-0.62 0.06-1.24 0.06-1.85v-0.46-0.23-0.12-0.16l-0.02-0.38-0.03-0.52c-0.01-0.18-0.02-0.34-0.04-0.5-0.01-0.16-0.03-0.32-0.05-0.48l-0.06-0.48-0.07-0.47c-0.09-0.63-0.21-1.26-0.36-1.88-0.58-2.47-1.54-4.82-2.82-6.93s-2.86-3.98-4.65-5.56-3.79-2.85-5.9-3.79c-2.1-0.95-4.31-1.55-6.51-1.83-1.1-0.14-2.2-0.2-3.28-0.19l-0.41 0.01h-0.1-0.14l-0.17 0.01-0.4 0.03c-0.15 0.01-0.31 0.02-0.45 0.04-0.56 0.05-1.11 0.13-1.66 0.23-2.18 0.41-4.24 1.2-6.06 2.28-1.82 1.09-3.39 2.45-4.68 3.98-1.28 1.54-2.28 3.24-2.96 5-0.69 1.76-1.07 3.58-1.18 5.35-0.03 0.44-0.04 0.88-0.03 1.32 0 0.11 0 0.22 0.01 0.33l0.01 0.35c0.02 0.21 0.03 0.42 0.05 0.63 0.09 0.9 0.25 1.75 0.49 2.58 0.48 1.66 1.25 3.15 2.2 4.43s2.08 2.33 3.28 3.15 2.49 1.41 3.76 1.79 2.54 0.54 3.74 0.53c0.15 0 0.3 0 0.44-0.01 0.08 0 0.16-0.01 0.24-0.01s0.16-0.01 0.24-0.01c0.13-0.01 0.25-0.03 0.38-0.04 0.03 0 0.07-0.01 0.11-0.01l0.12-0.02c0.08-0.01 0.15-0.02 0.23-0.03 0.16-0.02 0.29-0.05 0.43-0.08s0.28-0.05 0.42-0.09c0.27-0.06 0.54-0.14 0.8-0.22 0.52-0.17 1.01-0.38 1.46-0.61s0.87-0.5 1.26-0.77c0.11-0.08 0.22-0.16 0.33-0.25 0.42-0.33 0.48-0.94 0.15-1.35-0.29-0.36-0.79-0.45-1.19-0.23-0.1 0.05-0.2 0.11-0.3 0.16-0.35 0.17-0.71 0.32-1.09 0.45-0.39 0.12-0.79 0.22-1.2 0.29-0.21 0.03-0.42 0.06-0.63 0.08-0.11 0.01-0.21 0.02-0.32 0.02s-0.22 0.01-0.32 0.01-0.21 0-0.31-0.01c-0.13-0.01-0.26-0.01-0.39-0.02h-0.01-0.04l-0.09 0.02c-0.06-0.01-0.12-0.01-0.17-0.02-0.12-0.01-0.23-0.03-0.35-0.04-0.93-0.13-1.88-0.4-2.79-0.82-0.91-0.41-1.79-0.98-2.57-1.69-0.79-0.71-1.48-1.56-2.01-2.52-0.54-0.96-0.92-2.03-1.09-3.16-0.09-0.56-0.13-1.14-0.11-1.71 0.01-0.16 0.01-0.31 0.02-0.47v-0.03-0.06l0.01-0.12c0.01-0.08 0.01-0.15 0.02-0.23 0.03-0.31 0.08-0.62 0.13-0.92 0.43-2.45 1.65-4.83 3.55-6.65 0.47-0.45 0.98-0.87 1.53-1.25 0.55-0.37 1.12-0.7 1.73-0.98 0.6-0.28 1.23-0.5 1.88-0.68 0.65-0.17 1.31-0.29 1.98-0.35 0.34-0.03 0.67-0.04 1.01-0.04h0.23l0.27 0.01 0.17 0.01h0.03 0.07l0.27 0.02c0.73 0.06 1.46 0.16 2.17 0.32 1.43 0.32 2.83 0.85 4.13 1.57 2.6 1.44 4.81 3.69 6.17 6.4 0.69 1.35 1.16 2.81 1.4 4.31 0.06 0.38 0.1 0.76 0.13 1.14l0.02 0.29 0.01 
0.29c0.01 0.1 0.01 0.19 0.01 0.29 0 0.09 0.01 0.2 0 0.27v0.25l-0.01 0.28c-0.01 0.19-0.02 0.49-0.03 0.67-0.03 0.42-0.07 0.83-0.12 1.24s-0.12 0.82-0.19 1.22c-0.08 0.4-0.17 0.81-0.27 1.21-0.2 0.8-0.46 1.59-0.76 2.36-0.61 1.54-1.42 3-2.4 4.36-1.96 2.7-4.64 4.9-7.69 6.29-1.52 0.69-3.13 1.19-4.78 1.47-0.82 0.14-1.66 0.22-2.5 0.25l-0.15 0.01h-0.13-0.27-0.41-0.21-0.01-0.08c-0.45-0.01-0.9-0.03-1.34-0.07-1.79-0.13-3.55-0.45-5.27-0.95-1.71-0.49-3.38-1.16-4.95-2-3.14-1.68-5.95-3.98-8.15-6.76-1.11-1.38-2.07-2.87-2.87-4.43s-1.42-3.2-1.89-4.88c-0.46-1.68-0.75-3.39-0.86-5.12l-0.02-0.32-0.01-0.08v-0.07-0.14l-0.01-0.28v-0.07-0.1-0.2l-0.01-0.4v-0.08-0.03-0.16c0-0.21 0.01-0.42 0.01-0.63 0.03-0.85 0.1-1.73 0.21-2.61s0.26-1.76 0.44-2.63 0.39-1.74 0.64-2.59c0.49-1.71 1.1-3.36 1.82-4.92 1.44-3.12 3.34-5.88 5.61-8.09 0.57-0.55 1.16-1.08 1.77-1.57s1.25-0.95 1.9-1.37c0.65-0.43 1.32-0.82 2.02-1.18 0.34-0.19 0.7-0.35 1.05-0.52 0.18-0.08 0.36-0.16 0.53-0.24 0.18-0.08 0.36-0.16 0.54-0.23 0.72-0.3 1.46-0.56 2.21-0.8 0.19-0.06 0.38-0.11 0.56-0.17 0.19-0.06 0.38-0.1 0.57-0.16 0.38-0.11 0.76-0.2 1.14-0.29 0.19-0.05 0.39-0.08 0.58-0.13 0.19-0.04 0.38-0.08 0.58-0.12 0.19-0.04 0.39-0.07 0.58-0.11l0.29-0.05 0.29-0.04c0.2-0.03 0.39-0.06 0.59-0.09 0.22-0.04 0.44-0.05 0.66-0.09 0.18-0.02 0.48-0.06 0.65-0.08 0.14-0.01 0.28-0.03 0.41-0.04l0.28-0.03 0.14-0.01 0.16-0.01c0.22-0.01 0.44-0.03 0.66-0.04l0.33-0.02h0.02 0.07l0.14-0.01c0.19-0.01 0.38-0.02 0.56-0.03 0.75-0.02 1.5-0.02 2.24 0 1.48 0.06 2.93 0.22 4.34 0.48 2.82 0.53 5.49 1.43 7.89 2.62 2.41 1.18 4.57 2.63 6.44 4.2 0.12 0.1 0.23 0.2 0.35 0.3 0.11 0.1 0.23 0.2 0.34 0.3 0.23 0.2 0.44 0.41 0.66 0.61s0.43 0.41 0.64 0.62c0.2 0.21 0.41 0.41 0.61 0.63 0.8 0.84 1.53 1.69 2.19 2.55 1.33 1.71 2.39 3.44 3.24 5.07 0.05 0.1 0.11 0.2 0.16 0.3l0.15 0.3c0.1 0.2 0.2 0.4 0.29 0.6s0.19 0.39 0.27 0.59c0.09 0.2 0.17 0.39 0.25 0.58 0.32 0.76 0.61 1.49 0.84 2.18 0.39 1.11 0.67 2.11 0.89 2.98 0.09 0.35 0.42 0.58 0.78 0.55 0.37-0.03 0.66-0.34 0.66-0.71 0.04-0.95 0.01-2.05-0.09-3.3z"/>
</svg>
bases:
- architectures:
  - amd64
  channel: '20.04'
  name: ubuntu
charmcraft-started-at: '2021-05-31T06:47:43.483382Z'
charmcraft-version: 0.10.0
name: grafana
summary: Data visualization and observability with Grafana
maintainers:
  - Justin Clark <justin.clark@canonical.com>
description: |
  Grafana provides dashboards for monitoring data and this
  charm is written to allow for HA on Kubernetes and can take
  multiple data sources (for example, Prometheus).
tags:
  - lma
  - grafana
  - prometheus
  - monitoring
  - observability
series:
  - kubernetes
provides:
  grafana-source:
    interface: grafana-datasource
  grafana-dashboard:
    interface: grafana-dash
requires:
  database:
    interface: db
    limit: 1
peers:
  grafana:
    interface: grafana-peers
storage:
  sqlitedb:
    type: filesystem
    location: /var/lib/grafana
deployment:
  service: loadbalancer
ops
git+https://github.com/juju-solutions/resource-oci-image/@c5778285d332edf3d9a538f9d0c06154b7ec1b0b#egg=oci-image
#!/bin/sh -e
# Copyright 2020 Justin
# See LICENSE file for licensing details.
if [ -z "$VIRTUAL_ENV" -a -d venv/ ]; then
    . venv/bin/activate
fi

if [ -z "$PYTHONPATH" ]; then
    export PYTHONPATH=src
else
    export PYTHONPATH="src:$PYTHONPATH"
fi
flake8
python3 -m unittest -v "$@"
#! /usr/bin/env python3
# -*- coding: utf-8 -*-
import logging
import hashlib
import textwrap
from oci_image import OCIImageResource, OCIImageResourceError
from ops.charm import CharmBase
from ops.framework import StoredState
from ops.main import main
from ops.model import ActiveStatus, MaintenanceStatus, BlockedStatus
log = logging.getLogger()
# These are the required and optional relation data fields
# In other words, when relating to this charm, these are the fields
# that will be processed by this charm.
REQUIRED_DATASOURCE_FIELDS = {
'private-address', # the hostname/IP of the data source server
'port', # the port of the data source server
'source-type', # the data source type (e.g. prometheus)
}
OPTIONAL_DATASOURCE_FIELDS = {
'source-name', # a human-readable name of the source
}
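# Illustration only (hypothetical values): a unit relating over the
# 'grafana-source' interface would publish data shaped like this in its
# unit relation data, which on_grafana_source_changed() reads below:
#   {
#       'private-address': '10.1.2.3',
#       'port': '9090',
#       'source-type': 'prometheus',
#       'source-name': 'prometheus-k8s',  # optional
#   }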
# https://grafana.com/docs/grafana/latest/administration/configuration/#database
REQUIRED_DATABASE_FIELDS = {
'type', # mysql, postgres or sqlite3 (sqlite3 doesn't work for HA)
'host', # in the form '<url_or_ip>:<port>', e.g. 127.0.0.1:3306
'name',
'user',
'password',
}
# verify with Grafana documentation to ensure fields have valid values
# as this charm will not directly handle these cases
# TODO: fill with optional fields
OPTIONAL_DATABASE_FIELDS = set()
VALID_DATABASE_TYPES = {'mysql', 'postgres', 'sqlite3'}
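# Illustration only (hypothetical values): a MySQL relation would need to
# provide data such as the following for on_database_changed() to accept it:
#   {'type': 'mysql', 'host': '10.1.2.4:3306', 'name': 'grafana',
#    'user': 'grafana', 'password': 'secret'}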
def get_container(pod_spec, container_name):
"""Find and return the first container in pod_spec whose name is
container_name; raise ValueError if no such container exists."""
for container in pod_spec['containers']:
if container['name'] == container_name:
return container
raise ValueError("Unable to find container named '{}' in pod spec".format(
container_name))
class GrafanaK8s(CharmBase):
"""Charm to run Grafana on Kubernetes.
This charm allows for high-availability
(as long as a non-sqlite database relation is present).
Developers of this charm should be aware of the Grafana provisioning docs:
https://grafana.com/docs/grafana/latest/administration/provisioning/
"""
datastore = StoredState()
def __init__(self, *args):
log.debug('Initializing charm.')
super().__init__(*args)
# -- get image information
self.image = OCIImageResource(self, 'grafana-image')
# -- standard hooks
self.framework.observe(self.on.config_changed, self.on_config_changed)
self.framework.observe(self.on.update_status, self.on_update_status)
self.framework.observe(self.on.stop, self._on_stop)
# -- grafana-source relation observations
self.framework.observe(self.on['grafana-source'].relation_changed,
self.on_grafana_source_changed)
self.framework.observe(self.on['grafana-source'].relation_broken,
self.on_grafana_source_broken)
# -- grafana (peer) relation observations
self.framework.observe(self.on['grafana'].relation_changed,
self.on_peer_changed)
# self.framework.observe(self.on['grafana'].relation_departed,
# self.on_peer_departed)
# -- database relation observations
self.framework.observe(self.on['database'].relation_changed,
self.on_database_changed)
self.framework.observe(self.on['database'].relation_broken,
self.on_database_broken)
# -- initialize states --
self.datastore.set_default(sources=dict()) # available data sources
self.datastore.set_default(source_names=set()) # unique source names
self.datastore.set_default(sources_to_delete=set())
self.datastore.set_default(database=dict()) # db configuration
@property
def has_peer(self) -> bool:
rel = self.model.get_relation('grafana')
return len(rel.units) > 0 if rel is not None else False
@property
def has_db(self) -> bool:
"""Only consider a DB connection if we have config info."""
return len(self.datastore.database) > 0
def _on_stop(self, _):
"""Go into maintenance state if the unit is stopped."""
self.unit.status = MaintenanceStatus('Pod is terminating.')
def on_config_changed(self, _):
self.configure_pod()
def on_update_status(self, _):
"""Various health checks of the charm."""
self._check_high_availability()
def on_grafana_source_changed(self, event):
""" Get relation data for Grafana source and set k8s pod spec.
This event handler (if the unit is the leader) will get data for
an incoming grafana-source relation and make the relation data
available in the app's datastore object (StoredState).
"""
# if this unit is the leader, set the required data
# of the grafana-source in this charm's datastore
if not self.unit.is_leader():
return
# relation data is keyed on the remote unit; if no unit is available, bail out
if event.unit is None:
log.warning("event unit can't be None when setting data sources.")
return
# dictionary of all the required/optional datasource field values
# using this as a more generic way of getting data source fields
datasource_fields = \
{field: event.relation.data[event.unit].get(field) for field in
REQUIRED_DATASOURCE_FIELDS | OPTIONAL_DATASOURCE_FIELDS}
missing_fields = [field for field
in REQUIRED_DATASOURCE_FIELDS
if datasource_fields.get(field) is None]
# check the relation data for missing required fields
if len(missing_fields) > 0:
log.error("Missing required data fields for grafana-source "
"relation: {}".format(missing_fields))
self._remove_source_from_datastore(event.relation.id)
return
# specifically handle optional fields if necessary
# check if source-name was not passed or if we have already saved the provided name
if datasource_fields['source-name'] is None\
or datasource_fields['source-name'] in self.datastore.source_names:
default_source_name = '{}_{}'.format(
event.app.name,
event.relation.id
)
log.warning("No 'source-name' provided or the name is already in use. "
"Using safe default: {}.".format(default_source_name))
datasource_fields['source-name'] = default_source_name
self.datastore.source_names.add(datasource_fields['source-name'])
# set the first grafana-source as the default (needed for pod config)
# if `self.datastore.sources` is currently empty, this is the first
datasource_fields['isDefault'] = 'false'
if not dict(self.datastore.sources):
datasource_fields['isDefault'] = 'true'
# add the unit name so the source can be removed later; it may
# duplicate 'source-name', but it guarantees a key for lookup
datasource_fields['unit_name'] = event.unit.name
# add the new datasource relation data to the current state
new_source_data = {
field: value for field, value in datasource_fields.items()
if value is not None
}
self.datastore.sources.update({event.relation.id: new_source_data})
self.configure_pod()
def on_grafana_source_broken(self, event):
"""When a grafana-source is removed, delete from the datastore."""
if self.unit.is_leader():
self._remove_source_from_datastore(event.relation.id)
self.configure_pod()
def on_peer_changed(self, _):
# TODO: https://grafana.com/docs/grafana/latest/tutorials/ha_setup/
# According to these docs ^, as long as we have a DB, HA should
# work out of the box if we are OK with "Sticky Sessions"
# but having "Stateless Sessions" could require more config
# if the config changed, set a new pod spec
self.configure_pod()
def on_peer_departed(self, _):
"""Sets pod spec with new info."""
# TODO: setting pod spec shouldn't do anything now,
# but if we ever need to change config based peer units,
# we will want to make sure configure_pod() is called
self.configure_pod()
def on_database_changed(self, event):
"""Sets configuration information for database connection."""
if not self.unit.is_leader():
return
if event.unit is None:
log.warning("event unit can't be None when setting db config.")
return
# save the necessary configuration of this database connection
database_fields = \
{field: event.relation.data[event.unit].get(field) for field in
REQUIRED_DATABASE_FIELDS | OPTIONAL_DATABASE_FIELDS}
# if any required fields are missing, warn the user and return
missing_fields = [field for field
in REQUIRED_DATABASE_FIELDS
if database_fields.get(field) is None]
if len(missing_fields) > 0:
log.error("Missing required data fields for related database "
"relation: {}".format(missing_fields))
return
# check if the passed database type is not in VALID_DATABASE_TYPES
if database_fields['type'] not in VALID_DATABASE_TYPES:
log.error('Grafana can only accept databases of the following '
'types: {}'.format(VALID_DATABASE_TYPES))
return
# add the new database relation data to the datastore
self.datastore.database.update({
field: value for field, value in database_fields.items()
if value is not None
})
self.configure_pod()
def on_database_broken(self, _):
"""Removes database connection info from datastore.
We are guaranteed to only have one DB connection, so clearing
datastore.database is all we need for the change to be propagated
to the pod spec."""
if not self.unit.is_leader():
return
# remove the existing database info from datastore
self.datastore.database = dict()
# set pod spec because datastore config has changed
self.configure_pod()
def _remove_source_from_datastore(self, rel_id):
"""Remove the grafana-source from the datastore.
Once removed from the datastore, this datasource will not be
part of the next pod spec."""
log.info('Removing all data for relation: {}'.format(rel_id))
removed_source = self.datastore.sources.pop(rel_id, None)
if removed_source is None:
log.warning('Could not remove source for relation: {}'.format(
rel_id))
else:
# free name from charm's set of source names
# and save to set which will be used in set_pod_spec
self.datastore.source_names.remove(removed_source['source-name'])
self.datastore.sources_to_delete.add(removed_source['source-name'])
def _check_high_availability(self):
"""Checks whether the configuration allows for HA."""
if self.has_peer:
if self.has_db:
log.info('high availability possible.')
status = MaintenanceStatus('Grafana ready for HA.')
else:
log.warning('high availability not possible '
'with current configuration.')
status = BlockedStatus('Need database relation for HA.')
else:
log.info('running Grafana on single node.')
status = MaintenanceStatus('Grafana ready on single node.')
# make sure we don't have a maintenance status overwrite
# a currently active status
if isinstance(status, MaintenanceStatus) \
and isinstance(self.unit.status, ActiveStatus):
return status
self.unit.status = status
return status
def _make_delete_datasources_config_text(self) -> str:
"""Generate text of data sources to delete."""
if not self.datastore.sources_to_delete:
return "\n"
delete_datasources_text = textwrap.dedent("""
deleteDatasources:""")
for name in self.datastore.sources_to_delete:
delete_datasources_text += textwrap.dedent("""
- name: {}
orgId: 1""".format(name))
# clear datastore.sources_to_delete and return text result
self.datastore.sources_to_delete.clear()
return delete_datasources_text + '\n\n'
def _make_data_source_config_text(self) -> str:
"""Build config based on Data Sources section of provisioning docs."""
# get starting text for the config file and sources to delete
delete_text = self._make_delete_datasources_config_text()
config_text = textwrap.dedent("""
apiVersion: 1
""")
config_text += delete_text
if self.datastore.sources:
config_text += "datasources:"
for rel_id, source_info in self.datastore.sources.items():
# TODO: handle more optional fields and verify that current
# defaults are what we want (e.g. "access")
config_text += textwrap.dedent("""
- name: {0}
type: {1}
access: proxy
url: http://{2}:{3}
isDefault: {4}
editable: true
orgId: 1""").format(
source_info['source-name'],
source_info['source-type'],
source_info['private-address'],
source_info['port'],
source_info['isDefault'],
)
# check if these are empty
return config_text + '\n'
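# For reference, with a single registered source the text produced above
# renders roughly as follows (values hypothetical):
#
#   apiVersion: 1
#
#   datasources:
#   - name: prometheus_0
#     type: prometheus
#     access: proxy
#     url: http://10.1.2.3:9090
#     isDefault: true
#     editable: true
#     orgId: 1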
def _update_pod_data_source_config_file(self, pod_spec):
"""Adds datasources to pod configuration."""
file_text = self._make_data_source_config_text()
data_source_file_meta = {
'name': 'grafana-datasources',
'mountPath': '/etc/grafana/provisioning/datasources',
'files': [{
'path': 'datasources.yaml',
'content': file_text,
}]
}
container = get_container(pod_spec, self.app.name)
container['volumeConfig'].append(data_source_file_meta)
# get hash string of the new file text and put into container config
# if this changes, it will trigger a pod restart
file_text_hash = hashlib.md5(file_text.encode()).hexdigest()
if 'DATASOURCES_YAML' in container['envConfig'] \
and container['envConfig']['DATASOURCES_YAML'] != file_text_hash:
log.info('datasources.yaml hash has changed. '
'Triggering pod restart.')
container['envConfig']['DATASOURCES_YAML'] = file_text_hash
def _make_config_ini_text(self):
"""Create the text of the config.ini file.
More information about this can be found in the Grafana docs:
https://grafana.com/docs/grafana/latest/administration/configuration/
"""
config_text = textwrap.dedent("""
[paths]
provisioning = /etc/grafana/provisioning
[log]
mode = console
level = {0}
""".format(
self.model.config['grafana_log_level'],
))
# if there is a database available, add that information
if self.datastore.database:
db_config = self.datastore.database
config_text += textwrap.dedent("""
[database]
type = {0}
host = {1}
name = {2}
user = {3}
password = {4}
url = {0}://{3}:{4}@{1}/{2}""".format(
db_config['type'],
db_config['host'],
db_config['name'],
db_config['user'],
db_config['password'],
))
return config_text
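# For reference, with the default log level and a MySQL relation the text
# produced above renders roughly as follows (values hypothetical):
#
#   [paths]
#   provisioning = /etc/grafana/provisioning
#
#   [log]
#   mode = console
#   level = info
#
#   [database]
#   type = mysql
#   host = 10.1.2.4:3306
#   name = grafana
#   user = grafana
#   password = secret
#   url = mysql://grafana:secret@10.1.2.4:3306/grafana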
def _update_pod_config_ini_file(self, pod_spec):
file_text = self._make_config_ini_text()
config_ini_file_meta = {
'name': 'grafana-config-ini',
'mountPath': '/etc/grafana',
'files': [{
'path': 'grafana.ini',
'content': file_text
}]
}
container = get_container(pod_spec, self.app.name)
container['volumeConfig'].append(config_ini_file_meta)
# get hash string of the new file text and put into container config
# if this changes, it will trigger a pod restart
file_text_hash = hashlib.md5(file_text.encode()).hexdigest()
if 'GRAFANA_INI' in container['envConfig'] \
and container['envConfig']['GRAFANA_INI'] != file_text_hash:
log.info('grafana.ini hash has changed. Triggering pod restart.')
container['envConfig']['GRAFANA_INI'] = file_text_hash
def _build_pod_spec(self):
"""Builds the pod spec based on available info in the datastore."""
config = self.model.config
spec = {
'version': 3,
'containers': [{
'name': self.app.name,
'image': "ubuntu/grafana:latest",
'ports': [{
'containerPort': config['port'],
'protocol': 'TCP'
}],
'volumeConfig': [],
'envConfig': {}, # used to store hashes of config file text
'kubernetes': {
'readinessProbe': {
'httpGet': {
'path': '/api/health',
'port': config['port']
},
'initialDelaySeconds': 10,
'timeoutSeconds': 30
},
},
}]
}
return spec
def configure_pod(self):
"""Set Juju / Kubernetes pod spec built from `_build_pod_spec()`."""
# check for valid high availability (or single node) configuration
self._check_high_availability()
# in the case where we have peers but no DB connection,
# don't set the pod spec until it is resolved
if self.unit.status == BlockedStatus('Need database relation for HA.'):
log.error('Application is in a blocked state. '
'Please resolve before pod spec can be set.')
return
if not self.unit.is_leader():
self.unit.status = ActiveStatus()
return
# general pod spec component updates
self.unit.status = MaintenanceStatus('Building pod spec.')
pod_spec = self._build_pod_spec()
if not pod_spec:
return
self._update_pod_data_source_config_file(pod_spec)
self._update_pod_config_ini_file(pod_spec)
# set the pod spec with Juju
self.model.pod.set_spec(pod_spec)
self.unit.status = ActiveStatus()
if __name__ == '__main__':
main(GrafanaK8s)