Commit b9fb8a34 authored by Mark Beierl's avatar Mark Beierl Committed by Gerrit Code Review

Merge "Fix bug 964: Duplicated dashboards. Set fixed port for Grafana"

parents 40e73bfe 5bed3ad4
......@@ -16,54 +16,51 @@ Author: Jose Manuel Palacios (jmpalacios@minsait.com)
Author: Jose Antonio Martinez (jamartinezv@minsait.com)
-->
# Monitoring in Kubernetes based OSM
# OSM Monitoring
## Introduction
This implementation deploys a PM stack based on Prometheus Operator, plus a series of exporters for monitoring the OSM nodes and third-party software modules (Kafka, MongoDB and MySQL).
This is a utility to monitor the OSM nodes and pods in the Kubernetes deployment. Metrics are stored in Prometheus and accessible in Grafana. Note that this Prometheus instance is not the one in the OSM core, but a different one, aimed at monitoring the platform itself.
At a high level, it consists of two scripts that deploy/undeploy the required objects in a previously existing Kubernetes-based OSM installation.
Those scripts use already existing, freely available software: Helm, the Prometheus Operator, and a fairly standard set of exporters and dashboards. The Helm server part (tiller) and the deployed charts depend on Kubernetes version 1.15.x. Chart versions are pre-configured in the installation script and can be easily changed.
## Requirements
As a result, there will be 3 folders in Grafana:
OSM must be/have been deployed using the Kubernetes installer (that is, with the -c k8s option).
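A quick way to check this requirement is to verify that the OSM modules are running as pods in the cluster; the `osm` namespace is the one used by the Kubernetes installer:

```sh
# A Kubernetes-based OSM installation runs its modules as pods in the osm namespace
kubectl get pods -n osm
```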
- Summary: with a quick view of the platform global status.
- OSM Third Party Modules: dashboards for MongoDB, MySQL and Kafka.
- Kubernetes cluster: dashboards for pods, namespaces, nodes, etc.
## Versions
## Requirements
For reference, the versions for the external components used are as follows:
- Kubernetes 1.15.X
- OSM Kubernetes version Release 7
* PROMETHEUS_OPERATOR=6.18.0
* PROMETHEUS_MONGODB_EXPORTER=2.3.0
* PROMETHEUS_MYSQL_EXPORTER=0.5.1
* HELM_CLIENT=2.15.2
## Components
## Functionality
- Installs the helm client on the host where the script is run (if not already installed)
- Creates a service account in the k8s cluster to be used by tiller, with sufficient permissions to be able to deploy kubernetes objects.
- Installs the helm server part (tiller) and assigns to tiller the previously created service account (if not already installed)
- Creates a namespace (monitoring) where all the components of the OSM deployment monitoring pack will be installed.
- Installs prometheus-operator using the `stable/prometheus-operator` chart which is located at the default helm repository (<https://kubernetes-charts.storage.googleapis.com/>). This installs a set of basic metrics for CPU, memory, etc. of hosts and pods. It also includes grafana and dashboards.
- Installs an exporter for mongodb using the `stable/prometheus-mongodb-exporter` chart, which is located at the default helm repository (<https://kubernetes-charts.storage.googleapis.com/>).
- Adds a dashboard for mongodb to grafana through a local yaml file.
- Installs an exporter for mysql using the `stable/prometheus-mysql-exporter` chart which is located at the default helm repository (<https://kubernetes-charts.storage.googleapis.com/>).
- Adds a dashboard for mysql to grafana through a local yaml file.
- Installs an exporter for kafka using a custom-built helm chart, with a deployment and its corresponding service and service monitor defined in local yaml files. The kafka exporter is taken from <https://hub.docker.com/r/danielqsj/kafka-exporter>.
- Adds a dashboard for kafka to grafana through a local yaml file.
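In Helm 2 terms, the sequence above boils down to commands like the following. This is a simplified sketch using the chart versions and file names mentioned in this document; the real script pins more values and uses local copies of the charts:

```sh
# Create the dedicated namespace for the monitoring pack
kubectl create namespace monitoring

# Prometheus operator: basic host/pod metrics, Grafana and standard dashboards
helm install --namespace monitoring --name osm-monitoring \
    --version 6.18.0 stable/prometheus-operator

# Exporters for the third party modules
helm install --namespace monitoring --name osm-mongodb-exporter \
    --version 2.3.0 stable/prometheus-mongodb-exporter
helm install --namespace monitoring --name osm-mysql-exporter \
    --version 0.5.1 stable/prometheus-mysql-exporter

# Dashboards are added to grafana as ConfigMaps from local yaml files
kubectl -n monitoring apply -f mongodb-exporter-dashboard.yaml
```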
Kubernetes cluster metrics (for nodes, pods, deployments, etc.) are stored in the dedicated Prometheus instance and accessible using Grafana.
## Versions
"Prometheus-operator" (<https://github.com/helm/charts/tree/master/stable/prometheus-operator>) provides the basic components and the monitoring of the basic Kubernetes resources. Additional "exporters" are used to gather metrics from Kafka, Mysql and Mongodb.
It is important to note that Grafana is not installed with this chart, because the Grafana instance installed with the OSM core is used instead.
We use the following versions:
## Install procedure
- PROMETHEUS_OPERATOR=6.18.0
- PROMETHEUS_MONGODB_EXPORTER=2.3.0
- PROMETHEUS_MYSQL_EXPORTER=0.5.1
- HELM_CLIENT=2.15.2
There are two ways to install the monitoring component, based on the OSM global installer (<https://osm-download.etsi.org/ftp/osm-7.0-seven/install_osm.sh>):
## Install
* Using the --k8s_monitor switch in the OSM installation:
Note: This implementation depends on the Kubernetes OSM deployment, so the installation script must be executed AFTER the Kubernetes deployment has been completed. Note that it is not applicable to the basic Docker deployment.
```bash
./install_osm.sh -c k8s --k8s_monitor
```
* As a separate component (K8s-based OSM only):
```bash
./install_osm.sh -o k8s_monitor
```
All the components will be installed in the "monitoring" namespace. In addition, for debugging purposes, a standalone script is available in `devops/installers/k8s/install_osm_k8s_monitoring.sh`. To see the available options, type --help.
```sh
usage: ./install_osm_k8s_monitoring.sh [OPTIONS]
Install OSM Monitoring
OPTIONS
......@@ -74,17 +71,26 @@ Install OSM Monitoring
-h / --help : print this help
```
## Uninstall
## Access to Grafana
The Grafana console can be accessed on the IP address of any node using port 3000, since a NodePort service is used: `http://<ip_your_osm_host>:3000`
The initial credentials are:
* Username: admin
* Password: admin
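With the fixed port, a quick reachability check can be done against Grafana's health endpoint (replace the host with your own):

```sh
# Grafana answers with a small JSON document ("database": "ok") when it is healthy
curl -s http://<ip_your_osm_host>:3000/api/health
```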
## Uninstall procedure
To uninstall the utility, use the installation script:
```sh
./install_osm.sh -o k8s_monitor --uninstall
```
It will uninstall all components of this utility. In addition, for debugging purposes, a standalone script is available in `devops/installers/k8s/uninstall_osm_k8s_monitoring.sh`. To see the available options, type --help.
```sh
usage: ./uninstall_osm_k8s_monitoring.sh [OPTIONS]
Uninstall OSM Monitoring
OPTIONS
......@@ -94,20 +100,36 @@ Uninstall OSM Monitoring
-h / --help : print this help
```
## Access to Grafana Web Monitoring
## Grafana Dashboards
To view the web UI with the different dashboards, connect to the "grafana" service installed with this utility and check the NodePort it uses. If the utility is installed in the default namespace "monitoring", type:
Dashboards are organized in two folders:
```sh
kubectl get all --namespace monitoring
```
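Alternatively, the assigned NodePort can be read directly with jsonpath; `osm-monitoring-grafana` is the service name created by the installer:

```sh
# Prints only the NodePort assigned to the grafana service (assumes the default "monitoring" namespace)
kubectl -n monitoring get service osm-monitoring-grafana -o jsonpath='{.spec.ports[0].nodePort}'
```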
* The folder "Kubernetes cluster" contains the dashboards available upstream as part of the standard prometheus operator helm installation:
Look for the NodePort (greater than 30000) used by the grafana service and type in your web browser:
* Kubernetes components (api server, kubelet, pods, etc)
* Nodes of the cluster.
* Prometheus operator components.
```sh
http://<ip_your_osm_host>:<nodeport>
```
* The folder "Open Source MANO" contains additional dashboards customized for OSM:
* Summary with a quick view of the overall status.
* Host information
* Third party components: Kafka, MongoDB, MySQL.
## Adding new dashboards
New dashboards for OSM components should be included in the "Open Source MANO" folder. Once the dashboard json file is available, follow the instructions below to incorporate it into Grafana.
```bash
kubectl -n monitoring create configmap <configmap-name> --from-file=<dashboard-json-file>
kubectl -n monitoring patch configmap <configmap-name> --patch '{"metadata": {"labels": {"grafana_dashboard": "1"}, "annotations": {"k8s-sidecar-target-directory": "/tmp/dashboards/Open Source MANO"}}}'
```
where <configmap-name> and <dashboard-json-file> need to be replaced with the desired values. It is suggested that <configmap-name> begin with "osm-monitoring-osm-".
Once the configmap is created and patched, the manifest file can be downloaded for future use with the following command:
```sh
kubectl -n monitoring get configmap <configmap-name> -o yaml > <configmap-file>
```
Grafana Sidecar will read the label `grafana_dashboard: "1"` in the configmap and upload the dashboard information to Grafana.
The current dashboards can also be updated: simply modify the required yaml file available in `devops/installers/k8s` and apply it via kubectl. For example, `kubectl -n monitoring apply -f summary-dashboard.yaml` will apply the changes made to the summary dashboard.
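The create/patch/save steps can be wrapped in a small helper script. The script name and its arguments are illustrative; only the kubectl invocations come from the instructions above:

```sh
#!/bin/sh
# Hypothetical helper wrapping the dashboard-import steps described above.
# Usage: ./add_osm_dashboard.sh <configmap-name> <dashboard-json-file>
set -e
NAME="$1"
FILE="$2"

# Create the configmap from the dashboard json file
kubectl -n monitoring create configmap "$NAME" --from-file="$FILE"

# Label it so the Grafana sidecar picks it up, and annotate the target folder
kubectl -n monitoring patch configmap "$NAME" --patch \
  '{"metadata": {"labels": {"grafana_dashboard": "1"}, "annotations": {"k8s-sidecar-target-directory": "/tmp/dashboards/Open Source MANO"}}}'

# Save the resulting manifest for future use
kubectl -n monitoring get configmap "$NAME" -o yaml > "${NAME}.yaml"
```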
......@@ -75,4 +75,9 @@ do
rm $i
done
# Delete the Grafana dependency to avoid installing it
sed -i -e '/.*- name: grafana.*/,+3d' $CHARTS_DIR/prometheus-operator/requirements.yaml
sed -i -e '/.*- name: grafana.*/,+2d' $CHARTS_DIR/prometheus-operator/requirements.lock
rm -rf $CHARTS_DIR/prometheus-operator/charts/grafana
exit 0
# Copyright 2019 Minsait - Indra S.A.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Author: Jose Manuel Palacios (jmpalacios@minsait.com)
# Author: Jose Antonio Martinez (jamartinezv@minsait.com)
apiVersion: v1
data:
provider.yaml: |-
apiVersion: 1
providers:
- name: 'Kubernetes Cluster'
orgId: 1
folder: 'Kubernetes Cluster'
type: file
disableDeletion: false
options:
path: /tmp/dashboards
- name: 'OSM Third Party Modules'
orgId: 1
folder: 'OSM Third Party Modules'
type: file
disableDeletion: false
options:
path: '/tmp/dashboards/OSM Third Party Modules'
- name: 'OSM Modules'
orgId: 1
folder: 'OSM Modules'
type: file
disableDeletion: false
options:
path: '/tmp/dashboards/OSM Modules'
- name: 'Summary'
orgId: 1
folder: 'Summary'
type: file
disableDeletion: false
options:
path: /tmp/dashboards/Summary
kind: ConfigMap
metadata:
labels:
app: grafana
chart: grafana-3.8.19
heritage: Tiller
release: osm-monitoring
name: osm-monitoring-grafana-config-dashboards
\ No newline at end of file
......@@ -46,7 +46,7 @@ NAMESPACE=monitoring
HELM=""
DEBUG=""
DUMP_VARS=""
SERVICE_TYPE=""
while getopts ":h-:n:s:" o; do
case "${o}" in
h)
......@@ -111,8 +111,8 @@ helm > /dev/null 2>&1
if [ $? != 0 ] ; then
echo "Helm is not installed, installing ....."
curl https://get.helm.sh/helm-v2.15.2-linux-amd64.tar.gz --output helm-v2.15.2.tar.gz
tar -zxvf helm-v2.15.2.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm
rm -r linux-amd64
rm helm-v2.15.2.tar.gz
fi
......@@ -130,10 +130,10 @@ if [ $? == 1 ] ; then
while true
do
tiller_status=`kubectl -n kube-system get deployment.apps/tiller-deploy --no-headers | awk '{print $2}'`
if [ ! -z "$tiller_status" ]
then
if [ $tiller_status == "1/1" ]
then
echo "Go...."
break
fi
......@@ -150,11 +150,7 @@ kubectl create namespace $NAMESPACE
# Prometheus operator installation
$HERE/change-charts-prometheus-operator.sh
echo "Creating stable/prometheus-operator"
helm install --namespace $NAMESPACE --version=$V_OPERATOR --name osm-monitoring --set kubelet.serviceMonitor.https=true,prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues=false $HERE/helm_charts/prometheus-operator
# Change osm-monitoring-grafana-config-dashboards to have folders
kubectl -n $NAMESPACE delete configmap osm-monitoring-grafana-config-dashboards
kubectl -n $NAMESPACE apply -f $HERE/grafanaproviders.yaml
helm install --namespace $NAMESPACE --version=$V_OPERATOR --name osm-monitoring --set kubelet.serviceMonitor.https=true,prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues=false,alertmanager.service.type=$SERVICE_TYPE,prometheus.service.type=$SERVICE_TYPE,grafana.serviceMonitor.selfMonitor=false $HERE/helm_charts/prometheus-operator
# Exporters installation
......@@ -165,14 +161,14 @@ helm install --namespace $NAMESPACE --version=$V_MONGODB_EXPORTER --name osm-mon
#dashboard:
kubectl -n $NAMESPACE apply -f $HERE/mongodb-exporter-dashboard.yaml
# Mysql
# exporter
echo "Creating stable/prometheus-mysql-exporter"
helm install --namespace $NAMESPACE --version=$V_MYSQL_EXPORTER --name osm-mysql-exporter --set serviceMonitor.enabled=true,mysql.user="root",mysql.pass=`kubectl -n osm get secret ro-db-secret -o yaml | grep MYSQL_ROOT_PASSWORD | awk '{print $2}' | base64 -d`,mysql.host="mysql.osm",mysql.port="3306" stable/prometheus-mysql-exporter
#dashboard:
kubectl -n $NAMESPACE apply -f $HERE/mysql-exporter-dashboard.yaml
# Kafka
# exporter
helm install --namespace $NAMESPACE --name osm-kafka-exporter $HERE/helm_charts/prometheus-kafka-exporter
# dashboard:
......@@ -181,22 +177,6 @@ kubectl -n $NAMESPACE apply -f $HERE/kafka-exporter-dashboard.yaml
# Deploy summary dashboard
kubectl -n $NAMESPACE apply -f $HERE/summary-dashboard.yaml
# Patch prometheus, alertmanager and grafana with service type
# By default is created with ClusterIP type
if [ $SERVICE_TYPE == "NodePort" ] ; then
kubectl --namespace $NAMESPACE patch service osm-monitoring-grafana -p '{"spec":{"type":"NodePort"}}'
kubectl --namespace $NAMESPACE patch service osm-monitoring-prometheus-alertmanager -p '{"spec":{"type":"NodePort"}}'
kubectl --namespace $NAMESPACE patch service osm-monitoring-prometheus-prometheus -p '{"spec":{"type":"NodePort"}}'
fi
if [ $SERVICE_TYPE == "LoadBalancer" ] ; then
kubectl --namespace $NAMESPACE patch service osm-monitoring-grafana -p '{"spec":{"type":"LoadBalancer"}}'
kubectl --namespace $NAMESPACE patch service osm-monitoring-prometheus-alertmanager -p '{"spec":{"type":"LoadBalancer"}}'
kubectl --namespace $NAMESPACE patch service osm-monitoring-prometheus-prometheus -p '{"spec":{"type":"LoadBalancer"}}'
fi
# Restart grafana to be sure patches are applied
echo "Restarting grafana POD..."
pod_grafana=`kubectl -n monitoring get pods | grep grafana | awk '{print $1}'`
kubectl --namespace $NAMESPACE delete pod $pod_grafana
# Deploy nodes dashboards
kubectl -n $NAMESPACE apply -f $HERE/nodes-dashboard.yaml
......@@ -23,7 +23,7 @@ metadata:
heritage: Tiller
name: osm-monitoring-prometheus-kafka-exporter-grafana
annotations:
k8s-sidecar-target-directory: "/tmp/dashboards/OSM Third Party Modules"
k8s-sidecar-target-directory: "/tmp/dashboards/Open Source MANO"
data:
kafka-exporter-dashboard.json: |-
{
......@@ -44,8 +44,8 @@ data:
"editable": true,
"gnetId": 7589,
"graphTooltip": 0,
"id": 36,
"iteration": 1569330292834,
"id": 10,
"iteration": 1578848023483,
"links": [],
"panels": [
{
......@@ -94,7 +94,7 @@ data:
"steppedLine": false,
"targets": [
{
"expr": "sum(rate(kafka_topic_partition_current_offset{instance=\"$instance\", topic=~\"$topic\"}[1m])) by (topic)",
"expr": "sum(kafka_topic_partition_current_offset - kafka_topic_partition_oldest_offset{instance=\"$instance\", topic=~\"$topic\"}) by (topic)",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "{{topic}}",
......@@ -105,7 +105,7 @@ data:
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Message in per second",
"title": "Messages stored per topic",
"tooltip": {
"shared": true,
"sort": 0,
......@@ -166,6 +166,7 @@ data:
"rightSide": false,
"show": true,
"sideWidth": 480,
"sort": "max",
"sortDesc": true,
"total": false,
"values": true
......@@ -192,7 +193,7 @@ data:
"instant": false,
"interval": "",
"intervalFactor": 1,
"legendFormat": "{{consumergroup}} (topic: {{topic}})",
"legendFormat": " {{topic}} ({{consumergroup}})",
"refId": "A"
}
],
......@@ -292,7 +293,7 @@ data:
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Message in per minute",
"title": "Messages produced per minute",
"tooltip": {
"shared": true,
"sort": 0,
......@@ -378,7 +379,7 @@ data:
"expr": "sum(delta(kafka_consumergroup_current_offset{instance=~'$instance',topic=~\"$topic\"}[5m])/5) by (consumergroup, topic)",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "{{consumergroup}} (topic: {{topic}})",
"legendFormat": " {{topic}} ({{consumergroup}})",
"refId": "A"
}
],
......@@ -386,7 +387,7 @@ data:
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Message consume per minute",
"title": "Messages consumed per minute",
"tooltip": {
"shared": true,
"sort": 0,
......@@ -521,15 +522,12 @@ data:
"refresh": "5s",
"schemaVersion": 19,
"style": "dark",
"tags": [
"Kafka"
],
"tags": [],
"templating": {
"list": [
{
"allValue": null,
"current": {
"selected": true,
"text": "osm-kafka-exporter-service",
"value": "osm-kafka-exporter-service"
},
......@@ -554,11 +552,6 @@ data:
},
{
"allValue": null,
"current": {
"selected": false,
"text": "10.244.0.87:9092",
"value": "10.244.0.87:9092"
},
"datasource": "Prometheus",
"definition": "",
"hide": 0,
......@@ -581,6 +574,7 @@ data:
{
"allValue": null,
"current": {
"tags": [],
"text": "All",
"value": [
"$__all"
......@@ -637,7 +631,7 @@ data:
]
},
"timezone": "browser",
"title": "Kafka Exporter Overview",
"title": "Kafka",
"uid": "jwPKIsniz",
"version": 1
"version": 2
}
......@@ -23,7 +23,7 @@ metadata:
heritage: Tiller
name: osm-monitoring-prometheus-mongodb-exporter-grafana
annotations:
k8s-sidecar-target-directory: "/tmp/dashboards/OSM Third Party Modules"
k8s-sidecar-target-directory: "/tmp/dashboards/Open Source MANO"
data:
mongodb-exporter-dashboard.json: |-
{
......@@ -40,12 +40,12 @@ data:
}
]
},
"description": "MongoDB Prometheus Exporter Dashboard. \r\nWorks well with https://github.com/dcu/mongodb_exporter\r\n\r\nIf you have the node_exporter running on the mongo instance, you will also get some useful alert panels related to disk io and cpu.",
"description": "MongoDB Prometheus Exporter Dashboard.",
"editable": true,
"gnetId": 2583,
"graphTooltip": 1,
"id": 29,
"iteration": 1569257185850,
"id": 9,
"iteration": 1577555358996,
"links": [],
"panels": [
{
......@@ -56,10 +56,192 @@ data:
"x": 0,
"y": 0
},
"id": 22,
"panels": [],
"repeat": "env",
"title": "Health",
"type": "row"
},
{
"cacheTimeout": null,
"colorBackground": false,
"colorValue": true,
"colors": [
"rgba(245, 54, 54, 0.9)",
"rgba(237, 129, 40, 0.89)",
"rgba(50, 172, 45, 0.97)"
],
"datasource": "Prometheus",
"decimals": null,
"format": "s",
"gauge": {
"maxValue": 100,
"minValue": 0,
"show": false,
"thresholdLabels": false,
"thresholdMarkers": true
},
"gridPos": {
"h": 4,
"w": 12,
"x": 0,
"y": 1
},
"id": 10,
"interval": null,
"links": [],
"mappingType": 1,
"mappingTypes": [
{
"name": "value to text",
"value": 1
},
{
"name": "range to text",
"value": 2
}
],
"maxDataPoints": 100,
"nullPointMode": "connected",
"nullText": null,
"options": {},
"postfix": "",
"postfixFontSize": "50%",
"prefix": "",
"prefixFontSize": "50%",
"rangeMaps": [
{
"from": "null",
"text": "N/A",
"to": "null"
}
],
"sparkline": {
"fillColor": "rgba(31, 118, 189, 0.18)",
"full": false,
"lineColor": "rgb(31, 120, 193)",
"show": false
},
"tableColumn": "",
"targets": [
{
"expr": "mongodb_instance_uptime_seconds{instance=~\"$instance\"}",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "",
"refId": "A",
"step": 1800
}
],
"thresholds": "0,360",
"title": "Uptime",
"type": "singlestat",
"valueFontSize": "80%",
"valueMaps": [
{
"op": "=",
"text": "N/A",
"value": "null"
}
],
"valueName": "current"
},
{
"cacheTimeout": null,
"colorBackground": false,
"colorValue": false,
"colors": [
"rgba(245, 54, 54, 0.9)",
"rgba(237, 129, 40, 0.89)",
"rgba(50, 172, 45, 0.97)"
],
"datasource": "Prometheus",
"format": "none",
"gauge": {
"maxValue": 100,
"minValue": 0,
"show": false,
"thresholdLabels": false,
"thresholdMarkers": true
},
"gridPos": {
"h": 4,
"w": 12,
"x": 12,
"y": 1
},
"id": 1,
"interval": null,
"links": [],
"mappingType": 1,
"mappingTypes": [
{
"name": "value to text",
"value": 1
},
{
"name": "range to text",
"value": 2
}
],
"maxDataPoints": 100,
"nullPointMode": "connected",
"nullText": null,
"options": {},
"postfix": "",
"postfixFontSize": "50%",
"prefix": "",
"prefixFontSize": "50%",
"rangeMaps": [
{
"from": "null",
"text": "N/A",
"to": "null"
}
],
"sparkline": {
"fillColor": "rgba(31, 118, 189, 0.18)",
"full": true,
"lineColor": "rgb(31, 120, 193)",
"show": true
},
"tableColumn": "",
"targets": [
{
"expr": "mongodb_connections{instance=~\"$instance\",state=\"current\"}",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "",
"metric": "mongodb_connections",
"refId": "A",
"step": 1800
}
],
"thresholds": "",
"title": "Open Connections",
"type": "singlestat",
"valueFontSize": "80%",
"valueMaps": [
{
"op": "=",
"text": "N/A",
"value": "null"
}
],
"valueName": "avg"
},
{
"collapsed": false,
"gridPos": {
"h": 1,
"w": 24,
"x": 0,
"y": 5
},
"id": 20,
"panels": [],
"repeat": "env",
"title": "Query Metrics for $env",
"title": "Operations",
"type": "row"
},
{
......@@ -74,7 +256,7 @@ data:
"h": 6,
"w": 10,
"x": 0,
"y": 1
"y": 6
},
"id": 7,
"legend": {
......@@ -103,7 +285,7 @@ data:
"steppedLine": false,
"targets": [
{
"expr": "rate(mongodb_op_counters_total{instance=~\"$env\"}[$interval])",
"expr": "rate(mongodb_op_counters_total{instance=~\"$instance\"}[$interval])",
"format": "time_series",
"interval": "",
"intervalFactor": 2,
......@@ -165,7 +347,7 @@ data:
"h": 6,
"w": 8,
"x": 10,
"y": 1
"y": 6
},
"id": 9,
"legend": {
......@@ -199,7 +381,7 @@ data:
"steppedLine": false,
"targets": [
{
"expr": "rate(mongodb_mongod_metrics_document_total{instance=~\"$env\"}[$interval])",
"expr": "rate(mongodb_mongod_metrics_document_total{instance=~\"$instance\"}[$interval])",
"format": "time_series",
"interval": "",
"intervalFactor": 2,
......@@ -261,7 +443,7 @@ data:
"h": 6,
"w": 6,
"x": 18,
"y": 1
"y": 6
},
"id": 8,
"legend": {
......@@ -290,7 +472,7 @@ data:
"steppedLine": false,
"targets": [
{
"expr": "rate(mongodb_mongod_metrics_query_executor_total{instance=~\"$env\"}[$interval])",
"expr": "rate(mongodb_mongod_metrics_query_executor_total{instance=~\"$instance\"}[$interval])",
"format": "time_series",
"interval": "",
"intervalFactor": 2,
......@@ -340,273 +522,6 @@ data:
"alignLevel": null
}
},
{
"collapsed": false,
"gridPos": {
"h": 1,
"w": 24,
"x": 0,
"y": 7
},
"id": 22,
"panels": [],
"repeat": "env",
"title": "Health metrics for $env",
"type": "row"
},
{
"cacheTimeout": null,
"colorBackground": false,
"colorValue": true,
"colors": [
"rgba(245, 54, 54, 0.9)",
"rgba(237, 129, 40, 0.89)",
"rgba(50, 172, 45, 0.97)"
],
"datasource": "Prometheus",
"decimals": null,
"format": "s",
"gauge": {
"maxValue": 100,
"minValue": 0,
"show": false,
"thresholdLabels": false,
"thresholdMarkers": true
},
"gridPos": {
"h": 4,
"w": 4,
"x": 0,
"y": 8
},
"id": 10,
"interval": null,
"links": [],
"mappingType": 1,
"mappingTypes": [
{
"name": "value to text",
"value": 1
},
{
"name": "range to text",
"value": 2
}
],
"maxDataPoints": 100,
"nullPointMode": "connected",
"nullText": null,
"options": {},
"postfix": "",
"postfixFontSize": "50%",
"prefix": "",
"prefixFontSize": "50%",
"rangeMaps": [
{
"from": "null",
"text": "N/A",
"to": "null"
}
],
"sparkline": {
"fillColor": "rgba(31, 118, 189, 0.18)",
"full": false,
"lineColor": "rgb(31, 120, 193)",
"show": false
},
"tableColumn": "",
"targets": [
{
"expr": "mongodb_instance_uptime_seconds{instance=~\"$env\"}",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "",
"refId": "A",
"step": 1800
}
],
"thresholds": "0,360",
"title": "Uptime",
"type": "singlestat",
"valueFontSize": "80%",
"valueMaps": [
{
"op": "=",
"text": "N/A",
"value": "null"
}
],
"valueName": "current"
},
{
"cacheTimeout": null,
"colorBackground": false,
"colorValue": false,
"colors": [
"rgba(245, 54, 54, 0.9)",
"rgba(237, 129, 40, 0.89)",
"rgba(50, 172, 45, 0.97)"
],
"datasource": "Prometheus",
"decimals": null,
"format": "none",
"gauge": {
"maxValue": 100,
"minValue": 0,
"show": false,
"thresholdLabels": false,
"thresholdMarkers": true
},
"gridPos": {
"h": 4,
"w": 4,
"x": 4,
"y": 8
},
"id": 2,
"interval": null,
"links": [],
"mappingType": 1,
"mappingTypes": [
{
"name": "value to text",
"value": 1
},
{
"name": "range to text",
"value": 2
}
],
"maxDataPoints": 100,
"nullPointMode": "connected",
"nullText": null,
"options": {},
"postfix": "",
"postfixFontSize": "50%",
"prefix": "",
"prefixFontSize": "50%",
"rangeMaps": [
{
"from": "null",
"text": "N/A",
"to": "null"
}
],
"sparkline": {
"fillColor": "rgba(31, 118, 189, 0.18)",
"full": true,
"lineColor": "rgb(31, 120, 193)",
"show": true
},
"tableColumn": "",
"targets": [
{
"expr": "mongodb_connections{instance=~\"$env\",state=\"available\"}",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "",
"metric": "mongodb_connections",
"refId": "A",
"step": 1800
}
],
"thresholds": "",
"title": "Available Connections",
"type": "singlestat",
"valueFontSize": "80%",
"valueMaps": [
{
"op": "=",
"text": "N/A",
"value": "null"
}
],
"valueName": "avg"
},
{
"cacheTimeout": null,
"colorBackground": false,
"colorValue": false,
"colors": [
"rgba(245, 54, 54, 0.9)",
"rgba(237, 129, 40, 0.89)",
"rgba(50, 172, 45, 0.97)"
],
"datasource": "Prometheus",
"format": "none",
"gauge": {
"maxValue": 100,
"minValue": 0,
"show": false,
"thresholdLabels": false,
"thresholdMarkers": true
},
"gridPos": {
"h": 4,
"w": 16,
"x": 8,
"y": 8
},
"id": 1,
"interval": null,
"links": [],
"mappingType": 1,
"mappingTypes": [
{
"name": "value to text",
"value": 1
},
{
"name": "range to text",
"value": 2
}
],
"maxDataPoints": 100,
"nullPointMode": "connected",
"nullText": null,
"options": {},
"postfix": "",
"postfixFontSize": "50%",
"prefix": "",
"prefixFontSize": "50%",
"rangeMaps": [
{
"from": "null",
"text": "N/A",
"to": "null"
}
],
"sparkline": {
"fillColor": "rgba(31, 118, 189, 0.18)",
"full": true,
"lineColor": "rgb(31, 120, 193)",
"show": true
},
"tableColumn": "",
"targets": [
{
"expr": "mongodb_connections{instance=~\"$env\",state=\"current\"}",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "",
"metric": "mongodb_connections",
"refId": "A",
"step": 1800
}
],
"thresholds": "",
"title": "Open Connections",
"type": "singlestat",
"valueFontSize": "80%",
"valueMaps": [
{
"op": "=",
"text": "N/A",
"value": "null"
}
],
"valueName": "avg"
},
{
"collapsed": false,
"gridPos": {
......@@ -618,7 +533,7 @@ data:
"id": 23,
"panels": [],
"repeat": null,
"title": "Resource Metrics",
"title": "Resources",
"type": "row"
},
{
......@@ -666,7 +581,7 @@ data:
"steppedLine": false,
"targets": [
{
"expr": "mongodb_memory{instance=~\"$env\",type=~\"resident|virtual\"}",
"expr": "mongodb_memory{instance=~\"$instance\",type=~\"resident|virtual\"}",
"format": "time_series",
"interval": "",
"intervalFactor": 2,
......@@ -759,7 +674,7 @@ data:
"steppedLine": false,
"targets": [
{
"expr": "rate(mongodb_network_bytes_total{instance=~\"$env\"}[$interval])",
"expr": "rate(mongodb_network_bytes_total{instance=~\"$instance\"}[$interval])",
"format": "time_series",
"interval": "",
"intervalFactor": 2,
......@@ -814,9 +729,7 @@ data:
"refresh": "5s",
"schemaVersion": 19,
"style": "dark",
"tags": [
"prometheus"
],
"tags": [],
"templating": {
"list": [
{
......@@ -829,9 +742,9 @@ data:
"definition": "",
"hide": 0,
"includeAll": true,
"label": "env",
"label": "instance",
"multi": true,
"name": "env",
"name": "instance",
"options": [],
"query": "label_values(mongodb_connections, instance)",
"refresh": 1,
......@@ -951,5 +864,5 @@ data:
"timezone": "browser",
"title": "MongoDB",
"uid": "HEK4NbtZk",
"version": 1
"version": 2
}
......@@ -23,7 +23,7 @@ metadata:
heritage: Tiller
name: osm-monitoring-prometheus-mysql-exporter-grafana
annotations:
k8s-sidecar-target-directory: "/tmp/dashboards/OSM Third Party Modules"
k8s-sidecar-target-directory: "/tmp/dashboards/Open Source MANO"
data:
mysql-exporter-dashboard.json: |-
{
......@@ -40,7 +40,7 @@ data:
}
]
},
"description": "Basic Mysql dashboard for the prometheus exporter ",
"description": "Mysql dashboard",
"editable": true,
"gnetId": 6239,
"graphTooltip": 0,
......@@ -731,7 +731,7 @@ data:
"dashLength": 10,
"dashes": false,
"datasource": "Prometheus",
"description": "The number of connections that were aborted because the client died without closing the connection properly. See Section B.5.2.10, “Communication Errors and Aborted Connections”.",
"description": "The number of connections that were aborted because the client died without closing the connection properly.",
"fill": 1,
"fillGradient": 0,
"gridPos": {
......@@ -820,7 +820,7 @@ data:
"dashLength": 10,
"dashes": false,
"datasource": "Prometheus",
"description": "The number of failed attempts to connect to the MySQL server. See Section B.5.2.10, “Communication Errors and Aborted Connections”.\n\nFor additional connection-related information, check the Connection_errors_xxx status variables and the host_cache table.",
"description": "The number of failed attempts to connect to the MySQL server.",
"fill": 1,
"fillGradient": 0,
"gridPos": {
......@@ -1106,8 +1106,6 @@ data:
"schemaVersion": 19,
"style": "dark",
"tags": [
"Databases",
"backgroundservices"
],
"templating": {
"list": [
......@@ -1169,7 +1167,7 @@ data:
]
},
"timezone": "",
"title": "Mysql - Prometheus",
"title": "Mysql",
"uid": "6-kPlS7ik",
"version": 1
}
......@@ -70,7 +70,8 @@ fi
# remove dashboards
echo "Deleting dashboards...."
kubectl -n $NAMESPACE delete configmap osm-monitoring-prometheus-summary-grafana > /dev/null 2>&1
kubectl -n $NAMESPACE delete configmap osm-monitoring-osm-summary-grafana > /dev/null 2>&1
kubectl -n $NAMESPACE delete configmap osm-monitoring-osm-nodes-grafana > /dev/null 2>&1
kubectl -n $NAMESPACE delete configmap osm-monitoring-prometheus-kafka-exporter-grafana > /dev/null 2>&1
kubectl -n $NAMESPACE delete configmap osm-monitoring-prometheus-mysql-exporter-grafana > /dev/null 2>&1
kubectl -n $NAMESPACE delete configmap osm-monitoring-prometheus-mongodb-exporter-grafana > /dev/null 2>&1
......@@ -98,11 +99,9 @@ echo "Deleting monitoring namespace...."
kubectl delete namespace $NAMESPACE
if [ -n "$HELM" ] ; then
sudo helm reset --force
kubectl delete --namespace kube-system serviceaccount tiller
kubectl delete clusterrolebinding tiller-cluster-rule
sudo rm /usr/local/bin/helm
rm -rf $HOME/.helm
fi