Configuring Prometheus-operator helm chart with AWS EKS (Part 2): Monitoring of external services

Vishesh Kumar Singh
Zolo Engineering
Mar 31, 2020

This is the second article in the series about configuring prometheus-operator helm chart with AWS EKS. If you have not read the first article yet, I suggest you give that a read before you continue.

In this article, we will look at how to add external services to the Prometheus targets list and monitor them. We will cover the following concepts:

  • Basic prometheus-operator architecture
  • Role of servicemonitor
  • How prometheus-operator uses node-exporter to fetch the metrics by default
  • Working examples by monitoring Elasticsearch and adding Grafana Dashboard for visualizing
  • Monitoring different database services like MySQL, MongoDB, and Redis in k8s cluster

The prometheus-operator is simple to install with a single command and enables users to configure and manage Prometheus instances using simple declarative configuration that will, in response, create, configure, and manage the Prometheus monitoring instances. It does not support annotation-based discovery of services the way the stable Prometheus chart does (https://github.com/helm/charts/tree/master/stable/prometheus); it uses the ServiceMonitor CRD in its place, as it provides far more configuration options.

What do we mean by annotation-based discovery?

It is the default configuration that causes Prometheus to scrape a variety of Kubernetes resource types, provided they have the correct annotations as mentioned below.

In order to get Prometheus to scrape pods, you must add annotations to the pods as below:

metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/path: /metrics  # the URL path the application serves metrics from
    prometheus.io/port: "8080"
spec:
...

Instead of this, prometheus-operator uses a ServiceMonitor to achieve the same result.

Once installed, the prometheus-operator provides the following features:

  • Create/Destroy: Easily launch a Prometheus instance for your Kubernetes namespace, a specific application, or a team using the Operator.
  • Simple Configuration: Configure the fundamentals of Prometheus like versions, persistence, retention policies, and replicas from a native Kubernetes resource.
  • Target Services via Labels: Automatically generate monitoring target configurations based on familiar Kubernetes label queries; no need to learn a Prometheus specific configuration language.

So far, our configuration looks like this:

  • The prometheus-operator helm chart installed in our cluster in the monitoring namespace; it includes Prometheus, Alertmanager, and Grafana with their default specifications.

How does it work?

The Basic Operator Architecture:

After we successfully deploy prometheus-operator, we should see the new CRDs (Custom Resource Definitions):

kubectl get crds -n monitoring

  • alertmanagers — defines installation for Alertmanager
  • podmonitors — determines which pods should be monitored
  • prometheuses — defines installation for Prometheus
  • prometheusrules — defines alerting and recording rules that Prometheus evaluates (firing alerts are then sent to Alertmanager)
  • servicemonitors — determines which services should be monitored. The operator automatically generates Prometheus scrape configuration based on the definition.

ServiceMonitor

kubectl get servicemonitors -n monitoring

This displays the list of all ServiceMonitors set up by default by the operator, which Prometheus shows under its targets.

Let's take an example of how Prometheus gets to know about node-exporter and how it uses its ServiceMonitor to identify its target; then we will make our own ServiceMonitor to fetch the Elasticsearch metrics, which acts as our external service to monitor.

Scraping an exporter or separate metrics port requires a service that targets the Pod(s) of the exporter or application.

kubectl get svc prometheus-operator-prometheus-node-exporter -n monitoring -o yaml

kubectl get servicemonitors prometheus-operator-node-exporter -n monitoring -o yaml

So, the important part here is the labels:

  • app: prometheus-node-exporter (comes from the service, which Prometheus will monitor)
  • release: prometheus-operator (this comes from the helm release name, which here matches the chart name)

Also, in the ServiceMonitor we need to specify the endpoint to get metrics from, taking the name of the port from the Service definition.

  • port: metrics (defined in the Service; the named port where metrics are exposed, as the sketch below shows)
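Putting those pieces together, the ServiceMonitor the chart generates for node-exporter looks roughly like the sketch below (trimmed for illustration; field values may differ slightly between chart versions, so treat it as an approximation of the real object rather than its exact contents):

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: prometheus-operator-node-exporter
  namespace: monitoring
  labels:
    release: prometheus-operator      # the chart's Prometheus instance selects ServiceMonitors by this label
spec:
  selector:
    matchLabels:
      app: prometheus-node-exporter   # labels on the node-exporter Service
      release: prometheus-operator
  endpoints:
    - port: metrics                   # the port name from the Service, not the port number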

Now that we have seen how prometheus-operator monitors its own services, we are ready to make our own ServiceMonitors to keep a check on the external services we deploy in our k8s cluster, such as databases, queue services, log services, etc.

Example1: Monitoring ElasticSearch Cluster via ElasticSearch Exporter Helm Chart

Exporter Helm Chart: https://github.com/helm/charts/tree/master/stable/elasticsearch-exporter

helm install --name es-exporter -f values.yml stable/elasticsearch-exporter --namespace kube-logging

This installs an Elasticsearch exporter in the kube-logging namespace, where our ES cluster resides (we need to add the address (host and port) of the Elasticsearch node to connect to in our values file).
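For reference, a minimal values.yml could look like the snippet below; es.uri is the standard setting of the stable/elasticsearch-exporter chart, and the host shown is a hypothetical in-cluster Service name that you should replace with your own Elasticsearch address:

es:
  # address (host and port) of the Elasticsearch node the exporter connects to;
  # the Service name below is hypothetical, substitute your own
  uri: http://elasticsearch.kube-logging.svc.cluster.local:9200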

kubectl get svc es-exporter-elasticsearch-exporter -n kube-logging -o yaml

This is the ES exporter service file, which gets created by deploying the ES exporter helm chart, as mentioned earlier.

Now, we will make a service monitor for this ES service, so that our Prometheus gets to know about our ES cluster and scrape the metrics.

How does the ServiceMonitor discover the targets? Through:

  • labels
  • namespaceSelector

So, after creating our ServiceMonitor (a sketch follows below), we will be able to get our metrics in Prometheus and monitor our ES cluster.

While ServiceMonitors must live in the same namespace as the Prometheus resource, discovered targets may come from any namespace(here Kube-logging). This allows for cross-namespace monitoring use cases. Use the namespaceSelector of the ServiceMonitorSpec to restrict the namespaces from which Endpoints objects may be discovered.
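A ServiceMonitor for our exporter could therefore look roughly like this; it lives in the monitoring namespace but selects the exporter Service in kube-logging through namespaceSelector. The matchLabels and port name below are assumptions based on the stable/elasticsearch-exporter chart, so copy the exact values from the Service YAML shown earlier:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: es-exporter
  namespace: monitoring
  labels:
    release: prometheus-operator        # so the chart's Prometheus instance picks up this ServiceMonitor
spec:
  selector:
    matchLabels:
      app: elasticsearch-exporter       # assumed labels from the exporter Service
      release: es-exporter
  namespaceSelector:
    matchNames:
      - kube-logging                    # namespace where the exporter Service lives
  endpoints:
    - port: http                        # assumed port name; use the name from the Service YAML
      interval: 30s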

Now, we are able to see the Elasticsearch metrics in Prometheus under the targets section.

We can add a Grafana dashboard for visualizing these metrics.

Steps to follow:

  • Go to the Grafana instance set up by prometheus-operator, logging in with the credentials given in the values YAML file
  • Select import dashboard from the sidebar options
  • There are ready-made dashboards; we only need to import one and edit it if changes are required (https://grafana.com/grafana/dashboards/6483)
  • Copy the dashboard ID given (6483, in this case)
  • Paste it into the import form on your Grafana dashboard, and select Prometheus as the data source
  • Our Elasticsearch dashboard is ready

Example2: Monitoring MySQL Cluster

MySQL Helm Chart: https://github.com/helm/charts/tree/master/stable/mysql

helm install --name mysql-8-0-19 stable/mysql --values values.yaml --namespace mysql

This will get MySQL ready in our k8s cluster. Make sure to include the changes shown below in your values.yaml, as this is what opens the pathway to monitoring your MySQL cluster:

metrics:
  enabled: true
  image: prom/mysqld-exporter
  imageTag: v0.10.0
  imagePullPolicy: IfNotPresent
  resources: {}
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9104"
  livenessProbe:
    initialDelaySeconds: 15
    timeoutSeconds: 5
  readinessProbe:
    initialDelaySeconds: 5
    timeoutSeconds: 1

From the above section, we see that we are enabling the metrics option (starting a side-car Prometheus exporter) and that our metrics are exposed on port 9104; after deploying to the cluster, we can get more information from its Service.

kubectl get svc mysql-8-0-19 -n mysql -o yaml

From the above output, we can check that the port name is metrics by default, note the labels, and then add them to our ServiceMonitor so that Prometheus can discover our targets.
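A ServiceMonitor along these lines should do the job; the matchLabels are assumptions based on the labels the stable/mysql chart puts on its Service, so verify them against the output above before applying:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: mysql
  namespace: monitoring
  labels:
    release: prometheus-operator      # so the chart's Prometheus instance selects this ServiceMonitor
spec:
  selector:
    matchLabels:
      app: mysql-8-0-19               # assumed labels from the MySQL Service
      release: mysql-8-0-19
  namespaceSelector:
    matchNames:
      - mysql
  endpoints:
    - port: metrics                   # named port of the mysqld-exporter side-car (9104)
      interval: 30s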

vim service_monitor_mysql.yaml
kubectl create -f service_monitor_mysql.yaml -n monitoring

Deploying this to our cluster will let Prometheus discover MySQL and monitor it, which can be seen under the targets section.

Example3: Monitoring MongoDB Cluster

MongoDB Helm Chart: https://github.com/helm/charts/tree/master/stable/mongodb-replicaset

helm install --name mongodb-replicaset -f values.yaml stable/mongodb-replicaset --namespace mongodb-replica

This will let you have MongoDB ready in our k8s cluster.

The same kind of configuration can be seen for MongoDB as well: the port name declared is metrics, and the exporter exposes metrics on port 9216.
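The ServiceMonitor follows the same pattern. The matchLabels below are assumptions based on the stable/mongodb-replicaset chart, so check them against the metrics Service in the mongodb-replica namespace:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: mongodb
  namespace: monitoring
  labels:
    release: prometheus-operator      # so the chart's Prometheus instance selects this ServiceMonitor
spec:
  selector:
    matchLabels:
      app: mongodb-replicaset         # assumed labels from the MongoDB metrics Service
      release: mongodb-replicaset
  namespaceSelector:
    matchNames:
      - mongodb-replica
  endpoints:
    - port: metrics                   # named port of the mongodb-exporter side-car (9216)
      interval: 30s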

vim service_monitor_mongodb.yaml
kubectl create -f service_monitor_mongodb.yaml -n monitoring

So, after creating the service monitor for the same, it will let Prometheus discover MongoDB and monitor it, which can be seen under the targets section.

Example4: Monitoring Redis Cluster

Redis Helm Chart: https://github.com/helm/charts/tree/master/stable/redis

helm install stable/redis --values values.yaml --namespace redis --name redis-cluster
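As with MySQL, the metrics side-car has to be switched on in values.yaml; for the stable/redis chart this is done with the metrics block (a minimal sketch):

metrics:
  enabled: true   # start the redis-exporter side-car, which serves metrics on port 9121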

This will get Redis ready in our k8s cluster. After deploying it successfully, you will get the metrics on port 9121; this can be verified with kubectl port-forward and checked on your localhost. Now that the metrics are exposed, creating a ServiceMonitor lets Prometheus discover the target, and we will be able to get all the Redis metrics in Prometheus.
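Again, the ServiceMonitor is a small YAML file. The matchLabels below are assumptions based on the labels the stable/redis chart applies to its metrics Service, so confirm them with kubectl get svc -n redis --show-labels before applying:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: redis
  namespace: monitoring
  labels:
    release: prometheus-operator      # so the chart's Prometheus instance selects this ServiceMonitor
spec:
  selector:
    matchLabels:
      app: redis                      # assumed labels from the Redis metrics Service
      release: redis-cluster
  namespaceSelector:
    matchNames:
      - redis
  endpoints:
    - port: metrics                   # named port of the redis-exporter side-car (9121)
      interval: 30s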

vim service_monitor_redis.yaml
kubectl create -f service_monitor_redis.yaml -n monitoring

Deploying this to our cluster in the monitoring namespace will let Prometheus discover Redis and monitor it, which can be seen under the targets section.

Cleaning Up:

Say you made a mistake and want to purge the whole deployment and start over for any cluster, say Redis; you would have to run the following commands:

#!/usr/bin/env bash
helm delete --purge redis-cluster
kubectl delete -f service_monitor_redis.yaml
kubectl delete namespace redis #if something goes wrong with cluster

If you are into DevOps or Fullstack and find terms like ServiceMesh, Infrastructure Automation, Event Loop and Micro-Frontends exciting as much as we do, then drop us a line at join-tech@zolostays.com. We would love to take you out for a Coffee ☕ to discuss the possibility of you working with us.

