In this article, I demonstrate the general steps of creating a custom Prometheus exporter via a simple example, providing a starting point for anyone developing their own exporters.
Prerequisites:
- Kubernetes cluster
- Installed Helm
- Deployed Prometheus and Grafana (I recommend kube-prometheus-stack)
Prometheus Exporter
The exporter in this article generates a random number between 1 and 10 and exposes it to Prometheus via the /metrics endpoint.
Before we explore how to write custom exporters, check whether what you are looking for already exists.
What is an exporter?
An exporter is an abstraction layer between the application and Prometheus: it takes application-formatted metrics and converts them into Prometheus metrics for consumption. Because Prometheus uses an HTTP pull model, the exporter typically provides an endpoint where the Prometheus metrics can be scraped.
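To make the conversion idea concrete, here is a toy sketch of that abstraction layer. The JSON payload shape and the `to_prometheus` helper are invented for illustration; a real exporter would read whatever format the application actually emits:

```python
import json

def to_prometheus(app_json: str) -> str:
    """Convert a hypothetical application metrics payload (JSON)
    into Prometheus text exposition format."""
    metrics = json.loads(app_json)
    lines = []
    for name, value in metrics.items():
        # Expose each numeric field as a gauge sample
        lines.append(f"# TYPE {name} gauge")
        lines.append(f"{name} {float(value)}")
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    print(to_prometheus('{"queue_depth": 12, "workers_busy": 3}'))
```

An HTTP handler serving this string on /metrics would be the minimal pull-model endpoint described above.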
The relationship between Prometheus, the exporter, and the application in a Kubernetes environment can be visualized in the following way:
1. Create an exporter (Prometheus Client)
The Prometheus client is a Python library for writing exporters. Below you can find a simple example main.py, which generates a random number between 1 and 10 every 30 seconds. Additionally, it starts a Prometheus HTTP server on port 8000.
Note: Remember to create a requirements.txt file with the necessary Python dependencies, in this case: prometheus-client
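For reference, the complete requirements.txt here is a single line:

```
prometheus-client
```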
main.py
```python
import random
import time

from prometheus_client import start_http_server, Gauge

# Create a Prometheus gauge metric
random_number_metric = Gauge('random_number', 'Random number generated every 30 sec')

def generate_random_number():
    # Generate a random number between 1 and 10
    return random.randint(1, 10)

if __name__ == '__main__':
    # Start the Prometheus HTTP server on port 8000
    start_http_server(8000)
    while True:
        # Generate a random number
        random_number = generate_random_number()
        print('Random number is:', random_number)
        # Set the value of the Prometheus metric
        random_number_metric.set(random_number)
        # Sleep for 30 sec
        time.sleep(30)
```
In the Gauge constructor, random_number is the name of the metric, while Random number generated every 30 sec is a help string (a description) for the metric. Metric types other than Gauge can be used, depending on what you intend to measure, for example:
- Counter: for metrics that can only be incremented and that reset on process restart
- Gauge: for values that can go up and down
- Summary: tracks the size and number of events
- Histogram: tracks the size and number of events in configurable buckets
and more… (for additional information check LINK)
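As an illustrative sketch of these types in prometheus-client (the `demo_*` metric names are invented; a private CollectorRegistry keeps the demo out of the global default registry):

```python
from prometheus_client import (
    CollectorRegistry, Counter, Gauge, Summary, Histogram,
)

# A private registry keeps this demo out of the global default registry
registry = CollectorRegistry()

requests = Counter("demo_requests", "Requests handled", registry=registry)
temperature = Gauge("demo_temperature_celsius", "Current temperature", registry=registry)
latency = Summary("demo_latency_seconds", "Request latency", registry=registry)
sizes = Histogram("demo_response_bytes", "Response sizes",
                  buckets=(100, 1000, 10000), registry=registry)

requests.inc()            # counters only go up (exposed as demo_requests_total)
temperature.set(-3.5)     # gauges can go up...
temperature.dec(0.5)      # ...and down
latency.observe(0.42)     # summaries track the count and sum of observations
sizes.observe(512)        # histograms count observations into buckets
```

Passing `registry=registry` to `start_http_server` would expose these instead of the default registry's metrics.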
2. Verify the metrics
We can verify that the exporter is working as intended:
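For a scripted check, the exposition text can be fetched and parsed with the standard library alone. A minimal sketch, where `read_metric` and `fetch_metric` are illustrative helpers (not part of prometheus-client):

```python
from urllib.request import urlopen

def read_metric(text: str, name: str):
    """Return the value of an unlabelled metric from Prometheus exposition text."""
    for line in text.splitlines():
        if line.startswith("#"):
            continue  # skip HELP/TYPE comment lines
        parts = line.split()
        if len(parts) >= 2 and parts[0] == name:
            return float(parts[1])
    return None

def fetch_metric(url: str, name: str):
    """Fetch a /metrics endpoint and extract a single metric value."""
    return read_metric(urlopen(url).read().decode(), name)

if __name__ == "__main__":
    sample = ("# HELP random_number Random number generated every 30 sec\n"
              "# TYPE random_number gauge\n"
              "random_number 7.0\n")
    print(read_metric(sample, "random_number"))  # 7.0
    # Against the running exporter:
    # print(fetch_metric("http://127.0.0.1:8000/metrics", "random_number"))
```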
```shell
curl -iL 127.0.0.1:8000/metrics
```
3. Containerizing an application
Now it is time to dockerize our application. We will start by creating a Dockerfile for it.
Dockerfile
```dockerfile
FROM python:3.11-alpine

# This prevents Python from writing out pyc files
ENV PYTHONDONTWRITEBYTECODE=1
# This keeps Python from buffering stdin/stdout
ENV PYTHONUNBUFFERED=1

WORKDIR /app

COPY requirements.txt /app/requirements.txt
RUN pip install -r requirements.txt

COPY main.py /app/main.py

EXPOSE 8000
ENTRYPOINT ["python3", "main.py"]
```
4. Build the image and push it to the remote repository.
This step will depend on your personal preference. I have used AWS ECR for that purpose.
We have written the exporter itself using prometheus-client, dockerized the application, and pushed it to the remote repository.
What is left is deploying our application in Kubernetes; we will handle that in the steps below.
5. Create manifest files (Deployment, Service, and ServiceMonitor)
We first create deployment.yaml and service.yaml files.
deployment.yaml
```yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: random-num
spec:
  replicas: 1
  # match the Pods controlled by the Deployment
  selector:
    matchLabels:
      app: random-num
  # Pod template used by the Deployment to create new Pods
  template:
    metadata:
      labels:
        app: random-num
    spec:
      containers:
        - name: random-num
          image: <IMAGE_URL>
          imagePullPolicy: Always
          ports:
            - containerPort: 8000
```
service.yaml
We set the Service port to a number of our choosing, for example 9102: the port on which the Service listens for incoming traffic inside the cluster. When traffic comes to port 9102 on the Service, it is forwarded to port 8000 on the Pods that match the selector.
For development, we will leave the Service type as ClusterIP (the default); for production, you would want to change it, for example to LoadBalancer. You can also use the NodePort Service type, which doesn't require port forwarding.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: random-num
  labels:
    app: random-num
spec:
  selector:
    app: random-num
  ports:
    - name: metrics
      # Port on which the Service listens for incoming traffic within the cluster
      port: 9102
      # Port on the Pods to which the traffic is forwarded
      targetPort: 8000
      protocol: TCP
```
After defining the above elements, we will look into the key component that links Prometheus to our exporter, allowing it to scrape the appropriate metrics: the ServiceMonitor.
ServiceMonitor
A ServiceMonitor in Prometheus is a custom resource provided by the Prometheus Operator, which is used to automatically discover and monitor services running on Kubernetes. It allows Prometheus to dynamically discover endpoints of your applications and scrape their metrics.
ServiceMonitor acts as a bridge between your Kubernetes services/Pods and Prometheus, enabling automatic discovery and monitoring of your applications’ metrics without the need for manual configuration.
I assume you have already installed Prometheus and Grafana in your K8s cluster. If you haven't, I recommend kube-prometheus-stack, which is the easiest and fastest way to deploy the monitoring stack.
We check the existing service-monitors:
```shell
# Define alias for convenience
alias k="kubectl"
# Display existing service monitors
k get servicemonitor
```
By default, when installing Prometheus with kube-prometheus-stack, you get node-exporter, which scrapes information about your cluster's nodes and exposes it to Prometheus/Grafana. Let's view its manifest file.
```shell
k get servicemonitor monitoring-prometheus-node-exporter -o yaml
```
Prometheus discovers our ServiceMonitor through the release label. In our configuration the label value is monitoring; this is crucial to check, as we will need this release label value in our own ServiceMonitor!
servicemonitor.yaml
Note: The label release is set to monitoring, though you need to check it for yourself as it can be different in your setup. The scrape interval for metrics from the service monitored by the ServiceMonitor is 10 seconds.
```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: random-num-servicemonitor
  namespace: prometheus
  labels:
    release: monitoring
spec:
  selector:
    matchLabels:
      app: random-num
  endpoints:
    - port: metrics
      interval: 10s
```
Now that we have everything in place, we can deploy:
```shell
k apply -f service.yaml
k apply -f servicemonitor.yaml
k apply -f deployment.yaml
```
To enable external access to the /metrics endpoint (the port on the container), we will perform port forwarding.
```shell
k port-forward service/<svc-name> <host-port>
```
In our case:
```shell
k port-forward service/random-num 9102
```
Accessing the content of the /metrics endpoint:
```shell
# 127.0.0.1 or localhost - "loopback" address
curl -iL 127.0.0.1:9102/metrics
```
Visualizing the scraped metrics in Prometheus and Grafana
To access the Prometheus URL, follow these steps:
```shell
# retrieve the node IP associated with
# prometheus-monitoring-kube-prometheus-prometheus-0, e.g. 10.137.XXX.XXX
k get pods -o wide
# obtain the port mapping, e.g. 9090:30000/TCP
k get svc
```
We then access Prometheus using the URL: http://10.137.XXX.XXX:30000
Recall that the name of the metric is random_number.
Great! We can see that Prometheus is successfully discovering the metrics we wish to scrape. Now let's proceed to Grafana to visualize the same results.
We can access Grafana similarly to how we accessed Prometheus, i.e. using the node IP and port. After logging in, we can create a new dashboard, choosing Prometheus as the data source.
After selecting random_number as our metric, we click "Run queries". We can then adjust the visualization to fit our needs.
Below I present a table view.
To validate the match between the Grafana results and the actual values, we can display the logs of our random-num pod:
```shell
k logs -f <random-num-pod-name>
```
The numbers should match.
Conclusion
That's it for today. I hope this short guide helps you create your own custom exporters and lays a foundation for some of the more complex ones.
You made it!
If you like the style of the article and found the information useful, please consider clicking the 👏 emoji.
Got any questions? Want to connect? — Feel free to add me on LinkedIn
Good luck!
