Feed Prometheus with Locust: performance tests as a metrics source

Adrian Gonciarz
Published in The Startup · Jun 19, 2020

Introduction

Prometheus and Grafana are widely used as a monitoring solution on Kubernetes clusters. Load testing, however, often lives far away from this already-deployed tooling: tests are run on separate machines, and their results never reach the monitoring stack as a reporting system.

The idea presented here uses Locust load test results as a Prometheus metrics source, so that load test results can be graphed against cluster resource consumption in Grafana. The whole tutorial is based on Docker Compose and can easily be run on any machine with Docker.

Prerequisites

I assume the reader has a basic understanding of:

  • Docker
  • Docker Compose
  • the concept of load testing
  • Prometheus and Grafana

Also, as a prerequisite to running the code along with this tutorial, Docker is required (you can get it here).

If not stated explicitly, all files are created in the root directory of the tutorial. I use docker-compose up many times to spin up the environment. Although it is not strictly necessary to run docker-compose down in between, if you stumble across issues it might be good to take the old composition down (via docker-compose down). If the problems persist, docker system prune might come in handy.

Preparing API service

Let’s start by creating a simple API service emulating a production app we can experiment with. For this, I will use JSON Server, a simple mock API server I often use for education and demos.

First, create a directory api and create a file db.json inside with the following content.

{
  "posts": [
    { "id": 1, "title": "Test Title 1", "author": "alysson" },
    { "id": 2, "title": "Test Title 2", "author": "john" },
    { "id": 3, "title": "Test Title 3", "author": "mike" },
    { "id": 4, "title": "Test Title 4", "author": "mary" },
    { "id": 5, "title": "Test Title 5", "author": "kate" },
    { "id": 6, "title": "Test Title 6", "author": "andy" },
    { "id": 7, "title": "Test Title 7", "author": "wendy" },
    { "id": 8, "title": "Test Title 8", "author": "sophie" }
  ],
  "comments": [
    { "id": 1, "body": "some comment 1", "postId": 1 },
    { "id": 2, "body": "some comment 2", "postId": 2 },
    { "id": 3, "body": "some comment 3", "postId": 1 },
    { "id": 4, "body": "some comment 4", "postId": 3 },
    { "id": 5, "body": "some comment 5", "postId": 4 }
  ],
  "profile": { "name": "typicode" }
}

It will serve as our test database of API resources. It contains 8 posts and 5 comments, which will be available under the /posts and /comments endpoints of the API.

Next, we create a Dockerfile inside the same api folder (this directory structure will be important for the build context) with the following content:

FROM node:latest
WORKDIR /mnt/api
RUN npm install -g json-server
CMD json-server --watch /mnt/api/db.json --port 3333 --host 0.0.0.0

This is a very simple Dockerfile with the following layers:

  1. the NodeJS base image
  2. the working directory /mnt/api
  3. a global install of the json-server NPM package
  4. a command to start the API server on port 3333 on container creation, using the data file placed under /mnt/api/db.json

In our project root directory, let’s start forging docker-compose.yml, beginning with:

version: '3.0'
services:
  api:
    image: api:latest
    container_name: api
    build:
      context: api/
      dockerfile: Dockerfile
    ports:
      - "3333:3333"
    volumes:
      - ./api/db.json:/mnt/api/db.json

Great! Now we can run docker-compose build to create the API image. After a successful build, run docker-compose up, then open a browser and visit http://localhost:3333 to see the API running. The available endpoints will be /posts, /comments, and /profile.

The main page of API running on localhost:3333
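If you prefer to verify from code, here is a quick sanity check of those endpoints; a minimal sketch using the Python requests library (my choice for illustration, not part of the tutorial stack; curl works just as well):

import requests

# Hit each endpoint of the mock API and report status and item count
# (assumes the composition is up and the API listens on localhost:3333)
for endpoint in ("/posts", "/comments", "/profile"):
    resp = requests.get(f"http://localhost:3333{endpoint}")
    body = resp.json()
    count = len(body) if isinstance(body, list) else 1
    print(f"{endpoint}: HTTP {resp.status_code}, {count} item(s)")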

Adding API Container metrics

The idea for simple resource monitoring of the API container comes from the official Prometheus cAdvisor tutorial. It’s as simple as adding another service to docker-compose.yml:

cadvisor:
  image: google/cadvisor:latest
  container_name: cadvisor
  ports:
    - 8080:8080
  volumes:
    - /:/rootfs:ro
    - /var/run:/var/run:rw
    - /sys:/sys:ro
    - /var/lib/docker/:/var/lib/docker:ro
  depends_on:
    - api

When we run docker-compose up again, we should be able to find some data about our API container under http://localhost:8080/docker/api

Container API info in cAdvisor on localhost:8080/docker/api

All the available metrics (which we will plug into Prometheus) can be found under http://localhost:8080/metrics
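If you want to preview exactly what Prometheus will scrape, here is a small sketch (again Python requests, purely for illustration) that filters the exposition output for the memory metric we will graph later:

import requests

# Fetch cAdvisor's Prometheus exposition output and print only the
# container_memory_usage_bytes samples (assumes cAdvisor is up on 8080)
metrics = requests.get("http://localhost:8080/metrics").text
for line in metrics.splitlines():
    if line.startswith("container_memory_usage_bytes"):
        print(line)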

Running Prometheus

What we need now is Prometheus as a metrics consumer. Thus, we create a Prometheus provisioning config file in our root directory, called prometheus.yml, with the following content:

scrape_configs:
  - job_name: prometheus_scrapper
    scrape_interval: 5s
    static_configs:
      - targets:
          - cadvisor:8080

Now we add another piece into our docker-compose.yml

prometheus:
  image: prom/prometheus:latest
  container_name: prometheus
  ports:
    - 9090:9090
  command:
    - --config.file=/etc/prometheus/prometheus.yml
  volumes:
    - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro

Once we run docker-compose up and wait for the composition to start, we can visit Prometheus under http://localhost:9090

Prometheus server on localhost:9090

cAdvisor should be shown as a target at http://localhost:9090/targets. Having those connected means we can now graph metrics such as:

  • rate(container_cpu_usage_seconds_total{name="api"}[1m])
  • container_memory_usage_bytes{name="api"}

Once the API handles some traffic (our load tests will soon generate plenty), we should see MEM or CPU metrics being populated in the Prometheus Graph view.

Prometheus graph for API mem usage
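The same expressions can also be evaluated programmatically through Prometheus’ standard HTTP query API; a sketch (the /api/v1/query endpoint is part of Prometheus itself, the rest is my illustration):

import requests

# Evaluate the CPU-rate expression via Prometheus' query API
query = 'rate(container_cpu_usage_seconds_total{name="api"}[1m])'
resp = requests.get("http://localhost:9090/api/v1/query", params={"query": query})
for series in resp.json()["data"]["result"]:
    timestamp, value = series["value"]
    print(series["metric"].get("name"), value)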

Data visualization — Grafana

The only thing left is to set up a local Grafana. That requires two configuration files in the root directory: grafana.ini (copy the official default file under this name) and a file datasource.yaml like this:

apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090

Then add Grafana to docker-compose.yml

grafana:
  image: grafana/grafana
  container_name: grafana
  ports:
    - "3000:3000"
  volumes:
    - ./grafana.ini:/etc/grafana/grafana.ini
    - ./datasource.yaml:/etc/grafana/provisioning/datasources/datasource.yaml
  depends_on:
    - prometheus

and, as previously, restart the composition. Now you have Grafana running with Prometheus as a data source under http://localhost:3000 (the default login credentials are our favorite admin/admin). We can create graphs for container resource usage metrics, using Prometheus as the data source.

API memory usage from Prometheus in Grafana

Load Tests and metrics

We’d like to be able to load test our API using the Python library Locust.io. To do this, we create a new directory load_tests and put a basic locustfile.py inside:

from locust import HttpUser, task, between


class QueryAPI(HttpUser):
    wait_time = between(1, 2)

    @task
    def get_posts(self):
        self.client.get("/posts")

    @task
    def get_comments(self):
        self.client.get("/comments")

The file describes two tasks (HTTP requests) our load test user will be executing:

  • GET /posts
  • GET /comments
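By default, Locust picks tasks with equal probability. If you want an uneven traffic mix, tasks can be weighted; a possible variant of the file above (the weights are arbitrary numbers chosen for illustration):

from locust import HttpUser, task, between


class WeightedQueryAPI(HttpUser):
    wait_time = between(1, 2)

    @task(3)  # picked roughly three times as often as get_comments
    def get_posts(self):
        self.client.get("/posts")

    @task(1)
    def get_comments(self):
        self.client.get("/comments")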

Since Locust provides an official Docker image, we’ll use it according to the official tutorial, mounting the created file into the container. The Locust web UI will use port 8089. Update docker-compose.yml with the new service:

locust:
  image: locustio/locust
  ports:
    - "8089:8089"
  volumes:
    - ./load_tests/:/mnt/locust
  command: -f /mnt/locust/locustfile.py
  depends_on:
    - api

After we run docker-compose up we should be able to open http://localhost:8089 to see Locust UI waiting for orders.

Locust running on localhost:8089

We can start a small test by setting:

  • 5 concurrent users
  • a hatch rate of 1 user/s
  • host http://api:3333 (the internal Docker Compose alias under which the locust container sees our API)

and see the test running without errors

Locust test in progress on localhost:8089

You can then stop the tests with the nice red “stop” button in the top bar.
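If you’d rather skip the UI (e.g., in CI), Locust can also run headless; a possible tweak of the compose service with the same parameters baked into the command (a sketch; double-check the flags against your Locust version):

locust:
  image: locustio/locust
  volumes:
    - ./load_tests/:/mnt/locust
  command: -f /mnt/locust/locustfile.py --headless -u 5 -r 1 --host http://api:3333
  depends_on:
    - api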

The missing link — Locust Metrics Exporter

Some smart people of the internet have come up with the Locust Metrics Exporter, which exposes Locust test results as Prometheus metrics. The project can be found here. So let’s add it to our docker-compose.yml:

locust-metrics-exporter:
  image: containersol/locust_exporter
  ports:
    - "9646:9646"
  environment:
    - LOCUST_EXPORTER_URI=http://locust:8089
  depends_on:
    - locust

And, as always, run docker-compose up. We should be able to see the metrics under http://localhost:9646/metrics

Locust results exposed as Prometheus metrics on localhost:9646
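To see at a glance which metric families the exporter exposes, here is a short sketch (Python requests again, for illustration only) that lists the distinct locust_* metric names:

import requests

# Collect the distinct locust_* metric names from the exporter output
metrics = requests.get("http://localhost:9646/metrics").text
names = {
    line.split("{")[0].split(" ")[0]
    for line in metrics.splitlines()
    if line.startswith("locust_")
}
print("\n".join(sorted(names)))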

The next thing is to modify prometheus.yml, adding the Locust Metrics Exporter to the targets scraped by Prometheus:

scrape_configs:
  - job_name: prometheus_scrapper
    scrape_interval: 5s
    static_configs:
      - targets:
          - cadvisor:8080
          - locust-metrics-exporter:9646

Our Locust Metrics Exporter should be shown as the second metrics source under http://localhost:9090/targets

Now, when we run load tests via http://localhost:8089, we will be able to graph metrics related to the Locust tests, for example:

  • locust_request_current_rps
  • locust_request_avg_response_time

Prometheus graph for average response time measured by Locust

The final graphs

Since our Prometheus now scrapes data for both API resources and load test results, we can put them together as panels on a dashboard showing resource consumption vs. load test metrics and see how those two worlds interact with each other. Isn’t it informative?

Final Grafana dashboard
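If you want such a dashboard to survive container restarts, Grafana’s file-based provisioning can load it automatically, just like we provisioned the data source. A sketch of a dashboards.yaml mounted under /etc/grafana/provisioning/dashboards/ (the folder path below is my choice; exported dashboard JSON files would then be mounted into /var/lib/grafana/dashboards):

apiVersion: 1
providers:
  - name: default
    folder: ''
    type: file
    options:
      path: /var/lib/grafana/dashboards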

Summary

We went from a simple API service running as a Docker container, to reading its metrics with cAdvisor, scraping them into Prometheus, and visualizing them with Grafana. Then we ran load tests in the same composition and finally added the missing link: exporting Locust results as another source of metrics in Prometheus. This is an uncommon yet super-simple solution that lets us easily visualize how our application performs both on the user end and resource-wise. Having the two in sync, graphed on the same timeline in Grafana, makes quick analysis of system behavior a no-brainer.

The final docker-compose.yml looks like this:

version: '3.0'
services:
  api:
    image: api:latest
    container_name: api
    build:
      context: api/
      dockerfile: Dockerfile
    ports:
      - "3333:3333"
    volumes:
      - ./api/db.json:/mnt/api/db.json
  cadvisor:
    image: google/cadvisor:latest
    container_name: cadvisor
    ports:
      - 8080:8080
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
    depends_on:
      - api
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    ports:
      - 9090:9090
    command:
      - --config.file=/etc/prometheus/prometheus.yml
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
  grafana:
    image: grafana/grafana
    container_name: grafana
    ports:
      - "3000:3000"
    volumes:
      - ./grafana.ini:/etc/grafana/grafana.ini
      - ./datasource.yaml:/etc/grafana/provisioning/datasources/datasource.yaml
    depends_on:
      - prometheus
  locust:
    image: locustio/locust
    ports:
      - "8089:8089"
    volumes:
      - ./load_tests/:/mnt/locust
    command: -f /mnt/locust/locustfile.py
    depends_on:
      - api
  locust-metrics-exporter:
    image: containersol/locust_exporter
    ports:
      - "9646:9646"
    environment:
      - LOCUST_EXPORTER_URI=http://locust:8089
    depends_on:
      - locust

If you want, you can check out my example repository, available here.

Further steps

This example only covers the minimum ground and, although it has great demonstrative value, it is not a 1:1 match with production systems. Docker Compose is a good approach for local machines, but in the real world we use Kubernetes (K8s) clusters. To put this solution into a real-world context, one should prepare K8s deployments, place all the pieces in the appropriate namespaces, and connect them into a working solution. A follow-up part of this tutorial should cover that ground!

Having this solution working gives us the ability to easily track the performance of our systems using the tools we already have. Or, for example, to implement the idea of deployment quality gates (take a look at the Keptn project), which is yet another fascinating area that needs some of my attention.

I hope you liked what you read, thanks!
