Locust.io experiments — extending the UI

Karol Brejna
Locust.io experiments
5 min read · Feb 24, 2018

Locust provides a small HTTP API and a web UI for controlling test execution and browsing the results. One can use the API, for example, to automatically trigger stress tests as part of a build process.

What if we want to extend the UI by adding customized result views or new actions?

According to the docs it should be fairly easy.

The goal

Now that we are able to run Locust in Kubernetes (K8s), we can easily rescale the cluster.

This simple command will do:

$ kubectl scale --replicas=3 deployment/locust-slave

Let’s be nice to the users, though: instead of forcing them to use command-line tools (a terminal connected to the K8s cluster), let’s prepare a graphical user interface for this action.

Components of the solution

What we’ll need (the minimal version) is:

  • code that will hold the logic (communicating with K8s and performing the rescale)
  • an HTTP endpoint that exposes the functionality
  • a web form where the new number of slaves can be entered

The logic

In our code, we want to do the same thing that the kubectl scale command does. Let’s sniff the traffic that kubectl generates to see which APIs are used.

It looks like it GETs the current deployment definition, updates the replica count and PUTs the modified deployment back. (OK, I cheated. Actually, I used the --v=8 switch, which shows not only the HTTP operations but also the request/response bodies. Presenting that output would overflow the boundaries of this article…)

Digging a bit deeper, it turns out that we can be even smarter and make the change in a single PATCH request. Take a look at this pseudo-code:
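(A minimal sketch of the idea, not the exact code from the repository; the API path, namespace, deployment name and token handling below are placeholders.)

import requests

# In-cluster API server address and the deployment we want to rescale
# (the namespace and deployment name are placeholders).
API_SERVER = "https://kubernetes.default.svc"
DEPLOYMENT_URL = API_SERVER + "/apis/apps/v1/namespaces/default/deployments/locust-slave"

def rescale(replicas, token):
    # A single strategic-merge-patch request that only touches spec.replicas.
    headers = {
        "Authorization": "Bearer " + token,
        "Content-Type": "application/strategic-merge-patch+json",
    }
    response = requests.patch(
        DEPLOYMENT_URL,
        json={"spec": {"replicas": replicas}},
        headers=headers,
        verify="/var/run/secrets/kubernetes.io/serviceaccount/ca.crt",
    )
    return response.status_code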

The code performing the required action is collected in kubernetes.py. (As usual, you’ll find all the sources in the accompanying GitHub repo: https://github.com/karol-brejna-i/locust-experiments).

One last thing to mention here is what needs to be done in order to access the K8s API from a pod. I chose to obtain an authentication token and do the HTTP requests myself. According to the official documentation, the preferred way is either using kubectl proxy or a programmatic client (see this article for details). For some real heavy lifting I would probably go for the client, but I decided to stay plain for the sake of simplicity.
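In practice, the token-based approach boils down to reading the service account credentials that Kubernetes mounts into every pod at a standard path (the pod’s service account also needs permission to patch deployments). A minimal sketch:

# Standard in-pod locations of the service account credentials.
TOKEN_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/token"
CA_CERT_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"

def read_token():
    # The bearer token used in the Authorization header of the API calls.
    with open(TOKEN_PATH) as f:
        return f.read().strip()

# e.g. rescale(4, read_token()) using the function sketched above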

Web endpoint

Locust makes use of Flask to deliver its web user interface (see: https://github.com/locustio/locust/blob/master/locust/web.py).

Adding your own endpoint is just a matter of a proper `route` annotation.
In this example, I created two endpoints: one responsible for displaying the rescale form and one for performing the action.

See the following pseudo-code:
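(Again, a sketch only. It assumes Locust’s Flask instance is exposed as locust.web.app, as it was in the Locust version used here, and that the rescale logic from kubernetes.py is importable as a local module; the helper name is illustrative.)

import os
from flask import request, redirect
from locust import web

import kubernetes  # the local module holding the rescale logic

WORK_DIR = os.path.dirname(__file__)

@web.app.route("/rescale-form")
def show_rescale_form():
    # Serve the stripped-down HTML form described in the next section.
    with open(os.path.join(WORK_DIR, "rescale-form.html")) as f:
        return f.read()

@web.app.route("/rescale", methods=["POST"])
def do_rescale():
    # Read the desired number of slaves from the form, ask K8s to rescale,
    # then send the user back to the main Locust page.
    slave_count = int(request.form["slave_count"])
    kubernetes.rescale(slave_count)  # helper name is illustrative
    return redirect("/", code=302)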

The actual code resides in the enclosed locustfile.

Web form

Being a Flask app, the UI can utilize Jinja2 to generate its pages. For this experiment I trimmed down the template used by Locust itself, so my web form looks at least a bit similar to the rest of the UI.

The key part of rescale-form.html (found under locust-scripts/ in the repo) is a plain HTML form for submitting the desired number of slaves.
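Stripped of the styling, it boils down to something like this (the field name and action URL are illustrative; they just have to match what the rescale endpoint expects):

<form action="./rescale" method="POST">
    <label for="slave_count">Number of slaves</label>
    <input type="number" id="slave_count" name="slave_count" min="1" value="2"/>
    <button type="submit">Rescale</button>
</form>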

And these are all the required parts.

Deployment

This work depends on the previous experiment, whose point was to create Kubernetes descriptors for Locust. See https://github.com/karol-brejna-i/locust-experiments/tree/extend-web-ui/kubernetes for the scripts and information on how to use them.

The basic K8s mechanics stay intact here, the difference being the config map that holds the updated Locust scripts.

In short, creating a Locust cluster in K8s can be done with a few kubectl commands (provided Kubernetes itself is up and running).

I am assuming you have done that already (while running the previous experiments) and the Locust nodes are running.

So now let’s go to the extend-web-ui folder (it holds the sources for the current exercise) and update the config map:

$ cd ../extend-web-ui
$ kubectl replace -f kubernetes/scripts-cm.yaml
configmap "scripts-cm" replaced
$ kubectl delete pods -l 'role in (locust-master, locust-slave)'

The last command (restarting the pods) is there to make sure the new configuration is picked up by Locust.

In action

So, let’s open the rescale form in the browser at http://192.168.1.123/locust/rescale-form (my K8s cluster’s IP is 192.168.1.123):

Rescale form

Remember when I claimed I trimmed down the original template? Well, actually, I crippled it greatly, as you can see for yourself. I didn’t want to focus on the UI part, just to prove the point, so I made a minimal effort there.

Still, this simple form lets us submit a new number of workers. My cluster has two workers (that’s the default provided by my K8s scripts). Let’s go for 4 slaves now. After submitting, the proper request is sent to Kubernetes and the user gets redirected to the main page.

First upscale

In the upper right corner you can see that after a while the number of slaves has been increased.

We are doing so well that we could try something braver: let’s do a downscale (to 1 worker) and then another upscale (to 3) straight away.

And here is a little surprise: instead of the expected 3 workers at the end, I see 6 of them in the UI.

Checking how many workers are really there (each Locust worker should correspond to one K8s pod):

$ kubectl get pods
NAME READY STATUS RESTARTS AGE
locust-master-5d8fbbc6-46k6n 1/1 Running 0 5m
locust-slave-54c84df478-nh7wn 1/1 Running 0 31s
locust-slave-54c84df478-rt8fn 1/1 Running 0 31s
locust-slave-54c84df478-v65ng 1/1 Running 0 5m

There are three of them, as there should be.

Inspecting the master’s logs (kubectl logs -l='role=locust-master' | grep "^\[") confirms that the communication with K8s was effective (Kubernetes did what we asked it to). The following lines seem to bring us closer to the diagnosis:

These are the only traces of communication between the workers (clients) and the master. My reasoning is that Locust workers are able to report for duty, but they are unable to inform the master when they cannot serve the load anymore. Nor is there any keep-alive or heartbeat mechanism in place. I’ll leave inspection of the sources for later (Locust’s internal mechanics are not the core concern of this article; scaling is just an exemplary new action).

In the meantime…

Conclusions

I think I managed to prove that extending the Locust UI is quite easy, both when it comes to introducing new endpoints and to creating new web pages.

Aside from the problem mentioned earlier, scaling Locust with Kubernetes gives great flexibility for handling different/changing loads. It could be even cooler to use it in conjunction with K8s’ autoscaling features. Maybe this should be a subject for some future experiments…

As usual, the sources mentioned in this article are stored in https://github.com/karol-brejna-i/locust-experiments.
