Load test with Locust

Linh Ling
Jan 25, 2019 · 4 min read


What is Locust?

From their official documentation page:

Locust is an easy-to-use, distributed, user load testing tool. It is intended for load-testing web sites (or other systems) and figuring out how many concurrent users a system can handle.

From my point of view, Locust is an open-source load testing tool and one of the many alternatives to JMeter. It lets you write tests in Python, and its implementation is based on tasks.

Writing a locust file

To write a test, you need to define a locust file, which is just a regular Python file. There are more examples in the docs.
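A minimal locustfile along these lines might look like the sketch below (this uses the pre-1.0 HttpLocust/TaskSet API described in this post; the endpoint is a placeholder assumption):

from locust import HttpLocust, TaskSet, task

class UserBehavior(TaskSet):
    # A TaskSet groups the tasks that one simulated user can perform.
    @task
    def index(self):
        self.client.get("/")  # placeholder endpoint

class WebsiteUser(HttpLocust):
    # HttpLocust is the Locust subclass that ships with an HTTP client.
    # One instance of this class represents one simulated user.
    task_set = UserBehavior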

At a minimum, you simply need to define 2 classes:

  • A subclass of Locust
  • A subclass of TaskSet

The Locust class represents one simulated user that will perform the tasks defined in the task set.

The TaskSet class mimics the behavior of a user. A TaskSet is a collection of tasks, and each task performs a particular API operation (GET, POST, ...).

For load testing you might want one of the requests to execute more often than the others; Locust allows you to do this by defining a weight for each task.
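As a sketch, the weight is passed to the task decorator (the endpoints below are placeholder assumptions):

class UserBehavior(TaskSet):
    @task(2)  # weight 2: picked twice as often
    def task_1(self):
        self.client.get("/items")  # placeholder endpoint

    @task(1)  # weight 1
    def task_2(self):
        self.client.post("/items", json={"name": "test"})  # placeholder endpoint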

In the example above, task_1 will run twice as often as task_2.

Once the test is running, tasks are picked at random. If you want to execute the tasks in order, you can use TaskSequence:
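A sketch of an ordered task set (again with placeholder endpoints; in the pre-1.0 API, seq_task sets the position in the sequence, and combining it with task(10) repeats that step 10 times):

from locust import HttpLocust, TaskSequence, task, seq_task

class OrderedBehavior(TaskSequence):
    @seq_task(1)
    @task(10)  # this step is repeated 10 times before moving on
    def task_3(self):
        self.client.get("/status")  # placeholder endpoint

    @seq_task(2)
    def task_2(self):
        self.client.get("/items")  # placeholder endpoint

    @seq_task(3)
    def task_1(self):
        self.client.get("/")  # placeholder endpoint

class WebsiteUser(HttpLocust):
    task_set = OrderedBehavior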

In the example above, Locust will execute the tasks in this order: task_3 (10 times), then task_2, then task_1.

You can also define other test parameters in the Locust class, including the host and the wait time between task executions (per simulated user). The wait time is chosen at random between min_wait and max_wait (in milliseconds).
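For example (the host and timings below are placeholder values):

class WebsiteUser(HttpLocust):
    task_set = UserBehavior
    host = "https://example.com"  # placeholder host to test against
    min_wait = 1000  # minimum wait between tasks, in milliseconds
    max_wait = 5000  # maximum wait between tasks, in milliseconds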

Once the test starts, Locust will start spawning users, and each user does the following:

  1. Pick one of the tasks from your locust file
  2. Execute the task
  3. Pick a random wait time between min_wait and max_wait and wait for that amount of time
  4. Repeat from step 1
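In rough pseudocode, that per-user loop looks something like this (a conceptual sketch only, not Locust's actual implementation; the tasks and timings are placeholders):

import random
import time

min_wait, max_wait = 1000, 5000    # milliseconds, as set in the Locust class

def task_1():
    pass  # placeholder for a real task

def task_2():
    pass  # placeholder for a real task

tasks = [task_1, task_1, task_2]   # weights expressed by repetition (task_1 has weight 2)

while True:
    task = random.choice(tasks)    # 1. pick one of the tasks
    task()                         # 2. execute the task
    wait_ms = random.uniform(min_wait, max_wait)  # 3. pick a random wait time
    time.sleep(wait_ms / 1000.0)   #    ...and wait for that amount of time
    # 4. repeat from step 1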

Let’s say I have a locustfile.py and a configuration like this:

Number of users to simulate: 10 / Hatch rate: 1

When you start the load test with this configuration, Locust will spawn 1 new user every second until it reaches the total number of users to simulate (10 in this case).

Running a locust file

Simply run this command to start Locust. You can also pass in the host you want to test if you didn’t define it in the Locust class:

$ locust -f locustfile.py
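To pass the host on the command line, use the --host flag (the URL below is a placeholder):

$ locust -f locustfile.py --host=https://example.com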

Now open your browser and go to localhost:8089, then enter the number of users and the hatch rate to start the load test.

After that, you will be able to see the results of the ongoing test.

Once you have reached the desired number of requests, you can stop Locust from the web interface. It also allows you to reset the stats or run a new test.

In addition to the Statistics tab, you can look at the charts (2nd tab) or any failures/exceptions received (3rd and 4th tabs), as well as download the test results as CSV files (the last tab).

Requests per second (throughput) indicates the number of transactions per second your application can handle, and response time is the amount of time from the moment a user sends a request until your application indicates that the request has completed.

In the graph above you can see the correlation between response time and throughput. Overall throughput tends to decrease as the response time of an average transaction increases: after sending the 1st request, a simulated user has to wait until that request has completed before sending the 2nd one. This makes it tricky to use Locust if you want a fixed throughput.

Overall, Locust is a great tool for quickly putting applications under load. If you want to find out more about Locust, check out its official documentation.

