If you have a server, you might want to know roughly what volume of requests it can deal with before it begins to fall over or fail. Falling over might take the form of refusing connections, or taking a long time to respond to some or all requests.
To get some insight into how your server performs under stress, you can use benchmarking and load testing techniques.
What’s the difference between load testing and benchmarking?
Load testing is about putting your application under heavy load. If we're talking about a server, that probably means firing lots of requests at it. In general, you're interested in what happens across the system as a whole: you might not be looking for anything specific, but you want to see when unusual things start to happen, such as slowdowns, bad responses, or other strange behaviour.
Benchmarking, to me, suggests more of a specific goal. You want to see how a certain factor (or factors) holds up under a specific amount of a certain stress. It might not be an excessive amount of stress, just a regular amount. For example, if you know you can expect 300 requests per second to hit your server, you run a benchmark test to see how your server's response times hold up under those conditions. Or you might want to run a benchmark at twice or 10x your normal load. But the aim is a specific result — to be able to say "with this traffic, 95% of requests are resolved in n milliseconds".
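As a worked example of that kind of percentile figure, here's one way to compute a p95 from a plain list of response times. The file and the numbers here are made up purely for illustration; in practice the latencies would come from your load testing tool.

```shell
# Generate some sample latencies in ms, one per line (10, 20, ... 200);
# in reality these would be response times collected from a test run.
seq 10 10 200 > latencies.txt

# Sort the times, then pick the value at the 95th-percentile position:
# 95% of requests completed in this time or less.
sort -n latencies.txt | awk '{ t[NR] = $1 }
  END { idx = int(NR * 0.95); if (idx < 1) idx = 1
        printf "p95: %s ms\n", t[idx] }'
# prints: p95: 190 ms
```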
In this article I will show you how you can use two tools, Apache Benchmark and JMeter, for either load testing or benchmarking.
To test your server, first ensure your server is running. Make sure you run it locally or as a separate development instance so that you don't hammer your production server and potentially bring it down for your actual clients! I would recommend running a separate development instance and making sure the underlying hardware is as similar as possible to the production hardware; otherwise you won't have a fair test.
It’s likely your server will be interacting with external services, such as a database, and these services will affect how your server performs. To get a fair test, you should replicate these external services as best you can, e.g. create a test database, a test queue to push into, a test cache. If you simply remove calls to these separate services you’re not getting a realistic idea of how your server will behave in production.
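One low-effort way to stand up a disposable database for this, assuming you have Docker available, is to run one in a throwaway container. The container name, port, password and `DATABASE_URL` variable below are all just illustrative; adapt them to however your own server is configured.

```shell
# Start a throwaway Postgres on a non-default port so it can't collide
# with (or be mistaken for) your real database.
docker run -d --name loadtest-db \
  -e POSTGRES_PASSWORD=test \
  -p 5433:5432 \
  postgres:15

# Point the server under test at it, e.g. via an environment variable
# your app reads (DATABASE_URL is an assumption about your setup):
# DATABASE_URL=postgres://postgres:test@localhost:5433/postgres ./your-server

# Tear it down after the test run:
# docker rm -f loadtest-db
```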
With your server running, it’s time to begin load testing and benchmarking.
There are two tools which I would recommend using. The simplest is Apache Benchmark.
With Apache Benchmark
Apache Benchmark is a simple-to-use tool that helps you understand how an HTTP server copes with large volumes of traffic. It comes pre-installed on macOS. On Linux, Apache Benchmark comes along with the httpd package (on Debian-based distributions it's in apache2-utils), which can be installed with whatever package management system your distribution uses.
To read more about Apache Benchmark and to see the sorts of things it can do, type
$ man ab
We can ask AB to fire a specific number of requests at a specific endpoint. We can also control other parameters, such as the concurrency (how many requests it will fire at the same time).
Fire 500 requests, with a maximum concurrency of 10 at a time:
$ ab -c 10 -n 500 -r localhost:8000/api/books
The -r flag means don't exit if it gets an error. By default, AB will perform a GET request, so if you need to test another type of request you will have to pass it the specific option to do so, and also set the Content-Type header of the request with the -T flag. For example, to send POST requests with a specific file used as the POST data, the command might take a form like this:
$ ab -c 10 -n 500 -p event.json -r -T application/json localhost:8000/api/books
After running a test, AB will print out a report like this:
Here you can see a breakdown of how quickly requests are served, the requests per second and the number of requests which failed. This information can be quite enlightening!
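If you redirect AB's output into a file, you can pull the headline figures out with grep. The field names below appear in every ab report; the report contents here are trimmed, with illustrative numbers, since in practice the file would come from a real run (`ab ... > report.txt`).

```shell
# A trimmed ab report with illustrative numbers, standing in for the
# output of a real run redirected to report.txt.
cat > report.txt <<'EOF'
Concurrency Level:      10
Time taken for tests:   4.166 seconds
Complete requests:      500
Failed requests:        0
Requests per second:    120.02 [#/sec] (mean)
Time per request:       83.320 [ms] (mean)
EOF

# Extract the headline metrics:
grep -E 'Failed requests|Requests per second|Time per request' report.txt
```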
Note that Apache Benchmark will fire off requests as fast as it possibly can, given the permitted concurrency. It’s hard to say “send 1000 requests over the space of 1 minute” — it will just do 1000 requests as fast as possible. Therefore, it’s kind of tricky to simulate a realistic load. AB is generally better for finding out at what point your service begins to degrade, and then ascertaining whether you could ever reasonably expect this load.
In the past I've found it useful to use the watch command to keep firing AB requests at an endpoint, and just sit back and watch the results. For example, this command repeats the AB test every second:
$ watch -n 1 ab -c 10 -n 150 -r localhost:8000/api/books
watch is not available by default on Macs, but can be installed with
$ brew install watch
With JMeter
JMeter is a more powerful tool than Apache Benchmark and allows you to be more specific about how your traffic is fired. For example, with JMeter it is possible to say "send 1000 requests spaced out over 1 minute", which is much more realistic. It is configurable enough that it provides a GUI (Graphical User Interface) to help you set up your tests.
JMeter is built in Java, so you need to have Java installed on your system. If you find you need to install Java, you can download it from the official Java website.
Once you have Java installed, on a Mac you can install JMeter using Homebrew.
$ brew install jmeter
You should then be able to launch the GUI:
$ jmeter
On a Linux machine, the best option seems to be to go to the JMeter website and download the application directly from there (again, you will need Java first). Then extract the file and run JMeter:
$ tar -xf apache-jmeter-5.1.tgz (or whatever version you downloaded)
Change into the extracted directory:
$ cd apache-jmeter-5.1
And run the following to launch it:
$ ./bin/jmeter
The GUI will open, allowing you to set up a “test plan”. You should use the GUI to set up and test your “test plan”, but revert to the CLI to actually execute it, as the GUI is not going to cope with displaying huge request volumes.
First, add a new Thread Group by right clicking on Test Plan and selecting Add → Threads → Thread Group.
Under the settings for the thread group, you have the option to specify how you want the requests to behave. The number of threads represents the number of users which will simultaneously connect to your app. This is similar to concurrency in AB. The Ramp-Up Period is the time it should take JMeter to increase the load from 0 to the target load, allowing your server a bit of a warm-up. For example, 100 threads with a ramp-up period of 20 seconds means JMeter starts five new threads each second until all 100 are running. It can be useful to use this value so you can see how your server behaves as load gradually increases.
You can also change the loop count, or keep the test running until you manually stop it by ticking "forever". Keep these numbers very small to begin with so you can test your configuration. Then go back and ramp them up when you're ready to use the CLI.
You can set up the details for the actual HTTP request to be made by adding an HTTP Request Sampler:
In the configuration for the sampler, you can add all the details of the request, including the method, any POST data, the content encoding and of course the protocol, host and path.
Finally, you need a Listener, which will catch the results of your test. Add whichever one you like. Play around and see which gives you the most interesting report.
Now it’s time to save your configuration and run it, which can be done with the green Play arrow at the top. Make sure you are only using smallish user numbers for this first run, and switch to the CLI when you’re happy with your test setup.
Once you’ve run your test, you’ll get a report in whatever format you specified, which you’ll be able to see via the GUI by clicking on your Listener.
Running in Non-GUI Mode
To run your JMeter test from the CLI, which is necessary for large loads, save the test configuration as a .jmx file (I called mine blogpost.jmx) and then, in the terminal, type
$ jmeter -n -t blogpost.jmx -l testresult.jtl
where the final argument is the name of the output file where the report will be saved.
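The results file is plain CSV by default, so you can also get quick numbers out of it without the GUI at all. Column 2 ("elapsed") is the response time in milliseconds. The file below is a hand-made stand-in with a trimmed version of the default header, since in practice it would be whatever `jmeter -l` wrote for you:

```shell
# Fake three-sample results file with a trimmed default-format header;
# in practice this is the file written by jmeter -l.
cat > testresult.jtl <<'EOF'
timeStamp,elapsed,label,responseCode,responseMessage,threadName,dataType,success
1700000000000,120,HTTP Request,200,OK,Thread Group 1-1,text,true
1700000000100,80,HTTP Request,200,OK,Thread Group 1-2,text,true
1700000000200,100,HTTP Request,200,OK,Thread Group 1-3,text,true
EOF

# Average elapsed time across all samples (skipping the header row):
awk -F, 'NR > 1 { sum += $2; n++ } END { if (n) printf "avg: %.1f ms\n", sum/n }' testresult.jtl
# prints: avg: 100.0 ms
```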
NB: if you are using Linux and running JMeter from the extracted directory, the jmeter command will be ./bin/jmeter instead.
Once the test is done, the results file can be opened and viewed in the GUI. Open the GUI and add a Listener of any kind (e.g. a Summary Report) to the Test Plan. There is no need for any other configuration.
In the area where it says Write/Read from file, select the .jtl file where your results were saved. I had to type the path to the file into the input area, because I couldn't get the file browser to work 😔 However, once you select the .jtl file it will display the results in the GUI for you!
That’s about it for a simple set up. There is so much more you can configure, including using a Constant Throughput Timer to ensure a constant load on your server, which can be quite useful. I’m not going to go into everything here as there are lots of other tutorials for achieving specific ends with JMeter.
I hope this was helpful 🐙
Follow me on Twitter and leave a clap if you enjoyed this!