3 reasons why Gatling is the perfect tool for performance testing your APIs

Sam Bird
John Lewis Partnership Software Engineering
4 min read · Feb 5, 2018

My current assignment resides in the world of software testing, specifically performance load testing John Lewis’ e-commerce engine API. This API handles a range of functionality across all online channels (desktop website, mobile apps…), so monitoring performance levels and ensuring optimal response times is fundamental to our customers’ experience.

But with an API that has to support these channels 24/7 and at peak times sees over 700 orders a minute, how can such load be simulated to analyse performance?

Testers are being challenged to automate, utilising many technologies from the open source community (which recently turned 20). There are many tools available depending on the type of testing required, but for load testing specifically and for this assignment, I’ve been using Gatling and I wanted to share a few reasons why.

1. Test scenarios are maintainable and easy to read

Gatling’s Domain-Specific Language (DSL) makes code highly readable, even to those with limited programming background, which is an advantage to many who are used to more manual forms of testing.

Let’s build a scripted scenario, showing how an HTTP request can be constructed to add an item to an e-commerce basket, along with its test data:

import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

// Feeders supplying randomised test data from source files (these vals live inside a class extending Simulation)
private val sessionFeeder = csv("sessionIds.csv").random
private val itemFeeder = separatedValues("itemIds.txt", ';').random

// Feed a session ID and an item ID into the virtual user's session, then POST the item to the basket
val postBasket = exec(feed(sessionFeeder)).exec(feed(itemFeeder))
  .exec(http("POST Item to Basket")
    .post("/api/basket") // path resolved against the base URL configured on the protocol in section 2
    .header("Cookie", "${sessionId}")
    .header("Accept", "application/json")
    .header("Content-Type", "application/json")
    .body(StringBody(
      """{
        |  "item": {
        |    "skuId": "${itemId}",
        |    "quantity": 1
        |  }
        |}""".stripMargin
    )))

To begin with, I’ve used Gatling’s Feeders to inject test data from source files. This data is then fed into the Cookie header and body of the request using the ${value} notation. Varying these values better replicates a live environment: you don’t want the same item being added to the same basket in every request during the simulation.
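Feeders don’t have to come from files, either: under the hood a feeder is just an Iterator of Maps, so data can be generated on the fly. Here is a minimal sketch of that idea (the generated IDs and the itemId key are purely illustrative, not real SKUs):

// Illustrative in-memory feeder: a Feeder is simply an Iterator[Map[String, T]],
// so (fake) item IDs can be generated instead of read from a file
private val generatedItemFeeder: Iterator[Map[String, String]] =
  Iterator.continually(Map("itemId" -> scala.util.Random.alphanumeric.take(8).mkString))

Whichever source you use, the values are referenced in the request with the same ${value} notation.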

Calling .post() configures the request as a POST, with the endpoint passed in as a parameter (here a path that will be resolved against the base URL we configure on the protocol in the next section). Headers have been added to the request, along with a body conforming to the API.
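The same builder can also verify the response. Although it isn’t part of the snippet above, a sketch of how that might look is shown below; the path, expected status code and JSON field name are my assumptions rather than the real contract of this API:

// Hypothetical follow-up request: mark it as failed unless the API responds with 200
// and the JSON body contains an "items" field (both assumptions about this API)
val getBasket = exec(http("GET Basket")
  .get("/api/basket")
  .header("Cookie", "${sessionId}")
  .check(
    status.is(200),
    jsonPath("$.items").exists
  ))

Requests whose checks fail are the ones reported as KO in the results discussed later on.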

2. Virtual users can be injected with automatic connection management

With the request now built, virtual users are ready to be injected. This can be done in a number of ways (ramping arrival rates, constant throughput, one-off spikes and so on) depending on your test, but for this example we’ll ramp from 1 to 250 new users per second over 10 seconds and hold that arrival rate for the next 50 seconds, like so:

// shareConnections lets virtual users reuse the connection pool rather than each opening their own
val httpProtocol = http.baseURL("http://www.yourEcommerceSite.com").shareConnections
setUp(
  scenario("Add Item to Basket").exec(postBasket).inject(
    rampUsersPerSec(1) to 250 during (10 seconds),
    constantUsersPerSec(250) during (50 seconds)
  ).protocols(httpProtocol)
)

When many connections are generated in a short time to create this level of load, ports on the host operating system can become saturated. Unlike some other load testing tools, Gatling ensures that once a connection has been released by one virtual user it can be reused by another, reducing the number of connection errors thrown. To achieve this, simply call .shareConnections on your protocol, as in the snippet above.
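As an aside, the ramp-and-hold profile used above is only one of Gatling’s open-model options. Purely for illustration (the injection steps are real, the numbers are made up), a spike-style alternative could be assembled like this and passed to setUp in place of the profile above:

// Illustrative alternative injection profile for the same scenario
val spikeProfile = scenario("Add Item to Basket - spike").exec(postBasket).inject(
  nothingFor(5 seconds),                      // wait before any users start
  atOnceUsers(100),                           // a sudden spike of 100 users
  rampUsers(400) over (30 seconds),           // then 400 more users arriving linearly
  constantUsersPerSec(50) during (60 seconds) // then a steady 50 new users per second
).protocols(httpProtocol)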

3. Easy CI/CD integration with performance reporting

For all the readability of its DSL, under the hood Gatling is written in the Scala programming language and ultimately runs on a Java Virtual Machine (JVM). It isn’t a desktop application, so a build automation tool such as Gradle lets a simple task in your build.gradle file run all of your tests with a one-line command. That task can then be embedded into a CD pipeline with the likes of Jenkins, which has a Gatling plug-in available straight out of the box.

But to understand whether performance has degraded since the last run, we need to collate and aggregate performance-related metrics to benchmark against.

* Results shown are not based on the request constructed above; they are simply an example of the stats collated

Thankfully, Gatling generates a complete HTML report of your simulation to analyse this data. Gatling calculates the minimum, maximum, mean and percentiles of your response time distribution. Not only can you drill down into each request in detail to check response times for passed and failed requests, but graphs are also generated to make your analysis even more visual.

Graph: ramping to 250 users per second over 10 seconds, then holding that rate constant for 50 seconds
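To go a step further than eyeballing the report, Gatling’s assertions API lets you encode those benchmarks in the simulation itself, so a breached threshold fails the run and, in turn, the CI/CD build step. A minimal sketch, reusing the setUp from section 2 with purely illustrative thresholds:

// Illustrative assertions: the run (and therefore the build) fails if either is breached
setUp(
  scenario("Add Item to Basket").exec(postBasket).inject(
    rampUsersPerSec(1) to 250 during (10 seconds),
    constantUsersPerSec(250) during (50 seconds)
  ).protocols(httpProtocol)
).assertions(
  global.responseTime.max.lessThan(1000),           // no response slower than 1 second
  global.successfulRequests.percent.greaterThan(99) // at least 99% of requests pass
)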

I hope you find this blog useful and that you’ll utilise Gatling to help you identify ways to make better and faster improvements to your web application.
