Performance-Driven Development


Intro

If DevOps means anything, it’s that developers don’t just write code anymore for the sole purpose of chucking it over the wall to Ops. In modern teams, developers often manage a lot of the infrastructure for their applications themselves. This is thanks to the combination of platforms like AWS and Azure with high-quality tooling (orchestration tools like Ansible, the HashiCorp stack, open-source PaaS solutions), which has made it possible to manage infrastructure with developer-friendly workflows.

Performance testing is undergoing a similar change. Whereas in traditional organisations a dedicated team of test engineers would run performance tests as part of the pre-release QA process, modern tooling lets the developers working on an application run performance tests themselves and catch many performance problems before they ever reach the QA stage.

Since the turnaround time between a dev team and a testing team can often be measured in days, running performance tests during development compresses the time QA needs to spend on performance testing and lets the application be shipped faster.

Performance-Driven Development

Performance-driven development is just TDD (test-driven development) applied to performance testing. With PDD, we measure and log the performance metrics of the application rather than the functional correctness of the code.

A performance test is a combination of putting the system under load, logging the relevant metrics, and then failing or passing the test run based on thresholds for each of those metrics. With PDD, this process is kicked off as early in the development cycle as possible.
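To make the pass/fail step concrete, here is a minimal sketch of such a threshold check as a Node.js script. It assumes the load generator can write its results to a JSON report that includes a 95th-percentile latency figure; the file name, the field names, and the 500ms threshold are all illustrative, not a fixed format:

// check_thresholds.js: fail the test run if latency breaches our threshold.
// The report path and the aggregate.latency.p95 field are assumptions about
// the load generator's output format; adjust them to match your tool.
const report = require('./perf_report.json');

const THRESHOLD_MS = 500; // maximum acceptable p95 latency
const p95 = report.aggregate.latency.p95;

if (p95 > THRESHOLD_MS) {
  console.error('FAIL: p95 latency ' + p95 + 'ms exceeds ' + THRESHOLD_MS + 'ms');
  process.exit(1); // a non-zero exit code fails the run (and any CI build)
}
console.log('PASS: p95 latency ' + p95 + 'ms');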

What Is Performance?

A system’s performance can be analysed along three dimensions:

  1. Is it fast (enough)? Given a level and pattern of load, are latency and throughput within an acceptable threshold?
  2. (How) does it scale? What effect will adding more servers (horizontal scaling) or getting more powerful hardware (vertical scaling) have on latency and throughput?
  3. Is it reliable (enough)? At what amount of extra load (over projected levels) will the system degrade to the point of being unusable?

Each of these can be analysed with the same combination of tools and processes.

Getting Started With PDD

1. Sketch out some requirements

Are you expecting to handle 1,000 concurrent users or 10,000? What is the maximum acceptable latency for the most important API call in the system (500ms? 1,000ms?)? What sort of throughput would you expect (20 RPS? 500?)? Is the load likely to be evenly distributed, or will it be very spiky?
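Writing the answers down, even informally, gives your tests something concrete to assert against. A hypothetical file like perf-requirements.json might contain something like this (the file name and fields are made up; any format your team will actually read works):

{
  "concurrentUsers": 1000,
  "maxP95LatencyMs": { "POST /search": 500, "GET /healthcheck": 100 },
  "expectedThroughputRps": 50,
  "loadPattern": "spiky: evening peaks at roughly 5x the daily average"
}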

Having this discussion early will help you develop better tests, which in turn will help you produce a higher-quality application.

2. Grab a load-generator

Many load-testing tools and frameworks exist: open-source and proprietary, free and commercial. Which one to choose depends on the exact requirements of the project and the team, such as protocol support, scripting capabilities, and reporting features.

As Node.js developers, we have come to expect a high level of quality from our tools. Functionality alone is not enough: we also want our tools to be lightweight and pleasant to use. That was one of my main motivations for writing Artillery. It’s a tool I enjoy using as a Node.js developer and one that fits into my Node.js-centred workflow: unlike other tools, I can npm install it, it has a nice CLI, it uses JSON or YAML rather than XML for describing test scenarios, and it’s just Node.js under the hood.
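Installing it really is a one-liner:

npm install -g artillery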

Whatever load-generator a team settles on, the minimum set of requirements would be:

  1. Ease of installation. Not being available via npm, brew, or apt/yum is a barrier (laziness is a virtue).
  2. It should require no configuration to get started with and allow for ad-hoc testing in the style of ab.
  3. It needs to have scripting capabilities. We are likely to be testing APIs and microservices with complex transactional scenarios. A tool that only supports looping over a list of URLs won’t do.

(Artillery satisfies all of these requirements, naturally.)

The tool of choice probably needs to support protocols besides HTTP as well. Modern systems often have a WebSocket or Socket.io-based component, and increasingly often RabbitMQ is used as microservice glue, so AMQP support can be necessary too.
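To illustrate the kind of scripting this implies, here is a sketch of what a Socket.io scenario looks like as an Artillery script; the target, channel name, and payload are placeholders for your own application’s:

{
  "config": {
    "target": "http://localhost:8080",
    "phases": [{ "duration": 60, "arrivalRate": 10 }]
  },
  "scenarios": [
    {
      "engine": "socketio",
      "flow": [
        { "emit": { "channel": "chat message", "data": "hello from a virtual user" } },
        { "think": 1 }
      ]
    }
  ]
}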

3. Run your first performance test

If you’re using Artillery, this can be as simple as:

artillery quick --duration 300 --rate 50 http://localhost:8080/

This will run a quick test hitting the URL at 50 RPS for 300 seconds and produce a performance report for you.

While your application is in its early days, you can get a lot of mileage out of quick tests like this one without spending much time on further setup.

(Two things are also worth mentioning at this point: (1) you should have monitoring set up for your application with something like New Relic APM or Keymetrics, so you can see how the application performs internally when you run your load tests; (2) when testing on localhost, you can use htop to see how your Node.js process is doing, and the V8 profiler (node --prof) or dtrace if you want to see what is happening under the hood.)

4. Write more test scenarios

As the application you’re working on matures, you can start extending your performance test suite with scenarios that model real user behaviour. You probably don’t need to worry about covering all possible interaction patterns; just having a script for the “happy path” will let you get a lot of value from performance testing.
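For example, a happy-path scenario for a simple shop might look something like this as an Artillery script (the endpoints, think times, and scenario name are made-up placeholders):

{
  "config": {
    "target": "http://localhost:8080",
    "phases": [{ "duration": 300, "arrivalRate": 20 }]
  },
  "scenarios": [
    {
      "name": "Search for a product and view it",
      "flow": [
        { "get": { "url": "/" } },
        { "think": 2 },
        { "get": { "url": "/search?q=widgets" } },
        { "think": 5 },
        { "get": { "url": "/product/123" } }
      ]
    }
  ]
}

Each virtual user walks through the flow in order, pausing at the think steps (in seconds) to mimic a real person reading the page.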

5. Use your performance tests

Make it easy for yourself to run performance tests. For example, if you add a script to your app’s package.json such as:

"scripts": {
"test:perf": "artillery run ./tests/perf/happy_path.json"
}

You can easily run the tests with npm run test:perf after making changes to the code that could impact performance.

You can go one step further and use a command like this one if you’re on a Mac:

artillery run ./tests/perf/happy_path.json -o mg_report.json && artillery report mg_report.json && open mg_report.html

This command runs the test, generates a graphical report and opens it in your default web browser.

6. Advanced: add your performance tests to CI

At some point, you will want to add your performance test suite to your CI server to get more repeatable performance measurements and to keep a history of performance reports. Setting up a CI job is beyond the scope of this write-up, but typically the performance tests would run against a test or staging environment after a successful deployment to it. Artillery is lightweight enough to be installed directly on a build agent; however, at some point you’ll want to set up a dedicated load-injector server (or even a grid of them if you need to generate higher loads).
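As a sketch, such a CI step could be as simple as a few shell commands run after the deploy (the file names are placeholders, and check_thresholds.js stands for the kind of pass/fail gate sketched earlier):

npm install -g artillery
artillery run ./tests/perf/happy_path.json -o perf_report.json
artillery report perf_report.json
# archive perf_report.html as a build artifact, then fail the build
# if the numbers breach your thresholds:
node check_thresholds.js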

Conclusion

Whenever appropriate, we like to practise PDD at YLD, and we have found that it helps us ship better software.

By Hassy Veldstra

Originally published at blog.yld.io on January 18, 2016.

