“Tests as contract” — using automated tests to help teams work together

Sam Partington
Published in whiteoctober-posts · 6 min read · Jun 26, 2018

We recently worked with UNISON, the UK’s largest public sector trade union, to create a better online joining experience for their new members. The Online Join journey represents a key revenue stream for the 1.3 million member organisation.

Joining the union is a multi-stage sign-up process which we helped them drastically simplify, decreasing the total time needed to complete it and designing the experience so that it meets the raised digital expectations of potential members.

As part of this work, we wanted to pass data about new members into UNISON’s existing membership system, which was produced and supported by a third-party supplier. The agreed approach was that the supplier would produce an API for the membership system which our new application would use.

We were tasked with defining the specification for the membership system API. One driver of the project was to move towards updating the membership system too, so it made sense to design the API primarily from the perspective of the newer system rather than the existing system (although of course the existing membership system did provide some technical constraints we needed to consider).

We provided the supplier with a detailed written specification for the API, but also with a set of automated tests which should pass when run against an API which met the specification.

Read on to see why we created these tests, the benefits they gave us, and what we learnt. Along the way you’ll also find out how I accidentally created the most useful stub code I’ve ever written!

What we produced and why

We produced three main pieces of work for the API:

  • A formal written API specification detailing the routes required and their desired behaviour
  • A set of automated tests which should pass when run against an API which met the specification
  • A stub API which passed the tests but had little “real” behaviour (not shared with the supplier)

APIs lend themselves quite naturally to unambiguous specifications detailing inputs and outputs, so you might wonder why we also gave the supplier a set of tests to run against their API. There are several reasons:

  • Whilst API specifications can be clearly defined, they can also be long and detailed, making it easy to miss something.
  • A written specification can be misunderstood; tests are unambiguous.
  • Having a set of tests to run during API development could provide quick feedback and a sense of progress.
  • The tests provide a thorough mechanism for formal acceptance testing of the delivered API.

We needed some way to “test the tests”. We also needed something to point our system at before the real API was delivered, or in contexts such as automated tests where using the real API might be overly slow. For this reason, we produced a stub API which did nothing with the data it was given except validate it and provide an appropriate error or success response. For routes where the API was providing rather than accepting data, the stub provided some predefined responses.
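
To give a flavour, a stub like this needn't be big. Here's a minimal sketch in the spirit of ours, using Flask; the /members and /branches routes and their validation rules are purely illustrative, not the real specification:

```python
# A minimal sketch of a validating stub API, using Flask.
# The routes and validation rules here are hypothetical
# illustrations, not the actual membership API specification.
from flask import Flask, jsonify, request

app = Flask(__name__)

REQUIRED_FIELDS = {"first_name", "last_name", "branch_code"}

@app.route("/members", methods=["POST"])
def create_member():
    payload = request.get_json(silent=True)
    if payload is None:
        return jsonify({"error": "Request body must be valid JSON"}), 400

    missing = REQUIRED_FIELDS - payload.keys()
    if missing:
        return jsonify({"error": f"Missing fields: {sorted(missing)}"}), 422

    # The stub does nothing with the data beyond validating it;
    # it just returns the success response the specification describes.
    return jsonify({"status": "accepted", "member_id": "STUB-0001"}), 201

@app.route("/branches", methods=["GET"])
def list_branches():
    # For routes that provide rather than accept data,
    # the stub returns predefined responses.
    return jsonify([{"code": "B001", "name": "Example Branch"}])

if __name__ == "__main__":
    app.run(port=5001)
```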

This stub API had some major additional benefits during development which we hadn’t anticipated; these are discussed in the next section.

Providing a test suite to a supplier in this way is similar to the concept of an “executable specification” from Agile software development and Behaviour-Driven Development.

How we used it

The set of API tests we produced was used in a number of ways on the project.

Testing while developing

The developers of the API could run the tests while they were developing the API to help keep them on-track and provide quick feedback.

Sign-off process

When the API was “delivered” to us as clients, we could run the test suite against the UAT endpoints to formally sign off the work. Where there were problems, we could use the specific failing tests to help in communicating this to the supplier.

Where problems were encountered using the API with the real system, we could augment the test harness with additional tests to replicate the problems, providing a way of both demonstrating the problem and proving that it was resolved.
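
For instance, if the live system had rejected a payload that the specification said should be valid, we could capture that case as a new test in the same assert-based style as the rest of the suite. The scenario, endpoint and field names below are hypothetical:

```python
# Hypothetical regression test capturing a problem found against the real
# system: say the API rejected surnames containing apostrophes. The route
# and fields are illustrative, not from the actual specification.
import requests

def test_surname_with_apostrophe_is_accepted(base_url):
    payload = {
        "first_name": "Miles",
        "last_name": "O'Brien",
        "branch_code": "B001",
    }
    response = requests.post(f"{base_url}/members", json=payload)
    assert response.status_code == 201, (
        f"Expected 201 for a surname with an apostrophe, "
        f"got {response.status_code}: {response.text}"
    )
```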

Validation of the Stub API

It might sound odd that we ran the API tests against our stub API as part of our continuous integration process. Why bother testing a stub? However, there was a very good reason — in the development and test environments, the application runs against the stub API. Therefore, if the stub API were wrong, integration and behavioural tests might pass but the corresponding code would actually fail against the real API. It was important, therefore, to keep on verifying that the stub API was indeed right.

Validation using the Stub API

We ended up using the stub API as a testing tool in its own right. This was a major benefit of having it, but not one that we’d predicted:

We needed to write application code to communicate with the API, such as helper methods which prepared the necessary JSON. Since there were tests in the suite to ensure that the API validated its input, we had to put validation into the stub API. Having validation in the stub API in turn meant that we had a ready-made set of “tests” for the data produced by our API-communication code!
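
In practice, a test for one of those helper methods could simply send its output to the running stub and assert on the response. A hypothetical sketch (build_member_payload is an illustrative stand-in, not our actual code):

```python
# Hypothetical sketch: exercising a JSON-preparation helper against the
# running stub API. build_member_payload stands in for the kind of
# helper described above.
import requests

STUB_URL = "http://localhost:5001"  # where the stub API is running

def build_member_payload(form_data):
    """Prepare the JSON the membership API expects from raw form input."""
    return {
        "first_name": form_data["forename"].strip(),
        "last_name": form_data["surname"].strip(),
        "branch_code": form_data["branch"],
    }

def test_payload_passes_stub_validation():
    payload = build_member_payload(
        {"forename": " Sam ", "surname": "Partington", "branch": "B001"}
    )
    # If the helper produced the wrong shape of data, the stub's
    # validation would reject it here, long before the real API existed.
    response = requests.post(f"{STUB_URL}/members", json=payload)
    assert response.status_code == 201, response.text
```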

As the developer who worked on the API-communication code put it, “It meant that I could build the queue to interact with the API knowing that I was sending the right data” — because the stub API would catch it if he wasn’t. Being able to concentrate on the details of the queuing mechanism without having to also think in-depth about data formats made development of this area much easier.

How it went and what we learnt

“Tests as contract”

Using the tests as a “contract” provided a high level of reassurance that the delivered API met our specification, and gave us a common point of reference (a failing test) when it didn’t.

However, the tests didn’t catch everything. In particular, we should have done further validation that the data sent from the API matched the formats (e.g. maximum length) that we were expecting.
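
A thin validation layer over incoming data would have caught this kind of mismatch early. A sketch of the idea, with hypothetical field names and limits:

```python
# A hypothetical sketch of validating data received *from* the API against
# the formats we expected. The field names and length limits are
# illustrative, not taken from the real membership system.
MAX_LENGTHS = {"name": 100, "code": 10}

def validate_branch(branch: dict) -> list:
    """Return a list of problems with a branch record; empty means it's fine."""
    problems = []
    for field, limit in MAX_LENGTHS.items():
        value = branch.get(field)
        if value is None:
            problems.append(f"Missing field: {field}")
        elif len(value) > limit:
            problems.append(f"{field} is longer than {limit} characters")
    return problems

# Example: a record whose name exceeds the expected maximum length
print(validate_branch({"name": "x" * 150, "code": "B001"}))
# -> ['name is longer than 100 characters']
```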

Having a test suite encouraged us to specify the API in a very precise way, rejecting data that didn’t meet specific formats. However, we actually needed the API to be tolerant of errors: it’s important that membership data makes it to UNISON even if it requires some manual “unpicking” — you can’t charge membership fees to a member whose data never reaches you! We failed to consider this non-functional requirement when writing the tests, and so, for example, had tests checking that the API rejected data it really should have accepted.

An overly restrictive API is not the inevitable consequence of having a test suite; rather, it’s the consequence of us not considering non-functional requirements when writing the API tests.

The Stub API

The stub API was so useful as a test suite when developing the API-communication code that it would have been worth writing for that reason alone. In fact, the developer quoted above went on to say, “we should always write stub versions of APIs we want to use”.

Running the automated tests against the stub API itself ensured that our integration and behavioural tests which then used that API were more representative and more reliable, which was a definite boon.

Running the tests

The API tests weren’t written as actual unit tests, but as a standard Python script which used Python’s assert statement. This meant that the entire script stopped as soon as a test failed. Although we provided the ability to run just those tests relating to a single API endpoint (via command-line options), having the tests stop on the first failure meant that problems with later tests were hidden until earlier ones were resolved. We should have provided the ability to continue even if a test failed.

In a similar vein, it would have been very useful to have the ability to run a single test at a time — this would have made communication with the supplier about specific issues much simpler, since we could have pointed them directly at the specific test which was failing, rather than just the endpoint whose set of tests was failing.
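
With hindsight, a simple test registry (or an off-the-shelf framework such as pytest, which supports both behaviours out of the box) would have solved both problems. A minimal sketch of the idea:

```python
# A minimal sketch of the kind of runner we wished we'd had: it records
# failures rather than stopping at the first one, and can run a single
# test by name. The test shown is a placeholder, not from our real suite.
import sys
import traceback

TESTS = {}

def test(func):
    """Register a test function under its own name."""
    TESTS[func.__name__] = func
    return func

@test
def test_create_member_happy_path():
    # A real test would call the API here and assert on the response.
    assert True

def run(only=None):
    failures = []
    for name, func in TESTS.items():
        if only and name != only:
            continue
        try:
            func()
            print(f"PASS {name}")
        except AssertionError:
            failures.append(name)
            print(f"FAIL {name}")
            traceback.print_exc()
    return failures

if __name__ == "__main__":
    # e.g. `python run_tests.py test_create_member_happy_path` for one test
    failed = run(sys.argv[1] if len(sys.argv) > 1 else None)
    sys.exit(1 if failed else 0)
```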

Conclusion

Overall, the test suite for the API was a major benefit to us. And we learnt a lot about how to use one effectively (and what not to do!) whilst using it on this project, so it should be an even more effective approach next time we do it.

Have you tried out having an executable specification or using tests as a contract with a supplier? Do tell us about your experiences in the comments.

Originally published at blog.whiteoctober.co.uk on June 26, 2018.
