Consumer Driven Contract Testing — A scalable testing strategy for Microservices

Jemma Wells
John Lewis Partnership Software Engineering
8 min read · Feb 18, 2020

Hi! My name is Jemma Wells and I’m a Product Engineer at John Lewis & Partners, specialising in Quality Assurance.

Here at John Lewis & Partners we’re almost two years into our transition away from a legacy monolithic platform onto our *ahem* award-winning new digital platform. A huge amount has changed in that time along with the architecture and how we develop — not least, the way we approach testing. For everyone in the JL&P Testing Community, our role has undergone a drastic transformation from test managers and coordinators to skilled technical Product Engineers.

As more and more microservices were on-boarded onto the platform, it quickly became obvious that our approach to testing the integration between those services would require some real consideration. With automated testing so integral to our CI pipelines and our ability to gain fast feedback, being dependent on multiple other services and environments was becoming increasingly problematic. The ability for other teams to cause a failure in our pipeline was not ideal to say the least.

We needed a way to retain confidence in our integrations without the fragility of end-to-end tests. For this, we looked to the technique of Consumer Driven Contract Testing as a potential solution. More specifically, after a little bit of research we started to investigate an existing open-source framework called Pact, as it looked to offer everything we were after for a first attempt at Contract Testing.

In this article, we’ll introduce the concept of Consumer Driven Contract Testing and the tool we’ve selected to trial it, then move on to how we’ve implemented it at JL&P and some reflections and recommendations on its use.

Consumer Driven Contract Testing

For anyone unfamiliar with the concept of Consumer Driven Contract Testing, in a nutshell, it is a method of testing the communication layers between services by verifying a service against a set of expectations (a contract), without the need for either an integrated environment or the deployment of those services each time a test is run. (The Consumer Driven Contract pattern itself is well worth reading up on for more background.)

To clarify some terminology that will appear many times in this article — a Consumer is a service that initiates an HTTP request to another service and consumes the respective response (regardless of the direction the data flows — it could be a GET, PUT, POST etc.). A Provider is a service that responds to that HTTP request.

A major advantage of this process being Consumer driven is that only the parts of the communication actually being used by the Consumer get tested. This therefore means that any Provider behaviour not being used by a Consumer can be changed without causing test failures. The picture below is a visual representation of Consumer Driven Contract Testing.

Conceptual diagram of how Consumer Driven Contract Testing works

As part of the Consumer pipeline, a test is run using a mock Provider and asserts against a mock response. (The mock provider can either be configured using tools like Pact if that’s your CDCT of choice, or just whatever mocking library you might already use for your other tests). Through this, the Consumer is essentially defining the contract they expect the Provider to fulfil. On the Provider side, a test is also executed as part of the pipeline which verifies the contract defined by the Consumer against the real response — any differences will indicate a broken contract, which should prompt both teams to investigate the failure.
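As a minimal sketch of the consumer side — plain Python with a hypothetical `get_price` consumer function and a stubbed HTTP call, not the Pact API itself — the consumer's test against a mock provider effectively writes the contract down:

```python
import json
from unittest import mock

# Hypothetical consumer code: fetches a product and reads its price.
def get_price(http_get, product_id):
    response = http_get(f"/products/{product_id}")
    return json.loads(response)["price"]

# The mock provider returns the response the consumer *expects* --
# this expectation is, in effect, the contract.
expected_response = json.dumps({"id": "123", "price": 9.99})
mock_http_get = mock.Mock(return_value=expected_response)

assert get_price(mock_http_get, "123") == 9.99
mock_http_get.assert_called_once_with("/products/123")
```

Note that the test asserts only on what the consumer actually uses (the `price` field), which is exactly what keeps unused provider behaviour free to change.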

It’s important to remember that these tests should not replace functional tests of the Provider or the Consumer, and should not be looking to test the functionality of either service in any way. They are purely designed to test for broken contracts.

One clear benefit of Consumer Driven Contract Testing is the collaborative nature and behaviours that it instils. At the very least, it is a great excuse for teams to be communicating about the contracts that exist between their services. It can form a really valuable part of initial design conversations between teams, and can add value before any tests have even started running.

Pact Framework

As mentioned, we’ve selected the Pact tool for our implementation of CDCT. It is a widely used framework with multiple language implementations and a good amount of documentation available. Note that its most common application is for communications between services via HTTP; however, it can also be used for asynchronous interactions.

One of the main features of Pact is something called the Pact Broker — a central application used by the framework for sharing and storing consumer driven contracts and the verification results of those contracts. It has a simple user interface for viewing real-time statuses, as well as a few nice visual features.

Example from the Pact Broker UI, showing different contracts and their statuses

Pact Workflow

Diagram showing the workflow of Pact between Consumer, Provider and the central Pact Broker

When the Consumer pipeline runs, a test is run against a mock Provider. If using Pact, that mock Provider can actually be created by the framework itself.

Once the test has run, something called a Pact file is created. It is simply a JSON file that contains all interactions being tested, including the request being sent and the expected response.
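A simplified, illustrative Pact file might look like this — the top-level field names follow the Pact specification, but the service names and endpoint are made up for the example:

```json
{
  "consumer": { "name": "product-web" },
  "provider": { "name": "product-service" },
  "interactions": [
    {
      "description": "a request for product 123",
      "request": { "method": "GET", "path": "/products/123" },
      "response": {
        "status": 200,
        "body": { "id": "123", "price": 9.99 }
      }
    }
  ],
  "metadata": { "pactSpecification": { "version": "2.0.0" } }
}
```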

This file is pushed up to the central Pact Broker and is stored there, waiting to be verified by the Provider.

When the Provider side pipeline is run, the test will look to the Pact files within the Broker for any relevant interactions. The Provider will run those requests against the real service, and the actual response(s) are compared to the corresponding expected responses stored in the Pact file.

If the real response contains at least everything in the expected response, the test passes and a positive result is published back to the Broker.
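Conceptually, the verification is a "contains at least" comparison. A rough Python sketch of that idea (not Pact's actual matching code, which is more sophisticated):

```python
def satisfies(expected, actual):
    """True if `actual` contains at least everything in `expected`.

    Extra fields in the real response are allowed -- they simply
    aren't part of the consumer's contract.
    """
    if isinstance(expected, dict):
        return isinstance(actual, dict) and all(
            key in actual and satisfies(value, actual[key])
            for key, value in expected.items()
        )
    if isinstance(expected, list):
        return (isinstance(actual, list)
                and len(expected) == len(actual)
                and all(satisfies(e, a) for e, a in zip(expected, actual)))
    return expected == actual

expected = {"id": "123", "price": 9.99}
actual = {"id": "123", "price": 9.99, "stock": 4}   # extra field is fine
assert satisfies(expected, actual)
assert not satisfies(expected, {"id": "123"})       # missing "price" breaks the contract
```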

Implementing Pact at JL&P

As with all good implementations, we started small — one Consumer and one Provider that already had a live and stable contract. Not knowing how much effort was involved, we wanted to test it out on an integration where setting up the framework wouldn’t delay us from putting our changes live. It was also a contract that has remained the same since going live, while the Provider service itself has changed fairly significantly — a good use case for a Contract Test.

There are a couple of options for how to set up your Pact Broker — there is a free hosted option if you want to get up and running quickly. We chose to host our own on Kubernetes, running the Pact Broker Docker image.
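If you go the self-hosted route, the broker image is published on Docker Hub. Something along these lines gets a broker running locally — the Postgres URL here is a placeholder you would replace with your own database:

```shell
# Run the Pact Broker against an existing Postgres database.
# pactfoundation/pact-broker listens on port 9292 by default.
docker run -d --name pact-broker -p 9292:9292 \
  -e PACT_BROKER_DATABASE_URL="postgres://user:password@db-host/pact_broker" \
  pactfoundation/pact-broker
```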

We decided to follow a structure of keeping the Consumer test in the same location as our existing unit tests and run as part of the build, while having the Provider tests in their own repository. They are triggered as part of the main service pipeline, so run on every commit, and will block that pipeline on any failure.

How our Contract Tests fit into our pipeline

So after a few mobbing sessions, we had some working tests and a contract being verified in the Pact Broker — great! It wasn’t perfect, but it was a start. We had broken the rules of Pact a little by using hardcoded data in order to get some tests up and running quickly. This wasn’t too much of a problem as long as that data remained in place, but still made the tests more brittle than they needed to be. Certainly something to improve.

It’s green!

With some working tests to refer to, it was now a good opportunity to approach other teams about either setting up some Contract Tests for the interactions that were already live and could benefit, or even better, creating them as part of new integrations being worked on. Pact really comes into its own as the number of interactions increase and you can start to map the Contracts between lots of different services. The Pact Broker provides a really good visual representation of this.

Are we sold?

Fast-forward a few months and our network of contracts is currently looking something like this:

Our Pact network map

Most importantly… it has given our team the confidence to remove the E2E tests we previously had running in our integrated environment. We no longer have to rely on other services being available for our pipeline to remain green — a problem that we were coming across far too often.

It’s also been great having a central place to view all these interactions along with their status — and it will only become richer as we add in new services. Contract Testing is now becoming the norm for teams forming new integrations, and both Engineers and Product Owners are developing more of an understanding of the benefits of doing this over full-blown end-to-end tests.

We certainly invested a fair chunk of time in getting the framework set up — as with any new tool adoption. However, I can honestly say that once the first interactions were completed, adding new ones became trivial on both the consumer side and particularly the provider side. The hard work is certainly done up front. It has also been really simple for new teams adopting the tool to re-use a lot of code, as it’s a fairly standard setup. I would say the identifiable benefits aren’t really in the time and effort involved in creating the tests, but are seen much more clearly in the stability of our pipelines and the removal of external dependencies.

As an organisation, we’ve found that CDCT offers us a testing technique that really suits the way we are now developing at JL&P, and we are excited to watch our network grow.

There are a few recommendations I’d offer to anyone looking to use Pact or any similar CDCT framework:

  • Consider your tests very carefully — think about what sort of breaks in a contract you’d like this to catch, but try very hard not to end up blurring into functional scenarios. These should be covered elsewhere and should not replace functional testing of your service.
  • Keep tests as data-independent as possible — make use of the Pact DSL and pattern matching to avoid brittle tests.
  • Work together on both sides of the test — we found that where the Consumer and Provider were two separate teams, the process was most successful when we paired on both sets of tests and everyone had a full understanding of the expectations. It’s also particularly important to collaborate where introducing these tests could cause a breakage in another team’s pipeline. Good communication is key!
  • Education is really important — this is a very new concept to a lot of people who are used to writing end-to-end tests as a way to feel confident in their integrations. It’s a relatively complex topic for anyone who hasn’t come across it before, and finding the best way to demonstrate the framework and the benefits will be critical in getting the go-ahead for implementing. As with any new testing tools, find a way to prove the concept and show others a working example!