Contract testing in Everon

Artem Ptushkin
Everon Engineering
Feb 8, 2021

This is the story of how we implemented and evolved contract tests in a growing EV charging software company.

Our company had reached the point of maturity where the set of main components had become stable, but there was still a lot ahead to implement. With the growing number of integrations and isolated components came the need to make the system more reliable and transparent for maintainers and API consumers, and to keep breaking changes under control.

On the way to quality, we already had integration and end-to-end tests, but these are slow and require more and more maintenance as the system grows. We started looking for more isolated and targeted tests and arrived at the well-known concept of contracts between services: if a change can break somebody's expectations, let's formalize those expectations and test against them.

Contract tests

The methodology says that it is enough to test consumer expectations against the provider API. In the case of an HTTP integration, the expectations are the URL, query parameters, headers, and request and response bodies.

The approach also applies to asynchronous integrations: in that case the contract describes the message itself, the consumer listens to a queue, and the provider sends messages. It is not supposed to test the actual message broker, e.g. Kafka or an MQ service, as we only care about the message.

Contract tests are the way to test those expectations; the next step is to find a framework that lets us formalize and automate them.

Contract tests definition

Expectations from contract tests

The methodology itself isn't valuable without a clear motivation behind it. Let's define ours:

  • Become confident that provider changes won't break consumers in production
  • Reduce the communication overhead between teams discussing changes by having an efficient tool
  • Make testing the integrations between services more explicit and isolated

The first point is the major one and the most valuable for a growing number of integrations: we want consumers to be assured that their changes are compatible with the versions of the providers they depend on. Providers, in their turn, have to be verified so that they don't introduce any breaking changes with a new release.

Looking at how engineers actually work, you will notice that teams try to align upcoming changes in messengers and tickets, and by sharing documentation or even an OpenAPI specification. But none of these approaches tests the actual application behavior, for instance, the data format in the server's JSON response. More importantly, none of them keeps a reference between the provider's and the consumer's code. It helps when teams write tests that stub downstream API calls, but what guarantees that the stub actually represents the real provider behavior? In other words:

It can't be guaranteed that everything will always work as expected if there is no actual reference between the consumer and the provider.

Having only end-to-end tests in place doesn't make it easy to test a specific endpoint update, as such tests depend on the state of the whole environment. Integration tests make it easier, but they still depend on the environment state.

Contract tests should make it possible to have small, isolated tests that depend only on the provider state and the consumer requirements, and to create a reference between the mocks on the consumer side and the actual tests on the provider side.

Contract test initiator

This paradigm allows either the consumer or the provider to lead the testing, i.e. to be the contract creator. The consumer-driven approach looked preferable to us, as only the consumer really formulates the requirements. For instance, a UI web page depends on server responses, and no one expects any other behavior from the server; functionality that no consumer expects simply isn't needed.

The frameworks

There are two major frameworks for contract testing: Pact and Spring Cloud Contract. The second one is designed around the provider-driven approach; although it is possible to write contracts on the consumer side, it is not as convenient as with Pact.

We started using Pact for another major reason: Pact Broker. It gives us the ability to:

  • Keep track of existing integrations and contracts
  • Persist the contracts in a single place, marked with tags and versions
  • Persist the verification result of each provider test against a contract
  • Check whether a service is allowed to be deployed alongside the others (the can-i-deploy Pact feature)
  • Integrate applications written in different languages, e.g. Java and JavaScript

Implementation

Pact Foundation provides libraries for many languages and tech stacks, and our list was the following:

  • JavaScript: Vue and Angular
  • Java: Netty + JAX-RS/Jersey
  • Spring Boot

All of these stacks are covered by Pact libraries; the only difference is that for different versions of JUnit 5 and Spring there are separate libraries, which you can find on GitHub per application type: consumer or provider.

In general, this means different implementations of the Java tests, and the test setup will differ from one application to another, but the Pact DSL declaration stays the same.
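For illustration, here is a hedged sketch of a Pact DSL body declaration in Java; the field names are hypothetical, while stringType, numberType, and eachLike are type matchers from the consumer DSL:

import au.com.dius.pact.consumer.dsl.DslPart;
import au.com.dius.pact.consumer.dsl.PactDslJsonBody;

// Inside a @Pact method: describe the expected JSON with type matchers rather than exact values,
// so the provider is free to return any values of the matching shape.
DslPart expectedBody = new PactDslJsonBody()
    .stringType("id", "42")                 // example value, matched by type
    .stringType("status", "AVAILABLE")
    .numberType("powerKw", 22)
    .eachLike("connectors")                 // array whose elements match the nested template
        .stringType("type", "TYPE_2")
        .closeObject()
    .closeArray();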

Consumer side

The only responsibility of the consumer is to declare the contract itself. This part is really straightforward, as we can write the unit test at any level we want, for instance, against a Java service class or a Java client.

All we need is a classic JUnit test that executes the actual application code and makes it call the stub server we declare with PactProviderRule. Besides this, we declare the contract (pact), the actual consumer's expectations of the provider, in a method annotated with @Pact that builds the DSL object. The requirement for the test method, annotated with @PactVerification, is that it actually hits the stub server.

The basic consumer pact with au.com.dius.pact.consumer:junit
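As a rough sketch (not the exact code from our services), a consumer test with the JUnit 4 consumer library looks approximately like this; the service, provider, and consumer names are made up, and exact package names differ between Pact-JVM versions:

import au.com.dius.pact.consumer.dsl.PactDslJsonBody;
import au.com.dius.pact.consumer.dsl.PactDslWithProvider;
import au.com.dius.pact.consumer.junit.PactProviderRule;
import au.com.dius.pact.consumer.junit.PactVerification;
import au.com.dius.pact.core.model.RequestResponsePact;
import au.com.dius.pact.core.model.annotations.Pact;
import org.junit.Rule;
import org.junit.Test;

import static org.junit.Assert.assertEquals;

public class ChargePointClientPactTest {

    // Stub server that plays the provider role during the test
    @Rule
    public PactProviderRule provider = new PactProviderRule("charge-point-service", this);

    // The contract: the consumer's expectations of the provider
    @Pact(provider = "charge-point-service", consumer = "charging-dashboard")
    public RequestResponsePact chargePointPact(PactDslWithProvider builder) {
        return builder
            .given("charge point 42 exists")
            .uponReceiving("a request for charge point 42")
                .path("/charge-points/42")
                .method("GET")
            .willRespondWith()
                .status(200)
                .body(new PactDslJsonBody()
                    .stringType("id", "42")
                    .stringType("status", "AVAILABLE"))
            .toPact();
    }

    // The test must actually hit the stub server, otherwise no pact file is generated
    @Test
    @PactVerification("charge-point-service")
    public void shouldFetchChargePoint() {
        ChargePointClient client = new ChargePointClient(provider.getUrl()); // hypothetical client class
        assertEquals("AVAILABLE", client.getChargePoint("42").getStatus());
    }
}

If the declared interaction is never received by the stub server, the test fails and no pact file is written.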

During the test, the Pact-JVM library converts the pact into a stub.

The transformation of the pact during tests

Pact libraries generate the pact file into the build/pacts directory, provided that the stub server was actually hit and the test finished successfully. This pact contains all the information that will be kept in Pact Broker and used to verify the provider's behavior.

As soon as the tests are finished and the pact is generated, the only step left is to publish the pacts to Pact Broker. With Gradle, this can be achieved with the pactPublish task. Read more in the documentation; we'll touch on it later in the pipeline section.

Provider side

The provider side is always more complicated, and some challenges should be expected in the case of HTTP integrations.

This part requires actual state preparation: Pact only knows about the pact, and it will call the application expecting that the state has been prepared. Otherwise, the request will be answered with something unexpected.

In the state preparation method, you do whatever is needed to prepare your application for the actual request, which is built from the pact file:

  • Stub downstream calls
  • Prepare database
  • Stub layers

Contract tests on the provider side always rely on starting the whole application, as it will be called over HTTP. One of the major problems with integration tests like this is state leaking: you update the database and one day forget to roll it back. To handle this problem, Pact has a way to tear down your state. You are expected to leave the state as it was before the test and keep the tests isolated.

The basic provider verification test with au.com.dius.pact.provider:junit
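As an illustrative sketch, a provider verification test in a Spring Boot service could look roughly like the following. This uses the JUnit 5 flavor (au.com.dius.pact.provider:junit5); the JUnit 4 runner follows the same ideas. The names, state bodies, and exact package locations are assumptions that vary between Pact-JVM versions:

import au.com.dius.pact.provider.junit5.HttpTestTarget;
import au.com.dius.pact.provider.junit5.PactVerificationContext;
import au.com.dius.pact.provider.junit5.PactVerificationInvocationContextProvider;
import au.com.dius.pact.provider.junitsupport.Provider;
import au.com.dius.pact.provider.junitsupport.State;
import au.com.dius.pact.provider.junitsupport.StateChangeAction;
import au.com.dius.pact.provider.junitsupport.loader.PactBroker;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.TestTemplate;
import org.junit.jupiter.api.extension.ExtendWith;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.web.server.LocalServerPort;

@Provider("charge-point-service")                           // must match the provider name in the pacts
@PactBroker(host = "pact-broker.example.com", scheme = "https")
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
class ChargePointProviderPactTest {

    @LocalServerPort
    private int port;

    @BeforeEach
    void setTarget(PactVerificationContext context) {
        // Pact replays the consumer's requests against the running application
        context.setTarget(new HttpTestTarget("localhost", port));
    }

    @TestTemplate
    @ExtendWith(PactVerificationInvocationContextProvider.class)
    void verifyPacts(PactVerificationContext context) {
        context.verifyInteraction();
    }

    // State preparation: seed the database and stub downstream calls for this interaction
    @State("charge point 42 exists")
    void chargePoint42Exists() {
        // e.g. save a ChargePoint entity with id 42 via a JPA repository (hypothetical)
    }

    // Teardown to avoid state leaking into other tests
    @State(value = "charge point 42 exists", action = StateChangeAction.TEARDOWN)
    void removeChargePoint42() {
        // e.g. delete the seeded entity
    }
}

The @State methods correspond to the given(...) states declared on the consumer side.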

Historically, we used Liquibase to prepare data for integration tests in a project with a database, and as it seemed convenient to reuse the existing test data, we kept it for contract tests as well. This approach didn't allow us to clean up the state, so we switched to a general save-delete approach with JPA repositories.

Following the concept, the provider has to verify the pacts, which it does by passing the unit tests. To publish the verification results to Pact Broker, the only thing left is to set the JVM property pact.verifier.publishResults, and the results will be published whether the tests succeed or fail. In the latter case, Pact Broker won't allow this provider version to be deployed.

Pipeline

At Everon we use CircleCI for building CI/CD pipelines. It is a cloud-managed platform, and the pipeline configuration is based on YAML files. It shares concepts and syntax with GitHub Actions, so if you know either of them, the pipeline samples will look familiar to you.

Generally, to get the most value out of the pipeline, you need:

  • Docker access from the pipeline machines
  • Explicit, decoupled steps: build, test, deploy, etc.
  • Pact Broker host access from the pipeline machines

Pact broker

There are two ways to set up Pact Broker:

  • Managed software as a service: PactFlow
  • Self-hosted: deploy Pact Broker and a PostgreSQL database for it in your own infrastructure

We considered the first option, as it was the more familiar one to us, but to keep flexibility we decided to follow the second way and run an isolated cluster in the Google Cloud Platform.

One of the challenges we faced here was securing Pact Broker. Google Cloud provides an option to set up IAP (Identity-Aware Proxy), an HTTP layer that performs authentication and authorization checks for client calls.

There were no problems with browser access to the Pact Broker web UI pages; on the other hand, requests from the CircleCI pipeline and from local developers' machines required customization. Google provides a good guide for programmatic configuration.

The goal is to get an access token, as Pact Broker has to be called with an Authorization Bearer token; on the Java level we pass it as the property pactbroker.auth.token (see the snippet after the list below).

  • Running during the CircleCI pipeline:
# getting an identity token to call Pact Broker
gcloud auth print-identity-token --audiences="your-oauth2-api-client.apps.googleusercontent.com"
  • Local run. The problem here is that to get a token you always have to keep the refresh token and actually make a request. To automate this, we applied scripts that use the stored refresh_token and refresh the id_token when it has expired.
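On the Java side, here is a hedged sketch of how that token reaches the provider verification test. The ${...} expressions are resolved from JVM system properties; the exact annotation attributes and the pactbroker.host property name are assumptions that may differ between Pact-JVM versions and setups:

import au.com.dius.pact.provider.junitsupport.Provider;
import au.com.dius.pact.provider.junitsupport.loader.PactBroker;
import au.com.dius.pact.provider.junitsupport.loader.PactBrokerAuth;

// Passed to the build as -Dpactbroker.host=... and -Dpactbroker.auth.token=<token from gcloud>
@Provider("charge-point-service")
@PactBroker(
    host = "${pactbroker.host}",
    scheme = "https",
    authentication = @PactBrokerAuth(token = "${pactbroker.auth.token}")
)
class ChargePointProviderPactTest {
    // ... the rest of the provider verification test as in the previous section
}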

Tagging and version strategy

Before jumping into the actual pipeline, it is important to mention tags and versions of pacts.

Version is the same concept we are used to in development, with the only difference that in Pact Broker there are three versioned entities: the consumer, the provider, and the pact. As developers, we have to configure the version passed for each Pact participant (“pacticipant”), and we decided to follow the advised strategy of setting the pacticipant version to a short prefix of the git commit hash. This gives us a reference to the code, and each service build gets its own record in Pact Broker per commit.

As for the tags, it is even more important to sort them out for yourself before you start. They are mandatory for using the can-i-deploy feature and for building a matrix that allows you to verify only the contracts relevant to the current branch. The general relations are the following:

  • Each pact has one consumer and one provider (the pacticipants)
  • Each pacticipant version can have N tags

Hence, keep in mind that a tag doesn't belong to a pact; it is part of a pacticipant version.

It is possible to publish tags together with the contracts (on the consumer side) or together with the verification results (on the provider side); there is also a CLI tool that lets you publish them independently of the build. At the same time, tags drive a lot of our pipeline logic, which is why we need a strategy that keeps them under control and transparent.

The best of the recommended strategies is to pick one concept and apply it uniformly:

Mark each pacticipant version with a tag equal to the environment it has been deployed to.

This doesn't cover all the cases in which you want to publish pacts, for instance, local development. In that case, we use the dev tag to distinguish local runs from the pipeline. Read more best practices in the documentation.

Consumer pipeline

Following the concept, a consumer has to run the tests, and if they pass, i.e. the stub server received the requests during the tests, then it should publish the pacts to Pact Broker.

The sample steps to build and publish contracts for a Pact consumer, CI/CD pipeline

Consumer tags and versions will be used later during the verification of service compatibility.

Besides the build, we want to deploy the application to a staging environment, but beforehand we would like to verify one thing:

  • Have all the providers verified our contracts?

If yes, then we are free to deploy, assuming that the provider running in production has the same version as the provider that verified the pacts. Potentially, someone could deploy a different version than the verified one, but if all the providers follow the same approach with this verification step, you are safe. Read more in the documentation.

A sample of the pact-cli steps for the CI/CD pipeline

Let's break down the steps:

  1. We ask Pact Broker to check that all the pacts have been verified for the service version equal to the current git commit hash and that we are allowed to deploy to the test environment.
  2. We deploy the service in whatever way we usually do. Let's assume we deploy to test in this pipeline.
  3. We mark the consumer version with the tag test, as we have deployed the application to this environment.

Provider pipeline

The provider has to verify the pacts, i.e. pass the tests, and publish the verification results right after that. The result is not a formalized entity kept in the file system like a pact, so you have to publish it right after the tests with the same command. This is easily achievable with the property pact.verifier.publishResults=true.

The sample steps to build and publish verification results for a Pact provider, CI/CD pipeline

The set of tags differs from the consumer's, as the provider has to verify all the pacts, including the production ones, in order to keep backward compatibility.

The pact-cli steps, such as can-i-deploy, can be the same as for the consumer.

Scenarios

Let's walk through practical examples on a sample project to make sure that the approach meets our expectations and is convenient to use.

According to the pipeline steps described above, the green flow should look like this. We have a build job that executes the Maven steps and a deploy job that proceeds with the pact-cli steps to verify the deployment and create tags.

The simplified model of the common pipeline

Provider wants to change a field without an agreement with a consumer

Let's suppose a provider breaks backward compatibility for a consumer by changing the name of an expected field in the response and forgets to let the consumers know about it.

The pipeline will fail on the build job with an error like the one below, saying that something was mismatched.

Pipeline fails on provider tests when it doesn’t meet the contract

The state of the latest pact in Pact Broker will look like this.

Pact Broker after provider failed tests

Let's break it down:

  • The consumer version 026bcb4f has the test and prod tags (it is supposed to have been deployed to production)
  • The provider version 50dec08d has tried to verify the pacts from its branch 1-provider-breaks-backward-compatibility
  • They cannot be deployed together, as the pact verification has failed

If we try to ask Pact Broker about this, it will respond with false.

Pact broker blocks deployment on failed pact verification

Consumer wants to deploy the application when provider is not ready

Let's suppose a consumer expects an additional field from a provider and wants to deploy the application to a test environment without checking whether the provider is ready or not.

Consumer fails on deploy if the pacts haven’t been verified

In this case, the build will succeed, as the consumer unit tests use the pact only as a stub and verify a Java method call. The error message of the deploy job will look like this.

Pipeline fails on missing verified pacts

The state in the Broker tells us that the pact hasn’t been verified yet.

Pact Broker right after a pact has been published

If we ask it whether we can deploy to test or not, we will get the corresponding response:

Pact broker blocks deployment on not verified pacts

The mindset

Contract testing implies a different mindset: it formalizes expectations into an automated tool and binds services together. It differs from the familiar unit and integration tests:

  • Tests on one side depend on the other side
  • Deployment depends on the state of the pacts and on the tagging and versioning strategy
  • Expectations are declared explicitly and formally

Thus, engineers have to spend time not only on the tooling and technical details but also on the concept itself.

We saw it as a tool for anyone, as Pact formalizes the contracts and has a helpful web UI. With that in mind, we decided to involve QA engineers, with their API experience, in writing contract tests. That brought a further challenge with the entry barrier, as on average QA engineers may be less experienced with the Java backend stack and pipelines.

The onboarding of QA or other engineers can be managed with experience-sharing sessions, technical demonstrations and design reviews, a dedicated Slack channel, and code review discussions. We made use of most of these, and onboarding works well.

The contracts on the consumer side can be reviewed by engineers from the provider team by adding them to your pull requests. It is really convenient to discuss each field right in the code that the contract will be bound to.

By approving the consumer's PR, you approve the contract.

Diving in

One of the ways forward, as usual, is to practice, and you are welcome to try it out with the GitHub consumer and provider sample projects, integrated with a real Pact Broker and a GitHub Actions pipeline.

Having the Pact Broker web UI with its matrix made integrations more transparent and accessible for us. We come to the Broker to check the existing integrations and pacts; at the same time, browsing through it helps us get more out of the tool.

Conclusions

The contract-driven approach requires additional steps, such as introducing an external service (Pact Broker) and adjusting the pipelines. In some companies it may take time to manage this and to train engineers in the new approach.

In general, contract tests can prevent breaking changes from reaching production and highlight the integration points in a formalized contract. They are useful for growing projects with many integrations, e.g. microservices, and help to move communication from informal messaging to a standardized process by reviewing integrations as code.

Do not hesitate to involve other engineers in the process: QA, system analysts, etc., as contract testing is part of the fundamental improvements on the way to developing transparent APIs.

We hope this article helps you on your way to building quality software.

Artem Ptushkin
Everon Engineering

Software engineer, clean code enthusiast, and contract testing expert