Contract Testing using Pact

Santhosh
12 min read · Dec 29, 2023

We have transitioned from a time when deployments occurred once every few days to one where deployments take place every few seconds within an application, whether in a higher or lower environment. As applications transform into microservices, enterprise applications now comprise tens or hundreds of microservice applications. Changes within each microservice are frequent, and with interdependencies among services, a change in one service can affect others. Identifying these impacts after deployment is costly. To address this challenge, we need to detect issues during the code development phase.
So let’s see how this issue can be handled, but before doing so, let’s take a step back to revisit some basics and understand the problem and its solution.

Monolithic vs Microservice:
In the monolithic architecture, when the front end interacts with the back end, there is one single instance responsible for the application's business functionality.
Example — The user interface communicates with a server repository (say — User Profile, Orders, Products, Payments, Notifications) and a database. In a monolithic architecture, all functionalities exist in one code base with projects having APIs created to access data and talk to a single database. For simple applications, we can continue with the monolith architecture.

Challenges in the Monolith approach:
- If the server or instance goes down, you cannot access the website, as everything is on one server.
- Load may be modest most of the time, but on special days with heavy traffic, scaling up resources becomes necessary. The problem is that the same resources are shared by all features, so the capacity a particular feature needs may not be available.
- Any error in business functionality, even something as contained as product search, requires redeploying the entire system, leading to a server outage.
- The monolith has a massive code base making it difficult to maintain.
- Everything is tightly coupled, and upgrading some stack or a feature to another technology requires updating the entire system.

Benefits of adopting Microservices:
The above-discussed problems can be resolved by adopting microservices. Identify the application’s business functionalities, and spin up a microservice for each, with a separate database for each functionality. For instance,
- Upgrading Orders to MongoDB for efficiency won’t impact other services.
- If there’s an issue in payments, only the payments scenario goes down when redeployed, not affecting other services.
- Orders tech stack can be handled in Java, Payments in JavaScript, and Products in Python or Ruby.

Image source — Developer.com

How Microservices Communicate:
Microservices communicate with each other through REST API calls. For example, the User Profile service can call the get Orders endpoint, passing the profile ID to retrieve all of that user's orders.
On a monolith, the front end or user interface talks to only one system. In microservices, the front end can talk to orders, and in turn, orders talk to payments and return responses.
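As a minimal sketch of such a call, the consumer side can be reduced to building an HTTP GET request against the provider's endpoint. The base URL and query parameter below are hypothetical, not taken from any real service:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class OrdersClientSketch {
    // Hypothetical base URL for the Orders microservice
    static final String ORDERS_BASE_URL = "http://orders-service/api";

    // Build a GET request that fetches all orders for a given profile ID
    static HttpRequest ordersRequest(String profileId) {
        return HttpRequest.newBuilder()
                .uri(URI.create(ORDERS_BASE_URL + "/orders?profileId=" + profileId))
                .GET()
                .build();
    }

    public static void main(String[] args) {
        HttpRequest request = ordersRequest("user-42");
        System.out.println(request.method() + " " + request.uri());
    }
}
```

Sending this request with `java.net.http.HttpClient` would return the orders payload; the point is simply that each integration point is an HTTP call whose shape both sides must agree on.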

Microservices testing:
From a testing perspective, we can write some API integration tests, run them against the test environment, and then verify if we are receiving the correct data from the provider. However, shifting our focus to microservices, where large software projects are broken down into smaller, independently developed modules or components by different teams, introduces various challenges to testing.

Challenges in testing the interdependent microservices :
- The challenge arises when attempting to test all the different microservices in a dedicated testing environment.
- Different teams can deploy their changes to different services simultaneously, resulting in constantly changing data.
- Some services might be down due to environmental issues. Relying solely on testing the integration points via real services becomes cumbersome.

Deploying small, independent services has its benefits, but on the flip side, integration testing becomes complex as the number of integration points increases. Imagine dealing with hundreds or thousands of services communicating with each other, similar to what companies like Amazon or Netflix have. How can you ensure that every change you make doesn’t cause issues with other services?

Mock Services — Testing in isolation without invoking the real service:
Isolated tests offer significant value. They provide a fast feedback loop to check if different inputs yield the expected outputs. For instance, consider a consumer making a GET request to users, and there’s also a mock provider predefined with a response status code of 200 and an object with a name and value. Simultaneously, the provider has isolated tests by simulating what the consumer is expecting.
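The mock-provider idea can be sketched with nothing but the JDK: a throwaway `com.sun.net.httpserver` instance stands in for the real provider, returning the predefined 200 status and a canned object. The path `/users` and the body are illustrative assumptions, and this is plain stubbing, not Pact itself:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class MockProviderSketch {

    // Stands up a mock provider, calls it as the consumer would,
    // and returns "status body" for inspection.
    static String simulate() throws Exception {
        HttpServer mock = HttpServer.create(new InetSocketAddress(0), 0);
        byte[] body = "{\"name\":\"pact\"}".getBytes();
        mock.createContext("/users", exchange -> {
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        mock.start();
        try {
            int port = mock.getAddress().getPort();
            HttpResponse<String> response = HttpClient.newHttpClient().send(
                    HttpRequest.newBuilder(URI.create("http://localhost:" + port + "/users")).GET().build(),
                    HttpResponse.BodyHandlers.ofString());
            return response.statusCode() + " " + response.body();
        } finally {
            mock.stop(0);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(simulate()); // 200 {"name":"pact"}
    }
}
```

The consumer test passes as long as the stub says what we told it to say, which is exactly the weakness discussed next.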

Challenges running with mocked services:
- It doesn’t give us confidence that the mock provider accurately represents the real provider.
- The same applies to the consumer. Additionally, it doesn’t prevent broken changes from being deployed in a dedicated environment, as the provider or the consumer can deploy broken changes to production, especially if they are on different pipelines and don’t regularly trigger each other’s tests.

Here is a summary of the testing types performed for microservices and their challenges.

Unit Testing — Test with mock services/data
- Lacks confidence for release.
- Mock providers may not accurately represent the real provider and the same holds for consumers.
- Broken changes can be deployed, especially if on a different pipeline.

Integration Testing — Tests actual endpoints
- When a consumer makes a request, instead of communicating with a mock provider, it talks to the real provider.
- However, having too many of these tests becomes complex in microservices.
- They are slow and brittle, particularly when dealing with frequent data changes.
- Moreover, they require dedicated environments where services integrate, leading to extensive test maintenance.
- Additionally, they don’t prevent the deployment of broken changes.

To address these challenges, we need an intermediate step between unit testing and integration testing. This step goes beyond testing with mocks in isolation but involves less interaction with the real systems. It focuses on verifying that one service meets the expectations of another. This middle ground involves testing with mocks, yet the system understands what the other system expects. This is where contract testing comes into play.

Contract Testing:
Contract testing is a technique for testing an integration point by checking each application in isolation to ensure the messages it sends or receives conform to a shared understanding that is documented in a “contract”.

Let’s understand the importance of contract testing through a simple conversation within a team discussing an issue in a retrospective meeting.

Q: Why didn’t unit test cases catch the failure?
A: We mocked the response of the external microservice.

Q: Why did you mock the external dependency?
A: Unit tests should be conducted in an isolated manner with stubbing/mocking dependencies.

Q: Why did you have the wrong mock response?
A: Below is the contract we agreed upon while consuming the response from the All Courses microservice. So, we are using this mocked contract to test our code.

Q: What was the reason for the exact failure?
A: We are parsing JSON and grabbing the field to sum up all the courses. The provider microservice has changed the contract and is now sending the field name as “pricing.” Our code base cannot identify the “pricing” field, and it failed to parse the JSON. Unit tests did not catch this because we are using a mock response that has the “price” field. According to that mock response, the test passed. However, when QA tested it with the real service in their end-to-end or integration test, it broke.

Q: Why did you not update your change to the consumer?
A: Many teams are consuming this service, and we are not sure which team is using what field. So, we just updated our documentation for the change we made, and they must check our documentation every time.

Q: How can one monitor these contract changes every time, if the provider doesn’t tell us exactly what changed?
A: Why worry? Some team will catch the issue in their E2E testing, and we can fix it then.

Q: Isn’t that too late? We are agile, and contract-change issues should be identified early in the life cycle. And we depend on a total of 7 providers for the data we consume. How can we keep track of all these schema/contract changes?
A: Is there a tool where we can define our expectations about which fields of the JSON contract we use, so that provider-side test cases break whenever a change touches something that other systems depend on? That way, the provider team would know and could drop us a heads-up about the change.
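The “price” vs “pricing” failure described in this retrospective can be reproduced with a small, dependency-free sketch. The payloads and the regex-based parsing are illustrative stand-ins for the real consumer code, which would use a proper JSON library:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PriceSumSketch {
    // Naive consumer-side parsing: sum every "price" field in the JSON payload.
    static int sumPrices(String json) {
        Matcher m = Pattern.compile("\"price\"\\s*:\\s*(\\d+)").matcher(json);
        int total = 0;
        boolean found = false;
        while (m.find()) {
            total += Integer.parseInt(m.group(1));
            found = true;
        }
        if (!found) {
            throw new IllegalStateException("No 'price' field found in response");
        }
        return total;
    }

    public static void main(String[] args) {
        // Payload matching the agreed contract: parsing succeeds
        String agreed = "[{\"courseName\":\"Appium\",\"price\":40},{\"courseName\":\"Selenium\",\"price\":50}]";
        System.out.println(sumPrices(agreed)); // 90

        // Provider silently renamed the field to "pricing": the consumer now throws
        String changed = "[{\"courseName\":\"Appium\",\"pricing\":40},{\"courseName\":\"Selenium\",\"pricing\":50}]";
        // sumPrices(changed) would throw IllegalStateException
    }
}
```

A unit test mocking the old payload keeps passing, while the real integration breaks, which is exactly the gap contract testing closes.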

WELCOME to PACT:

image source — pactflow.io
  • Pact provides a DSL for service consumers to define the HTTP requests they will make to a service provider and the HTTP responses they expect back.
  • These expectations are used in the consumer specs to provide a mock service provider.
  • The interactions are recorded and played back in the service provider specs to ensure the service provider does provide the response the consumer expects.
  • The Pact specification is supported in various programming languages such as Java, Ruby, Python, Go, C++, etc. Specifically in Java, it can be utilized in conjunction with JUnit 5.

This allows testing of both sides of an integration point using fast unit tests.

  • Two services, a consumer and a provider, are interacting via a REST API, with the expectation of receiving a specific status code and response.
image source — Pact docs
  • Use Pact as a mock provider to avoid directly interacting with the real service
image source — Pact docs
  • Write tests on the consumer side, mocking the expected data from the provider. Cover all scenarios, specifying the expected status and response values.
  • Pact automatically generates a contract file (JSON) with all the interactions from the test runs and uploads it to the Pact Broker.
image source — Pact docs
  • The provider consumes this contract file from the Pact Broker. All interactions are executed on the actual provider service, and the result is compared with the expected outcome.
image source — Pact docs
  • This approach ensures that any changes made by the provider to their API are immediately flagged during development. Issues are highlighted, indicating that local changes may break the consumer, as the contract test has failed.
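For a sense of what the generated contract looks like, here is a roughly shaped example. The service names follow the case study below; the path, body, and provider state are assumptions for illustration:

```json
{
  "consumer": { "name": "LibraryService" },
  "provider": { "name": "CoursesCatalogue" },
  "interactions": [
    {
      "description": "a request for the Appium course",
      "providerStates": [{ "name": "Course Appium exists" }],
      "request": { "method": "GET", "path": "/getCourseByName/Appium" },
      "response": {
        "status": 200,
        "headers": { "Content-Type": "application/json" },
        "body": { "price": 40, "category": "mobile" }
      }
    }
  ],
  "metadata": { "pactSpecification": { "version": "3.0.0" } }
}
```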

Case study:
Let’s create a simple contract test. We have a library service offering book resources and a courses service providing online course resources. When the front end calls the library service, it, in turn, calls the courses service. The library service consolidates the responses, summing up resource prices, and sends them back to the front end. Here, the library service is the consumer, and the courses service is the provider. They have a contract or integration point; for example, the provider (courses service) should respond with the price as an integer and the category name as a string in the response JSON.

Pre-requisite:
Download the Spring Boot projects (consumer, producer) from the URLs mentioned in the references section.
Add consumer and provider pact dependencies and the pact broker configuration in the build file
PACT JVM Junit 5 for Consumer — dependency
PACT JVM Junit 5 for producer — dependency
Pact Broker configuration:
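As a sketch, the Maven configuration for these dependencies and for broker publishing could look like the following. The artifact coordinates are the standard Pact JVM ones, but the versions and broker URL are placeholders to verify against Maven Central and your own broker account:

```xml
<dependencies>
  <!-- Consumer side: Pact JVM JUnit 5 consumer support -->
  <dependency>
    <groupId>au.com.dius.pact.consumer</groupId>
    <artifactId>junit5</artifactId>
    <version>4.6.5</version>
    <scope>test</scope>
  </dependency>
  <!-- Provider side: Pact JVM JUnit 5 provider support -->
  <dependency>
    <groupId>au.com.dius.pact.provider</groupId>
    <artifactId>junit5</artifactId>
    <version>4.6.5</version>
    <scope>test</scope>
  </dependency>
</dependencies>

<build>
  <plugins>
    <!-- Publishes target/pacts/*.json to the Pact Broker via `mvn pact:publish` -->
    <plugin>
      <groupId>au.com.dius.pact.provider</groupId>
      <artifactId>maven</artifactId>
      <version>4.6.5</version>
      <configuration>
        <pactBrokerUrl>https://your-account.pactflow.io</pactBrokerUrl>
        <pactBrokerToken>${env.PACT_BROKER_TOKEN}</pactBrokerToken>
      </configuration>
    </plugin>
  </plugins>
</build>
```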

Now, let’s write the consumer test to create a contract JSON file.

Consumer Test:
1. Add Pact configurations at the Consumer class
- @SpringBootTest — It is part of the Spring Boot Testing framework indicating the class should be treated as a Spring Boot Test
- @ExtendWith(PactConsumerTestExt.class) — ExtendWith is a JUnit 5 annotation used to register extensions. It wires the Pact extension into the test class so it can perform contract testing
- @PactTestFor — Specifies the provider against which the interactions defined in the test should be verified. The provider name (here, “CoursesCatalogue”) must match the name declared on the provider side; this is what binds the provider and consumer together

2. Prepare a mock response for what the course service is supposed to respond with.
- @Pact annotation is used to mark the method that generates a Pact between the consumer and provider
- PactDslWithProvider object is automatically injected by Pact and it provides a fluent DSL for constructing a Pact between the consumer and provider

3. Write the test, making a GET request with the resource, and write assertions for the status code and response.
- @PactTestFor — Indicates that this test is associated with the interactions defined above (the mock response)
- MockServer object is automatically injected by Pact and it provides the URL where the mock server is running
- setBaseURL — Override the host URL so that the API request is redirected to the Pact mock server

4. When the tests are run, a contract JSON file is created within the ‘target’ folder > ‘pacts’ folder, capturing the expected interactions from the provider.

5. Now, either manually share it with the provider or use the plugin in the build file to automatically upload the contract file to the Pact Broker (a service that stores contracts and verification results; PactFlow is a hosted option).
Maven goal — mvn pact:publish

Contract uploaded to the Pact Broker
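Putting steps 1–4 together, a consumer test could look like the following sketch. It is illustrative only: the endpoint, provider state, and field values are assumptions drawn from the case study above rather than the actual project code:

```java
import au.com.dius.pact.consumer.MockServer;
import au.com.dius.pact.consumer.dsl.PactDslJsonBody;
import au.com.dius.pact.consumer.dsl.PactDslWithProvider;
import au.com.dius.pact.consumer.junit5.PactConsumerTestExt;
import au.com.dius.pact.consumer.junit5.PactTestFor;
import au.com.dius.pact.core.model.RequestResponsePact;
import au.com.dius.pact.core.model.annotations.Pact;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.springframework.boot.test.context.SpringBootTest;

@SpringBootTest
@ExtendWith(PactConsumerTestExt.class)
@PactTestFor(providerName = "CoursesCatalogue")
class CoursesConsumerTest {

    // Step 2: the mock response the consumer expects from the provider.
    // integerType/stringType record the contract's type expectations.
    @Pact(consumer = "LibraryService")
    RequestResponsePact getCourseByName(PactDslWithProvider builder) {
        return builder
                .given("Course Appium exists")
                .uponReceiving("a request for the Appium course")
                .path("/getCourseByName/Appium")
                .method("GET")
                .willRespondWith()
                .status(200)
                .body(new PactDslJsonBody()
                        .integerType("price", 40)
                        .stringType("category", "mobile"))
                .toPact();
    }

    // Step 3: run the consumer code against the Pact mock server
    @Test
    @PactTestFor(pactMethod = "getCourseByName")
    void priceIsReturnedAsAnInteger(MockServer mockServer) throws Exception {
        HttpResponse<String> response = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(
                        URI.create(mockServer.getUrl() + "/getCourseByName/Appium"))
                        .GET().build(),
                HttpResponse.BodyHandlers.ofString());
        Assertions.assertEquals(200, response.statusCode());
        Assertions.assertTrue(response.body().contains("price"));
    }
}
```

Running this produces the contract JSON in `target/pacts`, ready for `mvn pact:publish`.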

Provider Test:

1. Add Pact configurations at the Provider class.
@Provider — Indicates the name of the provider service for which the contract test is conducted. The name specified on the consumer side in @PactTestFor must match the name specified here.
@PactBroker — Provides the Pact Broker account and token details used to download the contract files
@PactFolder(“pacts”) — If the contract files exist locally, provide the folder name here

2. Write the test to run the interactions from the contract file
@BeforeEach sets the host and port so the interactions are replayed against the real running microservice
@TestTemplate — Part of JUnit 5; it defines a template from which multiple tests are generated.
PactVerificationInvocationContextProvider — Each Pact test on the consumer side becomes an interaction in the contract file. This provider generates one test for each interaction found in the Pact files for this provider.
@State — Sets up the precondition or state the provider needs for a specific scenario. The state name is already configured on the consumer’s mock object, so the name on the provider side must match it.


3. Run the provider tests; they replay the interactions from the contract file against the actual service and verify the actual responses against the expected results recorded in the contract file.
4. Now, let’s explore how the contract test is a game-changer.
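Tying the provider-side annotations together, a verification test might look like this sketch; the broker URL, state name, and class names are illustrative, not taken from the actual project:

```java
import au.com.dius.pact.provider.junit5.HttpTestTarget;
import au.com.dius.pact.provider.junit5.PactVerificationContext;
import au.com.dius.pact.provider.junit5.PactVerificationInvocationContextProvider;
import au.com.dius.pact.provider.junitsupport.Provider;
import au.com.dius.pact.provider.junitsupport.State;
import au.com.dius.pact.provider.junitsupport.loader.PactBroker;
import au.com.dius.pact.provider.junitsupport.loader.PactBrokerAuth;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.TestTemplate;
import org.junit.jupiter.api.extension.ExtendWith;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.web.server.LocalServerPort;

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@Provider("CoursesCatalogue")  // must match providerName on the consumer side
@PactBroker(url = "https://your-account.pactflow.io",
            authentication = @PactBrokerAuth(token = "${PACT_BROKER_TOKEN}"))
class CoursesProviderTest {

    @LocalServerPort
    int port;

    // Point Pact at the real running service before each interaction
    @BeforeEach
    void setTarget(PactVerificationContext context) {
        context.setTarget(new HttpTestTarget("localhost", port));
    }

    // Generates one test per interaction found in the downloaded contracts
    @TestTemplate
    @ExtendWith(PactVerificationInvocationContextProvider.class)
    void verifyPact(PactVerificationContext context) {
        context.verifyInteraction();
    }

    // Satisfies the provider state declared by the consumer
    @State("Course Appium exists")
    void courseAppiumExists() {
        // seed test data here if needed
    }
}
```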

Suppose, at the provider, the price type is updated from INT to String for some reason (e.g., to include the currency sign). This change would break at the consumer because, as discussed above, the consumer (library) service reads this price and sums it up with its resource prices. The contract test will fail, indicating that the consumer is expecting the price to be an INT. This information is then passed on to the consumer to update the changes at their end, ensuring a smooth release without any issues in later stages.

We can consider another scenario: the product the consumer requests doesn’t exist on the provider side, and the consumer expects a response code of 404. If, for some reason, the producer later changes this to respond with 200 and an empty body, the consumer will break. Such discrepancies, too, are caught by the contract test.

Contract tests over schema validation :
There is a presumption that schema validation can solve these issues, but contract testing goes further. Take the status-code scenario above; also, the consumer is often interested in only a few of the thousands of values in the producer’s response. With schema validation, the full schema has to be maintained, and changes on the producer side that are irrelevant to the consumer still show up as failures. Moreover, in the agile world, with producers and consumers making changes simultaneously, schema validation leaves room for issues to slip through.

References:
- Pact Docs — https://docs.pact.io/
- https://pactflow.io/how-pact-works/#slide-1
- Rahul Shetty Contract Testing Course
