Continuous Testing of APIs
A strategy to ensure all of your APIs are tested, all the time.
Software development is increasingly moving towards an API-driven world. If you follow what we write and talk about at Postman, you have probably come across this statement multiple times. We cannot emphasize it enough.
APIs are the building blocks of today's large software systems. More and more companies are moving towards an API-first approach, and building systems as APIs is becoming a business decision instead of only a technology decision. Ensuring stability, security, performance, and availability is a high priority in this scenario. These shifts have made API testing a first-class objective when building and shipping APIs.
At Postman, we have a unique view of this evolving landscape, thanks to our lovely community. We see API testing strategy becoming a necessary part of the API design lifecycle. Just as designing the interface for a service or product’s API is not an afterthought for an API-first organization like ours, neither is designing a resilient testing system for those APIs. My colleague Joyce and I have been talking about this topic since January of 2019, and it is high time that we actually write about it.
A collection of strategies
This publication already contains quite a few articles on test automation of APIs. I have talked about how integration testing of APIs changes in the context of a service-oriented architecture. We followed that up with how Postman sets you up on the path to better automation. We have dived into consumer-driven contracts and how they can help you get over microservice dependency hell. We have also written about how snapshot testing improves reliability guarantees of APIs.
There are a few practices common to all of these solutions. At their core, they are workflows around a set of tools which, when used in a certain way, solve a set of problems that API producers and consumers face regularly. None of these testing patterns is very useful without rigor.
APIs represent business domain requirements, and they change as those requirements evolve. An API’s lifecycle needs to evolve in harmony with the systems that depend on it and the systems it depends on. APIs have to stay flexible, yet not break things as they grow and scale.
Need for a tight feedback loop
Building APIs with an API-first model takes care of the design aspect of a distributed system. This is especially relevant in the context of microservices. That design and development work needs to be supported by a resilient testing system that allows you to react to changes in code or business requirements for your APIs.
You need to know when your APIs fail, why they failed, and you need a tight feedback loop to alert you as soon as possible. So, how do you go about building an API testing pipeline that satisfies all these requirements?
Your API testing pipeline needs three key stages:
- Well-defined tests for your APIs.
- The ability to run those tests on-demand and on a schedule.
- Reporting of passes and failures to alerting and analytics systems.
Writing good tests
A testing system is only as good as its tests, so everything begins with well-written tests. When it comes to testing APIs, you need to assert on the responses sent by the application.
You can test for the response’s data structure, the presence (or absence) of specific parameters, response timing, headers, cookies, and status. If your API is not over HTTP, these semantics may vary. That is a larger discussion, but the core things you would test in a response remain the same.
All of these need good test cases, and those test cases should map to business requirements: user stories, user journeys, or end-to-end workflows. You can document test cases as BDD specs, as epics and stories in your product management platform, or as Postman Collections.
You can pick any tool of your choice as long as you are able to author such tests, preferably collaboratively, and execute those tests when you need.
Run your tests on-demand or on schedule
This is the key to continuous testing. To get to this phase, you need to have a continuous integration (CI) pipeline in place. Assuming you have one, you will want to run some of your API tests at build time, and some of your tests on a regular schedule. The cadence will vary based on the scale of your systems and the frequency with which code changes are committed.
On-demand runs: You would run contract tests, integration tests and end-to-end tests in your build system. Code changes, merges and release flows are the typical sources that trigger build pipelines. Depending on how your build pipelines are set up, each stage of tests can be run only after the previous stages pass. The illustration below shows how Postman’s continuous deployment pipelines are set up.
Scheduled runs: You would then want to run some tests at regular intervals against your staging and production deployments to ensure everything works as expected. This is where you would run API health checks, DNS checks, security checks, and any infrastructure-related checks. For example, you may test that an upstream dependency API responds with the proper data structure, or that your cloud security permissions are in place. You can even test for something as simple as the response time of your API endpoints.
Combining the power of the two: When you run both scheduled and on-demand tests on your APIs, you end up with complete test coverage. On-demand tests prevent broken APIs from shipping out. Scheduled runs ensure your APIs remain performant and retain quality after being integrated into the larger system or deployed to production.
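The stage gating described for on-demand runs can be sketched as a small sequential runner: each suite executes only after the previous one passes. The stage names and the stubbed runner functions below are hypothetical stand-ins for your real build steps (for example, a newman invocation per stage).

```javascript
// Sketch of staged build-time test runs: a later stage only runs after the
// previous stages pass. The stages and their runners are illustrative stubs.
const stages = [
  { name: "contract tests", run: () => true },    // stub: replace with a real suite run
  { name: "integration tests", run: () => true }, // stub
  { name: "end-to-end tests", run: () => true },  // stub
];

function runPipeline(stages) {
  for (const stage of stages) {
    const passed = stage.run();
    console.log(`${stage.name}: ${passed ? "passed" : "failed"}`);
    if (!passed) return false; // stop: later stages never run after a failure
  }
  return true;
}

runPipeline(stages);
```

In a real CI system this ordering is usually expressed in the pipeline configuration itself rather than in code, but the logic is the same: a failed contract test short-circuits the more expensive suites behind it.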
Analytics and alerting
Now that you have data generated from the tests, you will want to make use of it. The third stage of a resilient API testing pipeline is connecting it to alerting and analytics systems.
Alerting systems let your stakeholders know the moment a system fails; in this case, that means failed tests. You can use services like PagerDuty or BigPanda here. If you use Slack at work, you can also push notifications to Slack.
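As one concrete illustration, a failed run can be turned into a Slack incoming-webhook message, which accepts a JSON body with a `text` field. The failure data and the webhook URL below are assumptions; the actual HTTP call is left commented out so the sketch stays side-effect free.

```javascript
// Sketch: turning failed test results into a Slack incoming-webhook payload.
// The failure objects and webhook URL are illustrative placeholders.
function buildSlackAlert(runName, failures) {
  const lines = failures.map(f => `• ${f.test}: ${f.reason}`);
  return {
    text: `API test run "${runName}" had ${failures.length} failure(s):\n` +
      lines.join("\n"),
  };
}

const payload = buildSlackAlert("nightly health checks", [
  { test: "GET /users returns 200", reason: "got 503" },
]);

console.log(JSON.stringify(payload, null, 2));

// To actually send it (URL is a placeholder for your own webhook):
// fetch("https://hooks.slack.com/services/YOUR/WEBHOOK/URL", {
//   method: "POST",
//   headers: { "content-type": "application/json" },
//   body: JSON.stringify(payload),
// });
```

The same shape works for most notification targets: collect failures, format a human-readable summary, and POST it to the service's webhook endpoint.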
Analytics systems give you a view of system health, performance, stability, resiliency, quality, and agility over time. If you follow a maturity model for your services, this data will feed into it as well. It can also enrich the product design and product management roadmap by providing important metrics on what works and what doesn’t. Piping this data back to product management closes the feedback loop I mentioned earlier.
Continuous Testing with Postman
The philosophy of continuous testing is tool agnostic. But, if you were to implement continuous testing in your organization with Postman, this is how you would go about it. Let’s map the three key stages of an API testing pipeline that I mentioned above to the features Postman gives you.
- Writing good tests — Collections: This is where Postman Collections come in. A collection is a group of API requests that can be executed in one go. You can write tests for each request and for groups of requests, and Postman tells you how many of these tests pass or fail when you run the collection. You would have a collection for each of your test suites. For example, you would have a collection that runs contract tests of a given consumer’s expectations against a producer, and a separate collection that runs health checks for that service.
- Running tests — newman and Monitors: You would integrate newman, Postman’s command-line collection runner, into your CI systems and run your collections on-demand as part of your CI pipeline. These runs happen on your own setup. Monitors, on the other hand, let you schedule collection runs at pre-defined intervals and can run across multiple regions worldwide. These run on Postman’s hosted cloud infrastructure.
- Analytics and alerting — Integrations & custom requests: Postman has pre-defined integrations with external services. Your monitor runs can integrate with analytics systems and keep tabs on runs over a period of time. There are also integrations with notification systems that can alert you whenever monitor runs fail. Beyond these, you can always include requests in your collection that push data to third-party services. This is useful when you are running your collections on your own infrastructure using newman.
This workflow will enable you to get the rigor that I alluded to earlier. You will have a strong pipeline and a resilient process that will ensure all the systems in your services function well with each other.
Finally, here is a rather large and poster-worthy illustration showing how these items fit in place conceptually.