Acing your API tests — what you need to know for test automation

We’ll break it down, with examples, so you can test early and often.

Joyce Lin
Better Practices
8 min read · Nov 26, 2018


As Quality Engineers, we set the quality standard on our products and hold our development teams accountable.

Writing tests for our microservices keeps our products running smoothly. We would not be able to release with the high level of confidence we currently have without these tests in place to ensure that our services are running and integrated as expected.

Trent McCann, Lead Quality Engineer at Cvent

Part 1: API tests

Part 2: Integration tests

Part 3: Other stuff that people talk about when writing tests

Part 4: A recipe for writing tests in Postman

As our applications and services grow, it becomes increasingly time-consuming to run regression tests on the existing codebase every time new functionality is introduced.

⚠️ The testing community sometimes conflates common testing terminology or uses terms interchangeably. However your team decides to define particular terms, first understand why you’re running each type of test.

Part 1: API tests

API test for a single API request

When some people talk about “unit tests” for an API, they are likely referring to a test of a single HTTP API request. Just as software unit tests assess the smallest unit of functionality, you can think of a single-request API test as a unit test.

With this type of testing, you may or may not have access to the underlying code. However, you can validate that an API behaves as expected from the user’s perspective.

Testing a publicly facing API through your UI is not enough!

Amber Race, Senior SDET at Big Fish Games

Examples of API tests

  • Status: the correct response code was returned
  • Performance: the response was returned within a certain time
  • Syntax: the content type of the returned content was as expected
  • Syntax: the server accepts correct input
  • Error handling: the server rejects incorrect input
  • Error handling: excluding a required parameter results in an error
  • Error handling: submitting incorrect data types results in an error
  • Error detection: negative testing to identify exceptions and race conditions
  • Schema: the response payload conforms to an expected structure or format
  • Functional: the server returns a predictable value based on the input condition
  • Functional: the request predictably modifies a resource
  • Security checks: SQL injection attempts do not result in data leakage
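
Here is how a few of the checks above might look in a Postman test script (test scripts run as JavaScript in Postman’s sandbox). This is a minimal sketch; the 200 status, the 200 ms budget, and the schema are placeholder assumptions for illustration, not values from any particular API:

    pm.test("Status: the correct response code was returned", function () {
        pm.response.to.have.status(200);
    });

    pm.test("Performance: the response was returned within 200 ms", function () {
        pm.expect(pm.response.responseTime).to.be.below(200);
    });

    pm.test("Syntax: the content type is JSON", function () {
        pm.expect(pm.response.headers.get("Content-Type")).to.include("application/json");
    });

    pm.test("Schema: the payload matches the expected structure", function () {
        const schema = {  // placeholder schema for illustration
            type: "object",
            required: ["id", "name"],
            properties: {
                id: { type: "integer" },
                name: { type: "string" }
            }
        };
        pm.response.to.have.jsonSchema(schema);
    });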

Tests are the documentation for your code. They should document the desired functionality of the software, and also address edge cases and error scenarios.

Valentin Despa, Software Developer at AOE

Part 2: Integration tests

Integration tests are built across multiple units to verify that components work together as expected. They can encompass two or more individual components or endpoints, and can involve internal services, third-party services, and other external dependencies.
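
In Postman, a common way to exercise two endpoints together is to pass data between requests through variables. A minimal sketch, assuming a hypothetical orders API:

    // Test script on the first request (e.g. POST /orders): save the new
    // resource's id so the next request in the collection can verify it.
    pm.test("Order was created", function () {
        pm.response.to.have.status(201);
    });
    pm.collectionVariables.set("orderId", pm.response.json().id);

    // Test script on the second request (e.g. GET /orders/{{orderId}}):
    // confirm the two endpoints agree about the resource just created.
    pm.test("Created order can be fetched", function () {
        pm.response.to.have.status(200);
        // collection variables are stored as strings, so compare string to string
        pm.expect(String(pm.response.json().id)).to.eql(String(pm.collectionVariables.get("orderId")));
    });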

We write tests because we want them to fail someday, to warn us that something in our application has changed or behaves differently. While this seems rather obvious, tests that never fail are quite common.

It can be that the test run report is not properly understood by the CI/CD tool and marked as passed, or that assertions themselves are not executed, are faulty, or too permissive. So when you write a test, make sure it can fail.

Valentin Despa, Software Developer at AOE

Setup and teardown

Sometimes your test cases require some initial groundwork to prepare your test environment. Perhaps you’re generating new users, creating authentication tokens, or simply initializing variables.

After your tests run, you may need to clean up the test conditions so that you’re not littering your test environment with new users, records, and other side effects.

Automate the setup and teardown of your test conditions

Automating the setup and teardown steps allows you to quickly re-create your test conditions in a consistent way so that you can repeat your tests quickly and easily. Creating repeatable tests allows you to more efficiently tweak other variables, isolate the system under test, and observe the results in a scalable manner.
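
In Postman, setup typically lives in a pre-request script and teardown in a test script. A sketch, assuming a hypothetical /users endpoint and a baseUrl environment variable:

    // Pre-request script (setup): create a throwaway user and stash its id
    // so the requests under test have a known record to work with.
    pm.sendRequest({
        url: pm.environment.get("baseUrl") + "/users",
        method: "POST",
        header: { "Content-Type": "application/json" },
        body: { mode: "raw", raw: JSON.stringify({ name: "test-user-" + Date.now() }) }
    }, function (err, res) {
        if (!err) {
            pm.environment.set("testUserId", res.json().id);
        }
    });

    // Test script (teardown): delete the user so no records are left behind.
    pm.sendRequest({
        url: pm.environment.get("baseUrl") + "/users/" + pm.environment.get("testUserId"),
        method: "DELETE"
    }, function (err, res) {
        pm.environment.unset("testUserId");
    });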

Scenario tests

Make sure your application is robust and stable across a spectrum of scenarios. Testing both expected and unexpected use cases is critical to a good user experience. Visualize the user’s workflow and think about how they interact with the application.

Write and run your API tests in a sequence that mirrors a typical user workflow, commonly captured in user stories and business requirements. Testers should also identify test cases for atypical user behaviors.
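
When the collection runs, Postman’s setNextRequest lets you steer execution to mirror a workflow, or branch into an edge-case path. A sketch using hypothetical request names:

    // Test script: after a successful login, continue down the happy path;
    // otherwise jump to the requests that probe the failure scenario.
    if (pm.response.code === 200) {
        postman.setNextRequest("Get user profile");          // hypothetical request name
    } else {
        postman.setNextRequest("Login with bad credentials"); // hypothetical request name
    }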

Generally, I take a black-box approach — follow the happy path to ensure it meets the defined functional specs, then start going off that path. Try negative and edge-case scenarios to see how the application will respond.

A good rule of thumb to keep in mind is: if a user can do it, they will at some point actually do it, no matter how obscure it may seem.

Trent McCann, Lead Quality Engineer at Cvent

Part 3: Other stuff that people talk about when writing tests

Performance tests

You can also run your test cases under increasing load to performance-test your APIs. Raising concurrent load in controlled increments allows you to identify bottlenecks and validate service-level agreements (SLAs) for internal APIs or public services.

Exploratory load testing to gain a deeper understanding of your systems

Mocking dependencies

If your test cases involve shared or production resources, consider using mock services. Mocking the actual service gives you a predictable response, so you can isolate the component under test when debugging issues.

You can also use mocks to simulate error conditions or rare instances that might be difficult or problematic to set up in a production environment.
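
With a Postman mock server, for instance, you can request a saved error example on demand; the x-mock-response-code header selects which saved example the mock returns. The mockUrl variable and the endpoint below are placeholders:

    // A sketch: ask the mock to play back a saved 500 example so you can
    // rehearse an error condition that is hard to trigger in production.
    pm.sendRequest({
        url: pm.environment.get("mockUrl") + "/orders/42",  // hypothetical endpoint
        method: "GET",
        header: { "x-mock-response-code": "500" }
    }, function (err, res) {
        pm.test("Simulated server error is handled", function () {
            pm.expect(res.code).to.eql(500);
        });
    });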

Contract testing

As organizations continue to transition to a microservices architecture, teams become increasingly dependent on internal services. An API-first approach puts the design of the application programming interface (API) before its implementation.

Consumer-Driven Contract Testing (CDC Testing) ensures that your service doesn’t break when a service that you rely on is updated. This type of contract testing requires the consumer of the service to define the format of the expected responses, including any validation tests. The provider must pass these tests before releasing any updates to the service.

Consumer-Driven Contract Testing ensures that your service doesn’t break when a service that you rely on is updated
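
In practice, the consumer’s side of the contract often boils down to schema assertions that the provider must keep passing. A minimal sketch, with a placeholder schema standing in for the agreed response format:

    // Consumer-defined contract, stored where the provider's pipeline can run it.
    // The fields and types here are illustrative, not a real service's contract.
    const contract = {
        type: "object",
        required: ["orderId", "status", "total"],
        properties: {
            orderId: { type: "integer" },
            status: { type: "string", enum: ["pending", "shipped", "delivered"] },
            total: { type: "number" }
        }
    };

    pm.test("Provider still honors the consumer's contract", function () {
        pm.response.to.have.jsonSchema(contract);
    });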

We take an API-First Design approach to ensure that our API specifications are defined and approved before cutting any code. This provides us with a unified design to build upon, and allows our developers and QE to simultaneously develop our application and tests, thereby avoiding the waiting period before the handoff from dev to QE.

We leverage the Postman mock service to ensure that our tests are developed and ready for use when handoff takes place. Then it is simply a matter of swapping variables and our QEs are off to the races.

Trent McCann, Lead Quality Engineer at Cvent

Regression testing

Run initial smoke tests to verify that your build is stable enough to proceed with more rigorous regression testing.

Regression tests can include unit tests, integration tests, or both. Run your entire test suite after a patch, upgrade, or bug fix. Even if you’re working on a part of the code that you don’t believe impacts a previous patch, run your regression tests to ensure that new functionality doesn’t break something that was previously working.
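
Newman, Postman’s command-line collection runner, can script this gating through its Node.js library, which also lets it slot into a CI pipeline. A sketch; the folder and file names are placeholders:

    // Run a small smoke-test folder first; only if it passes, run the full
    // regression suite. Folder and file names are hypothetical.
    const newman = require("newman");

    newman.run({
        collection: require("./regression.postman_collection.json"),
        environment: require("./staging.postman_environment.json"),
        folder: "Smoke tests"          // run just the smoke subset first
    }, function (err, summary) {
        if (err || summary.run.failures.length > 0) {
            console.error("Smoke tests failed; skipping full regression run");
            process.exit(1);           // a non-zero exit also fails a CI stage
        }
        newman.run({
            collection: require("./regression.postman_collection.json"),
            environment: require("./staging.postman_environment.json")
        }, function (err, summary) {
            process.exit(err || summary.run.failures.length > 0 ? 1 : 0);
        });
    });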

Tests give you confidence when adding new features and changing your code. They should not be a burden to write and maintain but a tool that can empower you as a development team to build better software.

Valentin Despa, Software Developer at AOE

Part 4: A recipe for writing tests in Postman

Now it’s time to get tactical! Let’s check out some examples.

To follow along with these and other examples, click the orange + New button in the top left of the Postman app. Under the Templates tab, search for Intro to writing tests — with examples and import the sample collection and environment into your instance of the Postman app.

Import this collection and follow along with these examples

Read through the descriptions in the Postman app for details, or check out the web documentation for step-by-step instructions and screenshots.

  • Basic test syntax
  • Considerations for using variables
  • Test automation in Postman
  • And examples aplenty!

Examples of writing tests in Postman

A final thought about writing tests

Once you’ve written your tests, the world is your oyster. Do more with your time, and continue your journey from manual to automated testing.

Couple your tests with a CI/CD pipeline to ensure no failure conditions are released into your server environments. Continue running your tests in production at regular intervals to monitor your endpoints and make sure they’re behaving as expected. Run exploratory or regular tests to assess the performance of your APIs with real-world loads.

Automating simple API tests and more complex integration tests allows you to catch issues earlier in the development cycle, when the cost of debugging is lowest. Running these tests in an easily repeatable manner allows you to scale your development with confidence as the code base grows.

If you have examples of tests that you use, you too can share them with the rest of the Postman community. Publish your examples in a template.
