API Test Automation and Postman

Saira Mughal
Published in motive-eng
6 min read · Dec 17, 2019

KeepTruckin is on a mission to connect the world’s trucks with our electronic logging devices (ELD) and fleet management platform. Among the fastest-growing SaaS companies to date, in only 12 months KeepTruckin has grown from $1M to over $50M in annual recurring revenue.

KeepTruckin products are ever-evolving, and continually enriched with salient features. Quality assurance is an increasingly crucial step in the process that delivers products at our high standards. Our QA team is constantly devising and revising the best policies, standards, and methodologies for all testing, with assiduous attention to detail.

Need for Automated API Testing

Every change in our product offering requires verification of all existing features. But because of our rapid growth, testing each unit is extremely resource-consuming in terms of both time and headcount. We have addressed this problem through test automation.

Automated software testing has been a boon for the QA world, and can be applied to regression testing, load testing, performance testing, and design verification. The QA Automation team at KeepTruckin has scoured external and internal sources for the best tools and practices to maximize its benefits. Our testing automation efforts focus on user interfaces (UIs), application program interfaces (APIs), and end-to-end tests covering embedded code.

In the past, automated API testing was not common. A QA engineer would spend hours creating tests for all services and plugging them into the main workflow, only to find unexpected results. They would then learn that the issue lay with the main building block: the APIs.

To eliminate these time-consuming nuisances, we have effectively automated our API testing. Our automation testing focuses on our APIs’ adherence to the expectations we set in terms of their functionality, reliability, performance, and application security. Read on to share in the learnings we have applied after intense research and experimentation with the tools and techniques for API testing.

API Testing Targets

API testing is included in our regression and load tests and is categorized as black-box testing. We’re looking to identify the following types of bugs in our product:

  • Duplicate or missing functionality
  • Unused flags
  • Failure to handle error conditions gracefully
  • Reliability issues (difficulty connecting to, or getting a response from, the API)
  • Security issues (does the API protect data exchanges?)
  • Multi-threading issues
  • Performance issues (slow API response times)
  • Improper errors/warnings returned to a caller
  • Incorrect handling of valid argument values
  • Incorrectly structured response data (JSON or XML)
  • Incorrect response data values

The Building Blocks of API Tests

1. Designing test cases

Designing test cases is a fundamental step in automation testing. Time invested in the test design phase can reduce test execution time and help to discover quality bugs.

“A pinch of probability is worth a pound of perhaps.” — James Thurber

Things to keep in mind when designing test cases so that you miss nothing:

  • Test for the typical or expected results first.
  • Add stress to the system through a series of API load tests. See how the API handles unforeseen problems and loads by throwing as much as you can at it.
  • Test for failure. Understand how your API will fail, and make sure it fails consistently and gracefully.
  • Isolate each test from as many external variables as possible.
  • Perform well-planned call sequencing.
  • For complete test coverage, create test cases for all possible API input combinations.
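That last point can be mechanized: given the value sets for each parameter, the full set of input combinations is a cartesian product. Here is a minimal JavaScript sketch (the parameter names and values are hypothetical, not from our actual test suite):

```javascript
// Generate every combination of API input values (a cartesian product),
// so that each combination can become a test case.
function combinations(params) {
  return Object.entries(params).reduce(
    (acc, [key, values]) =>
      acc.flatMap((combo) => values.map((v) => ({ ...combo, [key]: v }))),
    [{}]
  );
}

const cases = combinations({
  role: ["driver", "fleet_manager"],
  status: ["active", "deactivated"],
  page: [1, 2],
});
console.log(cases.length); // 2 * 2 * 2 = 8 test cases
```

Note that exhaustive combinations grow multiplicatively, so for large parameter spaces a selective strategy such as pairwise testing is a common fallback.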

2. Writing tests

We take a modular approach to writing our API tests. Each test is responsible for a single request and contributes to the testing of an endpoint. A test typically consists of the following parts:

  • Pre-requisite scripts:
    Before we execute a test, we lay the initial groundwork that creates the testing environment for a particular request. Two examples of prerequisites are authorizing the user and encrypting parameters before passing them. We use Postman’s pre-request script option to define the variables or functions required for our testing environment. Here’s an example of how we create a request URL using variables defined in our pre-request script:
A request URL created using variables defined in our pre-request script
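The screenshot isn’t reproduced here, but the idea can be sketched in plain JavaScript. In Postman, `pm.environment.get`/`pm.environment.set` manage these values and the URL references them as `{{base_url}}`; below, a Map stands in for the variable store so the logic runs anywhere, and all names and values are hypothetical:

```javascript
// Stand-in for a Postman pre-request script: populate variables,
// then build the request URL from them.
const env = new Map();
env.set("base_url", "https://api.example.com");
env.set("driver_id", "12345");

// Postman would resolve {{base_url}}/v1/drivers/{{driver_id}}/logs the same way.
const requestUrl = `${env.get("base_url")}/v1/drivers/${env.get("driver_id")}/logs`;
console.log(requestUrl); // https://api.example.com/v1/drivers/12345/logs
```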
  • Response validation:
    After test execution, analyzing the response is the heart of API testing. We use scripts to ensure that response objects contain accurate status codes and valid data. For micro-level testing, we use assertions to validate every record and field within a response object. Below is an example of how we validate the response to a request:
Validating the response to a request
  • Global functions and variables:
    For exhaustive testing, we make sure to validate each corner case of an API. This can require using common data across objects, or verifying the same validation in multiple response objects. To avoid code redundancy in these cases, we use global objects to define global variables and functions. The scope of these global objects is set at the ‘collection’ level in Postman (see “3. Grouping tests in collections” below to learn more about this feature). In the screenshot shown below, we created a library of global functions. These functions are called from multiple requests to validate the same type of response:
Defining a global function `ValidateResponse`
Using `ValidateResponse` function in tests
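The `ValidateResponse` implementation itself isn’t shown in the screenshots, but a helper in that spirit might look like the following sketch (field names and error handling are assumptions). In Postman, such a function would be defined once at the collection level and invoked from each request’s test script:

```javascript
// Shared validation helper: check the status code and the presence of
// required fields, throwing a descriptive error on any mismatch.
function validateResponse(response, expectedCode, requiredFields) {
  if (response.code !== expectedCode) {
    throw new Error(`expected status ${expectedCode}, got ${response.code}`);
  }
  for (const field of requiredFields) {
    if (!(field in response.body)) {
      throw new Error(`missing field: ${field}`);
    }
  }
  return true;
}

// The same helper can then be called from every request's test script:
const ok = validateResponse(
  { code: 200, body: { id: 7, status: "active" } },
  200,
  ["id", "status"]
);
console.log(ok); // true
```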

3. Grouping tests in collections

Postman ‘collections’ allow you to group related requests in a folder. These clustered collections reduce the time spent searching and maintaining test scripts. Another of their great advantages is that you can execute multiple tests as a single unit.

Grouping tests in collections

4. Environments

The Postman ‘environment’ variable is a great option for testing the same request at different stages (say, preview, staging, and production). An ‘environment’ is a group of variables defined in the form of key-value pairs.
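An exported environment is a small JSON file of those key-value pairs; a minimal, hypothetical staging environment might look like:

```json
{
  "name": "staging",
  "values": [
    { "key": "base_url", "value": "https://staging.example.com", "enabled": true },
    { "key": "api_key", "value": "<staging-api-key>", "enabled": true }
  ]
}
```

Switching from staging to production is then just a matter of selecting a different environment that defines the same keys.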

5. Sharing tests and version control

Collections and environments can be imported from and exported to JSON files. We push these JSON files to our official GitHub repository so that global teams can access and run the tests. We make these tests part of our continuous integration pipeline so that defects and bugs are caught during the development phase. It also helps that we maintain the version history of our testing scripts.

6. Running tests

As discussed above, API tests are grouped in collections, and environments are defined for every stage. These efforts save us a lot of time when we execute our regression tests. A tester need only select an environment and click ‘Run’, without specifying every detail each time. After test execution, Postman displays a report of passed and failed tests.

Another simple option is to run tests using Postman’s Newman command-line tool. Anyone can use the JSON files we’ve made available in the KeepTruckin official repository and run the API tests with Newman.
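For example, assuming collection and environment exports named as below (the file names are hypothetical), a Newman run looks like:

```
newman run kt_api_tests.collection.json \
  -e staging.environment.json \
  --reporters cli,json
```

Newman exits with a non-zero code when any test fails, which is what makes it straightforward to wire into a CI pipeline.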

7. Test reports

When using assertions, we found it crucial to add meaningful messages. This way, the reports generated by the collection runner contain meaningful result statements. Here is an example:

Collection runner results

8. Load testing

Many users are on KeepTruckin concurrently, so load and performance testing is essential. After verifying API calls using Postman, we use JMeter to validate required response times and results. JMeter examines how the system behaves under normal and high loads, and determines whether applications can handle a high volume of concurrent end users.

API testing allows us to execute requests that may not be possible through the UI; such tests are instrumental in exposing potential bugs.

Our QA Journey

This post is only a high-level summary showcasing a small stretch of our KeepTruckin Quality Assurance journey. It is not the end. Check back often, and we will soon share our optimized testing techniques.
