Thoughts on automated API integration testing

API flows, or API integration tests, are sequences of API requests in which each request depends on the output of the previous ones.

The simplest example is:
1. Perform a request to authenticate and get a bearer token
2. Use the token to perform another API request

Whether you're a backend or a frontend developer, learning how to quickly test flows in an API can be very useful. It can save you a lot of time pinning down a problem, clarify expectations, guard against regressions and even serve as documentation for the API.

While it is possible to use tools like Postman to write and test requests against your server, I find it much simpler, especially if you're a programmer, to write your tests in JavaScript code, using libraries such as Mocha and Chai.
Writing code to test your API gives you much more flexibility, reuse and version control.

In this post I will show how to create a small test project.
Make sure you have Node installed in your environment before starting.

First, create a folder named `github-api-testing`, open it in a terminal window and run 'npm init -y'.
This will create a 'package.json' file, which helps npm manage all the external libraries and dependencies we will need.
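On a recent version of npm, the generated package.json looks roughly like this (the exact fields and values may differ for your setup):

```json
{
  "name": "github-api-testing",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC"
}
```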

Tools

Run 'npm install --save mocha chai supertest supertest-as-promised'.
This will install the external packages required for the tests to run, and register them as dependencies in our package.json file.

I'll explain a little about each of these packages:

Mocha is the testing framework we are about to use. It defines a set of keywords to identify test functions, group several tests together and run code before and after each test.
It also serves as the test runner, which monitors the test files, runs all the tests and reports the success or failure of each case.

Chai is an assertion library. It allows us to assert that our code behaves as expected, while keeping the tests readable and easy to understand.

Supertest is an assertion library for HTTP requests. It performs an HTTP request against the API and allows us to set expectations, like checking the response's headers, status and body.
Other libraries, like hippie, api-easy, chai-http and frisby provide similar functionality.

Supertest-as-promised is a wrapper around Supertest, which gives a Promise interface to the request object, so we can use methods like .then() and .catch() in our tests instead of passing callback functions.
This matters most when doing flow testing, because it lets you define the sequence in a more readable syntax.
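To illustrate the difference, here is a small sketch using plain promises and hypothetical step functions (standing in for real API calls), contrasting the nested callback style with a flat .then() chain:

```javascript
// Hypothetical step functions standing in for API calls.
// endLogin mimics the callback style of supertest's .end(function (err, res) { ... })
function endLogin(callback) {
  setTimeout(() => callback(null, { token: 'abc' }), 0);
}
function endProfile(token, callback) {
  setTimeout(() => callback(null, { login: 'octocat' }), 0);
}

// Callback style: each step nests inside the previous one.
function flowWithCallbacks(done) {
  endLogin(function (err, auth) {
    if (err) return done(err);
    endProfile(auth.token, function (err, profile) {
      if (err) return done(err);
      done(null, profile.login);
    });
  });
}

// Promise style (what supertest-as-promised enables): the sequence reads top to bottom.
const login = () => Promise.resolve({ token: 'abc' });
const getProfile = (token) => Promise.resolve({ login: 'octocat' });

function flowWithPromises() {
  return login()
    .then((auth) => getProfile(auth.token))
    .then((profile) => profile.login);
}
```

With more than two steps, the callback version keeps drifting to the right, while the promise version stays a flat chain.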

Writing the tests

For the purpose of the demo, we will test GitHub's public API and check the following flow:

1. Use the ‘/users?since={userId}’ API to get the username of a specific user
2. Use the ‘/users/{username}/repos’ API to get the list of this user’s repositories
3. Call the ‘/repos/{username}/{repository}/contributors’ API to query the list of contributors for a repository belonging to that user

Now that the libraries are in place, let’s create an ‘integration-tests’ folder in which we’ll place our test files.

Create a file named ‘get-user-repositories.test.js’ and add the following code:

const request = require('supertest-as-promised');
const chai = require('chai');
const { assert, expect } = chai;
chai.should();

This snippet imports the libraries we installed before, and lets us use them inside our code.

We'll now define a base supertest 'request' object, and the functions that will use it to invoke the API requests:

const baseRequest = request('https://api.github.com');

function getUsersSinceId(userId) {
  return baseRequest
    .get(`/users?since=${userId}`)
    .set('Accept', 'application/json')
    .expect('Content-Type', /json/)
    .expect(200)
    .then(function (result) {
      const users = result.body;
      expect(users).to.be.an('Array', "Couldn't get users");
      return users;
    });
}

const getFirst = (items) => items[0];

function getRepositoriesForUser(username) {
  return baseRequest
    .get(`/users/${username}/repos`)
    .set('Accept', 'application/json')
    .expect('Content-Type', /json/)
    .expect(200)
    .then(function (result) {
      const repos = result.body;
      expect(repos).to.be.an('Array', "Couldn't get repositories");
      return repos;
    });
}

function getContributorsForRepository(username, repository) {
  return baseRequest
    .get(`/repos/${username}/${repository}/contributors`)
    .set('Accept', 'application/json')
    .expect('Content-Type', /json/)
    .expect(200)
    .then(function (result) {
      const contributors = result.body;
      expect(contributors).to.be.an('Array', "Couldn't get contributors");
      return contributors;
    });
}

Each of the above functions performs a network request to the API and asserts certain expectations, like headers, status code and the returned data structure.
We will use these functions as Lego blocks for our flow tests, combining them in a specific order.

Let’s add the code to test our desired flow:

describe("Verify user's repository has any contributor", function () {
  this.timeout(5000);
  this.slow(1000);

  const state = {};
  state.passed = true;

  afterEach(function () {
    state.passed = state.passed &&
      (this.currentTest.state === "passed");
  });

  beforeEach(function () {
    if (!state.passed) {
      return this.currentTest.skip();
    }
  });

  it('should get username for user-id 7034093', function () {
    return getUsersSinceId(7034093)
      .then(getFirst)
      .then((user) => state.username = user.login);
  });

  it('should get first repository name', function () {
    return getRepositoriesForUser(state.username)
      .then((repos) => state.repos = repos)
      .then(getFirst)
      .then((repo) => state.repo = repo.name);
  });

  it('should get repository contributors', function () {
    return getContributorsForRepository(state.username, state.repo)
      .then((contributors) => contributors.should.have.length.above(0));
  });
});

The ‘describe’ function groups several tests together.

We use this.timeout() and this.slow() to specify the durations (in milliseconds) after which a test is considered slow or timed out.
Slow tests will be marked in yellow by Mocha's ‘spec’ reporter.
Timed-out tests will be considered failures.

Then we create a state object, which will be used to pass the needed data from one test to the next.

We define a ‘passed’ property to capture the state of the last test, and use the ‘afterEach’ hook to update it after each test runs.
We can then use the ‘beforeEach’ hook to skip all remaining tests if one of them has failed.

We can now define the tests using ‘it’ functions; each has a title and a body.
Mocha will run the tests in the order they are defined, reporting a success/failure status under each test's title.

We perform the API calls from within our test functions, using the ‘Lego blocks’ we created before, and we save the returned value from each call on the state object, so the next tests can use that data.

To run the tests, run ‘node_modules/.bin/mocha integration-tests/**/*.js’ (or simply ‘mocha integration-tests/**/*.js’ if you have mocha installed globally).
We can also define this as an npm script, by editing the scripts section of the package.json file:


"scripts": {
  "test": "mocha integration-tests/**/*.js"
},

Now you can simply run ‘npm test’ to execute the tests.

The output will be something like:

[screenshot: mocha ‘spec’ reporter output showing the three passing tests]

Conventions

Usually, when writing unit tests, you want the tests to be as independent of each other as possible.

I could have written the flow test in the same manner, eliminating the need for the ‘state’ object and the ‘beforeEach’ / ‘afterEach’ hooks, and putting all the code under one test function:

it('should verify user\'s repository has any contributor', function () {
  return getUsersSinceId(7034093)
    .then(getFirst)
    .then((user) => user.login)
    .then((username) => {
      return getRepositoriesForUser(username)
        .then(getFirst)
        .then((repo) => repo.name)
        .then((repoName) => {
          return getContributorsForRepository(username, repoName)
            .then((contributors) => contributors.should.have.length.above(0));
        });
    });
});

The main reason I chose to split each step of the flow into a separate test (‘it’) function is that I feel it gives the most readable report and helps catch the exact point where the flow failed.
The stack traces and error messages you get from mocha and supertest when dealing with promises are not clear enough, so you want the test report to be as informative as possible and show you exactly where the flow broke.

Another thing worth mentioning, and different from how I would write unit tests, is that some of the assertions were written inside the ‘Lego block’ functions that actually perform the HTTP requests, while others were made inside the actual ‘it’ functions.
My thinking in separating the code this way was that in integration / flow testing, I might have the same blocks of functionality repeated in several different flows. For example, all my flows might start with a ‘login’ request.
So I tried to build those low-level blocks to be as general as possible, and to only include assertion logic related to the network and the structure of the response.
The actual flow, and assertions related to business logic, are performed inside the test function itself (like asserting the number of contributors, in our example).

We can even go further and structure our files accordingly, extracting the low-level blocks into a separate file (e.g. `api-units.js`) and having the tests import the needed functions from there.

github-api-testing/
├── integration-tests/
│   ├── api-units.js
│   └── get-user-repositories.test.js
└── package.json
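As a rough sketch (using the file names from the tree above; not a complete listing, and assuming supertest-as-promised is installed), `api-units.js` would export the request helpers and the test file would require them:

```javascript
// api-units.js
const request = require('supertest-as-promised');

const baseRequest = request('https://api.github.com');

function getUsersSinceId(userId) {
  return baseRequest
    .get(`/users?since=${userId}`)
    .set('Accept', 'application/json')
    .expect('Content-Type', /json/)
    .expect(200)
    .then((result) => result.body);
}

// ...getRepositoriesForUser and getContributorsForRepository as before...

module.exports = { getUsersSinceId /* , ...the other helpers */ };

// get-user-repositories.test.js
// const { getUsersSinceId } = require('./api-units');
```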

Conclusion

Writing integration tests for your API can be really easy and useful; you can run them as part of your development or deployment process, and they serve as good documentation as well.
The structure of integration tests can differ from how you'd write unit tests, and it's worth investing the effort to plan it for your needs.

The full code for this post can be found at https://github.com/avivr/github-api-tests


If you have any questions, comments, ideas for improvements or you want to share how you do API testing — please feel free to reach out.