Integration test Node.js and MySQL/PostgreSQL using docker-compose — CI/CD

Matt Goodson
7 min read · Mar 17, 2020

Integration testing is a vital part of running a continuous-delivery pipeline. While unit tests help make sure small code units are functioning correctly, they do very little to ensure that your APIs or database queries are actually meeting your business requirements. Modern SaaS products can deliver new code to customers multiple times a day, so testing this functionality needs to be automated.

In this article I won’t go too far into how you should write integration tests, but will instead focus on how such tests can be run on a CI/CD server for an application using Node.js and a relational database such as MySQL or PostgreSQL. The approach can be extended to other services such as message queues, caches, microservices, NoSQL databases, etc. by adding more containers to docker-compose.

The technologies we will use in this tutorial are:

  • Node.js 10
  • MySQL 5.7
  • Jest
  • Docker Compose
  • Knex.js (migration system only)
  • Google Cloud Build (CI build service)

Note that most of these technologies could be swapped out for an equivalent depending on your tech stack. For instance, instead of Google Cloud Build you could use CircleCI or Jenkins, and instead of MySQL you could use PostgreSQL. Knex.js could be swapped out for any migration system, including an ORM if you are using one. Jest could be swapped for Mocha or Jasmine. The process in any case should be roughly the same.

How will it work?

The general process for running the tests will look like the following:

  1. Code is committed to your Git repo
  2. A CI build is triggered on Google Cloud Build
  3. This build spins up a local instance of your database on the CI server using docker-compose
  4. Migrations and seeds are run with Knex.js to create the table structure in the database
  5. The Jest integration tests are run against the local database instance
Simple representation of the CI build flow

Step 1: Writing a test

const mysql = require('promise-mysql');
const userService = require('./services/users.service.js');
const config = require('./config');

let pool;

/**
 * Initialise the MySQL connection pool
 */
const openConnection = async () => {
  pool = await mysql.createPool({
    host: config.host,
    port: config.port,
    user: config.user,
    password: config.password,
    database: config.database,
  });
};

/**
 * Close all db connections
 */
const closeConnection = async () => {
  await pool.end();
};

/**
 * Test the login service
 */
describe('Login', () => {
  let user;

  beforeAll(async () => {
    // open database connection
    await openConnection();
    // create a user in the database
    user = await userService.createUser();
  });

  afterAll(async () => {
    // make sure connection is closed when tests finish
    await closeConnection();
  });

  it('succeeds with correct credentials', async () => {
    const response = await userService.login({
      email: user.email,
      password: user.password,
    });
    expect(response.body.token).toBeDefined();
    expect(response.body.user_id).toBe(user.user_id);
  });

  it('fails with incorrect credentials', async () => {
    // incorrect email
    const badEmailResponse = await userService.login({
      email: 'incorrect@email',
      password: user.password,
    });
    expect(badEmailResponse.body.token).toBeUndefined();

    // incorrect password
    const badPasswordResponse = await userService.login({
      email: user.email,
      password: 'incorrect password',
    });
    expect(badPasswordResponse.body.token).toBeUndefined();
  });
});

This test is only a very simple example. In a real application, the logic for opening and closing database connections would be abstracted out to be reused across all tests.

Notice that we create a real user instance in the database. There is no mocking of our service. Also notice that we close the connection to the database after all tests are run. This ensures the tests will finish properly without hanging on to a connection.
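
As a rough illustration, the connection logic above could live in a shared helper that every test suite imports. The file path and the shape of the config module are assumptions for illustration, not from the original article:

// test/helpers/db.js — a minimal sketch of a shared connection helper
const mysql = require('promise-mysql');
const config = require('../config');

let pool;

const openConnection = async () => {
  // reuse the pool if a previous suite already created it
  if (!pool) {
    pool = await mysql.createPool({
      host: config.host,
      port: config.port,
      user: config.user,
      password: config.password,
      database: config.database,
    });
  }
  return pool;
};

const closeConnection = async () => {
  if (pool) {
    await pool.end();
    pool = undefined;
  }
};

module.exports = { openConnection, closeConnection };

Each test file can then call these helpers from its beforeAll/afterAll hooks instead of duplicating the pool setup.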

Step 2: Setup structure

Our directory structure will look like this:

db/
  knexfile.js
  migrations/
  seeds/
test/
  MyTests/
  docker/
    docker-compose.yml
    migrate-mysql.dockerfile
    package.json
    wait-for-it.sh

The db/ directory contains the Knex configuration and the migrations and seeds to build the database.

The test/ directory contains your test code. It also contains the Docker configuration for running the integration tests, under docker/.
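
To make the db/ layout concrete, here is a hedged sketch of what a migration and a seed might look like. The users table, its columns, and the file names are invented for illustration and are not part of the original article:

// db/migrations/20200317_create_users.js — hypothetical example migration
exports.up = (knex) =>
  knex.schema.createTable('users', (table) => {
    table.increments('user_id');
    table.string('email').notNullable().unique();
    table.string('password').notNullable();
  });

exports.down = (knex) => knex.schema.dropTable('users');

// db/seeds/001_users.js — hypothetical example seed
exports.seed = async (knex) => {
  await knex('users').del();
  await knex('users').insert([
    { email: 'seed@example.com', password: 'hashed-password' },
  ]);
};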

Step 3: Knex configuration

You will need a knex configuration to point to the running MySQL docker image (see host and port configuration). I’ve called this configuration ci. An example knexfile.js:


module.exports = {
  ci: {
    client: 'mysql',
    connection: {
      database: 'mydb',
      user: 'myuser',
      password: 'mypassword',
      host: 'db', // name of the mysql service in docker-compose
      port: 3306, // docker MySQL runs on port 3306 internally
    },
  },
};
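
With this configuration in place, the migrations and seeds are applied by pointing the Knex CLI at the ci environment from inside the db/ directory. These are the same commands the migrate container runs in the next step:

cd db
knex migrate:latest --env ci
knex seed:run --env ci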

Step 4: Docker-compose configuration

We need to write a docker-compose.yml file to specify which containers to start and how. We need docker-compose to do two things:

  1. Run our MySQL server instance for our tests to hit
  2. Run the migrations and seeds to build the MySQL database after the server starts but before the tests are run

Here is a docker-compose.yml file to achieve this:

version: '3'

services:
  migrate:
    command: ./wait-for-it.sh db:3306 -- cd db && knex migrate:latest --env ci && knex seed:run --env ci
    volumes:
      - ./target:/opt/setup/target
    image: migrate-mysql
    build:
      context: ../../
      dockerfile: test/docker/migrate-mysql.dockerfile
    depends_on:
      - "db"

  db:
    image: mysql:5.7.28
    environment:
      MYSQL_ROOT_PASSWORD: 'ROOT_PASSWORD'
      MYSQL_DATABASE: 'mydb' # create a db by default
    restart: on-failure

# force containers to join the cloudbuild network
# allows them to communicate on the google cloud build system
networks:
  default:
    external:
      name: cloudbuild

The compose file specifies two services to start:

  • The first is db, which is the MySQL database. This image will be pulled from the Docker registry. Note there are currently some issues with version 5.7.29 when trying to start containers in a sequence, so I have pinned the image to 5.7.28.
  • The second is migrate, a temporary container that will run the database migrations and seeds. There are two things to note here:
    - First is the depends_on field, which tells docker-compose to start this service only after the db service is running.
    - Second is the command field, which refers to a bash script, wait-for-it.sh. Even though we used the depends_on field, docker-compose does not know when MySQL is ready to accept connections. This script polls the database server and runs the migrations and seeds only once the database accepts a connection on port 3306.
  • The third thing specified is a Docker network to connect the containers to. Google Cloud Build runs its build steps on an internal network named cloudbuild, so for our containers to communicate with the build steps we need to attach them to this network.
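
Because the cloudbuild network is declared as external, it must already exist wherever compose is run. On the CI server Google Cloud Build provides it; if you want to try the setup on your own machine (an assumption about your local workflow, not part of the article's CI flow), you can create the network yourself before bringing the services up:

docker network create cloudbuild
docker-compose -f test/docker/docker-compose.yml up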

Step 5: Migration image (migrate-mysql.dockerfile)

Here is an example of a minimal Node.js image that will allow us to run the migrations using Knex. Note that it has its own package.json file to install the knex dependency (a minimal sketch of this file follows the Dockerfile below). It makes the wait-for-it script executable and then installs from the package.json file.

FROM node:10-alpine

RUN apk add --no-cache bash
RUN apk add --no-cache coreutils

COPY test/docker/package*.json test/docker/wait-for-it.sh /opt/setup/

COPY db /opt/setup/db

WORKDIR /opt/setup

VOLUME /opt/setup/target

RUN chmod +x ./wait-for-it.sh

RUN npm install
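
The article doesn't show the contents of test/docker/package.json, but a minimal sketch would simply declare the Knex CLI and the MySQL driver it needs; the exact package names besides knex and the versions below are assumptions:

{
  "name": "migrate-mysql",
  "private": true,
  "dependencies": {
    "knex": "^0.20.0",
    "mysql": "^2.18.0"
  }
}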

Step 6: wait-for-it.sh

The wait-for-it script was created by vishnubob. You can find it on GitHub: https://github.com/vishnubob/wait-for-it

Step 7: Running it on Google Cloud Build

To run our tests on Google Cloud build we need to specify a build yaml file then setup a trigger to start the build. I won’t go into the details of configuring a build trigger as the documentation goes into this in detail: https://cloud.google.com/cloud-build/docs/running-builds/create-manage-triggers.

Here is the build.yaml file to run the tests:


steps:
- name: 'gcr.io/cloud-builders/npm'
  id: Install
  args: ['ci']

- name: 'gcr.io/cloud-builders/docker'
  id: Build migration image
  args: [
    'build',
    '-t', 'migrate-mysql',
    '-f', 'test/docker/migrate-mysql.dockerfile',
    '.'
  ]

- name: 'docker/compose:1.19.0'
  id: Compose up
  args: ['-f', 'test/docker/docker-compose.yml', 'up', '-d']

- name: 'gcr.io/cloud-builders/npm'
  id: Integration test
  args: ['run', 'test']
  env:
  - 'MYSQL_HOST=db' # point app instance to MySQL docker instance

There are four build steps here:

  1. The first step installs the npm packages for the application being tested. (npm ci is used as opposed to npm i. Find out why here: https://docs.npmjs.com/cli/ci.html)
  2. The second step builds the migration container we defined before. This will allow docker-compose to run it. In a production system, it would be best practice to cache this image so that it does not need to be rebuilt every time.
  3. The third step runs docker-compose up -d which starts the services in the background.
  4. The fourth step runs the tests using npm test. It assumes the package.json has been configured with a test script. An environment variable is also set to point the app at the local MySQL instance; it can be accessed in the Node.js code via process.env.MYSQL_HOST (see the sketch after this list).
    For the purpose of keeping this article short, I won’t discuss in detail how you might make your app and tests configurable for different environments.
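
As a rough sketch of how this could be wired up, assume the application's package.json defines a test script such as "test": "jest", and that a small config module reads the environment variable. The file name, defaults, and remaining fields below are assumptions for illustration:

// config.js — hypothetical config module read by the tests
module.exports = {
  host: process.env.MYSQL_HOST || 'localhost',
  port: process.env.MYSQL_PORT || 3306,
  user: process.env.MYSQL_USER || 'myuser',
  password: process.env.MYSQL_PASSWORD || 'mypassword',
  database: process.env.MYSQL_DATABASE || 'mydb',
};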

Step 8: Deploy it

Make sure to update the build trigger to use the new build.yaml file.

Depending how you have set up your build trigger, pushing your code to your git repository should trigger a build.

You should see docker-compose start MySQL and then your tests run. Because docker-compose was run in detached mode, there won’t be any output logs from the migrations and seeds.

Assuming you closed the database connections properly, everything should finish once the tests have been run.
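
If the build hangs at the test step instead of exiting, a connection has usually been left open somewhere. Jest’s --detectOpenHandles flag can help track it down; running the suite in band is just an assumption here to keep the output readable:

npx jest --runInBand --detectOpenHandles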

That’s it!

You can now use a CI server to run integration tests that don’t mock out service dependencies. In production, this can give much more confidence that your services will behave as expected. If you want to learn more about continuous delivery then Atlassian provides some great resources on the topic: https://www.atlassian.com/continuous-delivery. If you want more testing tutorials or have any feedback then comment below!
