The Ultimate Guide to Integration Testing With Docker-Compose and SQL

Piotr Karpala · Published in The Startup · Aug 31, 2020

Updated 9/5/2020

In the previous versions of this article, I stated that the seed container doesn’t have to wait for the db container. That turned out not to be the case, and the article has been corrected. If you prefer looking at code — here’s a sample repo that uses everything described below.

Integration tests are hard

I’ve been toying with the idea of doing integration tests with docker-compose for a while, and that time has finally come :)

This guide should apply to any stateful application with a data store that can be dockerized (so not Cosmos DB).

I want a one-script solution — no manual steps. It needs to run both locally and on the continuous integration server (Azure DevOps in my case).

The tests I want to add call a Node.js Azure Function that talks to MS SQL Server and returns data from the database.

Docker compose file that we’re building

The Application

The application consists of two layers — an Azure Function running Node.js and an MS SQL Server 2019 instance. There’s also some test data I want to push to the DB before the tests are executed.
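The function itself isn’t the focus of this post, but for context, the handler in src/api-contact-list/index.js looks roughly like this. This is a sketch — the mssql usage and the SQL_CONNECTION_STRING variable name are my assumptions, not necessarily the repo’s exact code:

// A sketch of src/api-contact-list/index.js — assumed shape, not the exact repo code
const sql = require('mssql');

module.exports = async function (context, req) {
  // The connection string arrives via an environment variable set by docker-compose
  const pool = await sql.connect(process.env.SQL_CONNECTION_STRING);
  const result = await pool.request().query('SELECT * FROM dbo.Contact');
  context.res = { body: result.recordset };
};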

The issue

While adding docker-compose to an app is fairly easy, and there are plenty of great guides and tutorials out there, docker-compose is not designed to be an orchestration tool for setting up the DB, seeding it with data, starting the app, and executing tests. In production scenarios, applications should be resilient to any of their dependencies being unavailable, so integration testing is somewhat of an edge case. We really just want the tests to wait until everything is ready, execute, and give us the results.

Materials

Before I show you the solution — here are some great materials that I used to assemble this post.

The solution

The solution consists of 4 containers:

  • db — MS SQL Server 2019
  • seed — .NET Core 3.1 image that runs a DbUp-based app (https://github.com/DbUp/DbUp) to create the schema and push data
  • fun — Azure Function app
  • tests — Node.js app running Jest tests against fun

Directory structure:

├── Dockerfile
├── Readme.md
├── Sample.App.Database
│   ├── Beef.Database.Core.ReadMe.md
│   ├── Migrations
│   │   ├── 20200814-000000-create-dbo-Contact.sql
│   │   └── 20200814-000000-data-dbo-Contact.sql
│   ├── Program.cs
│   ├── ReadMe.md
│   ├── Sample.App.Database.csproj
│   ├── Sample.App.Database.sln
│   ├── Sample.App.Database.xml
│   ├── docker-compose.yml
│   └── wait-for-it.sh
├── azure-pipelines.yml
├── docker-compose.override.yml
├── docker-compose.tests.yml
├── docker-compose.yml
├── host.json
├── jest.json
├── package-lock.json
├── package.json
├── src
│   ├── api-contact-list
│   │   ├── function.json
│   │   └── index.js
│   ├── host.json
│   ├── local.settings.example.json
│   └── local.settings.json
├── test-report.xml
├── test.sh
└── tests
    ├── integration
    │   ├── Dockerfile
    │   ├── contact
    │   │   └── getContactList.test.js
    │   ├── package-lock.json
    │   └── package.json
    └── unit
        └── contact
            └── getContacts.test.js

I’ve extracted common properties into a .env file:

DB_PORT=1433
DB_NAME=SampleDB
DB_USER=sa
DB_PASSWORD=yourStrongPassword123

In production scenarios, I wouldn’t keep passwords in the git repository, but this one is not really used anywhere — just internally for testing — so I think it’s fine. If you prefer not to use the .env file, the same thing can be achieved with docker-compose override files, but you need to specify them explicitly, like this:

docker-compose -f docker-compose.yml -f docker-compose.local.yml up
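For completeness, such an override would just carry the same values the .env file holds — a minimal, hypothetical docker-compose.local.yml:

version: "3.4"
services:
  db:
    environment:
      SA_PASSWORD: "yourStrongPassword123"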

The goal is for docker-compose to bring the containers up in order:

  1. Create SQL Server (db)
  2. When SQL Server is ready to accept connections — seed it with data (seed)
  3. When seeding starts — start the function app (fun)
  4. When seeding is done (the container exits) — start the tests (tests)

This docker-compose file is the base one — it can be used to run the function locally.
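The gist embed doesn’t survive here, so below is a minimal sketch of what the base file can look like. The image tag, ports, build contexts, and the SQL_CONNECTION_STRING variable name are my assumptions — the repo is the source of truth:

version: "3.4"

services:
  db:
    image: mcr.microsoft.com/mssql/server:2019-latest
    environment:
      ACCEPT_EULA: "Y"
      SA_PASSWORD: "${DB_PASSWORD}"
    ports:
      - "${DB_PORT}:1433"

  seed:
    build: ./Sample.App.Database
    depends_on:
      - db
    environment:
      DB_NAME: "${DB_NAME}"
      DB_USER: "${DB_USER}"
      DB_PASSWORD: "${DB_PASSWORD}"
    command: bash -c "./wait-for-it.sh db:1433 -- dotnet run DropAndDatabase"

  fun:
    build:
      context: .
      args:
        LOCAL: "true"
    depends_on:
      - seed
    ports:
      - "7071:80"
    environment:
      SQL_CONNECTION_STRING: "Server=db,1433;Database=${DB_NAME};User Id=${DB_USER};Password=${DB_PASSWORD}"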

Integration tests are added in an override file — docker-compose.tests.yml
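That embed is lost here too, so this is a sketch of the shape of that file. One honest substitution: the article pings seed in a loop until its DNS entry disappears, but plain ping isn’t installed in the stock node image, so the sketch uses getent, which queries Docker’s embedded DNS the same way. The image tag and npm script are also assumptions:

version: "3.4"

networks:
  integration-tests:

services:
  db:
    networks:
      - integration-tests
  seed:
    networks:
      - integration-tests
  fun:
    networks:
      - integration-tests
  tests:
    image: node:12
    depends_on:
      - fun
    networks:
      - integration-tests
    working_dir: /tests
    volumes:
      - ./tests/integration:/tests
    command: >
      bash -c "npm ci &&
      while getent hosts seed > /dev/null; do sleep 1; done &&
      npm test"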

So what’s going on there? First, we create a network so that we don’t interfere with any other SQL Server containers running on the same port.

Next, the SQL Server is described — the db container, just a standard empty server.

The seed container depends on SQL Server being up; I’m using the wait-for-it script (not to be confused with the song from Hamilton). The script ensures that seeding doesn’t start before SQL Server is ready to accept connections.

2020-09-04 14:01:58.37 spid25s     SQL Server is now ready for client connections. This is an informational message; no user action is required.
seed_1 | wait-for-it.sh: db:1433 is available after 8 seconds

The seed itself is not that special — it just needs to know where the server is (db is the hostname) and what the credentials are.

The next container is fun — a custom-built Azure Functions image. We pass in the information on how to connect to SQL Server via environment variables.

The most interesting container is tests. It could use a custom image, but I decided to use a ready-made one so it’s a little faster. The startup command is modified to ping the seed container in a loop until it dies before running the tests. That ensures the tests are not executed before the DB is seeded.

Running the tests

Thanks to the awesome work of Harrison Harnisch at https://blog.harrison.dev/2016/06/19/integration-testing-with-docker-compose.html, running the tests is extremely easy — just one shell script:
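The script itself was an embedded gist; here’s a sketch of the same exit-code pattern from Harnisch’s post, adapted to the file and service names used in this repo:

#!/bin/bash

# Build and start everything defined by the base file plus the tests override
docker-compose -f docker-compose.yml -f docker-compose.tests.yml up --build -d

# Block until the tests container exits and capture its exit code
TESTS_CONTAINER=$(docker-compose -f docker-compose.yml -f docker-compose.tests.yml ps -q tests)
TEST_EXIT_CODE=$(docker wait "$TESTS_CONTAINER")

# Print the test output so it shows up locally and in the CI logs
docker logs "$TESTS_CONTAINER"

# Tear everything down and propagate the tests’ exit code to the caller
docker-compose -f docker-compose.yml -f docker-compose.tests.yml down -v
exit "$TEST_EXIT_CODE"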

Continuous Integration — Azure DevOps

In azure-pipelines.yml :

- task: ShellScript@2
  displayName: "Run integration tests"
  inputs:
    scriptPath: "./test.sh"

Azure functions Dockerfile

This would be a super simple definition if not for one thing — Azure Functions require an authorization key that’s managed by Azure. When functions run locally, the Azure Functions Core Tools handle that for us and skip authorization, but in the Docker image that’s no longer the case.

Without a small but ugly hack, it’s impossible to run Azure Functions in a Docker image.

The “hack”: if the build argument LOCAL is set to true, the code copied into the image is modified and the authorization level is changed to anonymous.
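The Dockerfile gist doesn’t render here either, so this sketch only shows the idea — the base image tag, paths, and the sed target reflect my assumptions about the repo layout, not the exact file:

FROM mcr.microsoft.com/azure-functions/node:3.0
ARG LOCAL=false

COPY ./src /home/site/wwwroot
COPY package*.json /home/site/wwwroot/
RUN cd /home/site/wwwroot && npm install --production

# The "hack": for local/test builds only, flip the function's auth level to
# anonymous so the tests can call it without an Azure-managed key
RUN if [ "$LOCAL" = "true" ]; then \
      sed -i 's/"authLevel": "function"/"authLevel": "anonymous"/g' \
        /home/site/wwwroot/api-contact-list/function.json; \
    fi

docker-compose passes LOCAL as a build argument, so local and CI runs get the anonymous build while the image built for Azure keeps key-based authorization.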

Running this on Windows

When running this on Windows, you may face an issue with line endings — the files copied from your Windows OS have additional line-ending characters (CRLF) that Linux does not like. Here’s a good article explaining it.

A fix, which is definitely not the prettiest, is to strip the \r characters before executing the script:

command: bash -c "sed -i 's/\r//g' wait-for-it.sh && ./wait-for-it.sh db:1433 -- dotnet run DropAndDatabase"

Source

You’ll find a working example in my GitHub repo.
