Testing in Docker
At comparethemarket.com we use Docker for running our applications in AWS. Docker is a great tool for many reasons, one of which is that it helps drive consistency through each of our environments, all the way to production. Building a versioned Docker image once, then cascading that same image through each of our environments, gives us confidence that by the time the image reaches production, it'll be a successful release.
That's great, but even better is if we can apply that same consistency and portability to our pre-deployment stages, e.g. unit & integration testing. There are a bunch of benefits to doing this:
- no dependencies other than Docker itself
- tests run in Docker containers matching how the application runs in production
- versions of packages and libraries used while testing will always match the versions you’ll deploy in production
- can be run anywhere, consistently and reliably
- docker-compose can be used to provide additional containers needed for integration tests e.g. MongoDB etc.
- managing CI/CD platforms becomes easy as you don’t have different apps fighting for different versions of packages & applications
Here's an example Node.js app (https://github.com/antonosmond/testing-with-docker) which has been structured so that the tests can run in Docker.
First let’s look at the Dockerfile:
```dockerfile
FROM node:6.9.1
ARG NODE_ENV="production"
COPY . .
RUN NODE_ENV="$NODE_ENV" npm --quiet install
ENTRYPOINT ["npm", "run"]
CMD ["start"]
```
We’re building an image from Node 6.9.1 as that’s the version of Node we want in production. As we’ll be using the very same Dockerfile to build our image for testing, we’ll have that consistency i.e. we’ll be testing with exactly the same version of Node that our production image is based on.
`ARG NODE_ENV="production"` is important - this allows us to override the `NODE_ENV` variable set for the `npm install`, making it easy to build our image with or without devDependencies, without the need for multiple Dockerfiles.
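For example, to build an image that includes devDependencies for testing, the ARG can be overridden at build time (the image tags here are just placeholders):

```shell
# override the NODE_ENV build arg so devDependencies are installed
docker build --build-arg NODE_ENV=docker-test -t myapp:test .

# default build - NODE_ENV stays "production", devDependencies excluded
docker build -t myapp:prod .
```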
The rest is just a standard Dockerfile. More info can be found in the Dockerfile reference linked at the end of this post.
At this point we could run `docker build`, ensuring we pass it the relevant arguments, then manually launch a container with `docker run` to execute our tests, but we have a dependency problem. One of the tests is testing a function that requires a Mongo database. No problem - meet docker-compose!
Docker compose helps us to automate the whole process of building our images and running our containers. It allows us to spin up additional containers and link them to our app container, which is great for external dependencies like MongoDB.
Let’s have a look at the docker-compose.yaml from the example app:
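The file itself isn't reproduced here, so what follows is a sketch reconstructed from the description below - the service names, versions, port and overrides all come from the text, while the `container_name` and compose file version are assumptions:

```yaml
version: '2'
services:
  app:
    container_name: app   # assumed, so `docker attach app` works by name
    build:
      context: .
      args:
        NODE_ENV: docker-test   # build-time override: include devDependencies
    environment:
      NODE_ENV: docker-test     # runtime override: select the docker-test config
    command: ["test"]           # npm run test instead of the default start
    links:
      - db
  db:
    image: mongo:3.2            # matches the production Mongo version
    expose:
      - "27017"
```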
Under `services` we're defining 2 containers - one is our app and the other is our database.
Our app definition contains a `build` section which tells docker-compose we want to build a Docker image. `context: .` sets the current directory as the build context, i.e. the place where it can find the Dockerfile and the files & directories required for the build. Under the `args` section, `NODE_ENV: docker-test` is where we're overriding the default value of `production` defined in the Dockerfile, so the image will be built with devDependencies included.
Under the `environment` section we also set `NODE_ENV`, but this is for when the container is running: the app config is environment based, so setting `NODE_ENV` for the running container ensures we use the right config for that environment, in this case the `docker-test` config. The config contains the database connection string.
We're overriding the default command to `test`. The Dockerfile defines the entrypoint as `npm run` and the default command as `start`, resulting in `npm run start`. By setting `command: ["test"]` in our docker-compose file, the container will run `npm run test` and therefore run our tests instead of just starting the web server.
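The npm scripts themselves aren't shown in the post, but a minimal `scripts` section consistent with the `npm run` entrypoint might look like this (the script bodies are placeholders):

```json
{
  "scripts": {
    "start": "node server.js",
    "test": "mocha"
  }
}
```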
In the `links` section we're linking the `db` container to our `app` container, so let's take a look at the `db` definition. Notice there's no `build` section? For Mongo we can just use the official MongoDB image from Docker Hub. Our real Mongo boxes in production are on 3.2 so we've chosen the same version of Mongo here.
We're exposing port 27017 on the `db` container, and since we linked the `db` container to our `app` container, our app will be able to communicate with our database on port 27017 using the hostname `db` - the hostname is always identical to the service name defined in the docker-compose file, in this case `db`. As we know our database will be available at `db:27017`, the config in our app for the `docker-test` environment points its mongo connection string at `db:27017`.
For more info see the compose file reference linked at the end of this post.
See it in action
If you haven’t already got docker and docker-compose installed, you can grab them from here.
Clone the example app repo:
```shell
git clone https://github.com/antonosmond/testing-with-docker
```
```shell
docker-compose up -d --build && docker attach app
```
The above command does a few things. `docker-compose up -d` tells docker to start the containers in detached mode, i.e. as background processes, and the `--build` argument tells docker-compose to build the images before running the containers. We run the containers in detached mode as we don't care about the output from the mongo container; however, we will want to see the output from our tests, so `docker attach app` attaches our current shell process to the running container named `app`. That way we can see the output from our tests. This is also important because any exit codes from the app container will be returned to our current shell - this means if the tests fail, we'll get a non-zero exit code and can take some action, e.g. fail our build stage in our CI/CD process. Because we ran in detached mode, our app container will exit on completion of the tests but the mongo container will still be running. If you want to clean everything up, just run `docker-compose kill && docker-compose rm -f` - this kills any running containers and removes them.
If the tests pass and we want to build our production image, we can just run `docker build -t myrepo:mytag .`. As we defaulted the `NODE_ENV` ARG in the Dockerfile to `production`, we don't need to pass any additional args to the docker build command, and the image will be built excluding the devDependencies. We also defaulted the command to `start`, so when our real container is started in production it'll start the app instead of running the tests like we did previously.
Hopefully this has demonstrated some of the advantages of testing in Docker. You should be able to clone the repo and run the tests without having to worry about dependencies or what versions of packages you may or may not have installed - even Mongo was taken care of for you. The images we built are essentially the same, as they use the same Dockerfile, meaning we have that consistency between our testing and production environments. The approach is flexible - the example was a Node.js app, but this method could be tailored to fit practically anything that can be dockerized.
At comparethemarket.com we use GoCD as our CI/CD tool of choice. For us, this approach means we'll no longer need to maintain lots of different Go agents all running different versions of Node or Mongo, and anyone can grab the repo and run the tests immediately by simply running the `docker-compose up` command. The consistency of the build and test environments with production gives everyone more confidence in what's being pushed to production, and the time saved in maintaining the CI/CD agents can be spent on more important things... like eating cakes and drinking beer.
- Dockerfile reference: https://docs.docker.com/engine/reference/builder/
- Compose file reference: https://docs.docker.com/compose/compose-file/