Integration Tests with Docker

Tom Linford
Robinhood
Feb 2, 2016

Docker is widely used here at Robinhood for critical devops tasks. One of those tasks is integration testing: spinning up multiple services in containers is much faster than starting multiple VMs. The interactions between the services behind our critical systems, like marketdata and order execution, are all tested this way.

The basic idea:

  1. Create a docker image for each type of service that needs to be run. Databases should be pre-migrated and perhaps seeded with some initial data to speed up the time it takes to run each integration test. Each service will have its own container, and different services can sometimes reuse the same underlying image.
  2. Write mock services for 3rd party services. We do this by putting some basic code into a python file and then running it with a standard lib python image. The python file can just be mounted into the container when run.
  3. Use docker-compose to declare the service infrastructure for each test case in a YAML file. This makes describing each test case much more concise, and error cases (where a service is down) are much easier to add.
  4. Expose endpoints for the end to end tests. This allows external testing code to hit the endpoints and check for the expected behavior.
  5. Start up the infrastructure with docker-compose and simply make API requests to the exposed endpoints. Sometimes you may need to add additional checks to make sure the interaction is actually happening.
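To make step 2 concrete, here's a minimal sketch of what a mock third-party service could look like. The endpoint and payload are purely illustrative (not from our actual codebase); the point is that a single file like this, using only the standard library, can be mounted into a stock python container and run as a stand-in for a real service.

```python
# Hypothetical mock of a third-party HTTP service, built on the standard
# library only. Mount this file into a plain python image and run it.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Canned response the tests can assert against (illustrative payload).
CANNED_QUOTE = {"symbol": "ACME", "price": "10.00"}

class MockHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Every GET returns the same fixed JSON body.
        body = json.dumps(CANNED_QUOTE).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Keep test output quiet.
        pass

def serve(port=8080):
    HTTPServer(("", port), MockHandler).serve_forever()

if __name__ == "__main__":
    serve()
```

Because the mock is deterministic, the end to end tests can assert exact values instead of depending on a live third-party API.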

Example

I put together a sample integration tests setup here. It’s a basic integration test, checking an interaction between a sample Django app and vault. It’s a simple REST API that allows users to store secrets by making a POST to /secrets/. Secrets can be deleted and retrieved by using the detail endpoint, /secrets/{id}/. Each test case takes around 4 seconds to run, which involves setting up the entire infrastructure, running the tests, and tearing it down. Some things to note from the sample integration tests:

  • The end to end tests are written with unittest from the Python standard library. These tests can be written in any language and in any testing framework. Additionally, the tests could even be run from a docker container — you’d just need to use the docker image as a base, install docker-compose and whatever other tools you need, and then mount /var/run/docker.sock.
  • The sample app has a Secret model, which just references the User model and has a primary key, which is used as the key for storing the secret in vault. The Secret model has no additional information, so if the database storing the Secret model is compromised, then the users’ secrets wouldn’t be compromised. However, the value of the secret that the user has stored can be easily fetched through the API, which internally accesses a Python property of Secret which then makes the request to vault.
  • The database for the sample app is just a sqlite database stored in /tmp/, so it isn’t persisted between runs. This does mean that the database needs to be migrated for each test case, which is ok with a small example but would take far too long with the typical monolithic application.
  • The Dockerfile for the sample app is very simple: it pretty much just installs the requirements. With the image created from the Dockerfile, the app can be run. Additionally, if the app had any workers or management commands in the same codebase, they could just reuse the same docker image. Here’s the Dockerfile:
FROM python:3.5.1
ADD repos/app/requirements.txt /tmp/requirements.txt
RUN pip install --no-cache-dir -r /tmp/requirements.txt
EXPOSE 8000
WORKDIR /app
  • The vault container did not need a custom Dockerfile, as there was already a vault image on docker hub that could be used. All necessary information for the vault container was managed in just a couple of lines:
vault:
  entrypoint: /bin/bash
  command: /vault/run.sh
  image: cgswong/vault:0.3.1
  expose:
    - 8201
  volumes:
    - ../../docker/vault/:/vault/
  • The configuration files for the vault container are simply stored in docker/vault/, which just gets mounted in the container at /vault/ when the container is run.
  • The app configuration in the docker compose file is equally simple:
app:
  command: /bin/bash /mnt/run.sh
  image: integration-app
  volumes:
    - ../../repos/app/:/app
    - ../../docker/app:/mnt
  ports:
    - "8000:8000"
  links:
    - vault
  • The link to the vault container is listed in the links section, which adds the VAULT_PORT_8201_TCP_(ADDR|PORT) environment variables (since port 8201 is exposed in the vault configuration). Port 8000 is forwarded, which allows access to the API from outside of docker.
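A short sketch of how the app side might consume those link variables. The variable names follow docker’s link convention as described above; the helper function name and the local fallbacks are our own illustration, not code from the sample repo.

```python
import os

def vault_base_url(default_addr="127.0.0.1", default_port="8201"):
    """Build vault's base URL from docker link environment variables.

    docker-compose's links feature injects VAULT_PORT_8201_TCP_ADDR and
    VAULT_PORT_8201_TCP_PORT into the app container; fall back to local
    defaults so the same code also runs outside docker.
    """
    addr = os.environ.get("VAULT_PORT_8201_TCP_ADDR", default_addr)
    port = os.environ.get("VAULT_PORT_8201_TCP_PORT", default_port)
    return "http://%s:%s" % (addr, port)
```

With this pattern the app needs no hardcoded addresses; pointing it at a different vault instance is just a matter of changing the compose file.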

Continuous Integration

These tests can be run in the CI tool of your choice as well. We use Jenkins and have the integration tests set up to run whenever tests for another repo are run, which occurs when new code is pushed to that repo. The command for running the tests is a simple “make test”, and with our testing framework (nose) it’s easy to export the test results into an XML file that Jenkins understands.
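A make target along these lines is all the glue needed; the file paths and compose commands here are illustrative, but nose’s --with-xunit flag really does write a JUnit-style XML file (nosetests.xml by default) that Jenkins can consume.

```make
# Hypothetical Makefile target (paths are illustrative).
test:
	docker-compose up -d
	nosetests --with-xunit
	docker-compose stop
```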

Conclusion

So there it is, a great way to set up integration tests. Quick enough to run often and simple enough to make adding new cases easy.

At Robinhood we take pride in our robust and quality code, and these integration tests are an important component for that. And of course, we’re hiring!
