Integration testing with Docker Compose and Visual Studio Team Services. Now it starts getting really interesting!

Photo by SpaceX on Unsplash

In the previous two posts I talked about building multi-image solutions and running unit tests using Docker Compose and Visual Studio Team Services. In this third post I will explore setting up integration testing, starting with a little background.

One of the common elevator pitches for Docker is that it enables running our applications in a consistent way across developer machines and production environments.

When I started my current job, deployments of our main applications and services were at times multi-day events: building the solutions, creating configuration files, and copy/pasting executables and dependencies to physical servers. Fortunately we took the plunge and got started on continuous integration and delivery using TeamCity and Octopus Deploy. It has served us incredibly well and still does, raising the velocity and reliability of our releases by multiple factors.

As we are starting to split our larger applications into smaller pieces, the time has come to take the next step. We are already struggling a bit with having significantly more services and applications to deal with, so we are exploring solutions for this.

One of the technologies we are actively looking at is Docker. It’s pretty hard to get around these days and one of the truly exciting things in this space right now.

One thing I hope to use Docker for is minimizing the pain for myself and my coworkers when developing and integrating multiple applications and services. It can sometimes be hard to manage all the pieces, especially for new developers. This is something we are working to address, and one of the first things I have begun to explore is how to set up integration testing of multiple services and run those tests in an automated way.

We are still using TeamCity and Octopus Deploy, but we are considering trying out Visual Studio Team Services, so this is where I will start. As you can read in the previous posts mentioned above, I already have builds of the Docker images and unit tests up and running, so the foundation for the next step is there.

In the sample on GitHub, which I also used in the previous posts, there is an integration test project that does a few things. The sample is very simple and shows how to spin up multiple containers, including the “integration tests” container, and have it execute its tests against the running services. I think this is a huge value proposition for Docker: being able to do these kinds of things with a few lines of code. The integration test environment consists of the following:

  • VstsDockerBuild.WebApp
  • VstsDockerBuild.PongService
  • VstsDockerBuild.IntegrationTests
  • and last but not least a MongoDB database service

In addition to the projects from the sample, the environment includes MongoDB because the VstsDockerBuild.PongService project registers the pings it receives in a database. Normally when running integration tests you would probably have code to ensure that a test database is created and cleaned up afterwards. I know we do, and when tests run in parallel against a shared database, bad times lie ahead. This can be solved in different ways, but what I find so awesome about Docker is how natural the solution feels: each run simply gets its own fresh database container.

OK, let’s get back on track. The test project is a regular, out-of-the-box .NET Core xUnit.net test project that fires off a few HTTP requests at the app and service and checks that everything works as expected. Let’s start with the VSTS setup; it looks a lot like the steps for the build and unit tests.

Visual Studio Team Services task configuration

As with the unit tests, I’m using the Docker Compose task with the “Run a Docker Compose command” action. The command is the same as for the unit tests, up --abort-on-container-exit, where the abort flag causes Docker Compose to stop all the containers when any one container exits.

In the configuration of the task I have pointed to a Container Registry and defined the compose files that will be used to run the tests. I will get back to that in a bit.
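As a rough mental model, my understanding is that the task ends up invoking Docker Compose much like you would locally, along these lines (the exact invocation, registry qualification, and variable substitution are handled by the task itself):

```
docker-compose -f docker-compose.integration-tests.yml -f docker-compose.integration-tests-vsts.yml up --abort-on-container-exit
```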

Let Docker Compose do its magic

When setting up the unit tests we only had the unit test container running. What we want to achieve with the integration tests is to take the images of the WebApp and PongService, start containers from them, and then run the tests against those running instances. Docker Compose is the tool for that, and it looks something like this:
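Here is a minimal sketch of what such a compose file can look like. The image names, environment variable names, and URLs below are illustrative; the real file is docker-compose.integration-tests.yml in the sample repo:

```yaml
version: '3'

services:
  webapp:
    image: vstsdockerbuild.webapp
    depends_on:
      - pongservice

  pongservice:
    image: vstsdockerbuild.pongservice
    depends_on:
      - mongo
    environment:
      - MONGO_CONNECTION=mongodb://mongo:27017

  # The database dependency: just pull the official mongo image from Docker Hub.
  mongo:
    image: mongo

  integration-tests:
    image: vstsdockerbuild.integrationtests
    depends_on:
      - webapp
      - pongservice
    environment:
      - PONGSERVICE_URL=http://pongservice
      - MONGO_CONNECTION=mongodb://mongo:27017
```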

This is a complete recipe for the environment, defining how the WebApp and PongService, as well as the integration tests project, will run. As mentioned, the PongService has a dependency on a MongoDB server, so that is included as well; it is as easy as defining a service called mongo using the mongo image from Docker Hub.

The individual Docker Compose services define dependencies on each other, and this is also the case for the integration tests project. The depends_on configuration determines the order in which the services are started.

To run this locally you would do something like this:

```
docker-compose -f .\docker-compose.yml -f .\docker-compose.integration-tests.yml up
```

Locally I’m using both the docker-compose.yml file and the docker-compose.integration-tests.yml file. This is because, locally, I want to build the images if they do not exist, and the docker-compose.yml file contains the build definitions, whereas the docker-compose.integration-tests.yml file defines the services, environment, etc. for the integration tests.
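For context, the build half of the pair looks roughly like this; the Dockerfile paths are illustrative, and the real definitions are in the sample’s docker-compose.yml:

```yaml
version: '3'

services:
  webapp:
    image: vstsdockerbuild.webapp
    build:
      context: .
      dockerfile: src/VstsDockerBuild.WebApp/Dockerfile

  pongservice:
    image: vstsdockerbuild.pongservice
    build:
      context: .
      dockerfile: src/VstsDockerBuild.PongService/Dockerfile

  integration-tests:
    image: vstsdockerbuild.integrationtests
    build:
      context: .
      dockerfile: test/VstsDockerBuild.IntegrationTests/Dockerfile
```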

Output from Docker Compose when running the tests locally

Making it run in the cloud

To make it run in Visual Studio Team Services I had to make a few additions. Looking at the task configuration above, you can see that I also have two files, but the docker-compose.yml file is not one of them. When running on VSTS the images have already been built, so we don’t need the build definitions in docker-compose.yml. Instead we provide the docker-compose.integration-tests.yml file as well as a docker-compose.integration-tests-vsts.yml file that looks like this:
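Again, a sketch rather than the exact file; the registry host, tag, and logger arguments are placeholders based on the unit test setup described in the previous post:

```yaml
version: '3'

services:
  integration-tests:
    # Fully qualified image name as built and tagged by the VSTS build task
    # (registry host and tag here are placeholders).
    image: myregistry.azurecr.io/vstsdockerbuild.integrationtests:${BUILD_BUILDID}
    # Override the entrypoint to run the tests and write a trx report
    # that VSTS can pick up and display.
    entrypoint:
      - dotnet
      - test
      - --logger
      - trx;LogFileName=integration-tests.trx
```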

The docker-compose.integration-tests-vsts.yml file adds two things. It substitutes the image name with the fully qualified name that was built and tagged in the build task, and it defines the entrypoint for the integration tests, including the configuration of the test results output. See the previous post about unit testing for a detailed description of the test output setup; in short, it allows VSTS to pick up the reports of successful and failed tests and display them nicely.

A bit about the tests

I will not go into details about all the tests; they are not that interesting, to be honest. I will, however, go over the test involving MongoDB. It is simple, but it shows the potential of running integration tests using Docker.

The test sends a request to the PongService and sets a custom header that can be used to verify that an event has been written to the database. Take a look at the code in the sample for the full overview.

The URL for the PongService and the connection string for MongoDB are provided using environment variables that are configured in the docker-compose.integration-tests.yml file, as sketched below.
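To make that concrete, here is a sketch of what such a test can look like. The header name, endpoint path, environment variable names, and database/collection names are made up for illustration; the actual test is in the sample on GitHub:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;
using MongoDB.Driver;
using Xunit;

public class PongServiceTests
{
    [Fact]
    public async Task Ping_is_registered_in_MongoDB()
    {
        // Both values are injected via the compose file.
        var pongServiceUrl = Environment.GetEnvironmentVariable("PONGSERVICE_URL");
        var mongoConnection = Environment.GetEnvironmentVariable("MONGO_CONNECTION");

        // Tag the request with a unique header value so this exact ping
        // can be found in the database afterwards.
        var correlationId = Guid.NewGuid().ToString();

        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Add("X-Correlation-Id", correlationId);
            var response = await client.GetAsync($"{pongServiceUrl}/ping");
            response.EnsureSuccessStatusCode();
        }

        // Verify that the service wrote a ping event to MongoDB.
        var events = new MongoClient(mongoConnection)
            .GetDatabase("pongservice")
            .GetCollection<PingEvent>("pings");

        var pingEvent = await events
            .Find(e => e.CorrelationId == correlationId)
            .FirstOrDefaultAsync();

        Assert.NotNull(pingEvent);
    }

    private class PingEvent
    {
        public string CorrelationId { get; set; }
    }
}
```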

As with the unit tests, I needed a simple way to make the tests fail so I could verify the configuration. So I added a BLOWUPINTEGRATION variable that can be set when queuing a build; it just causes an error in one of the tests when set.
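Conceptually, the kill switch is as simple as something like this (the exact shape in the sample may differ):

```csharp
using System;
using Xunit;

public class KillSwitchTests
{
    // Fails on purpose when the BLOWUPINTEGRATION variable is set at queue time,
    // making it easy to verify that failed tests show up correctly in VSTS.
    [Fact]
    public void Blows_up_when_requested()
    {
        var blowUp = Environment.GetEnvironmentVariable("BLOWUPINTEGRATION");
        Assert.True(string.IsNullOrEmpty(blowUp),
            "BLOWUPINTEGRATION was set; failing on purpose.");
    }
}
```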

Below is the result when things go as planned. Had one or more of the tests failed, there would be a report showing what went wrong.

Looking back at this setup I’m really excited, especially at how easy it actually was to set up. Granted, you will probably want a bit of experience with Docker and Docker Compose, or at least to have gone through some of the many getting-started guides out there. Pluralsight has a lot if you happen to have a subscription (I would highly recommend it; lots of great content on Pluralsight). Katacoda also has some pretty cool courses I would recommend checking out.

I’m really looking forward to working even more with Docker, especially as a tool to make our daily lives as developers even more enjoyable. I’m certain that we will end up running production workloads on Docker, but to begin with I think my focus will be on getting some of our services “dockerized” and on helping my coworkers. Then, when we have become more familiar with the technology, I have no doubt the next steps will come naturally.

Docker all the things!

One final note

These were some of my first blog posts; maybe it shows? Over the years I have written a few, but I have never really been able to keep it up. In the last few years I have taken on a more senior role at work, and I would like to focus even more on developer productivity and on sharing my experience, especially as we transition from larger applications to smaller and more numerous apps and services. For now at least, I think that will be the focus of my writing, and I hope that these and future posts can help my team and of course others. Additionally, I think they will be a tool for me to get better at communicating the technical stuff. So please leave your feedback, I would appreciate it :)