Using Docker Compose to Run Your Applications

Derian Pratama Tungka
Published in Rate Engineering
Oct 16, 2018 · 8 min read

Preface

This post assumes that you have some basic understanding of Docker, Docker Compose, and the key terms used in the ecosystem. Should you need to get up to speed, the “Get Started” section of Docker Docs is a great place to start.

This is part 2 of a 2-part series where we share our experience experimenting with Docker to containerize applications here at Rate. In part 1, we talked about how we turned Docker containers into development machines. In this post, we will talk about how we leverage the strengths of Docker containers to run a multi-container application.

Motivations

At this point, we realised that Docker containers shouldn’t be used as dev machines but should instead be used as hosting machines. We still saw benefits in containerization, primarily as a way for developers to self-host applications easily. This allows developers to conduct integration tests independently, which we feel is a significant boost in productivity.

Hence, we proceeded to configure our applications as Compose services. The following text details the steps we took to build the Compose configuration.

Format

The Docker Compose docs already do a great job of giving you a full run-through of how to set up multiple services in Compose. Instead of repeating what they have done, this post aims to be a concise guide for any developer seeking to build a Compose config. We will focus on the key components you need to take note of.

The Building Blocks

Building a Compose config can be divided into 6 main steps:

  1. Split your app into services
  2. Pull or build images
  3. Configure environment variables
  4. Configure networking
  5. Set up volumes
  6. Build & Run

1. Split your app into services

The first order of business is to think about how you're going to split the app into different services. Building on our simple web server example from part 1, we can introduce a simple client application that acts as the view layer. Together with the server and the database, it forms a simple web application composed of 3 different services: client, server, and database.

Both the client and server services are built from instructions (i.e. Dockerfiles), while the database service is built off a public image (i.e. postgres). This means you need 1 Dockerfile for the client and 1 Dockerfile for the server.

Assuming we have a sample client application written in React, the following could be a simple Dockerfile for it:
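Here is a minimal sketch, assuming a standard create-react-app project served by its development server; the base image, port and commands are illustrative, so adjust them to your project:

    # Illustrative Dockerfile for a React client (assumes a
    # create-react-app layout served by the dev server)
    FROM node:10-alpine

    WORKDIR /app

    # Install dependencies first to take advantage of layer caching
    COPY package.json package-lock.json ./
    RUN npm install

    # Copy the application source and run the dev server
    COPY . .
    EXPOSE 3000
    CMD ["npm", "start"]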

I have already included the Dockerfile for our Go server in part 1.

2. Pull or build images

For some of your services, you may not need to build from a custom Dockerfile; a public image on Docker Hub will suffice. You can instruct Compose to pull from Docker Hub by declaring:
image: "<repository_name>:<tag>"

For example, our database service pulls the public image of Postgres running on Alpine Linux by having this declared:
image: "postgres:alpine"

However, in most cases you will have custom Dockerfiles to build images from, and this requires specifying a build context: the path Compose looks in to build the service image. This path must contain the Dockerfile. Here are some common ways to define build contexts:
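For instance (the service names, paths and repository URL below are placeholders):

    services:
      client:
        # Build from a local path; Compose expects ./client/Dockerfile
        build: ./client
      server:
        # The equivalent long form, useful when the Dockerfile has a
        # non-standard name or location
        build:
          context: ./server
          dockerfile: Dockerfile
      worker:
        # Build directly from a Git repository (handy for CI)
        build: https://github.com/your-org/worker.git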

Tip 1: Building from a remote location is undoubtedly slower than building from a path on disk, so if your developers have already cloned the repo beforehand, it's better to build from a local path. However, using a Git URL is especially useful for CI build scripts!

Tip 2: You can achieve lean images by minimising the number of build layers. Dive can help you do this: it analyses a Docker image by showing how its contents change as each image layer is added.

3. Configure environment variables

Most applications use environment variables for initialisation and startup. For example, we can supply the environment variables POSTGRES_USER and POSTGRES_DB to define the default superuser and database of the database service. These variables can be defined in the compose file like so:
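For example, a sketch for the database service (the user and database names are placeholders):

    services:
      database:
        image: "postgres:alpine"
        environment:
          POSTGRES_USER: rate_user
          POSTGRES_DB: rate_db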

Alternatively, you can define environment variables in a .env file placed in the same directory as the compose file; Compose reads it automatically when it starts.

You can then pass these variables into the container by including their names without specifying the values.
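A sketch of this approach, using the same placeholder values:

    # .env (placed next to the compose file)
    POSTGRES_USER=rate_user
    POSTGRES_DB=rate_db

and in the compose file:

    services:
      database:
        image: "postgres:alpine"
        environment:
          # No values given: Compose resolves them from its environment
          - POSTGRES_USER
          - POSTGRES_DB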

4. Configure networking

Containers communicate with each other through an internal network created by Compose, referring to one another by service name. So if the web server is running on port 5000 inside its container, the client application can connect to it over the internal Compose network at server:5000.

If you are trying to connect from the host machine, you will need to expose the service on a host port, in the format <host port>:<container port>, like so: 4200:5000.
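In the compose file, that mapping looks like this (the service name and ports follow the example above):

    services:
      server:
        build: ./server
        ports:
          # <host port>:<container port>
          - "4200:5000"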

5. Set up volumes

In most cases, we would not want our database contents to be lost every time the database service is brought down. A common way to persist our DB data is to mount a named Docker volume.
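For example, a minimal sketch for the database service (the volume name is arbitrary; the path is the default Postgres data directory):

    services:
      database:
        image: "postgres:alpine"
        volumes:
          # Persist the Postgres data directory in a named volume
          - db-data:/var/lib/postgresql/data

    volumes:
      db-data: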

Tip: Any named volumes that we use must be declared in the top-level volumes key.

6. Build & Run

Now, you are set to build the images for your services and generate containers from these images.

Build services: docker-compose build [SERVICE...]
Generate and run containers: docker-compose up [SERVICE...]
View running containers: docker-compose ps [SERVICE...]

Tip: The standard output of docker-compose up may hang occasionally, leaving you to think that the application is not responding. Instead, you can run the containers detached with the -d flag and tail the container logs manually with docker-compose logs --follow [SERVICE...].
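For example, assuming the service names used in this post:

    # Start everything in the background, then tail the server's logs
    docker-compose up -d
    docker-compose logs --follow server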

Result

If you have followed all the steps mentioned above, you will end up with a simple compose file that looks something like this:
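Below is a sketch assembled from the snippets above; the paths, ports and variable names are illustrative rather than a definitive setup:

    version: "3"

    services:
      client:
        build: ./client
        ports:
          - "3000:3000"
        depends_on:
          - server

      server:
        build: ./server
        ports:
          - "4200:5000"
        environment:
          # Resolved from the .env file described in step 3
          - POSTGRES_USER
          - POSTGRES_DB
        depends_on:
          - database

      database:
        image: "postgres:alpine"
        environment:
          - POSTGRES_USER
          - POSTGRES_DB
        volumes:
          - db-data:/var/lib/postgresql/data

    volumes:
      db-data: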

With this setup, any developer can run these services without even cloning the repositories; all they need is Docker installed on their machine. The “build once, run everywhere” nature of Docker and Compose is very useful for development and testing, regardless of the type of engineer you are.

Moving forward, you can easily add more services to the Compose configuration and scale them up using replicas as your system grows. This makes it possible to deploy your system to production as a multi-container application.

The following section offers some examples of how Compose can be integrated into the different stages of your software development lifecycle.

Use Case Examples

Development:

LaraDock

LaraDock provides a pre-configured and easy-to-use PHP development environment, similar to the one discussed in part 1. LaraDock's high customisability comes from its heavy use of Compose services, each representing a “layer” in the tech stack. For example, if you use LaraDock to run a LAMP stack (i.e. Linux, Apache, MySQL, PHP), each of the 4 components runs as its own Compose service.

Have a look at their compose file to get a sense of how big Compose applications can get!

Testing:

Stellar Integration Tests

The Stellar team utilises Compose to perform integration tests on Travis CI. Conducting integration tests for the Stellar blockchain network requires its different components (e.g. the financial institution (FI) server, compliance server and databases) to be up and running simultaneously. To make this possible, the components are encapsulated in separate containers which are then brought up by Compose.

Once all components are up, Travis CI executes script.sh to run the integration tests. Check out their compose file here.

Production:

Blue-green deployment

Blue-green deployment is an important technique for minimising service downtime when deploying codebase changes to production. With this technique, the software is deployed to two production environments with identical configurations (i.e. “blue” and “green”). At any time, at least one of them is alive and servicing requests in production while the other is idle and used as a failover. Let's assume the active environment is blue and the idle environment is green.

When you wish to release a new version of the software, you deploy it to the idle green environment first. Once you have verified that the software works properly in green, you switch your routing to point to green instead of blue. This hot-swapping of environments after deployment results in much shorter downtime than simply deploying new versions to a single shared environment.

To hot-swap between the green and blue environments, you need a service discovery mechanism. This mechanism automatically picks up the environment that is alive and uses it to serve incoming HTTP requests. The developer can then choose which codebase version runs in production by keeping the appropriate environment alive. Of course, you may also have both green and blue running concurrently as a failover system; you can then tell the discovery tool which environment takes priority in the event that both are alive.

This proof of concept project on GitHub implements blue-green deployment using Compose, with an Nginx web server as the sample application. It implements service discovery through Registrator and Consul. Registrator tracks the availability of Compose services by checking whether their containers are online, and registers/deregisters these services with the service registry it is connected to, which is Consul in this case.
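To make the moving parts concrete, here is a rough sketch of how such a stack could be wired together in Compose. This is not the PoC's actual configuration: the public consul and gliderlabs/registrator images are real, but the flags and service layout below are illustrative development settings:

    services:
      consul:
        image: "consul"
        # Dev-mode agent, listening on all interfaces so the UI is reachable
        command: agent -dev -client=0.0.0.0
        ports:
          - "8500:8500"

      registrator:
        image: "gliderlabs/registrator"
        # Watch the Docker socket and (de)register containers with Consul
        command: -internal consul://consul:8500
        volumes:
          - /var/run/docker.sock:/tmp/docker.sock
        depends_on:
          - consul

      app:
        image: "nginx:alpine"
        ports:
          - "80"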

The files of interest in the repository show how these pieces are wired together.

In fact, this combination of Consul, Registrator and Compose is a popular choice for implementing service discovery in Docker applications.

Still curious?

If you wish to explore more Docker-related projects, this page offers an excellent selection: https://github.com/veggiemonk/awesome-docker
