Spinning up and Managing Multi-container apps using Docker Compose

Oluwaleke Aina
8 min read · Nov 13, 2017
Deploying multiple containers using docker-compose

Docker… Containers… Kubernetes… Docker Swarm… containerized clusters, etc. You have probably heard some, if not all, of these terms buzzing around the tech community for a while now and wondered what they are. Or you might know them, but you haven’t used them before or seen how they work together.

But first… what are containers? Well, from here: containers are lightweight, stand-alone, executable packages of a piece of software that include everything needed to run it: code, runtime, system tools, system libraries, settings, etc. Containers share the kernel of their host, which makes them really lightweight and fun to use. They also come in various Linux distros like CentOS, Ubuntu, etc. But I won’t be talking in depth about what Docker containers are. You can find more here:

What I will be talking about today is docker-compose. Docker Compose is one of the tools provided by Docker to define and spin up multiple containerized applications (think microservices), all with just one file written in YAML. Normally, to spin up a single container, all you need is a single Dockerfile (a Dockerfile is a blueprint which specifies commands to run and packages to install while building a Docker image). But what about when you have multiple Dockerfiles? Do you have to run them one by one? This is where docker-compose comes into play.

Why use docker-compose?

  1. It allows you to lay out a blueprint for multi-container applications in a single YAML file.
  2. Once all your containers are defined in the file, you can spin them up with a single command, as opposed to running a docker build command (docker build <path_to_Dockerfile>) for every Dockerfile. That can definitely be cumbersome.
  3. Docker-compose is very useful for running scale tests and for rapidly spinning up multiple containerized microservices for testing/deployment. For example, you can run a full-blown CI/CD environment (e.g. using Jenkins CI or CircleCI) which deploys and spins up multiple containers with your microservices running (I will write about this in another post).
Simple Dockerfile for building a container image
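The Dockerfile image referenced above is not reproduced here; a minimal example of the kind of blueprint described (base image, a package install, a start command — the exact contents are my assumption, not the original file) might look like:

```dockerfile
# Start from an official base image
FROM ubuntu:16.04

# Install a package while building the image
RUN apt-get update && apt-get install -y nginx

# Command to run when a container starts from this image
CMD ["nginx", "-g", "daemon off;"]
```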

Getting Started

The first step to spinning up multiple containers with docker-compose is to install Docker for the operating system you are working on. Steps for installing the docker-engine are found here.

Make sure to install the Community Edition (CE), since it’s just for practice at this point. Check which version is installed by typing: docker version

The docker engine must be installed to use docker-compose

Install docker-compose: There are two ways to do this. Per the Docker website, all you have to do from your command line is:

First method: Pull a release of docker-compose (here we use version 1.16.1) using:

sudo curl -L https://github.com/docker/compose/releases/download/1.16.1/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose

Make docker-compose executable

sudo chmod +x /usr/local/bin/docker-compose

Second method: You can also install docker-compose using Python’s pip with the command pip install -U docker-compose. If pip is not installed, install it using sudo apt-get install python-pip on Ubuntu, or yum install python-pip on Red Hat based distros.

Check your docker-compose version: docker-compose --version

You can find more about installing docker-compose here.

Writing your first docker-compose file

Docker Compose files are written in YAML (YAML Ain’t Markup Language) and contain all the instructions needed to build the containers. Let’s look at a simple docker-compose.yml file:

sample docker-compose file to install nginx
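The compose file shown in the image above is not reproduced here; based on the description later in this post (a version key, a service named nginx, the current directory mapped to /usr/share, and container port 80 published on host port 8080), it might look something like this sketch:

```yaml
version: '3'
services:
  nginx:
    image: nginx:latest       # pull the official nginx image
    volumes:
      - .:/usr/share          # mount the current directory into the container
    ports:
      - "8080:80"             # host port 8080 -> container port 80
```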

You might be asking what the commands in the above .yml file mean; we’ll come to this in a minute. Next, save this file to a directory of your choice — for example, the Desktop directory. To spin up this container, type in the command docker-compose up -d. The “-d” flag will spin up the container in detached mode, meaning it will not stay attached to your terminal session once it is up, so you can run other commands.

Check that the container is up using docker ps. If your nginx container is running, you will see something like:

spinning up the nginx container using docker-compose

To get into your nginx container, type in: docker-compose exec nginx /bin/bash, where nginx is the name of the service we specified in the compose file.

login to nginx container

Note: To run this, make sure you are in the directory where the docker-compose file is.

Some Docker Compose terminologies

version: In the docker-compose file above, you see we specified version. This tells the docker engine which Compose file format this file should be interpreted as. Each Compose version comes with some added syntax; for example, version 1 does not support the keyword services, which was introduced in version 2.

Note: If the version is not specified, docker-compose will interpret your file as version 1. The link below explains more and sheds light on which docker-engine version is compatible with each “version” keyword of docker-compose.

services: A service in docker-compose basically refers to a containerized application/microservice you want to spin up. For example, in the docker-compose.yml file above, you can see that we have named our service nginx; this is useful because you can use this name to get into your container, as shown above.

volumes: This is a very useful option because it maps certain directories or volumes from your host machine onto your container, so that the container can use specific scripts/files needed by the microservice. In the .yml file above, we have mapped our current “.” directory to the /usr/share/ directory in the container.
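As a sketch (assuming the nginx service described earlier), that mapping would look like:

```yaml
services:
  nginx:
    image: nginx
    volumes:
      - .:/usr/share   # <host path>:<container path>
```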

ports: By default, Docker container ports are not accessible to the outside world or to your local machine until you expose them. One way to do this is to map a port in the container to an unused port on your host machine. For example, in the file above, we have mapped port 80, used by our nginx container, to port 8080 on our host machine.
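The two styles of port mapping look like this (illustrative snippet, not from the original file):

```yaml
ports:
  - "8080:80"   # fixed mapping: host port 8080 -> container port 80
  - "80"        # container port only: Docker picks a random free host port
```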

build: This specifies the directory where your Dockerfile (the script used to build your Docker image) is located.

depends_on: This option lets you make sure a service is not started until the services it depends on have been started first.
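A minimal sketch of depends_on, using hypothetical web and redis services:

```yaml
version: '3'
services:
  web:
    image: nginx
    depends_on:
      - redis    # redis is started before web
  redis:
    image: redis
```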

Creating multiple container apps

So, with this brief intro to docker-compose, let’s create three containers: one running nginx, another running the redis database, and the last one running the apache web server.

First, create a directory for your project, then create a docker-compose.yml file as shown below within the directory.

docker-compose.yml file to create 3 services in containers
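The three-service compose file shown in the image above is not reproduced here; judging from the service names in the pull output below (web, redis, apache) and the random host-port mapping apache ends up with, it might look something like this sketch:

```yaml
version: '3'
services:
  web:
    image: nginx:latest
  redis:
    image: redis:latest
  apache:
    image: httpd:latest
    ports:
      - "80"     # container port only: Docker assigns a random host port
```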

After creating your YAML file as above, use docker-compose config to check your compose file for errors. If there are none, you are ready to bring up the containerized services.

Check docker-compose.yml file for errors

To start up your services; run docker-compose up -d --build if it is your first time running these services.

If you made a change or you want to destroy and recreate the containers, run: docker-compose up -d --build --force-recreate -t 0

To stop and remove all the containers, use: docker-compose down

Note: You must be in the directory where the docker-compose.yml is to successfully run your docker-compose commands.

The option --force-recreate will forcefully destroy and recreate the already existing containers.

Creating network "desktop_default" with the default driver
Pulling apache (httpd:latest)...
latest: Pulling from library/httpd
85b1f47fba49: Pull complete
45bea5eb3b59: Pull complete
d360abbf616c: Pull complete
91c7cdd03f84: Pull complete
30623dd230a8: Pull complete
cc21a2e04dd3: Pull complete
f789cd8382be: Pull complete
Digest: sha256:8ac08d0fdc49f2dc83bf5dab36c029ffe7776f846617335225d2796c74a247b4
Status: Downloaded newer image for httpd:latest
Pulling redis (redis:latest)...
latest: Pulling from library/redis
d13d02fa248d: Pull complete
039f8341839e: Pull complete
21b9cdda7eb9: Pull complete
c3eba3e5fbc2: Pull complete
7778a0753f87: Pull complete
b052cf77de81: Pull complete
Digest: sha256:cd277716dbff2c0211c8366687d275d2b53112fecbf9d6c86e9853edb0900956
Status: Downloaded newer image for redis:latest
Pulling web (nginx:latest)...
latest: Pulling from library/nginx
bc95e04b23c0: Pull complete
a21d9ee25fc3: Pull complete
9bda7d5afd39: Pull complete
Digest: sha256:9fca103a62af6db7f188ac3376c60927db41f88b8d2354bf02d2290a672dc425
Status: Downloaded newer image for nginx:latest
Creating desktop_apache_1 ...
Creating desktop_redis_1 ...
Creating desktop_apache_1
Creating desktop_redis_1 ... done
Creating desktop_web_1 ...
Creating desktop_web_1 ... done

As you can see from the output, we have the web, apache and redis services running in containers. The apache container’s port 80 is mapped to port 32769 on our host machine; this is because Docker chooses a random host port when only the container port is given in the ports option of the docker-compose file. So we can access apache from the host machine using that port, as shown below:

Confirm containers (e.g. apache) are running

From within the container, apache runs on port 80 as shown below. To get into any of your containers, use the command

docker-compose exec <name_of_service in compose file> /bin/bash

e.g. docker-compose exec redis /bin/bash

Running redis inside its container

Scaling up micro services using docker-compose

Docker-compose allows you to scale any container, though to do this you must make sure you do not map the container’s port to a specific port on your host.

To scale a microservice up by adding more instances, we use the command:

docker-compose up -d --scale <name_of_service in compose file>=<number of containers desired>

e.g. docker-compose up -d --scale redis=4 will create three additional redis containers, making 4 in total, as shown below.

Scaling up to 4 redis containers

Note: To successfully scale a microservice, the service must not be mapped to a specific port on your host machine. For example, in the docker-compose file below, the apache and nginx service containers are mapped to ports 8099 and 8010 respectively on the host machine. Scaling these will result in an error.

apache container port 80 mapped to port 8099 on host
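The compose file from the image above is not reproduced; the fixed mappings it describes would look something like this sketch:

```yaml
services:
  apache:
    image: httpd:latest
    ports:
      - "8099:80"   # fixed host port: scaling this service will fail
  nginx:
    image: nginx:latest
    ports:
      - "8010:80"   # fixed host port: scaling this service will fail
```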
lakeside@lakeside-VirtualBox:~/Desktop$ docker-compose up -d --scale apache=2
WARNING: The "apache" service specifies a port on the host. If multiple containers for this service are created on a single host, the port will clash.
Starting desktop_apache_1 ...
Starting desktop_apache_1 ... done
Creating desktop_apache_2 ...
desktop_web_1 is up-to-date
Creating desktop_apache_2 ... error
ERROR: for desktop_apache_2  Cannot start service apache: driver failed programming external connectivity on endpoint desktop_apache_2 (01dbbab3165b547616d45cdc1b01c19cede8274144709911f8b49f0b7c7d81af): Bind for 0.0.0.0:8099 failed: port is already allocated
ERROR: for apache  Cannot start service apache: driver failed programming external connectivity on endpoint desktop_apache_2 (01dbbab3165b547616d45cdc1b01c19cede8274144709911f8b49f0b7c7d81af): Bind for 0.0.0.0:8099 failed: port is already allocated
ERROR: Encountered errors while bringing up the project.

The error occurs because Docker cannot map multiple containers to the same port on the host machine.

Conclusion:

There are so many awesome things you can do with docker-compose. Beyond it, other orchestration tools like Kubernetes and Docker Swarm are used by many organizations to manage containerized clusters (running various microservice applications) and auto-scale their infrastructure.
