Simplifying development on your local machine using Docker and Docker Compose

Mandeep Singh Gulati · Published in The Startup · Jun 2, 2020 · 12 min read

Part 1: Why use Docker and Docker Compose?

Suppose you're an API developer working on an Express app, and that app connects to:

  • Postgres as its primary database
  • Elasticsearch to support full text search
  • Redis for caching

Also, suppose you're working on this project with a team of developers who run different operating systems on their machines, and you've been given the task of documenting how to set up the local development environment.

Writing installation instructions for Postgres, Elasticsearch, and Redis across different operating systems is tedious, and the resulting documentation is hard to maintain. If you instead write something like "follow the official documentation to install version X of Postgres for your OS", chances are some developers will hit installation issues, and the troubleshooting will take up their time and perhaps yours as well. And once you manage to work around such an issue, you'll need to update your instructions to mention it.

Also, what if some developers already have different versions of that software installed on their machines, and those versions aren't fully compatible with your app? I hope by now you've realized how tedious this task can be. I've been there and I know how stressful it can be.

There is a way to simplify this significantly with Docker and Docker Compose. To give you an idea of how easy it can be: if you have Docker and Docker Compose installed on your machine, all you need is a file called docker-compose.yml with the following contents:

Note: You don’t have to do this right away because we will be doing a walk-through of this process later in this post.

version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.8.0
    ports:
      - 9200:9200
  postgres:
    image: postgres:11.2
    ports:
      - 5432:5432
  redis:
    image: redis:latest
    ports:
      - 6379:6379

And if you run the following command from the command line, in the directory containing your docker-compose.yml file:

docker-compose up

This should create containers for the specified versions of Elasticsearch, Postgres, and Redis on your machine, irrespective of which OS you are using, as long as it supports Docker. You can then access those services through the assigned ports (for example, port 5432 for Postgres) as if the software were installed directly on your host machine. If you want to tear down the entire setup, you can do:

docker-compose down

And this will stop all the running containers defined by the compose file docker-compose.yml and also remove the network it created earlier. Since these containers are isolated environments and you aren't actually installing the software on your host machine, it becomes much easier to work on different projects that require different versions of the same software. For example, the above compose file uses Elasticsearch version 6.8.0, which can be inferred from the line

image: docker.elastic.co/elasticsearch/elasticsearch:6.8.0

Let’s say you’re working on another project which requires Elasticsearch 5.5.0. You can have a separate compose file for that project where you specify the image tag as 5.5.0 like this:

image: docker.elastic.co/elasticsearch/elasticsearch:5.5.0

When you do docker-compose up from that project’s directory, this will pull the required image from the repository and create a separate container based on that image.

The above compose file has one limitation: it doesn't offer persistent storage (I have excluded that configuration intentionally to keep things easy to understand for first-time Docker users). This means that any data you write to those containers is lost when you run docker-compose down; when you bring the containers up again, you won't see any tables created in the Postgres database or any indices created in Elasticsearch. This can be fixed by adding a few more lines to the compose file, like this:

version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.8.0
    ports:
      - 9200:9200
    volumes:
      - /some/dir/on/host:/usr/share/elasticsearch/data

  postgres:
    image: postgres:11.2
    ports:
      - 5432:5432
    volumes:
      - /some/dir/on/host2:/var/lib/postgresql/data

  redis:
    image: redis
    ports:
      - 6379:6379

In the above compose file, /some/dir/on/host and /some/dir/on/host2 can be any paths on your local machine that you want to map to the containers' data directories. I know that Elasticsearch stores data in /usr/share/elasticsearch/data by default and Postgres stores data in /var/lib/postgresql/data by default, so I mapped directories from my host machine to those container directories. You need to look up where a given container stores its data, and you can then map that directory to any directory on your host machine. With these volume mappings in place, when you stop the containers and bring them back up later, you won't lose any data, as long as it was written to the mapped directories.
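As an aside, instead of bind-mounting host directories, Docker Compose also supports named volumes, where Docker manages the storage location for you. Here is a hedged sketch of the same idea (the volume names esdata and pgdata are my own, not from the setup above):

```yaml
version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.8.0
    ports:
      - 9200:9200
    volumes:
      - esdata:/usr/share/elasticsearch/data
  postgres:
    image: postgres:11.2
    ports:
      - 5432:5432
    volumes:
      - pgdata:/var/lib/postgresql/data

# A top-level volumes section declares the named volumes
volumes:
  esdata:
  pgdata:
```

The trade-off is that the data lives in Docker's managed storage area rather than a directory you picked, but named volumes tend to avoid host-path permission quirks.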

I hope that by now you're convinced that learning to work with Docker and Docker Compose can make your life easier as a developer, even if you're not a DevOps person. Next, we will go through the above compose file in detail and explain what each line means. At the end, I will share some Docker Compose commands that I find really useful on a day-to-day basis.

Part 2: Understanding our compose file

Before we dive into our compose file and start explaining what each of those lines means, I would recommend installing Docker and Docker Compose on your machine first. For that, I recommend following the official instructions, because those are updated regularly and easy to understand. Visit this link to install Docker for your operating system and this link to install Docker Compose. Once you've installed both the Docker engine and Docker Compose on your machine and verified that they work, you can come back to this post.

I assume you have successfully installed both Docker and Docker Compose on your machine. Now, let’s take a look at our compose file again:

version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.8.0
    ports:
      - 9200:9200
    volumes:
      - /some/dir/on/host:/usr/share/elasticsearch/data

  postgres:
    image: postgres:11.2
    ports:
      - 5432:5432
    volumes:
      - /some/dir/on/host2:/var/lib/postgresql/data

  redis:
    image: redis
    ports:
      - 6379:6379

The docker-compose.yml file is a YAML file where you provide the configuration for your setup; Docker Compose reads that configuration and sets up Docker containers, volumes, and networks as per your compose file.

On the first line of our compose file, we specify the version number. This tells Docker Compose which version of the compose file format we are using, so that it knows how to interpret the rest of the file. As of writing this post, version 3 is the latest, so that's what I have specified.

Then we specify the services section which is used to define the different containers that we need to create in this setup. Here, we have 3 services:

  • elasticsearch
  • postgres
  • redis

It's not necessary to name them like this; you can call them something else. Since there are 3 services defined in our compose file, docker-compose up will create 3 containers, one for each service.
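For example, here is a hedged sketch where the Postgres service is named db instead of postgres (the name is arbitrary; it only changes what the service is called within the compose setup):

```yaml
version: '3'
services:
  db:    # same Postgres image, just a different service name
    image: postgres:11.2
    ports:
      - 5432:5432
```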

Under each of these services, we define the configuration for that service. Let’s take example of Elasticsearch:

elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch:6.8.0
  ports:
    - 9200:9200
  volumes:
    - /some/dir/on/host:/usr/share/elasticsearch/data

The image key defines which Docker image we want to use. It must be available either on your local machine or in a remote registry. Here, we are pulling the official image from docker.elastic.co. An image name consists of two parts, {repository_path}:{tag}. Here, repository_path is docker.elastic.co/elasticsearch/elasticsearch and the tag is 6.8.0. How did I know the path for this image? I searched online and found the page listing all the official Docker images for the Elastic stack: https://www.docker.elastic.co/

If you notice, the repository_path for Postgres and Redis is a bit different from that of Elasticsearch. For those, we have only provided the name of the image, not the full path including the registry domain. When we do this, Docker assumes we are pulling the image from Docker Hub (https://hub.docker.com).

Also, for redis, we haven't provided a tag. In that case, Docker assumes we want the image tagged latest. If Docker is unable to find an image with the tag you provided, it will throw an error.
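Out of curiosity, these two defaulting rules can be sketched as a tiny Python helper. This is my own illustrative simplification, not Docker's actual reference-parsing logic (which also handles digests and more):

```python
def parse_image_ref(ref):
    """Split an image reference into (repository, tag), applying
    the 'latest' default when no tag is given."""
    # Look for a tag only after the last '/', so that a registry
    # port like registry.local:5000/app is not mistaken for a tag.
    last_part = ref.rpartition('/')[2]
    name, sep, tag = last_part.partition(':')
    prefix = ref[: len(ref) - len(last_part)]
    if not sep:
        tag = 'latest'
    return prefix + name, tag

print(parse_image_ref('redis'))
# ('redis', 'latest')
print(parse_image_ref('postgres:11.2'))
# ('postgres', '11.2')
print(parse_image_ref('docker.elastic.co/elasticsearch/elasticsearch:6.8.0'))
# ('docker.elastic.co/elasticsearch/elasticsearch', '6.8.0')
```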

Next comes the ports section, where we map ports on the host machine to ports on the Docker container. The host machine port goes on the left and the container port on the right, like this:

{host machine port}:{container port}

We are mapping the same port number on the host machine to the container port, but you can map a different host port as well. For example, if I wanted the Elasticsearch service to be accessible at localhost:8888 (assuming I am running this setup on my local machine), I could have provided the config as:

elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch:6.8.0
  ports:
    - 8888:9200
  volumes:
    - /some/dir/on/host:/usr/share/elasticsearch/data

We can't change the default port of a container (the one on the right side) without changing the configuration of that container. So, in the above setup, if you write 9200:8888, it means bind port 9200 of the host machine to port 8888 of the container running Elasticsearch. But by default, nothing listens on port 8888 of the Elasticsearch container, so you won't get anything when you visit localhost:9200.

You can map more than one port. For example, Elasticsearch exposes two ports, 9200 and 9300. You can map both those ports to the corresponding ports on host machine:

elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch:6.8.0
  ports:
    - 9200:9200
    - 9300:9300
  volumes:
    - /some/dir/on/host:/usr/share/elasticsearch/data
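Once the containers are up, you can verify from the host that a mapped port actually has something listening on it. Here is a minimal Python sketch using only the standard library (the helper name is my own; point it at localhost:9200 or similar once your services are running):

```python
import socket

def is_port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (with the compose file above running):
# is_port_open('localhost', 9200)  ->  True if Elasticsearch is reachable
```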

The volumes section is something we already covered in the beginning. It follows the same convention as ports:

{path on host machine}:{path on container}

You can map more than one directory, and you can map individual files as well; the path is then the path of a file instead of a directory. This is useful when you want to replace a config file inside a container with a custom one. For example, the official Elasticsearch image keeps its elasticsearch.yml config file at /usr/share/elasticsearch/config/elasticsearch.yml inside the container. If you want to replace it with a custom YAML file, you can do:

volumes:
  - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml

Here, my custom elasticsearch.yml file is located in the same directory as my docker-compose.yml file.

That’s all the config we are using for our setup. The compose file supports lots of other configuration options, the details of which can be found here.

I would recommend quickly going through it at least once to get a broad picture of what you can do, and then diving into individual sections as needed based on your use case. Now that we understand the configuration in our compose file, let's go through some commands that I find myself using regularly.

Part 3: Why did we use Docker Compose?

The setup we have created so far using Docker Compose could also have been created with plain docker commands alone (without Docker Compose). But that is more tedious and relatively harder to understand. When you create a setup using Docker Compose, there is also a lot of networking magic that lets your services connect to each other by service name alone. For example, if you need to access the postgres container from the elasticsearch container, you'll find it available at postgres:5432. And this works even if you don't map port 5432 of the postgres container to a host port. Consider the compose file below:

version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.8.0
    volumes:
      - ./docker-volumes/elasticsearch:/usr/share/elasticsearch/data
  postgres:
    image: postgres:11.2
    volumes:
      - ./docker-volumes/postgres:/var/lib/postgresql/data
  redis:
    image: redis
  api:
    depends_on:
      - postgres
      - elasticsearch
      - redis
    build:
      context: ./api
    command: npm start
    ports:
      - "8080:8080"

Here, we have 4 services:

  • elasticsearch
  • postgres
  • redis
  • api

We are only mapping the ports of our api service, not those of the other services, because we probably don't need to. We only want those services to be accessible to our api container, which Docker Compose arranges by default, since all these services are defined in the same compose file and share a network. If we tried to achieve this using Docker alone (without Docker Compose), it would involve additional commands to set up the networking, which is again good to learn, but more hassle and harder to maintain. Hence we use Docker Compose: it makes the configuration for our setup much easier to manage.
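Inside the api container, the other services are reachable by their service names. Here is a hedged sketch of how the api service might be pointed at them via environment variables (the variable names and connection URLs are illustrative assumptions, not from the original setup):

```yaml
api:
  build:
    context: ./api
  environment:
    - DATABASE_URL=postgres://postgres@postgres:5432/postgres
    - ELASTICSEARCH_URL=http://elasticsearch:9200
    - REDIS_URL=redis://redis:6379
```

Note that the hostnames in those URLs are simply the service names from the compose file.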

Part 4: Some helpful Docker Compose commands

In order to run any Docker Compose command, you need a valid docker-compose.yml file in your current directory or a parent directory of where you run the command. If you run docker-compose up from a random directory that does not contain a compose file, you'll get an error like this:

ERROR:
Can't find a suitable configuration file in this directory or any
parent. Are you in the right directory?

Supported filenames: docker-compose.yml, docker-compose.yaml

The most helpful command for me is the help command. It allows you to explore the commands on your own.

For example, to see the list of all Docker Compose commands:

docker-compose --help

Then, to get some help on any of those commands you can further use the help command.

For example, to see more details on docker-compose up command, do:

docker-compose up --help

Usually, I run the up command in detached mode, like this:

docker-compose up -d

For our setup, running this command gives output like this:

Creating network "simplify-with-docker_default" with the default driver
Creating simplify-with-docker_postgres_1 ... done
Creating simplify-with-docker_elasticsearch_1 ... done
Creating simplify-with-docker_redis_1 ... done

By default, Docker Compose creates a network named {current_directory_name}_default. In my case, the current directory was simplify-with-docker. It also creates containers named {current_directory_name}_{service_name}_1. The 1 at the end is the index of the container, as Docker Compose allows you to run more than one container for a service, but that's a topic for another post.
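This naming convention can be captured in a tiny illustrative Python helper (my own function, and a simplification: real Compose also normalizes the project name further, e.g. stripping unsupported characters):

```python
def default_names(project_dir, services):
    """Sketch of Compose's default naming: the network is
    {dir}_default and containers are {dir}_{service}_1."""
    project = project_dir.lower()  # Compose lowercases the project name
    network = f"{project}_default"
    containers = [f"{project}_{svc}_1" for svc in services]
    return network, containers

net, names = default_names("simplify-with-docker",
                           ["postgres", "elasticsearch", "redis"])
print(net)       # simplify-with-docker_default
print(names[0])  # simplify-with-docker_postgres_1
```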

To see the list of running services:

docker-compose ps

This should give output like this:

[screenshot: docker-compose ps output showing the state of each container]

You can see that while postgres and redis are running fine, Elasticsearch is not, which brings us to the next helpful command.

To check the logs for a service:

docker-compose logs -f {service_name}

Let's check the logs of Elasticsearch to see what happened:

docker-compose logs -f elasticsearch

In my case, I found this error in the logs:

[screenshot: docker-compose logs output showing an access-denied error on the Elasticsearch data directory]

This means there is an access-related issue: the user inside the elasticsearch container must be able to access the data directory. Here is my config file with the volumes section:

version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.8.0
    ports:
      - 9200:9200
    volumes:
      - ./elasticsearch:/usr/share/elasticsearch/data
  postgres:
    image: postgres:11.2
    ports:
      - 5432:5432
    volumes:
      - ./postgres:/var/lib/postgresql/data
  redis:
    image: redis
    ports:
      - 6379:6379

And if I check the permissions on elasticsearch directory:

➜  simplify-with-docker ls -lhrt            
total 12K
-rw-rw-r-- 1 mandeep mandeep 381 Jun 3 00:12 docker-compose.yml
drwxr-xr-x 2 root root 4.0K Jun 3 00:12 elasticsearch
drwx------ 19 999 root 4.0K Jun 3 00:12 postgres

I can see that it is owned by root. After searching online, I found that it can be solved by running this command:

sudo chown -R 1000:1000 <path to the elasticsearch volume>

Here, in our case, this will be (1000 is the uid and gid of the elasticsearch user inside the container):

sudo chown -R 1000:1000 elasticsearch

After this, if you run docker-compose up -d, it should bring up all the services. And then docker-compose ps should show all services up and running:

[screenshot: docker-compose ps output with all three services in the Up state]

To get inside a container:

docker-compose exec {service name} bash

For example, to get inside the postgres container:

docker-compose exec postgres bash

Sometimes, for some containers, bash might not be available. You can try:

docker-compose exec {service name} sh

docker-compose exec allows you to run a command inside a container. Here, we are running bash or sh, which gives us a shell. We could run other commands as well. For example, the redis container has redis-cli available. We can run it like this:

docker-compose exec redis redis-cli

Here redis is the service name and redis-cli is the command name. You can get more help on docker-compose exec command using the docker-compose exec --help command.

Let's try accessing psql inside the postgres container. At first, you might try:

docker-compose exec postgres psql

This will give error:

psql: FATAL: role "root" does not exist

By default, the user needs to be postgres. So, let’s modify our command:

docker-compose exec postgres psql -U postgres

This should get you to the psql shell inside the postgres container.

There are lots of other useful Docker Compose commands. I would suggest playing around with them and referring to the help output when unclear. If you still need more help, the official documentation has lots of helpful information.
