Docker Tips & Tricks

Docker is great. Docker is awesome. I love Docker. It is hard to overstate how much of a joy developing applications becomes when packaging, deploying and delivering them is no longer something you really need to worry about.

Docker: https://www.docker.com/

Assuming that you are building Linux applications, containerisation has many advantages that every developer should love:

  • Universal packaging: Docker images are portable everywhere. We are not talking about Java’s motto “Write once, run anywhere” that has been appropriately modified by the community to “Write once, debug everywhere”. No. Docker images are truly portable to every system which has the Docker container engine installed.
  • Immutable start-up state: Starting a container guarantees that the containerised application always begins from the file-system state that the developer designed: any changes made at run time are discarded when the container is recreated. No dirty states are going to cause debugging headaches. If you design your containers to be ephemeral, you are going to leave your worst engineering nightmares behind.
  • Negligible overhead: Docker containers are like virtual machines, but not really. They provide you with a system that is isolated from the host, but the applications you are running in the containers are almost indistinguishable from native processes on the host machine. This means that the performance impact is orders of magnitude lower than what you would incur by running the processes in a Virtual Machine (VM).
  • Universal library dependency management: No more README files explaining which shared libraries or packages have to be installed on the destination system. A Docker image automatically contains all the necessary libraries, because you are forced to deliver it that way.

There are many things to learn about Docker and how to use it to the best of its capabilities. It is not my intention to introduce readers to the tool or how to get started with it. There are plenty of guides on the internet for that.

What I would like to do in this article is to collect a few tricks that I’ve learnt over my career on how to build better Docker images, how to use them and how to solve certain problems that you might encounter.

I will assume that you already know how to build Docker images, the general syntax of the Docker command line tool and of Dockerfiles, and how Docker Compose works.

I will also assume a certain familiarity with the general architecture of Linux, shell scripting, web application development, and building applications from source.

Tips for building images

#1 Use the build-cache during development and minimise layers at the end

When you start building Docker images, it is a good idea to leverage the build cache as much as possible. The build cache gives you a very fast feedback loop, but has the drawback of increasing the final image size if you commit information to layers that should not be there.

Let’s consider the following fragment of a Dockerfile I’ve built in the past to host a headless Garry’s Mod server.
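It was structured roughly like this (the image tag, package names, SteamCMD URL and Steam app id below are illustrative rather than the exact values I used):

FROM ubuntu:22.04

# Install the packages needed to download and run the dedicated server
RUN apt-get update
RUN apt-get install -y curl ca-certificates lib32gcc-s1

# Download and unpack SteamCMD
RUN curl -sSL https://steamcdn-a.akamaihd.net/client/installer/steamcmd_linux.tar.gz -o /tmp/steamcmd_linux.tar.gz
RUN mkdir -p /opt/steamcmd && tar -xzf /tmp/steamcmd_linux.tar.gz -C /opt/steamcmd

# Install the Garry's Mod dedicated server
RUN /opt/steamcmd/steamcmd.sh +force_install_dir /opt/gmod +login anonymous +app_update 4020 validate +quit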

Having separate RUN commands causes the image to have multiple layers, which can be reused in subsequent invocations of docker build as long as the respective commands have not changed. For example, let’s say that there is an error in one of the later RUN instructions (the SteamCMD installation, say) and you have to correct it. Since only that instruction changes, Docker restarts the build from the last unchanged layer, meaning that the apt-get commands no longer need to run, which speeds up the feedback loop.

Once the image is verified to be working, it is a good idea to reduce the number of layers in order to make the image more space-efficient. The best way to achieve this is by chaining the commands with && into a single RUN instruction.

An optimised version of the Dockerfile above will look as follows:
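Sticking with the sketch above, the chained version would look roughly like this:

FROM ubuntu:22.04

# A single RUN instruction: install dependencies, fetch SteamCMD and install the server
RUN apt-get update && \
    apt-get install -y curl ca-certificates lib32gcc-s1 && \
    curl -sSL https://steamcdn-a.akamaihd.net/client/installer/steamcmd_linux.tar.gz -o /tmp/steamcmd_linux.tar.gz && \
    mkdir -p /opt/steamcmd && \
    tar -xzf /tmp/steamcmd_linux.tar.gz -C /opt/steamcmd && \
    /opt/steamcmd/steamcmd.sh +force_install_dir /opt/gmod +login anonymous +app_update 4020 validate +quit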

#2 Clean after yourself

With the intention of reducing the image size as much as possible, you always have to keep in mind that once a layer is committed, removing its files in subsequent steps doesn’t reduce the image size. Layers in Docker are organised similarly to git commits: like in git, if you remove a file after it has been committed to the repository, the file is still stored, because it must remain accessible when checking out a past commit.

There are three major things to clean-up when you are building layers:

  1. Package manager caches. These are the residuals due to the usage of various package managers like apt-get, yum or apk. By default, these tools keep downloaded package archives and index files on the file-system, increasing the layer size if they are not removed.
  2. Files downloaded from a server. When you are building an image, it is likely you will have to download archives and unpack them. If you do, you need to remove the archives if no longer necessary.
  3. Supporting packages. Base images come with very few packages installed to keep them as lean as possible, which means that very often you will have to install packages (like curl or wget) just to build the image. Keeping these packages after you are done using them is not necessary and they should be removed.

Below is an example of these practices for a couple of common base images and use cases.
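The two fragments below are illustrative sketches rather than complete Dockerfiles (the application URL is a placeholder); they show the pattern for Debian/Ubuntu and Alpine based images:

# Debian/Ubuntu: drop the apt lists, delete the downloaded archive and purge
# the helper package (curl) in the same RUN instruction that used them.
FROM debian:bookworm-slim
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl ca-certificates && \
    curl -sSL https://example.org/myapp.tar.gz -o /tmp/myapp.tar.gz && \
    tar -xzf /tmp/myapp.tar.gz -C /opt && \
    rm /tmp/myapp.tar.gz && \
    apt-get purge -y curl && \
    apt-get autoremove -y && \
    rm -rf /var/lib/apt/lists/*

# Alpine: --no-cache avoids keeping the apk index, and --virtual groups the
# build-time packages so they can be removed in one go.
FROM alpine:3.19
RUN apk add --no-cache --virtual .build-deps curl && \
    curl -sSL https://example.org/myapp.tar.gz -o /tmp/myapp.tar.gz && \
    tar -xzf /tmp/myapp.tar.gz -C /opt && \
    rm /tmp/myapp.tar.gz && \
    apk del .build-deps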

#3 Use build web servers to avoid COPY and ADD commands

When building containers it is usually necessary to use files that are in the build context. These files can be added to the image using the COPY and ADD commands which, of course, create new layers.

The issue is that if these files are only temporary and need to be removed later, committing them to a layer is unnecessary and just uses up space.

In order to avoid this issue, a trick I normally use is to run an auxiliary NGINX container that serves the necessary files over HTTP, which is then used during the build with curl or wget.

In order to configure the build environment this way, I generally perform the following operations:

  • Create an archive directory in the root of the build context where all the files that need to be transferred are going to be hosted.
  • Start up an NGINX container that serves the static content from the archive folder.
docker run --name nginx -d --rm \
-p 8080:80 \
-v $PWD/archive:/usr/share/nginx/html:ro \
nginx
  • Replace every ADD/COPY command that transfers temporary files with an equivalent wget or curl call inside a single RUN command.
  • Add archive to the .dockerignore file to avoid the directory being used in the build context.

This is an example of that transformation:
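For illustration, assume the build used to copy a (hypothetical) installer.tar.gz from the archive directory. One way to let the build reach the server is to run docker build --network=host, so that localhost:8080 resolves to the NGINX container started above; reaching it via the host’s IP works just as well.

# Before: the archive is committed in a layer by COPY, and the space is not
# reclaimed even though a later step deletes the file.
COPY archive/installer.tar.gz /tmp/installer.tar.gz
RUN mkdir -p /opt/app && tar -xzf /tmp/installer.tar.gz -C /opt/app && rm /tmp/installer.tar.gz

# After: fetch the file from the auxiliary NGINX container and delete it in
# the same RUN instruction, so it never ends up in a committed layer.
RUN curl -sSL http://localhost:8080/installer.tar.gz -o /tmp/installer.tar.gz && \
    mkdir -p /opt/app && \
    tar -xzf /tmp/installer.tar.gz -C /opt/app && \
    rm /tmp/installer.tar.gz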

I used this trick to reduce the size of Oracle DB Docker images from 10 GB to 5.7 GB, which is a really significant saving.

#4 Convert configuration file based applications to environment variables based applications

If you have an existing application that you want to containerise, it is very likely that you will have some configuration file in the form of: application.properties, config.xml, config.json, etc…

Although it would be preferable to convert the application to use environment variables directly, keeping the configuration as it is avoids introducing unnecessary changes.

When this happens, I like to use a bash templating engine such as mo, a Mustache implementation written in bash. The nice thing about it is that it takes a file as input and outputs the same content with the Mustache placeholders substituted with the values of environment variables.

For example if you have a file application.properties with:

property={{MY_PROPERTY}}

and you run:

MY_PROPERTY=hello_world mo application.properties

Mustache will output:

property=hello_world

In general, I set-up my containers as follows:

  • Prepare a file named application.properties.template with all the necessary configuration substituted with Mustache placeholders.
  • Install mo in the container as an executable.
  • Use a Docker entry point to run mo on the template and generate the configuration file.
  • Run the application using the generated configuration file.

Here’s an example of such configuration:
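Here is a sketch of such a Dockerfile for a hypothetical Java application (the base image and the app.jar artefact are illustrative); mo is installed from its GitHub repository:

FROM eclipse-temurin:17-jre

# Install mo (a Mustache templating engine written in bash) as an executable
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl ca-certificates && \
    curl -sSL https://raw.githubusercontent.com/tests-always-included/mo/master/mo -o /usr/local/bin/mo && \
    chmod +x /usr/local/bin/mo && \
    apt-get purge -y curl && \
    apt-get autoremove -y && \
    rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY app.jar /app/app.jar
COPY application.properties.template /app/application.properties.template
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh

ENTRYPOINT ["entrypoint.sh"]
CMD ["/bin/sh", "-c", "java -jar /app/app.jar"]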

Here’s the content of entrypoint.sh:
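A minimal version, assuming the paths used in the sketch above, could be:

#!/bin/bash
# Render the Mustache template into the real configuration file using the
# environment variables passed to the container...
mo /app/application.properties.template > /app/application.properties
# ...then hand control over to whatever command the container was given.
exec "$@"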

#5 Always use array specification for ENTRYPOINT and CMD

You might have noticed that in the last tip I used the following syntax to define the entry point and the command for the container:

ENTRYPOINT ["entrypoint.sh"]                       
CMD ["/bin/sh", "-c", "java -jar /app/app.jar"]

I would always recommend specifying these instructions this way because it reliably ensures that the command is always executed through the entry point. The two instructions interact in surprising ways depending on whether the shell or exec form is used, and this set-up guarantees they run as expected while keeping as much information as possible in the Dockerfile instead of hiding it behind a script.

In general, if you want the entry point to always be executed before the CMD instruction or any other command you decide to run from the image:

  1. Make exec "$@" the last instruction of your entry point script.
  2. Wrap your command in a /bin/sh -c invocation so that variable substitutions happen transparently (see the example below).
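As a quick illustration, assuming the image from tip #4 is tagged my-app, both the default command and any ad-hoc command go through the entry point, so the configuration is always rendered first:

# Default: entrypoint.sh renders the config, then exec's
#   /bin/sh -c "java -jar /app/app.jar"
docker run my-app

# Ad-hoc command: CMD is replaced, but the entry point still runs first
docker run my-app /bin/sh -c 'cat /app/application.properties'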

#6 Run the container as a non-root user

Since Docker runs containers directly on the host kernel, the container’s processes are managed by the host like any other process. This entails that if you start a container as root (UID 0), you are effectively running a process as root on the host itself.

This fact might lead to unpleasant situations. If the application executed in a container is hijacked by a malicious user and the user leverages a vulnerability in Docker to escape the chroot of the container, you would end up with a malicious user having root access to the host.

Because of this risk, it is generally a good idea to run your containers as a non-root user. In the attack scenario above, but running as a standard user, even if the application is hijacked and the attacker escapes the chroot, they would only gain access to a non-root process, meaning that they cannot do significant harm to the system.

The problem is that if the containerised application needs write access to part of the file-system (for example to use Mustache from tip #4), running as a non-root user might not work out of the box, because the directories and files will not be writeable by that user.

In order to solve this issue, you have two choices:

  1. Create users in the container, change the permissions for the necessary directories during the build and use the USER command in the Dockerfile to specify the running user.
  2. Use group-level permissions to set up the user dynamically.

I personally prefer the latter approach because it is extremely flexible and doesn’t assume that the UID is known in advance. This approach is taken directly from the OpenShift Origin documentation; it was introduced because containers in OpenShift run with an arbitrary, unknown UID, so the technique is perfectly suited to this kind of PaaS deployment.

The approach requires mainly two things:

  1. Change the permissions of the necessary files and directories to be writeable by users of the root group.
  2. Define an entry point which sets up the running user as a member of the root group.

Here’s an example of such entry point:
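Here is a sketch of uid_entrypoint.sh, adapted from the pattern in the OpenShift Origin guidelines and combined with the template rendering from tip #4 (the paths mirror the earlier sketches):

#!/bin/bash
# If the container was started with an arbitrary UID that has no entry in
# /etc/passwd, register that UID as a member of the root group (GID 0) so
# that user lookups and HOME resolution keep working.
if ! whoami > /dev/null 2>&1; then
  if [ -w /etc/passwd ]; then
    echo "${USER_NAME:-default}:x:$(id -u):0:${USER_NAME:-default} user:${HOME}:/sbin/nologin" >> /etc/passwd
  fi
fi

# Render the configuration template as in tip #4, then hand over to the command.
mo /app/application.properties.template > /app/application.properties
exec "$@"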

And here’s an adaptation of the Dockerfile from tip #4 that allows the container to run as any UID:
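Again a sketch based on the tip #4 Dockerfile (base image and artefact names are illustrative):

FROM eclipse-temurin:17-jre

# Install mo as in tip #4
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl ca-certificates && \
    curl -sSL https://raw.githubusercontent.com/tests-always-included/mo/master/mo -o /usr/local/bin/mo && \
    chmod +x /usr/local/bin/mo && \
    apt-get purge -y curl && \
    apt-get autoremove -y && \
    rm -rf /var/lib/apt/lists/*

ENV HOME=/app
WORKDIR /app
COPY app.jar /app/app.jar
COPY application.properties.template /app/application.properties.template
COPY uid_entrypoint.sh /usr/local/bin/uid_entrypoint.sh

# Give the root group the same permissions as the owner on everything the
# arbitrary UID needs to touch: /app (for the rendered configuration) and
# /etc/passwd (so uid_entrypoint.sh can register the running UID).
RUN chmod +x /usr/local/bin/uid_entrypoint.sh && \
    chgrp -R 0 /app && \
    chmod -R g=u /app /etc/passwd

# Any non-root UID works; 1001 is just a default. Unknown UIDs fall into GID 0.
USER 1001

ENTRYPOINT ["uid_entrypoint.sh"]
CMD ["/bin/sh", "-c", "java -jar /app/app.jar"]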

Please note that without the instruction chmod -R g=u /app /etc/passwd both the configuration of the user permissions and the Mustache execution would fail in uid_entrypoint.sh when running as a non-root user.

#7 Use multi-stage builds

When Dockerising existing applications you might encounter cases in which the system needed to build the application is significantly bigger than the system needed to run it.

Classic examples are applications that you need to build from source. If you have a git repository containing the application sources and you want to build a container for that application, during the build you will need at the very least git, the compiler, the library dependencies and the run-time to run the application. However, git, the compiler and the development dependencies become superfluous once the application is built.

Multi-stage builds in a Dockerfile allow you to separate the build phase from the execution phase: the build stage produces the executables, and a simple COPY --from instruction transfers them to the execution stage.

Here’s an example of this technique that I’ve used to prepare a Docker container with FFMPEG built with support for the Fraunhofer FDK AAC codec, a high-quality M4A encoder that is disabled in the default distribution of FFMPEG.
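A condensed sketch of that Dockerfile is below. The real build needs more build dependencies and configure flags than shown here, and the package names are the Ubuntu ones, so treat it as an outline of the structure rather than a working recipe: the first stage clones and compiles FFMPEG with libfdk_aac enabled, and the second stage only receives the resulting binary.

# Build stage: everything needed to compile FFMPEG from source
FROM ubuntu:22.04 AS build
RUN apt-get update && \
    apt-get install -y git build-essential nasm pkg-config libfdk-aac-dev && \
    rm -rf /var/lib/apt/lists/*
RUN git clone --depth 1 https://git.ffmpeg.org/ffmpeg.git /ffmpeg && \
    cd /ffmpeg && \
    ./configure --enable-nonfree --enable-libfdk-aac && \
    make -j"$(nproc)" && \
    make install

# Run stage: only the compiled binary and its run-time libraries
FROM ubuntu:22.04
RUN apt-get update && \
    apt-get install -y libfdk-aac2 && \
    rm -rf /var/lib/apt/lists/*
COPY --from=build /usr/local/bin/ffmpeg /usr/local/bin/ffmpeg
ENTRYPOINT ["ffmpeg"]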

Tips for Docker Compose

#1 Use sidecar containers to access additional logs

When you are containerising applications that were not designed with Docker in mind, it often happens that you need to work around certain design choices.

One example of such an application is Tomcat. The default Tomcat container uses the standard output of Catalina as the source of its logs, but it still uses the logs directory to store the logs of the web-apps deployed in the application server and the access logs. These logs are not easy to stream, as the files do not exist when Tomcat starts; they appear only after a few seconds, once Catalina has started successfully and the web-apps have been initialised.

The best way I’ve found to access these logs is via sidecar containers in docker-compose. Sidecar containers are normally used by Kubernetes pods as assistants to the main application. Being part of the same pod, they are deployed on the same host and hence can share a local network and a filesystem. This last detail can be leveraged to access the additional log files in a docker-compose based deployment, because in docker-compose all services are deployed on the same host.

The main idea is the following:

  • Create a shared volume in docker-compose and mount the volume in the main application in the location where the logs are written.
  • Create as many sidecar containers as there are log files to follow.
  • Have each sidecar container poll for the existence of its file at regular intervals, and start tail -f on the file once it becomes available.

Here’s an example of such setup for a Tomcat container:
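Something along these lines (the access log name follows Tomcat’s default localhost_access_log.<date>.txt pattern):

version: "3.8"

services:
  tomcat:
    image: tomcat:9
    ports:
      - "8080:8080"
    volumes:
      - tomcat-logs:/usr/local/tomcat/logs

  # Sidecar: wait for the access log to appear, then follow it on stdout
  access-log:
    image: busybox
    stop_signal: SIGKILL
    volumes:
      - tomcat-logs:/logs:ro
    command: >
      sh -c "until ls /logs/localhost_access_log.*.txt > /dev/null 2>&1;
      do sleep 1; done;
      tail -f /logs/localhost_access_log.*.txt"

volumes:
  tomcat-logs: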

The stop_signal: SIGKILL configuration is necessary because the tail -f loop does not react to the SIGTERM that docker-compose sends on shutdown, so without it the sidecar would hang until the stop timeout expires.

I’ve used busybox since it is one of the most lightweight images that ships with tail, but other images could be used as well.

#2 Use shared volumes to orchestrate containers start-up

Another possible use of shared volumes is as a synchronisation mechanism between multiple services in a docker-compose deployment. If the application you are containerising is not designed to recover automatically from the absence of certain services, or if it relies on other services being available to establish its start-up configuration, it is necessary to delay its start-up until those services become available.

A very common example of this use case is a web-application that needs a database. In general, the application expects the database to be available before starting, but when you are running both the DB and the application with compose, there is no guarantee that this will happen. Certain databases take several seconds before they are ready to accept connections, which can be longer than the time the web-app takes to start up and go looking for the DB.

In order to solve this issue, a common way to synchronise the start-up is to have the web-application container delay its start-up with a script that attempts to connect to the DB. Once the script manages to connect, the application actually starts.

This is a good solution if your application container already contains the necessary tools to check the connection, but this might not be the case.

For example, the standard MySQL container starts to accept connections before the DB is completely set up, and the only way to ensure the DB is initialised is to attempt a real connection. However, testing the connection requires the mysql client, and installing it would introduce dependencies in the web-app container that should not be there.

A solution to this problem is, once again, to use shared volumes and sidecar containers in docker-compose.

The high-level description of the technique is as follows.

  • Create a shared volume in docker-compose and mount the volume in the main application.
  • Create a sidecar container that mounts the shared volume and sets up the volume to be writeable by any process.
  • Create a sidecar container that tests the connection to the DB. Once the test is successful, create a file with a certain content in the shared volume.
  • In the main application container, delay the start-up by short-polling the file in the shared volume for the expected content.

Here’s an example of how this is achieved with a mysql and a tomcat container:
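A sketch of such a compose file is below (credentials and image tags are illustrative). The db_checker sidecar reuses the mysql image purely for its client, so the Tomcat container does not need it installed:

version: "3.8"

services:
  # Make the shared volume writeable by whatever UID the other containers use
  volume_configurer:
    image: busybox
    volumes:
      - sync:/shared/sync
    command: sh -c "chmod 777 /shared/sync"

  db:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: example
      MYSQL_DATABASE: webapp

  # Sidecar: attempt real connections until the DB answers, then drop a marker file
  db_checker:
    image: mysql:8
    stop_signal: SIGKILL
    depends_on:
      - volume_configurer
      - db
    volumes:
      - sync:/shared/sync
    command: >
      sh -c "until mysql -h db -uroot -pexample -e 'SELECT 1' > /dev/null 2>&1;
      do sleep 2; done;
      echo DATABASE READY > /shared/sync/db.ready"

  # Main application: short-poll the marker file before starting Catalina
  tomcat:
    image: tomcat:9
    depends_on:
      - db_checker
    volumes:
      - sync:/shared/sync:ro
    command: >
      sh -c "until grep -q 'DATABASE READY' /shared/sync/db.ready 2> /dev/null;
      do sleep 2; done;
      catalina.sh run"

volumes:
  sync: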

The start-up sequence of the containers in the compose file above is always guaranteed to be db and then tomcat because of the synchronisation caused by the sidecars. All sidecar containers will terminate with exit code 0 once their aim is achieved.

Please note that the technique described in this last snippet is extremely flexible and allows for more complex checks than the ones shown. For example, in the past I’ve used a shared volume to store the log of one container (by simply appending | tee /shared/sync/db.log to the command of the container) and, leveraging the fact that the container printed DATABASE READY! in its log, I could reliably determine when to connect to the database.

Having a volume_configurer sidecar is important because volumes are mounted in the container as owned by root, so, in the absence of a volume configurer, a container running as a non-root user could not write anywhere within the volume.


Make sure to follow us for more blogs about software development.

Want to find out more about us? Head on over to our website, or get in touch via our Twitter: @NCREDINBURGH.