What Is Docker?
Some of you reading this will already be familiar with what Docker is and can skip this short introduction. For everyone else, I will give a brief overview of the problem and what Docker is trying to solve.
Let’s first look at the definition given by Wikipedia:
Docker is a set of platform-as-a-service (PaaS) products that use OS-level virtualization to deliver software in packages called containers.
Containers are isolated from one another and bundle their own software, libraries, and configuration files; they can communicate with each other through well-defined channels.
All containers are run by a single operating-system kernel and are thus more lightweight than virtual machines. — Wikipedia
That sounds like a great sales pitch, but it uses a lot of expensive words. Basically, Docker is just a tool that makes creating, deploying, and running applications easier by using so-called containers.
These containers allow us developers to package an application with everything it needs (e.g. libraries and dependencies) and ship that package as a whole.
Now, what I will show you in this piece are all the key instructions that you can find inside a Dockerfile. This will allow you to understand and create basic Docker files, without all that much theory.
If you want to try out the examples, you'll have to install Docker first. Here is an overview of all the instructions I will discuss: FROM, RUN, CMD, COPY, ADD, WORKDIR, ENTRYPOINT, ENV, LABEL, HEALTHCHECK, STOPSIGNAL, VOLUME, and EXPOSE.
FROM
Almost every Dockerfile you'll come across starts with the FROM instruction in the following form:
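In its simplest form, the instruction takes an image name and an optional tag:

```dockerfile
FROM <image>[:<tag>]
```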
This instruction will set the base image for the Dockerfile, which means that every instruction that follows applies to this base image.
You can optionally specify a tag that Docker needs to use or pull during the execution of the docker build command; the default is the latest tag.
You can also specify multiple FROM instructions, which is called a multi-stage build, but I'd like to keep things simple here and refer those interested in diving deeper to the Docker documentation.
So, let’s start with a practical example! Create a folder for this tutorial somewhere in your operating system. Add an empty Dockerfile (without extension) and add the following content:
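As described in the next paragraph, the Dockerfile only needs a single line:

```dockerfile
# Pull the latest Ubuntu base image
FROM ubuntu:latest
```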
As said before, this will pull the latest base image of Ubuntu. Let's build our first Dockerfile! For this, we'll use the following docker build command:
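The image name docker_tutorial below is my assumption; any name works:

```shell
# Build the image from the Dockerfile in the current folder
docker build -t docker_tutorial .
```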
We have now created a Docker image. The -t command-line option is a shorthand for --tag and gives a name to your Docker image. We also specify ., which is just a shorthand for our current folder, the build context. Now that we have our Docker image, let's run it and see what happens:
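Assuming the image was tagged docker_tutorial, the run command could look like this:

```shell
# -it gives an interactive terminal, --rm removes the container on exit
docker run -it --rm docker_tutorial
```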
This command runs the Docker container and creates an interactive Bash shell in the container (-it). It also automatically removes the container when it exits (--rm).
To exit the Docker container, we can thus use the simple exit command. And that's how you build and run your first Docker container. Our build has just one stage for now. Not so complicated so far, right?
RUN
This instruction does what it says: it runs commands. Or more specifically, it executes a specific command. It is possible to perform multiple commands using multiple RUN lines, but I'd advise combining as many commands as possible into a single one.
There is an important reason for this: layer caching. If one layer is cached and the other is not, you might end up in a different state than you’d expect. This can result in a mess. Read more on Dockerfile best practices on Docker.com.
Now let's look at an example; put the following lines in your Dockerfile. The apt-get update command downloads the package lists from the repositories and updates them to get information on the newest versions of packages and their dependencies.
Afterward, we install the tree command, which is not installed in my current operating system. Even after building and running the Docker container, it still won't be: the installation only affects the Docker image.
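A minimal sketch of such a Dockerfile, combining both commands into a single RUN layer:

```dockerfile
FROM ubuntu:latest
# Combine update and install in one layer to avoid layer-caching surprises
RUN apt-get update && apt-get install -y tree
```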
Now let's run the docker build command as we did earlier:
Then, afterward, we run the Docker container as we did earlier as well:
We go to the home folder and make a directory structure with four nested folders; afterward, we also create a file in the directory d. If we now run the tree command, we get an overview of the directory structure.
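Inside the container, the session might look something like this (the exact folder names are my assumption):

```shell
cd ~                     # go to the home folder
mkdir -p a/b/c/d         # create a structure of four nested folders
touch a/b/c/d/file.txt   # create a file in directory d
tree a                   # print an overview of the directory structure
```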
Afterward, we exit the Docker container, and it's all gone again. From this, I conclude that the RUN instruction is pretty easy to work with.
CMD
The CMD instruction sets a default command, which will be executed when you run the Docker image without specifying a command. This allows you to do the following, for example:
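A minimal sketch (the exact command is not shown in the original, so the echo here is my assumption):

```dockerfile
FROM ubuntu:latest
# Default command, used when `docker run` is given no command
CMD echo "Hello world"
```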
Build and run the Docker image again. You might have noticed that it is possible to run several commands after each other as well. I'll show you.
If you would do the same by specifying every instruction on a different line, only the last instruction would be executed. You can try it out for yourself if you’d like to see that in action.
It's also possible to provide a different command to the Docker image, which will be executed instead of the one specified in the CMD instruction in the Dockerfile. I'll give two examples of this:
- The first command starts the Bash interpreter by specifying bash.
- The second command tells you “hi”.
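Assuming the image is tagged docker_tutorial, those two overrides could look like:

```shell
# Start an interactive Bash shell instead of the default CMD
docker run -it --rm docker_tutorial bash
# Print "hi" instead of the default CMD
docker run --rm docker_tutorial echo "hi"
```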
That covers the basics of the CMD instruction.
COPY
The COPY instruction copies a file or folder from your local machine to your Docker image. When the Docker image is up and running, you'll have the folder at your composal (get it?). Let's see how we can do this exactly with an example:
First we need to create a Dockerfile:
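Based on the description below, the Dockerfile could look like this:

```dockerfile
FROM ubuntu:latest
# Copy the local `awesome` folder (containing wow.html)
# into the awesome folder inside the container's root folder
COPY awesome /awesome
```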
I'll explain the COPY instruction: here we say that we copy the contents of the awesome folder (with a file wow.html inside) into the awesome folder inside the root folder of the Docker container.
This instruction also copies subfolders automatically if we specified some. To prevent certain folders or files from being copied, you can list them in a .dockerignore file. I'll create a docker_copy container to show you exactly what this looks like:
Keep in mind: the source path must be inside the context of the build. You cannot write COPY ../folder2, because in the first step of docker build, the context directory and its subdirectories are sent to the Docker daemon.
ADD
COPY and ADD serve similar purposes. They both let you copy files from a specific location into a Docker image. The ADD instruction lets you do the same as the COPY instruction, but on top of that, it supports two other sources:
- You can extract a TAR file from the source directly.
- You can use a URL instead of a local file directory.
Extracting a TAR file
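A sketch of what this could look like (the archive name is my assumption):

```dockerfile
FROM ubuntu:latest
# ADD automatically extracts a local tar archive into the destination
ADD files.tar /
```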
Let's run the build and run commands to inspect what happens:
The ADD instruction allows you to specify where you want the contents, so you can just say: copy it inside the root folder. It's slightly different from the regular COPY instruction, as you don't specify the resulting folder name.
Using a URL
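The URL below is a placeholder of my own; the pattern is what matters:

```dockerfile
FROM ubuntu:latest
# With a trailing slash, the filename is inferred from the URL
ADD https://example.com/wow.html /downloads/
```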
Now let’s build and run that again:
“Well that’s unexpected,” is what I thought the first time. But it does what it is supposed to do.
According to the docs, when <src> is a URL and <dest> ends with a trailing slash, then the filename is inferred from the URL and the file is downloaded to <dest>/<filename>.
Keep this in mind when you are using the ADD instruction. I'd recommend using curl to download files if you want to download more than just one file.
WORKDIR
The WORKDIR instruction is used to define the working directory of a Docker container at any given time. Any RUN, CMD, ENTRYPOINT, COPY, or ADD instruction that follows it will be executed in the specified working directory. I will first give you an example of what working without the WORKDIR instruction could look like:
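A sketch of such a repetitive Dockerfile (the concrete folder and file names are my assumption, based on the /root/a subfolder mentioned below):

```dockerfile
FROM ubuntu:latest
# Every instruction repeats the /root prefix explicitly
RUN mkdir /root/a
RUN touch /root/hello.txt
# The last two commands even need a subfolder inside the root folder
RUN mkdir /root/a/b
RUN touch /root/a/hi.txt
```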
The problem with this Dockerfile is the amount of repetition. In every instruction we explicitly specify the root folder. In the last two commands, we even need to specify a subfolder inside the root folder.
This is only a small example, but it would be nice to specify the current working folder just once. The WORKDIR instruction does just that for us.
The Docker build and run steps did exactly what we told them to do. However, we want to avoid repetition, so in the next Dockerfile we specify that our root folder is our current working directory.
Later on we even switch to another working directory by specifying the /root/a folder. The WORKDIR instruction improves the readability of the Dockerfile and avoids repetition.
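The improved Dockerfile could be sketched like this, using the same assumed folder names as above:

```dockerfile
FROM ubuntu:latest
# Specify the working directory once
WORKDIR /root
RUN mkdir a
RUN touch hello.txt
# Switch to a subfolder for the remaining commands
WORKDIR /root/a
RUN mkdir b
RUN touch hi.txt
```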
Without it, it would become quite hard to update every instruction whenever the location of the working directory changes during development.
The Dockerfile above will now result in nine steps during the build. The result is exactly the same as in the previous example, so I've left that out. But you can always try it for yourself!
The only difference we do have here is that when we run the Docker container, we'll start in the /root/a working directory.
That concludes the WORKDIR instruction. Most of the time you'll use it to specify the folder where build or test commands are executed.
ENTRYPOINT
The ENTRYPOINT instruction of a Dockerfile looks similar to the CMD instruction at first sight. It specifies the default command to execute at runtime, but you'll not be able to override it the way you can with CMD.
So, my advice would be: if you want to specify a command that will run once the container starts and you don't want the user to override it, use ENTRYPOINT.
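A sketch consistent with the behavior described below, where an ls defined as the entrypoint is not overridden:

```dockerfile
FROM ubuntu:latest
# The entrypoint runs at container start; arguments passed to
# `docker run` are appended to it instead of replacing it
ENTRYPOINT ["ls"]
```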
The Dockerfile above results in the following build and run. As you can see, the pwd command does not override the ls command we have in the Dockerfile. This is different from the CMD instruction.
ENV
The ENV instruction allows you to set environment variables in your Docker container.
An environment variable is a dynamic-named value that can affect the way running processes will behave on a computer. They are part of the environment in which a process runs. — Wikipedia
Most programmers are familiar with environment variables, so I'm not going into too much detail here. I'll just give an example Dockerfile and discuss what happens:

```dockerfile
FROM ubuntu:latest
# Set an environment variable that the running container can use
ENV workdir /root
CMD echo $workdir
```
So, what I do here is create an environment variable workdir using the ENV instruction. This allows me to use it at any later point in time, even while the container is running.
In this example, I output the environment variable using the dollar-sign prefix. When building and running the container, you see that the root dir /root is printed to the console. It's a very useful instruction.
LABEL
The LABEL instruction allows you to add metadata to an image. It's always a key-value pair, and it's recommended to put your label value between quotes.
I'll give a simple example of a Dockerfile using the LABEL instruction. The label is applied when you run the usual docker build command:
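A sketch matching the filter example at the end of this section:

```dockerfile
FROM ubuntu:latest
# Metadata as a key-value pair; the value is quoted
LABEL description="Baby don't hurt me."
```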
You can also specify a label when running a container using the command line. When there are a lot of running containers, labels allow you to search for specific keys.
It can also give you more information on the Docker image, the author, and many other things. I'd agree that this is not the most exciting feature. You could, for example, also filter your images on a label:

```shell
docker images --filter label=description="Baby don't hurt me."
```
HEALTHCHECK
The HEALTHCHECK instruction tells Docker how to test a container to check that it is still working. To explain the instruction, I'll first create a Dockerfile:
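A sketch matching the parameters described below (the exact check command is my assumption):

```dockerfile
FROM ubuntu:latest
# Fail the health check when the `nope` directory does not exist
HEALTHCHECK --interval=5s --timeout=3s --retries=2 \
  CMD ls /nope || exit 1
```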
Let me explain the different parameters:
- interval (default is 30 seconds): The health check will first run five seconds after the container is started, and then again five seconds after each previous check completes.
- timeout (default is 30 seconds): If a single run of the check takes longer than three seconds, then the check is considered to have failed.
- retries (default is three times): It takes two consecutive failures of the health check for the container to be considered unhealthy.
- The command: Check if the nope directory exists; otherwise exit with an error.
Let's build and start our Docker container:
When we open a different terminal, we can inspect the running Docker containers. In the first ten seconds, it's impossible for the container to know whether it's healthy or unhealthy.
After two retries that becomes possible, and after eleven seconds we see that the HEALTHCHECK concludes the container is unhealthy. This is indeed correct, as the folder nope never exists in the base image of Ubuntu.
```
Status: Up 5 seconds (health: starting)
Status: Up 9 seconds (health: starting)
Status: Up 11 seconds (unhealthy)
```
When I run mkdir nope inside my Docker container, the status becomes healthy. I now run the docker ps command again after waiting a moment until the first health check succeeds.
You’ll notice that this command is extremely useful when you have dependencies between Docker containers (e.g. waiting until the database is up, before executing database queries).
```
Status: Up About a minute (healthy)
```
STOPSIGNAL
If you've ever tried to kill a Docker container with the docker stop command, you might have noticed that Docker will first ask nicely for the process to stop, and if it doesn't comply within 10 seconds, it will forcibly kill it. This is because the default STOPSIGNAL of a Docker container is SIGTERM.
You can change this by setting the STOPSIGNAL instruction to SIGKILL. When you run docker stop on your container, the Docker container will now shut down immediately: the process is terminated right away. To check this out yourself, use the following Dockerfile.
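A minimal sketch (the long-running command is my assumption; any foreground process works):

```dockerfile
FROM ubuntu:latest
# SIGKILL cannot be caught, so the process dies immediately on `docker stop`
STOPSIGNAL SIGKILL
CMD sleep infinity
```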
Let’s build and start the container:
Now open a new terminal and execute the following steps:
There is one running container, with identifier 50b3f43b5588 in my case. Let's stop it:

```shell
docker stop 50b3f43b5588
```
Now there are no containers running anymore, as the target Docker container exits immediately.
VOLUME
Let's create a Dockerfile with the root folder specified as a volume. In our initial image, there will be a hello.txt file inside the root folder.
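A sketch of such a Dockerfile:

```dockerfile
FROM ubuntu:latest
# The initial image ships a hello.txt in the root folder
RUN touch /root/hello.txt
# Declare the root folder as a volume
VOLUME /root
```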
Now let's spice things up by running a container from that image and adding a new file to the volume. The --volumes-from parameter allows us to use the volume of a different container, therefore allowing us to persist data between containers.
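Hypothetical commands illustrating the idea (the container and image names are my assumptions):

```shell
# Start a first container and add a file to the volume
docker run -it --name volume_test docker_volume bash
# inside the container: touch /root/new_file.txt, then exit

# Start a second container that mounts the first container's volume
docker run -it --rm --volumes-from volume_test ubuntu bash
# /root in this container still contains hello.txt and new_file.txt
```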
EXPOSE
The EXPOSE instruction informs Docker that the container listens on a specific network port at runtime. You can also specify whether the port listens on TCP or UDP (the default is TCP).
Exposing ports is a way of documenting which ports are used, but does not actually map or open any ports. Exposing ports is optional. By default, when you create a container, it does not publish any of its ports to the outside world.
To make a port available to services outside of Docker, or to Docker containers that are not connected to the container's network, use the -p flag. This maps a container port to a port on the Docker host.
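A sketch of how the two work together (the port numbers are my assumption):

```dockerfile
FROM ubuntu:latest
# Documents that the container listens on port 80 (TCP by default)
EXPOSE 80
```

Publishing the port then happens at run time, for example with docker run -p 8080:80 <image>, which maps port 80 in the container to port 8080 on the host.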
So that’s it. I’ve covered the most important Docker instructions.
If you missed something, please have a look at the Docker documentation. I'm certain you'll find whatever you're missing there. You should know enough to get started. I'd say: “Go and create!”