Applications, contain yourselves: getting started with Docker
This is the first part of “Applications, contain yourselves”, a series of articles describing how to use containers in day-to-day dev work and how to run your own apps in containers.
Part 1: Getting started with Docker
Part 2: Running existing Docker images as single-use containers
What are these “containers”, anyway?
Containers are a way to fully isolate a process (a running program) from other processes, restricting its CPU, memory, network and disk access, while still running on the same operating system kernel.
Imagine ten people sitting in a room. Each of them wears a blindfold and mufflers over their ears; they don’t speak, and their movement is restricted by rope. They can’t see, hear, talk, move much or interfere with each other in any way: they’re isolated.
But they’re still in the room, using the chairs they’re sitting on, enjoying the climate-controlled air, staying dry when it rains; they might even be able to get some work done with the resources available, as long as they don’t need to talk to anybody.
Containers are basically workers in open office spaces, but instead of people, they’re computer programs.
This approach follows the tried-and-tested Principle of least privilege, which says:
(…) the principle of least privilege (…) requires that (…) every module (such as a process, a user, or a program, depending on the subject) must be able to access only the information and resources that are necessary for its legitimate purpose.
The major requirements of container technology are:
- Isolation (file-system, network, privileges): everything should be private to the container unless configured otherwise
- Limits (CPU, disk, memory, network): resource usage should be restrictable (a sketch of both follows this list)
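To make both requirements concrete, here is a minimal sketch of how Docker exposes them as flags on `docker run`; the `alpine` image and the exact limit values are illustrative assumptions, not recommendations:

```sh
# Run a throwaway container with no network access (isolation)
# and capped resource usage (limits); values are illustrative.
docker run --rm --network none --memory 256m --cpus 0.5 \
  alpine echo "isolated and limited"
```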
Brief history of process isolation
The idea of isolating a process is nothing new (as detailed in the excellent talk in Figure 1), going back several decades. It goes by many names (depending on the level or features of the isolation) such as “jail”, “sandbox”, “security context”, “container”, etc.
One of the initial solutions was `chroot`, an operating-system operation which places a process in a “jail”, restricting its access to the majority of the file-system, except for what was put in the jail. The operation can still be performed on any modern Linux system, see Figure 2 (and the sketch below).
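If you’d like to try it yourself, here is a minimal sketch of a `chroot` jail; it assumes a statically linked `busybox` binary at `/bin/busybox` (adjust the path for your system) and root privileges:

```sh
# Build a tiny "jail" directory containing only a static busybox.
mkdir -p /tmp/jail/bin
cp /bin/busybox /tmp/jail/bin/

# Start a shell whose root ("/") is /tmp/jail; it cannot see
# anything on the file-system outside the jail.
sudo chroot /tmp/jail /bin/busybox sh
```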
As the complexity of the problems grew, so did the number of solutions, under the umbrella term Operating-system-level virtualization, as opposed to Full virtualization, which relies on creating virtual machines.
Brief history of Docker
Docker started its life as an unnamed byproduct of a startup called dotCloud.
Having realized they had an unexpected winning product on their hands, the company pivoted and rebranded to Docker Inc., open-sourcing the product. Adoption was very quick, and Docker has since become important enough to be written about by Medium authors.
Since Docker was built on pre-existing technology, why did it become synonymous with containers? Mostly because it made creating and distributing images much easier and, in doing so, brought containers to the masses.
See Figure 3 for a detailed overview of this process as it was happening (in 2013), by the (then) CEO of the company.
Getting started with Docker
From now on, we’re assuming you’ll be using Docker on your workstation.
To get started, you should install Docker, either for Mac, for Windows, or for a Linux distribution such as Fedora or Ubuntu.
After that’s done, you should have terminal (or PowerShell) access to the `docker` command-line utility, which we’ll be using for the rest of the examples.
Verifying it works
If you haven’t noticed (in which case, welcome to IT), things related to computers don’t usually work. Sometimes they do, but it’s an anomaly which quickly corrects itself. Let us try to prolong that anomaly a bit for Docker.
Docker has a client/server architecture; we’ll go into more detail in later articles. For now, suffice it to say that the `docker` command-line utility is just one of the clients accessing the Docker server (more usually called the “Docker daemon”, `dockerd`).
The simplest way to confirm they both work is to run `docker version`:
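On a working installation, the output has both a Client and a Server section, roughly along these lines (the exact versions and fields will differ on your machine):

```sh
$ docker version
Client:
 Version:           24.0.7
 API version:       1.43
 ...
Server: Docker Engine - Community
 Engine:
  Version:          24.0.7
  ...
```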
If the daemon is not running correctly (or is not available where expected), this check will throw an error:
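On Linux, the failure typically looks something like this (the exact wording varies by platform and Docker version):

```sh
$ docker version
Client:
 Version:           24.0.7
 ...
Cannot connect to the Docker daemon at unix:///var/run/docker.sock.
Is the docker daemon running?
```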
Note for Linux users: you should not have to use sudo for any of this (it will complicate other things down the line); a better way is to add your user to the `docker` user group, then verify you are in it:
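A sketch of the usual commands (the group is conventionally named `docker`, but check your distribution’s documentation):

```sh
# Check whether you are already in the docker group
id -nG | grep -w docker

# If not, add your user to it, then log out and back in
# (or start a new login session) for the change to take effect
sudo usermod -aG docker "$USER"
```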
Hello, world!
Once you’ve introduced the client to the server, we can run a sample container using `docker run hello-world`:
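Abbreviated, a first run looks roughly like this (the image is pulled, then its greeting is printed):

```sh
$ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
...
Hello from Docker!
This message shows that your installation appears to be working correctly.
...
```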
Let’s break down what happened here:
- We asked the `docker` command-line client to run the `hello-world` image (don’t worry, we’ll go into images in-depth in later articles); it relayed our wishes to the server
- The image name follows the pattern `<registry>/<image>:<tag>`, where only the `image` part is mandatory; the other parts have defaults (`registry`: `docker.io`, `tag`: `latest`)
- Since we haven’t provided the optional parts, Docker prefills them with the defaults, making the final full image name `docker.io/hello-world:latest`
- Docker checked whether that image was available in the local image storage (the local image cache); it wasn’t, so Docker pulled it from the remote registry, in this case Docker Hub, a publicly available repository of ready-made Docker images
- Once pulled, the image is kept in the local cache and reused until explicitly deleted or updated, so if we re-run `hello-world`, it will not re-pull the image (this is important, more on that in the next article; see the examples after this list)
- The image was run (specifics later), at which point a container was created using the image as a template; it did what this specific image does (prints some text) and then the container terminated
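As mentioned in the list above, a couple of quick experiments show the naming defaults and the caching in action; `docker pull` and `docker image ls` are standard subcommands, output omitted here:

```sh
# These two commands refer to the same image, with defaults filled in
docker pull hello-world
docker pull docker.io/hello-world:latest

# A second run uses the local copy: no "Pulling from ..." lines appear
docker run hello-world

# List what is stored locally; hello-world should be in there
docker image ls hello-world
```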
We did it, we made a container do something! Let’s end on a high note here.
In the next article, we’ll look into running ready-made images in much more detail.
