Docker: All you need to know — Containers Part 2

Maria Valcam
Beamery Hacking Talent
11 min read · Jul 11, 2019

This is the second part of a blog post series on containers. This part focuses on Docker and its many features.

Check out the first part, Why You Need Containers, if you have not read it yet.

Introduction to Docker

Docker is the de facto tool for creating containers. You can use Docker to create a package for your application (code, tools, settings, and dependencies) and then run it in a container.

What is a Container? — docker.com

Docker was created in 2013 by Docker Inc. Docker open sourced its code and partnered with a community of contributors for its development.

Docker follows the standardized way of creating containers defined by the Open Container Initiative (OCI). This means that the same images can be used with Docker, containerd, and rkt.

I will explain Docker in six sections:

  • Docker Architecture
  • Docker Images
  • Image Registries
  • Containers
  • Networking
  • Volumes

To understand the basics of Docker, you just need to read Docker Architecture, Docker Images, and Containers. Read the whole article for a deeper understanding.

Docker Architecture

The Docker Engine (container runtime) is the infrastructure software that runs and orchestrates containers. It has a modular design with many swappable components (based on OCI standards when possible). Major components are:

  • Docker daemon: It exposes an HTTP REST API and communicates with containerd over gRPC. We use Docker’s CLI tool to send requests to the Docker daemon.
  • containerd: Its purpose is to manage container lifecycle operations (start, stop, pause, rm,…). It converts the requested Docker image into an OCI bundle and tells runc to use it to create a new container.
  • runc: Its only purpose is to create containers. The container process is started as a child process of runc, and as soon as the container starts, runc exits. The associated containerd-shim process then becomes the parent of the container. containerd-shim reports container status back to containerd and keeps the STDIN and STDOUT streams open.
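You can observe this parent-child relationship yourself on a Linux host. A minimal sketch (the name demo and the alpine image are just illustrative):

# start a long-running container
docker container run -d --name demo alpine sleep 1000
# on the host, the sleep process appears as a child of a containerd-shim process
ps -ef --forest | grep -B 2 sleep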

Docker images

A Docker image is a package that includes everything needed to run an application: code, runtime, system tools, system libraries, and settings.

An image is like a stopped container. In fact, you can stop a container and create a new image from it.
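A minimal sketch of this (the names mycontainer and myimage are just illustrative):

# run an interactive container, make some changes, then exit (which stops it)
docker container run -it --name mycontainer ubuntu:latest /bin/bash
# create a new image from the stopped container
docker container commit mycontainer myimage:v1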

Images are made up of multiple layers that are stacked on top of each other and represented as a single object. The data lives in these layers. Each layer is independent and has no concept of being part of a collective image.

If we run docker pull ubuntu && docker history ubuntu we can see all the layers that compose that image.

Ubuntu Image layers

Each layer has two identifiers:

  • Content hash. Identifies each layer.
  • Distribution hash. Identifies a compressed layer. Layers are compressed to save bandwidth.

The image itself is really just a configuration object that lists the layers and some metadata. Important concepts for images:

  • Digest. Identifies an image.
  • Manifest. Information about an image, such as layers, size, and digest.
  • Manifest lists. Allow images to support multiple architectures (Windows, Linux, ARM,…). A manifest list describes the list of architectures supported by a particular image tag, each with its own manifest detailing the layers (see the example after this list).
  • Dangling images. When a new image is tagged with the exact same name as an existing one, the old one becomes a dangling image. It appears as <none>:<none>.
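You can view the manifest list of a multi-architecture image with the docker manifest subcommand (a sketch; at the time of writing this subcommand may require experimental CLI features to be enabled):

docker manifest inspect ubuntu:latest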

Cheatsheet for image commands

# build an image
docker image build [--squash] -t username/app:tag .
# show images
docker image ls [--digest] [--quiet] [--filter dangling=true] [--filter=reference="*:latest"] [--format "{{.Repository}}: {{.Tag}}: {{.Size}}"]
# delete dangling images
docker image prune [-a]
# delete an image
docker image rm mariavalcam/cryptogo:v1
# deletes all images
docker image rm $(docker image ls --quiet) -f
# pull all tags from a repository
docker image pull -a mariavalcam/cryptogo
# See layers of an image
docker image inspect mariavalcam/cryptogo:v1
# See build history of an image
docker history ubuntu

How to create a Docker image

We define how to build an image in the Dockerfile. The directory containing the application is referred to as the build context. It’s a common practice to keep your Dockerfile in the root directory of your build context.

Note: When creating an image, we want it to be small and reduce the number of dependencies it has. This improves security and saves bandwidth.

I want to start with an example of a Dockerfile for a simple Javascript application:

FROM node:6-alpine      # use the official node 6 image
WORKDIR /src            # set /src as working directory
COPY . /src             # copy all the files to /src
RUN npm install         # install dependencies
RUN npm run build       # build the application
EXPOSE 3000             # expose port 3000
CMD ["npm", "start"]    # default command: start the server

Build the application image from this Dockerfile by running the command docker build -t myapp:v1 .

Check the image history to see how big each layer is:

docker history myapp:v1

Note that CMD and EXPOSE layers have a size of 0B as they are just adding metadata. Also, notice that the layers with <missing> id are from the node:6-alpine image.

Cheatsheet for Dockerfile instructions:

  • FROM: the base layer of the image.
  • LABEL: (metadata) simple key-value pairs.
  • RUN: executes an instruction in a container using an image built from the previous layers. It creates a snapshot of the container when the instruction finishes, adding a new layer.
  • COPY: copies files from the build context into the image.
  • WORKDIR: (metadata) sets the working directory for the rest of the instructions.
  • EXPOSE: (metadata) specifies the port the application listens on.
  • ENTRYPOINT & CMD: (metadata) set the default command that the container should run.

Features for creating an image:

  • Multi-stage builds. A multi-stage Dockerfile defines several FROM instructions, each starting a new build stage. You can give a stage a name, like FROM node:latest AS compilestage, so a compile stage can build your app. The final stage can then copy just the binary files with COPY --from=compilestage /app/bin/myapp /usr/bin/myapp, resulting in a much smaller image (see the sketch after this list).
  • Squash images. You can squash an image to produce a single layer. This is a good practice for images that will be used in FROM.
  • Build cache. When building a new image, Docker checks its build cache for a layer that was built from the same image using the same instruction. If it finds one, it reuses that layer; if it doesn’t, it is a cache miss and Docker builds a new layer. You can force the build process to ignore the cache using --no-cache=true. Note: COPY and ADD instructions include steps to ensure that the content being copied into the image has not changed since the last build (a checksum is performed against each file being copied).
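A minimal multi-stage Dockerfile sketch for a Go application (the stage name compilestage and the myapp paths are just illustrative):

# build stage: compile the app using the full Go toolchain
FROM golang:1.12 AS compilestage
WORKDIR /app
COPY . .
# CGO_ENABLED=0 produces a static binary that runs on alpine
RUN CGO_ENABLED=0 go build -o /app/bin/myapp .

# final stage: copy only the compiled binary into a small base image
FROM alpine:latest
COPY --from=compilestage /app/bin/myapp /usr/bin/myapp
CMD ["myapp"]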


Image Registries

The default Docker registry is Docker Hub. There you can find official repositories (approved by Docker) and unofficial repositories. Docker Hub supports manifests and manifest lists.

Image names follow the format [<registry>/]<repository>:<tag>. Note: as Docker Hub is the default Docker registry, if you do not specify the registry in your image name, Docker will try to pull the image from Docker Hub.
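For example, to push a local image to Docker Hub (replace mariavalcam with your own username; the names are just illustrative):

docker login
docker image tag myapp:v1 mariavalcam/myapp:v1
docker image push mariavalcam/myapp:v1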

Cheatsheet for image registry command

# look for a repository
docker search image-name [--filter is-official=true] [--limit 100]
# login to a registry
docker login [--password] [--username] [server-name]

Containers

A container is the runtime instance of an image. It runs until the process it is executing exits.

Note: Once you have started a container from an image, you cannot delete the image that it uses until that container has been stopped and destroyed.

Features:

  • Persist data. Files that you create in a container will still be there if you restart the same container. However, the preferred way to store persistent data is volumes.
  • Kill a container. docker container stop NAME sends a SIGTERM signal to the PID 1 process inside the container. If it does not exit within 10 seconds, it receives a SIGKILL.
  • Restart policies. There are three restart policies: always, unless-stopped, and on-failure (see the sketch after this list).
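A quick sketch of the stop behaviour and restart policies (the name web and the nginx image are just illustrative):

# restart the container automatically unless it is explicitly stopped
docker container run -d --restart unless-stopped --name web nginx
# stop sends SIGTERM to PID 1; -t changes the 10-second SIGKILL timeout
docker container stop -t 5 web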

Cheatsheet for container commands

# show containers
docker container ls [--all]
# run a container
# Ctrl-P Ctrl-Q to exit a container without terminating it
# -it flag to make container interactive and attach to it
docker container run [--name ubuntutest] [-it] [-d] [--restart always] [--network my-network] [-p 80:80] [-v /home/maria/myrepo:/app] [--mount source=myvolume,target=/vol] ubuntu:latest [/bin/bash]
# stop a container (use -t to change the 10-second SIGKILL timeout)
docker container stop ubuntutest [-t 5]
# start a stopped container
docker container start ubuntutest
# remove a stopped container
docker container rm ubuntutest
# delete all containers
docker container rm $(docker container ls --all --quiet) -f
# see detailed information about a container
docker container inspect ubuntutest
# run a command in a running container
docker container exec -it CONTAINER_NAME bash
# check container logs (some logging drivers don't work with this)
docker container logs mycontainer

Networking

Docker Networking uses libnetwork, which is based on an open-source pluggable architecture called the Container Network Model (CNM).

The CNM specification defines three building blocks for container networking:

  1. Sandbox. Contains the configuration of a container’s network stack (network interfaces, routing table, and DNS). Note that sandboxes are placed inside containers.
  2. Endpoint. A virtual network interface (veth). It joins the sandbox to a network.
  3. Network. Works like a switch: it lets endpoints communicate directly.

Libnetwork provides the network control and management plane (native service discovery and load balancing). It accepts different drivers to provide the data plane (connectivity and isolation).

Some of the network drivers that we can choose are:

  • bridge: creates single-host bridge networks. Containers connect to these bridges. For outbound traffic from a container, the kernel’s iptables performs NAT. For inbound traffic, we need to map a host port to a container port.

Note: Every Docker host has a default bridge network (docker0). All new containers attach to it unless you override this with the --network flag.

  • MACVLAN: multi-host networking. Each container gets its own MAC and IP address on the existing physical network (or VLAN). Good side: it is easy and does not use port mapping. Bad side: the host NIC has to be in promiscuous mode (most cloud providers do not allow this).
  • Overlay: allows containers on different hosts to communicate using encapsulation. It lets you create a flat, secure, layer-2 network spanning multiple hosts.

Note: Docker creates an embedded DNS server in user-defined networks. All new containers are registered with the embedded Docker DNS resolver, so they can resolve the names of all other containers on the same network.
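A minimal sketch of this name resolution (the names mynetwork and web are just illustrative):

docker network create -d bridge mynetwork
docker container run -d --name web --network mynetwork nginx
# this container resolves the name web through the embedded DNS server
docker container run --rm --network mynetwork alpine ping -c 1 web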

Cheatsheet for network commands

# list networks
docker network ls
# create a network
docker network create -d bridge mynetwork
# shows low-level details
docker network inspect mynetwork
# Inspect the underlying Linux bridge in the kernel
ip link show docker0
# create a macvlan network
docker network create -d macvlan --subnet=10.0.0.0/24 --ip-range=10.0.0.0/25 --gateway=10.0.0.1 -o parent=eth0.100 mymacvlan
# see all bridges on the system
brctl show
# view port mappings
docker port my-container-name
# delete all unused networks
docker network prune
# Delete a specific network
docker network rm mynetwork

Volumes

Every container gets its own non-persistent storage. It’s automatically created, alongside the container, and it is tied to the lifecycle of the container. To have persistent data, you need to use a volume.

A volume’s lifecycle is not tied to the container’s lifecycle.

The workflow is: you create a volume, then create a container, and finally mount the volume into the container.

Note: you can share the same volume between two containers, but this can lead to data corruption.
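A minimal sketch of this workflow (the name myvolume and the alpine image are just illustrative):

# create a volume
docker volume create myvolume
# mount it into a container; data written to /vol outlives the container
docker container run -it --mount source=myvolume,target=/vol alpine sh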

Docker supports the following storage drivers (configured in /etc/docker/daemon.json; see the example after this list):

  • overlay2. It is the preferred storage driver for all currently supported Linux distributions and requires no extra configuration.
  • aufs. It was the preferred storage driver for Docker 18.06 and older, on kernels with no support for overlay2.
  • devicemapper. It is supported, but requires direct-lvm for production environments, because loopback-lvm, while zero-configuration, has very poor performance. devicemapper was the recommended storage driver for CentOS and RHEL, as their kernel version did not support overlay2. However, current versions of CentOS and RHEL now have support for overlay2, which is now the recommended driver.
  • btrfs and zfs. They are used if they are the backing filesystem (the filesystem of the host on which Docker is installed). These filesystems allow for advanced options, such as creating “snapshots”, but require more maintenance and setup. Each of these relies on the backing filesystem being configured correctly.
  • vfs. It is intended for testing purposes, and for situations where no copy-on-write filesystem can be used. Performance of this storage driver is poor and is not generally recommended for production use.
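For example, to set the storage driver explicitly, /etc/docker/daemon.json could contain the following (a sketch; restart the Docker daemon for the change to take effect):

{
  "storage-driver": "overlay2"
}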

Each storage driver has its own subdirectory on the host (usually under /var/lib/docker/STORAGE_DRIVER/). So if you change the storage driver, existing images and containers will not be available.

Each storage driver uses a type of storage. Overlay drivers (overlay2 and aufs) can use:

  • File System Storage. Stores data as files. Each file is referenced by a filename, and typically has attributes associated with it. Some of the more commonly used file systems include NFS and NTFS.

Snapshotting filesystems (devicemapper, btrfs, and zfs) can use:

  • Block storage. Stores chunks of data in blocks. A block is identified only by its address. Block storage is commonly used for database applications because of its performance.

Cheatsheet for volume commands

# create a persistent volume (the default volume driver is local)
docker volume create myvolume [-d local]
# list volumes
docker volume ls
# inspect volume
docker volume inspect myvolume
# delete all unused volumes
docker volume prune
# Delete a specific volume
docker volume rm myvolume

Other commands

# to show your current storage driver
docker system info
# see the Docker version
docker version
# add a user to the docker unix group
usermod -aG docker USERNAME
# troubleshooting if using systemd
journalctl -u docker.service
# if not using systemd
tail -f /var/log/upstart/docker.log
tail -f /var/log/daemon.log

Thanks for reading!

Hope you liked this blog post. I got most of the information I know about Docker from the book Docker Deep Dive by Nigel Poulton. It is a really good book and I recommend it for those that want to learn more about Docker.

Please leave a comment or send me a message on Twitter to @Marvalcam1.
