Getting Started with Docker

What is Docker?

So what is Docker? Docker is an open-source, container-based platform that packages your application and all its dependencies together in the form of containers, so that your application works seamlessly in any environment (development, test, or production).

It is a platform used for building, shipping and running our applications.

Why do we use Docker?

So we have discussed what Docker is. But why do we need it? Well, Docker containers are lightweight, and they are super easy to create and deploy.

Docker provides us with containers, and a container bundles an entire runtime environment into one package: an application together with all the dependencies, libraries, binaries, and configuration files needed to run it. Each application runs separately from the others. Docker solves the dependency problem by keeping dependencies contained inside containers, so every developer on a project works against the same set of dependencies.

Benefits of using Containers over Virtual Machines

Now let’s discuss the benefits of Docker over VMs.

  • Unlike VMs (virtual machines), which run a guest OS on top of a hypervisor, Docker containers run directly on the host's kernel (on Linux) via the Docker Engine, which makes them faster and more lightweight.
  • Docker containers are easier to integrate into build and deployment workflows than VMs.
  • A fully virtualized system gives you more isolation, but it requires more resources. Docker gives you less isolation, but because containers require far fewer resources, you can run thousands of containers on a single host.
  • A VM can take a minute or more to start, while a Docker container usually starts in a fraction of a second.
  • The trade-off is security: it is easier to break out of a container than out of a virtual machine.
  • Unlike VMs, there is no need to preallocate RAM; a container uses only the RAM it actually needs, so containers typically consume less memory than VMs.

How does Docker work?

Now that we understand the benefits of using Docker, let’s talk about how it works. At the heart of the Docker system is the Docker Engine, a client-server application with three main components:

  • A server, which is a long-running daemon process (the Docker daemon),
  • A client, which is the Docker CLI (command-line interface), and
  • A REST API, which the client (Docker CLI) uses to communicate with the server (the Docker daemon).

The Docker daemon receives the command from the client and manages Docker objects, such as images, containers, networks, and volumes. The Docker client and daemon can either run on the same system, or you can connect a Docker client to a remote Docker daemon. They can communicate using a REST API, over UNIX sockets or a network interface.
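As a sketch of this client/daemon split, you can bypass the CLI and hit the daemon's REST API directly over the default UNIX socket (this assumes a local daemon is running and you have permission to read /var/run/docker.sock):

```shell
# Query the daemon's version endpoint over the UNIX socket.
curl --unix-socket /var/run/docker.sock http://localhost/version

# The CLI is a friendly front-end for the same API; this issues
# the equivalent request and formats the response:
docker version
```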

On Linux, the Docker host runs the Docker daemon, and the Docker client is accessed from the terminal.

On Windows and OS X, there is an additional tool called Docker Toolbox, which installs the Docker environment on those systems. The toolbox installs the following: Docker Client, Compose, Kitematic, Machine, and VirtualBox. (Docker Toolbox has since been superseded by Docker Desktop.)

Technology Used in Docker

Docker is written in the Go programming language. It takes advantage of several features of the Linux kernel, such as namespaces and cgroups.

namespaces: Docker uses namespaces to provide the isolated workspaces called containers. When a container runs, Docker creates a set of namespaces for it, providing a layer of isolation. Each aspect of a container runs in a separate namespace, and its access is limited to that namespace.

cgroups (control groups): cgroups are used to limit and isolate the resource usage (CPU, memory, disk I/O, network, etc.) of a collection of processes. cgroups allow the Docker Engine to share the available hardware resources among containers and, optionally, to enforce limits and constraints.

UnionFS (union file systems): file systems that operate by creating layers, which makes them very lightweight and fast. The Docker Engine uses union file systems to provide the building blocks for containers.

Docker Engine combines the namespaces, cgroups, and UnionFS into a wrapper called a container format. The default container format is libcontainer.
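These kernel features can be inspected without Docker. A quick sketch on a Linux host (the paths are standard procfs locations; the ‘unshare’ line needs root, so it is shown commented out):

```shell
# Every process already belongs to a set of namespaces; list this shell's:
ls /proc/self/ns

# ...and the cgroups this shell has been placed in:
cat /proc/self/cgroup

# Creating new PID and mount namespaces is roughly (part of) what Docker
# does when it starts a container:
# sudo unshare --pid --fork --mount-proc sh -c 'ps aux'
```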

Installing Docker

I have written this blog to give you a basic understanding of Docker, so I won't cover installation in detail.
You can find instructions for installing Docker at the link below:

https://docs.docker.com/install/linux/docker-ce/ubuntu/#install-docker-ce

Docker Architecture

Docker has the following main components:

  • Docker Daemon
  • Docker Client
  • Docker Image
  • Docker Container
  • Docker Registry
  • Dockerfile
  • Docker Compose

Now let us understand what they are and what each of them does.

Docker Daemon

The Docker daemon is a background process, or service, that runs on the host machine. The daemon manages building, running, and distributing Docker containers.

The user does not interact with the Docker daemon directly. Instead, we run commands through the client CLI, which instructs the daemon to create images or containers.

Docker Client

The Docker client is the CLI (command-line interface), the command-line tool used to interact with the Docker daemon. It accepts commands from the user and communicates back and forth with the daemon.

Docker Image

Docker historically used AuFS (Advanced multi-layered Unification FileSystem) by default, a layered file system that merges a read-only part and a writable part (newer versions default to the overlay2 storage driver). A Docker image is the file system and configuration of our application, and it is used to create containers. Images are created with the build command, and ready-made images are available on Docker Hub (hub.docker.com). You can create an image using docker commit container-id, or docker build -f Dockerfile .

An image is the “union view” of a stack of read-only layers. A single image comprises multiple layers, and each layer is an image in its own right. Images have a read-only base layer, and any changes made to the image are saved as new layers on top of the base. Containers are generated by running the image layers stacked one above the other.

Commands for Images

$ docker run image-name:tag // runs an image pulled from docker hub with that image tag 
$ docker run image-id // runs an image pulled from docker hub with that image id

You start with a base image, make your changes, and commit those changes with docker commit, which creates a new image. The newly created image contains only the differences from the base. When you want to run your image, you also need the base: Docker layers your image on top of the base using the layered file system (AuFS), which merges the different layers together. All you have to do is run it. You can keep adding more and more images (layers), and Docker will continue to save only the differences.
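You can watch this layering with docker history, which walks an image's stack of read-only layers (this assumes a local daemon; ‘ubuntu’ is just an example image):

```shell
# Pull a base image and list the layers it is built from.
docker pull ubuntu:latest
docker history ubuntu:latest

# Commit-based layering: run a container, change its filesystem,
# and commit the difference as a new layer on top of the base.
id=$(docker run -d ubuntu:latest touch /hello)
docker commit "$id" ubuntu-with-hello
docker history ubuntu-with-hello   # shows one extra layer over the base
```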

Docker Container

A container is a “union view” of a stack of layers, the top of which is a read-write layer: a read-write layer sitting on top of an image (itself a stack of read-only layers). A container does not have to be running. It is created from a Docker image and includes an application and all of its dependencies, and it runs as an isolated process in user space on the host OS. A container runs an instance of an image: the moment you run an image, you create a container, so you can have multiple containers of the same image. You can start, stop, move, and delete containers.

$ docker container run image-name // pulls the image from Docker Hub if it is not available locally, creates a container from it, and runs it; the container exits when its main process finishes
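A minimal lifecycle sketch (this assumes a local daemon; ‘alpine’ is just a small example image):

```shell
# Run a container that prints a message and exits.
docker run --name hello-demo alpine echo "hello from a container"

# Its process has exited, but the container (its read-write layer) still exists:
docker ps -a --filter name=hello-demo

# Start the same container again (attached to its output), then remove it.
docker start -a hello-demo
docker rm hello-demo
```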

Docker Registry

A Docker registry is a repository to store Docker images. You can create public or private repositories on Docker Hub to store your images; Docker Hub is the default public registry of Docker images.

Docker Hub is a service provided by Docker for finding and sharing container images with your team.

Dockerfile

A Dockerfile contains the instructions to build a Docker image.

It automates image construction: we write all the commands or instructions needed to assemble the image in this file.
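A minimal example Dockerfile (the Python base image and file names here are illustrative, not from the original post):

```dockerfile
# Start from an official base image (the read-only base layer).
FROM python:3.9-slim

# Each instruction below adds a new read-only layer on top.
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .

# The command the container runs when it starts.
CMD ["python", "app.py"]
```

Running docker build -t my-app . in the directory containing this file produces an image you can then start with docker run my-app.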

Docker Compose

We can divide our application into multiple containers. Docker Compose simplifies linking those containers together: it is used to define applications built from multiple Docker containers, declaring the information for all of them in a single file called ‘docker-compose.yml’.
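A small illustrative ‘docker-compose.yml’ (the service names and images are examples, not from the original post):

```yaml
version: "3"
services:
  web:
    build: .            # build the web image from a Dockerfile in this directory
    ports:
      - "8000:8000"     # map host port 8000 to container port 8000
    depends_on:
      - db              # start the db container before the web container
  db:
    image: postgres:13  # pull a ready-made image from Docker Hub
```

docker-compose up then builds (if needed) and starts both containers together, and docker-compose down stops and removes them.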

Now let’s discuss what happens when we create an image or pull it from a registry.

Creating a new image:

When we run the build command from the client CLI, the Docker daemon creates a new image and stores it in the local image cache; from there we can push it to a registry (either public, such as Docker Hub, or private, such as your own repo).

Pulling a new image from the registry

When we run the pull command from the client CLI, the Docker daemon pulls the image from the registry down to the local machine.

Create Containers

So how do we create containers?
When we run the ‘docker run’ command from the client CLI, the Docker daemon pulls the image if it does not already exist in your local repo, creates a container from it, and runs it. The ‘docker run’ command is equivalent to the docker create + docker start commands.
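That equivalence can be sketched as follows (this assumes a local daemon; ‘alpine’ is an example image):

```shell
# This single command...
docker run alpine echo hi

# ...is roughly shorthand for:
id=$(docker create alpine echo hi)   # pull (if needed) and create the container
docker start -a "$id"                # start it, attached to its output
```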

Common Docker Commands

// Builds an image from a Dockerfile, executing its instructions in order.
$ docker build
// Executes a process inside a running container's process space.
$ docker exec
// Fetches the metadata associated with the top layer of the container or image.
$ docker inspect <container-id> or <image-id>
// Takes an image ID and recursively prints out the read-only layers (which are themselves images) that are ancestors of the input image ID.
$ docker history <image-id>
// Pulls the image if it does not exist in your local repo, then creates a container out of it and runs it.
$ docker run <image-name>
$ docker pull <image-name:tag>
$ docker run <image-name:tag or image-id>
// Lists only top-level images. Only those images that have containers attached to them or that have been pulled are considered top-level.
$ docker images
// Shows all the images on your system, including intermediate layers.
$ docker images -a
// Lists only the running containers.
$ docker ps
// Lists all containers, including the ones that are not running.
$ docker ps -a
// Starts a container.
$ docker start <containerId>
// Stops a container. It issues a SIGTERM to a running container, which politely stops all the processes in that process space. The result is a normal, but non-running, container.
$ docker stop <containerIdOrName>
// Issues a non-polite SIGKILL to all the processes in a running container.
$ docker kill
// Uses a special cgroups feature to freeze/pause a running process space.
$ docker pause
// Removes a container: it removes the read-write layer that defines the container from your host system. It must be run on stopped containers, and it effectively deletes files.
$ docker stop <containerNameOrId> // first stop the container before removing it
$ docker rm <containerNameOrId>
// Deletes an image: it removes the read-only layers that define the "union view" of an image. The image is removed from your host, though it may still be available in the repository from which you issued a 'docker pull'. You can only use 'docker rmi' on top-level layers (images), not on intermediate read-only layers (unless you use -f to force).
$ docker rmi <imageId>
$ docker rmi <repo:tag>
// 'docker commit' takes a container's top-level read-write layer and burns it into a read-only layer. This effectively turns a container (whether running or stopped) into an immutable image.
$ docker commit <container-id>

Originally published at medium.com on January 30, 2019.