Understanding Docker Containers

A deep dive into the framework and inner workings of Docker containers.

Mayank Patel
Geek Culture
6 min read · Apr 6, 2021

Table of Contents:

  1. What are Containers?
  2. Docker Containers
  3. The Framework of Docker Containers
  4. Security Mechanisms in Docker

What are Containers?

Containers are basically a standard unit of software that packages an application's code together with all the dependencies it needs, so the application runs independently of the system on which the container is deployed.

Let me simplify it for you

Sharing programs, or making the same program work on a different system, can be tedious at times. A program developed in a Linux environment might not work on another OS like Windows or macOS; the programming language used may be a different version; if versions are changed to make one program work, some other program stops working 😕; versions of dependencies might also clash; and so on.

The solution to this is to take an empty box, add all the code and its dependencies into it while keeping the development environment in consideration, and then close the box. This box is your container: it holds everything the program needs to run and works independently of your system's configuration. Containers are ready to be deployed (just place the box anywhere you want, any number of times you want).

This process is called containerization. A container realizes its virtualization at the OS level, whereas a VM virtualizes at the hardware level.

Before moving forward with containers, let's first understand virtual machines.

Virtual machines have been around since the 1960s and are considered the foundation of cloud computing. When scalability or optimizing server capacity was the goal, the VM was the go-to solution. A virtual machine virtualizes, or emulates, the complete hardware, and an OS is then installed on top of it (imagine that, along with the program and its dependencies, the underlying OS is also added to that empty box). Virtual machines take time to boot up, with sizes running into GBs, special thanks to that bulky OS being installed every time.

Container technology can be seen as a ‘lightweight VM’. Multiple containers can run on the same host, sharing its hardware and OS kernel while remaining completely isolated from each other. How awesome is that 🔥
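You can see this kernel sharing for yourself. A quick check, assuming a Linux host with Docker installed (alpine is just an illustrative image):

    # Kernel version reported on the host:
    uname -r

    # Kernel version reported inside a container: it is the same,
    # because containers share the host's kernel rather than booting their own.
    docker run --rm alpine uname -r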

Docker Containers

Docker containers are state-of-the-art virtualization technology and can achieve higher efficiency than conventional virtual machines. Docker containers are small, with sizes in MBs. You can find and download any container image you need on Docker Hub, a container repository provided by Docker for finding and sharing images with others. Docker containers can be quickly deployed locally, and a locally developed image can just as easily be uploaded to Docker Hub for others to use.
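For example, finding and fetching an image from Docker Hub takes only a couple of commands. A minimal sketch (nginx is an example image, and myuser/myapp is a hypothetical repository that assumes you are already logged in via docker login):

    # Search Docker Hub for an image:
    docker search nginx

    # Download (pull) the image to the local machine:
    docker pull nginx

    # Push a locally built image to Docker Hub
    # ("myuser" is a hypothetical Docker Hub username):
    docker push myuser/myapp:1.0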

This is what has made everyone move to Docker containers: the simplicity of creation and deployment, the minimal use of hardware resources, and a properly organized repository where thousands of images can be found and shared. All of this has effectively shortened the software development lifecycle.

A Docker image, once deployed and running, is called a Docker container.
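In practice the distinction looks like this, again using nginx purely as an example image:

    # An image is the static template stored on disk:
    docker images

    # Running the image creates a container, a live instance of it:
    docker run -d --name web nginx

    # List the running containers:
    docker ps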

The Framework of Docker Containers

Docker follows a client/server architecture; each module of Docker is independent and collaborates with the others. It consists of three modules:

  1. Docker Client
  2. Docker Daemon
  3. Docker Registry

Docker Client

It provides the interface for the user to communicate with Docker; it can be the Docker CLI or Docker Desktop. When the user runs commands on the client, the client sends those commands to dockerd, which carries them out.
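You can see the client/daemon split directly, since docker version reports both sides of the conversation (assuming a standard Docker installation):

    # Prints a "Client" section (the CLI binary you invoked) and a
    # "Server" section (the dockerd daemon it connected to):
    docker version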

Docker Daemon (dockerd)

It is a background process created automatically by the system when the Docker service starts. Docker daemon (dockerd) listens for Docker API requests and manages Docker objects such as images, containers, networks, and volumes. A daemon can also communicate with other daemons to manage Docker services.
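Because dockerd serves a REST API (on Linux, by default over a Unix socket), you can even bypass the CLI and query it directly. A sketch, assuming the default socket path:

    # Ping the daemon over its Unix socket; it answers "OK":
    curl --unix-socket /var/run/docker.sock http://localhost/_ping

    # List running containers through the same API the CLI uses:
    curl --unix-socket /var/run/docker.sock http://localhost/containers/json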

Docker Registry

The responsibility of a Docker registry is to store and organize Docker images; Docker Hub is one example.
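Docker Hub is the default registry, but you can also run a private one using Docker's official registry image. A minimal sketch (image and tag names are illustrative):

    # Start a private registry on port 5000:
    docker run -d -p 5000:5000 --name registry registry:2

    # Tag a local image so it points at that registry, then push:
    docker tag myapp:1.0 localhost:5000/myapp:1.0
    docker push localhost:5000/myapp:1.0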

Security Mechanisms in Docker

Because containers are realized at the OS level, multiple Docker containers may run simultaneously on the same host, sharing the same OS kernel and the same hardware and software resources. It therefore becomes necessary to isolate these containers in terms of hardware and software resource access. To achieve this isolation, Docker makes use of Linux's namespaces and cgroups mechanisms.

Resource Isolation:

Docker achieves container isolation by making use of namespaces in Linux.

Namespaces partition kernel resources, defining the scope of what each process can see and access. Think of namespaces as nodes of a tree: each node has complete access to its children but not to its ancestors. The root node has access to all resources available on the system, while a node at level 3 can access only its own descendants. With namespaces, each container instance can have its own complete network stack, file system, and IPC, isolated from other containers.

There are 8 different kinds of namespaces:

  1. Mount
  2. Process ID (PID)
  3. Interprocess Communication (IPC)
  4. Network
  5. UNIX Time Sharing (UTS)
  6. User
  7. Time
  8. cgroups

The host system sees containers as just more processes running on the system; hence two processes (containers) can be isolated from each other using the PID namespace.
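This is easy to verify: inside its own PID namespace, a container's first process sees itself as PID 1, while the host sees it as an ordinary process. A sketch, assuming a Linux host:

    # Inside the container, the only process (ps itself) is PID 1:
    docker run --rm alpine ps

    # On the host, list PID namespaces and the processes that own them
    # (may require root to see namespaces belonging to other users):
    lsns --type pid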

Resource Control:

Docker utilizes control groups (cgroups) in Linux. cgroups is a Linux kernel feature that limits, accounts for, and isolates the resource usage of processes. It ensures that resources are fairly available to all running containers and that no container monopolizes CPU, memory, block I/O, network bandwidth, and so on. Properly configured cgroups can effectively mitigate DoS attacks.
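Docker exposes these cgroup controls as flags on docker run. A minimal sketch, with limits chosen arbitrarily for illustration:

    # Cap the container at 256 MB of memory and one CPU core:
    docker run -d --name limited --memory=256m --cpus=1.0 nginx

    # Watch per-container resource usage live:
    docker stats limited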

Limitation of Kernel Capabilities:

There are typically two types of users in Linux: root and non-root. Docker and the host run in a shared-kernel model, so a root user on the host can access or run any activity in the containers, while a non-root user cannot access anything they lack permission for. Since many different containers can be running on a single host, it is not a good idea to run them with root access; instead, we create a new user for each container and grant that user only the minimum required permissions, which strengthens the security of the system. A whitelist mechanism endows containers with a default set of kernel capabilities, and users can grant extra capabilities based on their practical needs.
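Both ideas map to docker run flags: capabilities can be dropped or added per container, and the container's process can run as a non-root user. A sketch (the ping target and user ID are arbitrary illustrations):

    # By default, CAP_NET_RAW is in the whitelist, so ping works:
    docker run --rm alpine ping -c 1 8.8.8.8

    # Drop all capabilities and the same command is refused:
    docker run --rm --cap-drop ALL alpine ping -c 1 8.8.8.8

    # Add back only the capability this command actually needs:
    docker run --rm --cap-drop ALL --cap-add NET_RAW alpine ping -c 1 8.8.8.8

    # Run the container's process as a non-root user (UID/GID 1000):
    docker run --rm --user 1000:1000 alpine id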

(Table: the whitelist of kernel capabilities adopted by Docker by default.)

Conclusion

Docker is widely accepted and has once again revolutionized the cloud: companies moving to a microservices architecture have improved the scalability and resilience of their servers, a move made easy by the adoption of Docker containers. More than 25% of companies have already adopted Docker, and the Docker market is projected to reach 993 million USD by 2024. Docker has even simplified the adoption of AI/ML tools with GPU support, and it ensures consistency between local and remote deployments.

Leave a Clap 👏, Follow for More 🔥, and KEEP LEARNING 🤓
