Published in Apium Innovations

Introduction to Docker

As a software engineer, you have probably come across terms such as Docker and Kubernetes. Docker is one of the most widely used technologies in the IT industry, and a proper understanding of it will help you build, ship, and run distributed applications more effectively.
Before learning about Docker, there are several concepts we need to understand properly. These concepts are the building blocks of Docker.
I will be explaining Docker and related concepts at a high level. If you wish to learn Docker in more detail, I highly recommend checking out the Docker Official Documentation.

Virtualization

All computers are built from physical parts called hardware. An operating system is a special type of software that controls that hardware. By default, a computer has a single operating system and a fixed set of hardware such as CPU, RAM, and storage. With virtualization, we can build virtual computers, each with a dedicated amount of hardware borrowed from the physical host computer.
This lets us run multiple operating systems on top of the computer's existing operating system. To create VMs, however, we need to install a hypervisor on the existing OS. The operating systems running inside the VMs are called guest OSes, and the existing OS is called the host OS. The hypervisor presents virtual hardware to each guest OS, but behind the scenes that virtual hardware is backed by the host machine's physical hardware.
As an example, you can run virtual Windows and Linux machines on top of your existing macOS installation.

[Figure: Virtualization]

Containerization

Containerization is the process of packaging all the binaries, libraries, dependencies, and configuration files an application needs into an isolated space called a container. Containerization brings virtualization to the operating-system level. We can say that containerization is also a form of virtualization, but it is more efficient than traditional virtualization in many ways.
With containerization there are no guest operating systems. Unlike virtual machines, containers use the host operating system directly and share the necessary libraries and resources as needed. Containerized applications can run in many kinds of environments, including inside virtual machines and in the cloud, because a container carries everything it needs to run the application.

[Figure: Containerization]

The big difference between virtualization and containerization is that with virtualization we run multiple operating systems on a single host machine, while with containerization we create a container for each application as required.

Docker

Docker is a containerization platform that behaves like an operating system for containers. When we package our applications into Docker containers, Docker ensures that our application works the same way in any environment. With Docker, each application runs in an isolated container that has its own set of libraries and dependencies.

It is important to keep in mind that we can create containers even without Docker. However, the Docker platform makes it much safer and easier to build, deploy, and manage containers. We can think of Docker as a toolkit that lets us build, deploy, update, and run containers using simple commands.
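As a minimal illustration of those simple commands, the following transcript pulls a public image from Docker Hub and runs it as a container (this assumes Docker is already installed and its daemon is running on your machine):

```shell
# Verify the Docker installation
docker --version

# Pull the official "hello-world" image and run it as a container.
# Docker creates an isolated container from the image, runs it, and exits.
docker run hello-world

# List all containers, including stopped ones
docker ps -a
```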

There are several terms in Docker that we need to understand properly in order to work with it.

Docker Images
This is a template or package used to build containers. A Docker image is a read-only file that includes all the instructions needed to create a Docker container. The relationship between a Docker image and a Docker container is similar to that between classes and objects in OOP: Docker containers are running instances (objects) of Docker images (classes). We can create multiple containers from the same image. The most important feature of Docker images is that they run the same way in any environment that has Docker installed.
Many companies have containerized their products and made them publicly available as images on Docker Hub. We can use Docker commands to run instances of any of these images on our local machine.
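The class/object analogy can be seen directly on the command line. In this sketch (which assumes Docker is installed and ports 8081 and 8082 are free), two independent containers are started from the same publicly available nginx image:

```shell
# Pull the official nginx image from Docker Hub once
docker pull nginx

# Start two independent containers ("objects") from the same image ("class")
docker run -d --name web1 -p 8081:80 nginx
docker run -d --name web2 -p 8082:80 nginx

# Both containers run in isolation, each with its own filesystem and state
docker ps
```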

[Figure: Docker images and containers]

Dockerfile
The next important term is the Dockerfile. This is a simple text file that contains, step by step, all the instructions needed to build a Docker image. A Dockerfile consists of very specific commands that describe exactly how a particular image should be built.
With the help of a Dockerfile, we can build the same image over and over without any manual steps.
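As a sketch of what such a file looks like, here is a minimal Dockerfile for a hypothetical Node.js application (the base image, file names, and port are illustrative assumptions, not part of the article):

```dockerfile
# Start from an official base image (Node.js 18, as an example)
FROM node:18

# Set the working directory inside the image
WORKDIR /app

# Copy dependency manifests and install dependencies first,
# so this layer is cached when only application code changes
COPY package*.json ./
RUN npm install

# Copy the rest of the application source
COPY . .

# Document the port the application listens on
EXPOSE 3000

# Default command to run when a container starts from this image
CMD ["node", "index.js"]
```

Running `docker build -t my-app .` in the directory containing this file would then produce the same image every time, with no manual steps.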

Docker Architecture

Now let’s take a look at the architecture and how the Docker container platform works. Below is a simplified diagram of the Docker architecture, which includes several components.

[Figure: Docker architecture]

The Docker engine is the main part of the Docker system and follows a client-server architecture. The Docker engine is what we install on our host machine. It consists of three main components.

Docker Daemon
The Docker daemon is responsible for managing containers, images, storage, volumes, and networks. The host operating system of your local machine launches the Docker daemon as a long-running background process.

Docker Engine REST API
The Docker Engine REST API acts as middleware between the Docker daemon and the Docker client. Any HTTP client can access the REST API. Through the API, the Docker CLI tells the Docker daemon what it should do inside the Docker engine.
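Because the daemon speaks plain HTTP, we can bypass the CLI and query the REST API directly. This sketch assumes a Linux host where the daemon listens on its default Unix socket at /var/run/docker.sock:

```shell
# Ask the daemon for its version information
# (the same call the CLI makes for `docker version`)
curl --unix-socket /var/run/docker.sock http://localhost/version

# List running containers as JSON (roughly what `docker ps` shows)
curl --unix-socket /var/run/docker.sock http://localhost/containers/json
```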

Docker CLI
The Docker client is used to communicate with the Docker daemon via HTTP. The Docker CLI is the simplest way to issue Docker commands. With it, we can talk directly to the server by executing commands in the client (CLI) to create and manage Docker objects such as containers and images.

The last important component in the above architecture is the Docker registry. A registry is a store where Docker images are kept, and it can be either public or private. Docker Hub is the default registry for the Docker platform, and most companies have made their products publicly available as images there.
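A typical interaction with registries looks like the following sketch, where an image is pulled from Docker Hub and then pushed to another registry (the hostname registry.example.com and the my-team/nginx name are placeholders, and pushing assumes a prior `docker login` to that registry):

```shell
# Pull an image; with no registry prefix, Docker Hub is used by default
docker pull nginx

# Tag the image for a different (possibly private) registry
docker tag nginx registry.example.com/my-team/nginx:1.0

# Push the tagged image to that registry
docker push registry.example.com/my-team/nginx:1.0
```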

Docker can be considered the future of containerization, and it aims to simplify infrastructure management. I hope this article has given you a basic understanding of Docker and its architecture.

At Apium Innovations we build world-class software products while following the best practices in DevOps. Follow us to learn more.

Thank you for reading!

Apium Innovations is a place that likes to challenge the norms. We like to add a bit of creativity to business, education, and lifestyle. We like to say we add eccentricity to generic, mundane software.

Jinali Pabasara
Software Engineer | Tech Enthusiast
