What is containerization and what role does Docker play in it?
As the demands on technology have grown, so have the size and complexity of our applications. Today, one of the biggest challenges a developer faces is managing huge applications and their underlying infrastructure, which can be as diverse and complicated as the application itself. Deploying new features, maintaining a smooth infrastructure, applying code changes and streamlining DevOps workflows all become tricky when applications and infrastructure grow at a rapid scale. One solution to these challenges is a form of virtualization known as containerization.
What is a container?
In layman’s terms, a container is a portable computing environment. Think of a shipping container, which can be moved across various ships (in our case, host machines) smoothly, without anyone having to care about its interior (the dependencies and binaries of an application) or its exterior (any third-party software, databases etc.). So when developer D1 works in an environment EV, developer D2 can work in the same environment EV without worrying about dependency and version mismatches, and that same environment can later be deployed to testing and then to production.
Benefits of containerization:
- Portability: The abstraction in containerization ensures that a container works the same way anywhere you deploy it. It follows the principle of “Write once, run anywhere” (remind you of anything from your early Java days?)
- Faster rate of delivery: With a microservice architecture, code changes can be applied to isolated segments without affecting the application as a whole.
Containers V/s VMs:
When starting with containers we often wonder what the difference is between a container and a VM, since both are based on virtualization techniques.
The main point to ponder here is the level at which the virtualization occurs. Seen from a broad perspective, the stack has 3 layers:
- Layer 3: Application Layer
- Layer 2: Kernel
- Layer 1: Hardware
In traditional virtualization, a hypervisor virtualizes the physical hardware, creating separate VMs, each with its own guest OS running on a virtual copy of the hardware it needs to operate on.
In the case of a container, instead of virtualizing the underlying hardware, containers virtualize at the operating-system level: they share the host kernel, so each individual container holds only the application and its libraries and dependencies. This makes containers lightweight and enables the microservice architecture, in which services can be scaled and deployed at a more granular level.
While there are still several reasons to use a VM, containers provide flexibility and portability that are perfect for a multicloud environment. For example, company ABC might run its application in its own private cloud today, but next year it might switch to a public cloud from a different provider. Containers also help automate DevOps pipelines, including continuous integration and continuous delivery (CI/CD).
Talking of containerization, today Docker is the go-to PaaS containerization platform. It helps developers package applications into containers: standardized executable components that combine application source code with the operating system (OS) libraries and dependencies required to run that code in any environment. Docker also provides a public repository (Docker Hub: https://hub.docker.com/) which contains official images of many applications (Nginx, Node, Mongo etc.), and it offers an easy path to cloud deployment. Most Docker containers have Linux as their underlying OS, often the lightweight Alpine distribution.
Docker Containers V/s Docker Images:
An image is just a read-only template for the application with all its dependencies and libraries. We cannot use an image without starting a container; in short, a container is a running instance of an image that uses its own computing resources. Images are immutable, whereas you can change the contents of a container according to your needs.
Main Docker commands:
- docker pull <image:version> : Pulls the Docker image from the repository
- docker start <containerid/containername> : Starts an already stopped container
- docker run <image> : Creates a container layer over the specified image, and then starts it
  - -d : Runs the container in detached mode
  - -v : Mounts a directory of the host file system into the container's file system
  - --name : Names the container (Docker assigns a random name by default)
  - -p <hostPort>:<containerPort> : Binds a port of the host to a port of the container
- docker ps : Lists all the running containers
  - -a : Lists all the containers (even the stopped ones)
- docker logs <containerid/containername> : Shows the logs of the container
- docker exec -it <containerid> <shellname> : Opens an interactive shell inside the container
It becomes a bit of a struggle when we have to run several containers at once. With Docker Compose, a single YAML configuration file is used to configure all the containers that our application will use.
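As an illustration, a minimal docker-compose.yml for a hypothetical app with a Node web service and a MongoDB database might look like this (the service names, image tags and ports here are assumptions for the sketch, not part of any real project):

```yaml
# docker-compose.yml — hypothetical two-service setup
version: "3.8"
services:
  web:
    image: node:18-alpine        # official Node image from Docker Hub
    ports:
      - "3000:3000"              # hostPort:containerPort, same as docker run -p
    depends_on:
      - db                       # start the database before the web service
  db:
    image: mongo:6               # official MongoDB image
    volumes:
      - db-data:/data/db         # named volume so data survives container restarts
volumes:
  db-data:
```

Each top-level entry under services becomes one container, so one file replaces a series of individual docker run commands.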
- docker-compose -f <filename> up : Creates and starts all the containers defined in the compose file
- docker-compose -f <filename> down : Stops and removes the containers defined in the compose file
A Dockerfile is a blueprint for building Docker images. It contains all the commands a user could call on the command line to assemble an image. Using docker build, a user can create an automated build that executes those instructions in succession.
docker build -t <imagename> . : Builds an image from the Dockerfile in the current directory (use -f <filename> only when the file is not named Dockerfile)
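To make this concrete, here is a minimal sketch of a Dockerfile for a hypothetical Node application (the base image, port and server.js entry point are assumptions for the example):

```dockerfile
# Dockerfile — minimal sketch for a hypothetical Node app
FROM node:18-alpine            # start from the official Alpine-based Node image
WORKDIR /app                   # set the working directory inside the image
COPY package*.json ./          # copy dependency manifests first, so installs are cached
RUN npm install                # install dependencies into an image layer
COPY . .                       # copy the rest of the application source
EXPOSE 3000                    # document the port the app listens on
CMD ["node", "server.js"]      # command executed when a container starts
```

Such an image would typically be built with docker build -t myapp . and started with docker run -d -p 3000:3000 myapp.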