Let’s start from the beginning to understand the evolution of containers:
In the beginning, organizations ran applications on physical servers. There was no way to define resource boundaries for applications on a physical server, and this caused resource allocation issues.
For example, if multiple applications run on a physical server, there can be instances where one application takes up most of the resources, and as a result the other applications underperform or are starved of resources.
A solution for this would be to run each application on a different physical server. But this did not scale: resources were underutilized, and it was expensive for organizations to maintain many physical servers. This led to the era of virtualization.
As a solution, virtualization was introduced. It allows you to run multiple Virtual Machines (VMs) on a single physical server. Virtualization isolates applications between VMs and provides a level of security, as the information of one application cannot be freely accessed by another application. Let me give you a brief overview of virtual machines :).
A Virtual Machine is essentially an emulation of a real computer that executes programs like a real computer. VMs run on top of a hypervisor, which in turn runs on the host machine. The hypervisor is a piece of software that creates the virtualized hardware, which may include a virtual disk, virtual network interface, virtual CPU, and more. The hypervisor acts as a middleman between the host machine and the VMs, a.k.a. guest machines. Each virtual machine includes a guest kernel that can talk to this virtual hardware.
Security and resource underutilization were the reasons that gave birth to virtualization, but virtual machines have limitations of their own, such as developing and running OS-dependent applications. Didn’t get it? :) Let’s say I have developed an application and it is running on a machine. Now if I need to scale it up, I have to create a new virtual machine and install all the different libraries needed to run my application. It’s a pain to repeat this creation and installation every time I add a new machine, and it is also a heavy, time-consuming process. This is one of the main limitations that containers overcome. Let’s go a little deeper into containerization :).
Containers are similar to VMs, but they have relaxed isolation properties to share the Operating System (OS) among the applications.
A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. (Source: Docker)
What is Containerization?
It is the process of bundling up the application code with the packages/libraries it requires at runtime, so the application can execute quickly and reliably in any supported computing environment. The bundle includes everything needed to run the application: code, runtime, system tools, system libraries, settings, etc. This bundle is known as a container image.
Didn’t get it? No worries, let’s try to understand it in a different way :). In our day-to-day lives, we use containers to store things so that they remain confined within a specified space and are kept safe from external harm or disturbances. We also use containers when we have to transport something from one place to another safely. In the same manner, we use a container to store or encapsulate an application along with everything needed to run it successfully.
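To make this concrete, here is a minimal sketch of a Dockerfile that bundles a small Python application with its dependencies into a container image. The file names, base image, and app are hypothetical, chosen just for illustration:

```dockerfile
# Hypothetical example: bundle a small Python app and its dependencies
FROM python:3.12-slim            # base image supplies the runtime and OS libraries

WORKDIR /app

COPY requirements.txt .          # the app's dependency list (assumed to exist)
RUN pip install --no-cache-dir -r requirements.txt

COPY . .                         # the application code itself

CMD ["python", "app.py"]         # command executed when the container starts
```

Building this file produces an image that carries the code, runtime, libraries, and settings together, so the same bundle runs the same way on any machine with a container engine.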
VMs provide hardware-level virtualization; containerization, on the other hand, provides OS-level virtualization. Containers share the host system’s kernel with other containers. As the diagram shows, containers package up just the user space, not the kernel or virtual hardware as a VM does. Each container gets its own isolated user space, which allows multiple containers to run on a single host machine. The only parts created from scratch are the bins and libs. This is what makes containers so lightweight.
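One quick way to see this kernel sharing in practice, assuming Docker is installed on the host, is to compare the kernel version reported by the host with the one reported inside a container:

```shell
# Kernel version on the host
uname -r

# Kernel version inside a container: it reports the same value, because the
# container shares the host kernel instead of booting its own (a VM with a
# different guest OS would report a different kernel)
docker run --rm alpine uname -r
```

(This sketch needs a running Docker daemon, so treat it as an illustration rather than something to copy blindly.)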
Below are the advantages of containers over virtual machines(VMs):
What is Docker?
Docker is one of the most popular and greatest innovations in IT. It’s a platform-based service, a.k.a. PaaS, that uses OS-level virtualization. As a platform, it provides services to create images and to deploy and run applications anywhere by using containers. For details, you can refer to the official Docker documentation.
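Assuming Docker is installed and a Dockerfile sits in the current directory, the typical build-and-run workflow is just a couple of CLI commands (the image name and port are made up for this sketch):

```shell
# Build an image from the Dockerfile in the current directory and tag it
docker build -t myapp:1.0 .

# Start a container from that image, mapping host port 8080 to the container,
# and remove the container automatically when it exits
docker run --rm -p 8080:8080 myapp:1.0

# List the containers currently running on this host
docker ps
```

The same image can then be pushed to a registry and run unchanged on any other Docker host.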
What is Container Orchestration?
It is all about managing the lifecycle of containers, especially in large, dynamic environments. It is used to control and automate many tasks:
- Provisioning and deployment of containers
- Redundancy and availability of containers
- Scaling up or removing containers to spread application load evenly across host infrastructure
- Movement of containers from one host to another if there is a shortage of resources in a host, or if a host dies
- Allocation of resources between containers
- Exposing services running in a container to the outside world
- Load balancing and service discovery between containers
- Health monitoring of containers and hosts
- Configuration of an application in relation to the containers running it
Kubernetes and Docker Swarm are some of the container orchestration tools; they provide an easy way to describe the configuration of your application in a YAML or JSON file, depending on the tool. I will discuss Kubernetes in an upcoming article, stay tuned :).
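To give you a flavor of what such a configuration file looks like, here is a minimal Docker Compose sketch; the service name, image, port, and replica count are hypothetical:

```yaml
# Hypothetical docker-compose.yml: one web service, scaled for redundancy
services:
  web:
    image: myapp:1.0          # the container image to run
    ports:
      - "8080:8080"           # expose the service to the outside world
    deploy:
      replicas: 3             # run multiple copies to spread the load
```

The orchestrator reads this declarative description and takes care of provisioning, scaling, and restarting the containers to match it.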