From Hypervisors to Containers
“Great things are not done by impulse, but by a series of small things brought together.” ― Vincent Van Gogh
Back to the Past
In the old days, after developing an application, a separate team would configure a dedicated machine, routers, and switches, put them into racks, and then install the application on that hardware. This was often a manual and lengthy set of processes, and dedicated hardware had to be installed in a server farm. Usually one server was used for one application, which wasted resources; and although more than one application could be deployed on the same server, doing so was very insecure.
To reduce this waste of physical hardware resources, hypervisors emerged as a way of sharing computer resources, including memory and processor time. A hypervisor creates isolated environments on top of the host machine's hardware, which allows more than one separate virtual environment (virtual machine, or VM) to run inside the same server/host machine.
This adds portability and extra security to the application, because each VM runs isolated from the others. Each VM contains its full set of components and dependencies, including its own operating system (n VMs = n OSs).
There are two types of hypervisors:
- Bare-metal hypervisors - run directly on the hardware.
Ex - Citrix XenServer, VMware ESXi.
- Hosted hypervisors - run on top of the OS of the host machine.
Ex - Oracle VirtualBox, VMware Workstation.
Although hypervisors brought isolation of applications and better resource management, VMs are still resource-consuming, and each guest OS needs its own updates and patch management.
A container is a unit that packages the components and dependencies necessary to run an application, regardless of the operating system or environment it runs on. Unlike a VM, a container runs as a separate process rather than a separate machine.
Although people think containers are the newest solution to all these problems, containerization is not a new technology: according to reports, its roots go back to the 1970s, when Unix introduced mechanisms to isolate application code.
Before diving further into containers, there are a few basic concepts that need to be understood.
namespaces - Namespaces are a Linux kernel feature that ensures isolation in containers. Each container gets its own set of namespaces, which limits what that container can view (its own processes, network interfaces, mount points, and so on).
cgroups - Control groups limit how much of the host's resources (CPU, memory, I/O) a given container can use.
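As a quick sketch, both mechanisms are visible under `/proc` on any Linux machine, no container required (the exact entries vary by kernel version):

```shell
# Namespaces: every process belongs to a set of namespaces,
# shown as symbolic links under /proc/<pid>/ns.
ls /proc/self/ns

# Cgroups: the control groups the current process is assigned to.
cat /proc/self/cgroup
```

A container runtime simply creates new entries of these kinds for each container it starts, instead of letting the process inherit the host's.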
Containers became widely used with the rise of microservice architecture: they provide the ability to build and deploy services independently, without repeating the same server or network configuration again and again, and to scale according to demand.
Compared to VMs, containers come with a few advantages.
Standard - A VM's settings and environment can differ depending on the vendor. For containers, the Open Container Initiative (OCI) provides and governs a common standard, which makes containers portable without having to worry about the vendor, host OS, or environment.
Lightweight - Each VM has to carry a separate operating system of its own. A container, on the other hand, shares the host OS kernel with all the other running containers, which is far more efficient in both size and performance.
Secure - A container runs its application as a completely separate process, which provides isolation from other containers.
But how do we get these containers? Developers can choose from a variety of container runtimes from different vendors: Docker, rkt (Rocket), and LXC (Linux Containers) are among the most popular of them.
Docker is the most popular container platform among software developers. The Docker architecture brings many conveniences for developers, including full automation of CI/CD cycles.
- Docker Image - A Docker image is a template for creating a working Docker container. Developers can customize images according to the application they are running. As an example, to create a Docker image that runs a Java application, we have to install the OS libraries, the Java runtime, and even our Java application itself inside the image. An image is customizable and contains everything needed to run the application (code, system libraries, configuration, runtime).
- Docker Container - After creating a Docker image, it has to be executed; that running Docker image is called a "Docker container". A container can be started, stopped, or removed using the Docker API or CLI. A Docker container is defined by its Docker image.
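As a minimal sketch of the Java example above, a Dockerfile might look like this (the base image tag and jar path are assumptions for illustration):

```dockerfile
# Start from a JRE base image (tag chosen as an example).
FROM eclipse-temurin:17-jre
# Copy the built application jar into the image.
COPY target/app.jar /app/app.jar
# Command the container runs when it starts.
CMD ["java", "-jar", "/app/app.jar"]
```

With a running Docker daemon, such an image could then be built with `docker build -t my-java-app .` and started as a container with `docker run my-java-app` (again, hypothetical names).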
Everything from creating images to managing containers and keeping track of them needs a proper process. The Docker architecture demonstrates how each of these tasks is handled, based on a client-server architecture.
The Docker Client forwards user requests to the Docker daemon; usually this is the CLI that people interact with.
The Docker Daemon handles all requests coming from the Docker client, and creates, deletes, and runs Docker images and containers. A Docker daemon can even communicate with remote daemons.
The Docker Registry is like a repository for Docker images. After developers create various images, and various versions of the same image, the registry is responsible for storing and handling them. By default, "Docker Hub" is used as the public registry, but private registries can be used when needed.
Again, when it comes to Docker images, the Docker daemon is responsible for checking the local cache first and, if the image is not there, pulling it from the remote registry.
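A sketch of that flow with the Docker CLI might look as follows; it requires a running Docker daemon, and the private registry address is a hypothetical placeholder:

```shell
# Pull an image: the daemon checks its local cache first,
# then falls back to the configured registry (Docker Hub by default).
docker pull nginx:latest

# Re-tag the same image for a private registry and push it there
# (registry.example.com is an assumed address for illustration).
docker tag nginx:latest registry.example.com/team/nginx:latest
docker push registry.example.com/team/nginx:latest
```

A second `docker pull nginx:latest` on the same machine returns immediately from the local cache instead of downloading the image again.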
For more info about developing container-based applications, see the references below.