Docker, containers, VMs and orchestration technology for beginners

Within a few years of entering the market, Docker has become one of the most prominent technologies in the industry. Companies of every size, including giants like Google, Microsoft, and IBM, have adopted Docker, and many more are embracing it as a de facto tool for developing, shipping, and deploying applications.

So, what is Docker?

Docker is a containerization tool that packages an application and all of its dependencies together into a unit called a Docker container, ensuring that the application works seamlessly in any environment, whether that is a local machine, a staging server, or a production server.

Why Docker?

If you are a developer or a system administrator, you have undoubtedly come across situations where you need to run multiple applications on the same server, or across multiple servers, and chances are you have run into conflicts between them. Suppose you need to run two different applications on the same physical or virtual server, one requiring JRE 1.7 and the other JRE 1.8. Installing both versions side by side invites version conflicts. With Docker containers, developers can eliminate such situations and run many applications seamlessly and simultaneously, each in its own isolated environment.
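The JRE conflict above can be sketched with two containers, each carrying its own Java runtime (the `openjdk` image tags are real Docker Hub tags, though older ones may be deprecated by the time you read this):

```shell
# Each app gets its own isolated runtime; the two never conflict.
docker run --rm openjdk:7 java -version   # first app's environment: JRE 1.7
docker run --rm openjdk:8 java -version   # second app's environment: JRE 1.8
```

Both commands run on the same host at the same time, yet each process only ever sees the Java version baked into its own image.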

How are containers different from VMs?

  • Virtual machines, commonly known as VMs, emulate a complete computer and run on top of physical hardware. A hypervisor is responsible for this hardware-level virtualization. Containers, in contrast, provide operating-system-level virtualization by abstracting the “user space”. Each container gets its own user space, allowing multiple containers to run simultaneously on a single host.
  • VMs have their own operating system, known as the guest OS, whereas containers share the kernel of the host operating system and bundle only the binaries and libraries the application needs, making containers lightweight and fast.
  • Because a VM carries a full copy of an OS along with the application and its binaries and libraries, it takes gigabytes of space and is slow to boot. A container shares the OS kernel with other containers, so it takes far less space and starts much faster.
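You can see the shared-kernel point for yourself, assuming Docker is installed on a Linux host:

```shell
# Containers share the host kernel: the kernel version inside a container
# matches the host's, even though Alpine ships a different userland.
uname -r                          # host kernel version
docker run --rm alpine uname -r   # prints the same kernel version
```

A VM running Alpine would instead boot and report its own guest kernel.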

What made Docker so popular?

VMs and containers serve the same purpose, i.e. isolating an application and its dependencies into a self-sufficient unit that can run virtually anywhere. So what makes Docker stand out, and why is its adoption increasing at such a remarkable rate?

Here, we will discuss some of the compelling features that explain why companies are embracing Docker and containers.

  1. Ease of use: Docker containers are portable; one can package an application and its dependencies on one machine and run it anywhere, on a public or private cloud, on bare metal, or in any other environment. Developers and system administrators can build and test applications quickly using Docker containers.
  2. Speed: VMs have their own guest OS, but containers share the host OS kernel, which makes them lightweight and fast. While Docker containers can start and run in seconds, a VM can take more than a minute to boot a complete operating system.
  3. Isolation: Docker packages a software application and everything needed to run it, code, runtime, system tools, system libraries, and settings, into a loosely isolated environment called a container.
  4. Repository: Docker Hub hosts many thousands of publicly available images that can be pulled and used from anywhere. One can also create a private Docker Hub account to store and retrieve images privately.
  5. Modularity and scalability: In a microservice architecture, for example, you can run different services in different containers and connect the containers to form one application. This makes scaling your applications up or down easy, and changes to any component can be made independently without affecting the others.
  6. Security: Since Docker packages and runs each application in an isolated environment, this default isolation provides a degree of security, and you can run many containers simultaneously on a given host or VM in the cloud.
  7. Continuous integration/continuous deployment (CI/CD): Docker containers enable developers to isolate code per container, making it easier to modify and update one component without affecting the others.
  8. Open source: Docker is an open source project. This means that anyone can contribute to Docker and extend it to meet their own needs if they require features that aren’t available out of the box.
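The modularity described in point 5 can be sketched with a user-defined network connecting two containers (the `redis` and `nginx` images are real official images; the network and container names are our choice):

```shell
# Two services on one network, addressed by container name.
docker network create app-net
docker run -d --name db  --network app-net redis
docker run -d --name web --network app-net -p 8080:80 nginx
# "web" can now reach "db" at the hostname db:6379 over app-net,
# and either container can be replaced or scaled independently.
```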

An overview of Docker architecture

  • Docker Engine: The Docker Engine is the layer on which Docker runs. It is a lightweight runtime and tooling that runs and manages containers, images, builds, and more. It is available both as a free community-supported engine and as a commercially supported enterprise engine.
  • Docker client: The user interface through which users interact with Docker, typically the docker command-line tool.
  • Docker daemon: A background process that receives commands from the Docker client and does the actual work of building, running, and managing containers.
  • Docker registry: A registry, of which Docker Hub is the best-known example, is much like GitHub for developers: a place where Docker images are stored, shared, and retrieved.
  • Docker objects: The main components of Docker: Dockerfiles, images, containers, and volumes.
  • Dockerfile: The instructions for building a Docker image are written in a Dockerfile. These instructions describe the application’s requirements and its dependencies.
  • Docker images: Once the instructions are written in a Dockerfile, the docker build command is used to build an image. Each instruction in the Dockerfile is added as a layer in the Docker image; Docker uses a union file system for this.
  • Docker containers: Docker images are read-only; when they run on the Docker Engine they become Docker containers. A container adds a read-write file system on top of the read-only image layers. A network interface is created in the container and an IP address is assigned to it, so the container can communicate over the network. The container can then run in any environment.
  • Docker volumes: Data volumes in Docker sit outside the default union file system and exist as normal directories and files on the host. Persistent container data is stored in data volumes, so it remains intact even if you destroy, update, or rebuild a container.
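A quick sketch of the volume behavior described above (the volume name `app-data` is our choice):

```shell
# Data written to a volume survives the container that wrote it.
docker volume create app-data
docker run --rm -v app-data:/data alpine sh -c 'echo hello > /data/greeting'
# The first container is gone; a brand-new one still sees the data.
docker run --rm -v app-data:/data alpine cat /data/greeting
```

The second `cat` prints the file written by the first container, even though that container was removed.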

To install the Docker Engine:

Docker is available for different platforms. To install Docker on Ubuntu, you can follow these instructions:
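A minimal sketch of two common install routes on Ubuntu, the distribution package and Docker's own convenience script (for production setups, prefer the step-by-step apt repository instructions in Docker's official documentation):

```shell
# Option 1: the docker.io package from Ubuntu's own repositories
sudo apt-get update
sudo apt-get install -y docker.io

# Option 2: Docker's convenience script, which installs the latest Docker Engine
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Verify the installation
sudo docker run hello-world
```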

For installing Docker on other platforms, you can follow the official instructions from Docker itself:

Running a simple application in Docker:

To get going with Docker, you can start by building and running a Flask application in your local environment:
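A minimal sketch of such a Flask app, written inline for brevity (the file names, port, and image tag are our choices, not from the original article):

```shell
mkdir flask-demo && cd flask-demo

# A one-route Flask application
cat > app.py <<'EOF'
from flask import Flask
app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from a Docker container!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
EOF

cat > requirements.txt <<'EOF'
flask
EOF

# The Dockerfile: base image, dependencies, app code, start command
cat > Dockerfile <<'EOF'
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY app.py .
EXPOSE 5000
CMD ["python", "app.py"]
EOF

# Build the image and run it, mapping container port 5000 to the host
docker build -t flask-demo .
docker run -d -p 5000:5000 flask-demo
curl http://localhost:5000
```

The final `curl` returns the greeting from the containerized app; the same image will run unchanged on any machine with Docker installed.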

Some useful Docker commands:

Below are some Docker commands you may find useful:
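A short reference list of everyday commands (replace `<container>` and `<image>` with an actual name or ID):

```shell
docker ps                       # list running containers
docker ps -a                    # list all containers, including stopped ones
docker images                   # list local images
docker pull nginx               # download an image from Docker Hub
docker run -d -p 80:80 nginx    # run a container in the background
docker logs <container>         # view a container's output
docker exec -it <container> sh  # open a shell inside a running container
docker stop <container>         # stop a running container
docker rm <container>           # remove a stopped container
docker rmi <image>              # remove a local image
docker system prune             # clean up unused containers, networks, images
```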

Docker Today

Docker container technology was launched in 2013 as the open source Docker Engine. It was initially limited to the Linux world, using Linux primitives known as cgroups and namespaces to create containers on top of the operating system. Now Docker containers are supported everywhere: Linux, Windows, the data center, the cloud, serverless platforms, and more.

Will containers replace VMs?

No, not anytime soon. Containers do have several advantages over VMs, but both are important and best suited for different purposes. For instance, if you need to run multiple applications that each require their own full operating system, VMs are the better option. But if you need to run many copies of a single application, containers are arguably the better tool. Since microservice architecture means breaking an application down into smaller, standalone services, Docker containers give more flexibility and less deployment overhead than hypervisor-based VMs.

When it comes to management, unlike a small number of VMs, managing a large number of containers can seem quite daunting. But container orchestration technologies have made managing such applications much less difficult.

So, instead of one replacing the other, Docker and VMs are bound to coexist, giving developers and system administrators more options for fast and efficient deployment, running, and management of applications in the cloud.

Container Orchestration Technologies

When there are only a handful of containerized applications, it is not that difficult to manage their deployment, running, and maintenance. But with thousands of containers and services, it becomes difficult to manage them by hand. Orchestration tools automate the deployment, management, scaling, networking, and availability of container-based applications across a cluster of physical or virtual machines.

There are many orchestration technologies available in the market. Some of the popular ones are:

  1. Kubernetes: Originally developed at Google and now the flagship project of the Cloud Native Computing Foundation, backed by tech giants such as Google, Amazon Web Services (AWS), Microsoft, IBM, Intel, Cisco, and Red Hat, Kubernetes has become the de facto standard for container orchestration. Features like autoscaling and easy portability have made Kubernetes popular.
  2. Apache Mesos’s Marathon: Mesos is an open source project developed at the University of California, Berkeley, and adopted by top-notch companies like Twitter, Uber, and PayPal. Mesos’s lightweight interface can scale to more than 10,000 nodes and allows frameworks to run on top of it. Marathon is one such framework, built as a production-grade orchestration tool.
  3. Docker Swarm: Docker Swarm, developed by Docker, is less extensible but simpler than Kubernetes. It is popular among Docker enthusiasts for fast container deployments, and in Docker Enterprise Edition one can get a blend of Swarm and Kubernetes functionality.
  4. Titus: Titus is a container orchestration tool developed by Netflix to optimize its streaming, recommendation, and content systems. It is an open source project built on the foundation of Apache Mesos.

Among these, Kubernetes is my personal favorite due to its tremendous functionality, large open source community, enormous reference resources, and constant support from its active community.

Before wrapping up,

Today, many companies use Docker along with container orchestration technology to build fast, cost-effective, fault-tolerant, and scalable products, and to achieve their goals faster and more efficiently.

Lastly, if you want to learn more about Docker containers and orchestration technology, the following links might be useful:

Video links:

Blogs and articles: