From Zero to Docker for Operations Teams
Introduction
Docker is a platform for developing, shipping and running applications using container virtualization technology. The Docker Platform consists of multiple tools.

A History Lesson
Before proceeding to the new terms and technologies, let's look at how applications were traditionally developed and deployed.

The problems in the past were slow deployment times, difficulty in scaling, huge infrastructure costs, unused resources, difficult migrations, vendor dependency, and so on.
These problems were addressed by a technology called "Hypervisor-Based Virtualization". On one physical server, we can create multiple operating systems and run applications in them; each application needs a VM to run its binaries.

Benefits of Hypervisor
• One physical machine is divided into multiple virtual machines, making effective use of system resources like memory, CPU, disk, and network.
• Easier to scale
• The new era in technology has given us even more flexibility with virtual machines in the cloud:
• AWS (Amazon Web Services), Google Cloud, Azure, etc.
• Rapid elasticity
• Pay-as-you-go model
• Better resource pooling

But even so, VMs have certain limitations: every VM still requires CPU, memory, and disk space (i.e., storage) to run its operating system. The more VMs you run, the more system resources you need, and application portability is still a big question.
Introducing Containers
Container-based virtualization uses the host operating system's kernel to run multiple guest instances. Here the guest instances are called containers. Each container has its own root filesystem, processes, memory, and network ports.
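As a quick illustration (a small sketch, assuming Docker is already installed and using the public alpine image as an example), you can see that a container gets its own process list and its own root filesystem, separate from the host:

# A throwaway container only sees its own processes
$ docker run --rm alpine ps

# ...and its own root filesystem
$ docker run --rm alpine ls /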

Containers vs VMs
• Containers are more lightweight
• No need to install a dedicated guest OS; no virtualization is required as in the case of a VM
• Stop/start time is quite fast (see the quick demo below)
• Less CPU, RAM, and storage space required
• More containers per machine than VMs
• Greater portability
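A quick way to feel the start/stop speed difference (again assuming Docker is installed and using the small public alpine image):

# Starting a container typically takes about a second or less,
# versus booting a full guest OS in a VM
$ time docker run --rm alpine echo "hello from a container"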
Docker Engine
Docker Engine is a client-server application with these major components:
- A server, which is a type of long-running program called a daemon process (the dockerd command)
- A REST API, which specifies interfaces that programs can use to talk to the daemon and instruct it what to do
- A command line interface (CLI) client (the docker command)

The CLI uses the Docker REST API to control or interact with the Docker daemon through scripting or direct CLI commands. Many other Docker applications use the underlying API and CLI.
The daemon creates and manages Docker objects, such as images, containers, networks, and volumes.
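For example (assuming Docker is installed and the daemon is running locally), you can see both pieces in action:

# The docker CLI (client) reports its own version and the daemon's
$ docker version

# The same information is available from the daemon's REST API,
# exposed locally on a Unix socket
$ curl --unix-socket /var/run/docker.sock http://localhost/version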
Docker architecture
Docker uses a client-server architecture. The client takes user inputs and sends them to the daemon; the daemon builds, runs, and distributes containers. The client and daemon can run on the same host or on different hosts.
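For instance, the client can be pointed at a daemon on another machine. The host and user names below (ops@remote-docker-host) are just placeholders:

# Talk to the local daemon (default behaviour)
$ docker ps

# Talk to a daemon on a different host over SSH
$ docker -H ssh://ops@remote-docker-host ps

# Or set it once via an environment variable
$ export DOCKER_HOST=ssh://ops@remote-docker-host
$ docker ps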

Images and Containers
Docker Image
- A read-only template used to create containers
- Built by you or by other Docker users
- Stored in Docker Hub or in your local registry
Docker Container
- An isolated application platform
- Contains everything needed to run your application
- Based on images (see the example below)
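Putting the two together (a minimal sketch using the public nginx image from Docker Hub as an example):

# Pull a read-only image from Docker Hub
$ docker pull nginx:latest

# Create and start a container based on that image
$ docker run -d --name web -p 8080:80 nginx:latest

# List running containers and local images
$ docker ps
$ docker images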
Registry and Repository
A registry is where we store images. A registry can be private or public (for example, Docker Hub). Repositories live inside a registry.
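The full name of an image therefore looks like registry/repository:tag. For example (registry.example.com and ops-team are placeholders for your own private registry and repository):

# Re-tag a local image for a private registry
$ docker tag nginx:latest registry.example.com/ops-team/nginx:1.0

# Push it to that registry (assumes you have already run "docker login registry.example.com")
$ docker push registry.example.com/ops-team/nginx:1.0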
Docker Installation
Here are the easy steps to install Docker CE on an Ubuntu machine.
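One quick way is Docker's convenience install script (fine for test machines; for production servers, follow the apt repository instructions in the official documentation linked in the References):

# Download and run Docker's convenience install script
$ curl -fsSL https://get.docker.com -o get-docker.sh
$ sudo sh get-docker.sh

# Optionally allow your user to run docker without sudo (re-login required)
$ sudo usermod -aG docker $USER

# Verify the installation
$ sudo docker run hello-world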
Conclusion
This blog gives beginners an overview of Docker. It explains why we need containers for our applications and shows how Docker is becoming an industry standard. If you decide to start implementing Docker, you will need to study further documentation on images, containers, building images, Dockerfiles, Docker Compose, volumes, container networking, and Docker in continuous integration.
References
Docker documentation: https://docs.docker.com/
“Indmax is an IT Consulting and Services firm providing End-to-End IT Support unified support across Web, App, Data, Network and Security Layers. We have demonstrated experience in managing the complete tech stack deployment in DC, Co-Lo, Private, Public and Hybrid cloud for reputable clients over the last 11 years of our existence with a proven record of managing 7K+ Virtual and Physical devices. Notably, we have pioneered and practiced Infrastructure automation, Continuous Integration and Deployment for businesses across multiple domains”
