Intro to Docker
Intro to Problems
Before we look at what Docker is and why it was created, we first need to look at the problem that gave it life.
To make it easy to understand, let's take an example. There is a guy named Karan. Karan is a developer and wants to deploy a scalable application on a virtual platform. Karan knows about virtual machines and how they work. So, he deploys a Host OS on the hardware, installs a hypervisor (VMware, Hyper-V, KVM) and deploys a Guest OS on top of it. Once that is done, Karan installs all the application-level dependencies and then deploys his app on the VM.
Everything goes great for 2–3 months. But Karan and his team-mates follow the Agile methodology for their application development life-cycle, and now Karan must deploy a new version of the app which uses Python 3 instead of Python 2. To change that, he must remove all the previous dependencies from his existing environment, install the new requirements, and pray that everything works fine. And while he is doing that, his application may be down, which introduces more expenses.
Introducing Docker
What is Docker?
Docker is a tool designed to make it easier to create, deploy, and run applications by using containers. Containers allow a developer to package up an application with all the parts it needs, such as libraries and other dependencies, and ship it all out as one package.
That sounds good, but it did not make any sense to Karan. So, let's take a deep dive into the subject.
What Docker Is
To answer this let me quote Wikipedia:
“Docker is an open-source project that automates the deployment of software applications inside containers by providing an additional layer of abstraction and automation of OS-level virtualization on Linux.”
To understand that let’s take a look at how traditional virtualization platforms work. So, there are two kinds of Virtualization Platforms:
1. Type-1 (VMware ESX, ESX-I)
2. Type-2 (VMware Workstation, Windows Hyper-V, Linux KVM)
And here is the basic diagram of how they work:

So, traditional virtualization platforms require us to:
1. Get Hardware
2. Deploy a Host OS (Optional)
3. Deploy Hypervisor (Type 1 or Type 2)
4. Deploy Guest OS
5. Deploy dependencies
6. Deploy Application
Now let’s compare them with how Docker works by looking at the following diagram:

Here is how Containers work:
1. Get Hardware
2. Deploy Host OS
3. Deploy Docker Software
4. Deploy your Containers
So, Docker is not a traditional virtualization platform; instead it creates an abstraction layer on top of your Host OS so that all your containers can use the same kernel to do their work without introducing unnecessary overhead.
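The four steps above can be sketched as a short terminal session. This is only an illustration; it assumes Docker is already installed on the Host OS and the daemon is running, and uses hello-world, Docker's tiny official demo image:

```shell
# Step 4 from the list above: with Docker installed on the host,
# deploying a container is a single command. "hello-world" prints
# a greeting and exits.
docker run hello-world

# List running containers
docker ps

# List all containers, including ones that have exited
docker ps -a
```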
Karan saw these diagrams and got excited for a moment. Then he read them and got confused again: what are Images, Containers, and the Docker Engine? "I have an application running on 5 different VMs, so how can I deploy it on Docker?"
So, let’s take all different parts of Docker one by one.
Docker Components
In Docker terminology there are 3 main things:
1. Docker Image
2. Container
3. Docker Engine
1. Docker Image
To define Docker Image here is what official docker site says about it:
“An Image is an ordered collection of root filesystem changes and the corresponding execution parameters for use within a container runtime. An image typically contains a union of layered filesystems stacked on top of each other. An image does not have state and it never changes”
So,
An image is an inert, immutable, file that’s essentially a snapshot of a container.
An image contains all your application code and its dependencies. You declare those dependencies in a Dockerfile. What makes this different from a traditional build is that Docker images store their changes as stacked layers of filesystem changes. So, if your latest image introduces any bugs, you can execute docker history to see how the image was built, layer by layer, and fall back to a more stable version of the same image.
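As a sketch, here is what that Dockerfile-to-layers flow could look like. The names myapp, app.py and requirements.txt are made up for illustration, and docker build and docker history assume a running Docker daemon:

```shell
# A minimal Dockerfile for a Python app (app.py and requirements.txt
# are assumed to exist in the build context)
cat > Dockerfile <<'EOF'
FROM python:3
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY app.py .
CMD ["python", "app.py"]
EOF

# Build the image; each Dockerfile instruction becomes a layer
docker build -t myapp:1.0 .

# Inspect the stacked layers of the image
docker history myapp:1.0
```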
You can go through some basic commands related to Images over here.
2. Docker Containers
According to official site of Docker:
“A Docker container is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings.”
So, a container is a running instance of your image. To put this in VM terminology: if you want to test your build, you deploy your OVF file to create a VM and then start that VM.
In Docker you just execute docker run <image-name> and it will start one instance of your image, which is called a container. So, if an image is a recipe, then a container is the cake.
Here are some regular commands related to containers.
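A few of those regular commands, sketched as a terminal session (the container name web is made up for this example, and a running Docker daemon is assumed):

```shell
# Create and start a new container from an image
docker run -d --name web nginx:latest

# Stop the container, then start the same container again
docker stop web
docker start web

# Remove the container once it is stopped
docker stop web
docker rm web
```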
3. Docker Engine
Docker Engine is the heart of the whole Docker platform. Here is what it does:
- Provides an abstraction layer on top of the Host OS
- Segregates the dependencies of each deployed container
- Manages deployed containers
- Builds custom Docker Images

One other thing to note: if you look at the diagram showing the difference between VMs and Docker, containers started from the same image share that image's layers and libraries. In the VM case you would have to create another VM, which introduces the storage and memory overhead of an extra Guest OS and its dependencies.
In conclusion, your containers run on top of your Docker Engine. But how do you run your services there and expose them to the outside world? How do you assign storage to your containers? Let's take a look at that now.
Port Forwarding
Docker containers are connected to the Internet out of the box: when they are deployed, Docker adds a MASQUERADE rule to the host's iptables NAT table. But if you are going to run a service that listens on a specific port, those details must be specified when running the container.
There are two ways to make your container reachable through a Host OS port.
1. Mention every port you want to expose with EXPOSE <port> in the Dockerfile, and execute docker run with the -P (or --publish-all) flag. Docker will then publish each exposed port on a random high port of the host. More here.
2. Or execute docker run -p <host_ip>:<host_port>:<container_port> to bind a specific host port to a container port.
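Both options can be sketched like this (container names webtest and webtest2 and the 8080 host port are made up for the example; a running Docker daemon is assumed):

```shell
# Option 2: publish container port 80 on host port 8080, so requests to
# http://127.0.0.1:8080 on the host reach nginx inside the container
docker run -d --name webtest -p 127.0.0.1:8080:80 nginx:latest

# Option 1: let Docker pick random host ports for every EXPOSEd port
docker run -d --name webtest2 -P nginx:latest

# Show which host ports were bound for a container
docker port webtest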
Mounting a Volume
If your application writes data to disk, be it logs or user data, you have to mount a volume into your container so that it can store the data there.
You first need to create a volume with docker volume create vol1.
You can list the created volumes with docker volume ls.
Now, if your data is temporary, you can mount a tmpfs (in-memory) filesystem instead, so that when you power off the container the data is deleted.
That can be done by running:
docker run -d \
  -it \
  --name tmptest \
  --mount type=tmpfs,destination=/app \
  nginx:latest
Here --name sets the name of your container, --mount mounts a tmpfs at /app inside the container, and nginx:latest is the image you want to run as a container.
If you want to persist your data without increasing your container's size, create a named volume with the create command first and then mount it by running:
docker run -d \
  --name devtest \
  --mount source=myvol2,target=/app \
  nginx:latest
Here myvol2 is the volume name and /app is where Docker will mount it inside the container.
Registry and Repository
Docker provides the concept of a Registry to store your images. By default, your Docker installation is connected to Docker Hub, a public registry hosted by Docker, where you can find many open-source images. Installing an image from a registry is simple: just execute docker pull <image-name>, and that image will be downloaded to your Docker setup from the registry.
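For example (a sketch assuming a running Docker daemon; alpine is a small official image on Docker Hub):

```shell
# Download an image from the default registry (Docker Hub)
docker pull alpine:latest

# List the images now available locally
docker images
```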
You can also create your own custom registry and make the appropriate changes in config.yaml. After doing that, your Docker setup will point to your custom registry, so that when you make changes to an image and push it, it is stored in your own registry.
A Repository is a collection of images with the same name and different tags. Tags are alphanumeric identifiers that differentiate images sharing a name.
Consider an image named HelloWorld and a registry named myRegistry. Then all your images named HelloWorld will be stored under the myRegistry/HelloWorld repository.
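Pushing to your own registry could look like this sketch. The registry address myregistry.example.com:5000 and the helloworld image name are made up for illustration, and a running Docker daemon is assumed:

```shell
# Tag a local image so its name includes the registry host
docker tag helloworld:1.0 myregistry.example.com:5000/helloworld:1.0

# Push it; it lands in the helloworld repository of that registry
docker push myregistry.example.com:5000/helloworld:1.0

# Anyone pointed at that registry can now pull it by the same name
docker pull myregistry.example.com:5000/helloworld:1.0
```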
Conclusion
In this document we learned about Docker and its components: how Docker uses less space and makes it easy for developers to automate their deployments without worrying about system dependencies. There is more to Docker, such as Docker Swarm, which you can use to make your application more resilient to outages and to remove single points of failure.
This was my first post on Medium. Let me know if you guys have any feedback. And clap if you liked it.
