Docker Tutorial: Containers, VMs, and Docker for Beginners

Level Up Education
11 min read · Jul 12, 2018


Original article posted here: https://www.level-up.one/docker-tutorial-containers-vms/

In 1956, an American trucking businessman named Malcom McLean created the first metal shipping container. It was an amazing idea that revolutionized international trade. But even Malcom wouldn’t have thought that the concept would one day be applied to modern operating systems. Oh, and Malcom certainly wouldn’t have imagined there would be a Docker tutorial as well!

Consider the ship as an application or system, and the containers as software packages, database servers, and so on. The containers are individual units, but they sit side by side for easy communication and uniform load management. This is just an analogy to help you visualize things. Have you heard about Docker? It is a tool that helps in packing, shipping, and running applications, all within the container itself. Even IT giants like Google, VMware, and Amazon are building services that support Docker. So it is important to get the basic concepts clear, and this Docker tutorial is meant for exactly that.

So let us begin the Docker tutorial. Are you ready?

What do “containers” and “VMs” mean in terms of Docker?

We will discuss containers and VMs individually, but first of all, why do they exist? Both have a similar goal: to isolate an application and all its dependencies into a self-contained unit that can run anywhere. Further, containers and VMs remove the need for dedicated physical hardware, so cost and maintenance drop significantly, computing resources are used more efficiently, and energy consumption goes down.

Now you must be wondering: if their objectives are so similar, what is the difference between containers and VMs? The main difference is in their architectural approach. So let us discuss them individually and understand this difference.

Virtual Machines

A VM, or virtual machine, is simply a virtual computer. It executes programs just like a real computer. It runs on top of a physical machine with the help of a hypervisor. The VM itself is virtual, but the hypervisor is tied to the physical machine: it runs either on a host machine’s operating system or directly on bare metal (a server with no operating system installed on it).

It is time to open the virtual machine now!

First of all, what is a hypervisor? It is a piece of software, firmware, or hardware that creates and runs virtual machines. The virtual machine runs on top of the hypervisor, and the hypervisor runs on the physical computer, known as the host machine. The host machine provides all the resources, such as RAM and CPU.

These resources are divided between the virtual machines as required, and the distribution is dynamic. For example, if one virtual machine is running a big application that needs more resources, it will be given more than the other virtual machines, even when they are all running on the same host machine.

The virtual machine running on the host machine is also known as a guest machine. It contains the application as well as everything else the application needs to run, such as libraries. The guest machine also has a virtualized hardware stack of its own, with storage, CPU, network adapters, and so on. This means it has its own guest operating system too. From within, the guest machine appears to be a self-contained unit; from outside, it is a virtual machine sharing resources provided by the host. As already mentioned, a guest machine runs on either a hosted hypervisor or a bare-metal hypervisor.

So what is the difference between them?

A hosted hypervisor runs on the operating system of the host machine itself. The virtual machine doesn’t have direct access to the hardware and has to go through the host operating system. The advantage of a hosted hypervisor is that the underlying hardware matters less, because the host operating system is responsible for the hardware drivers.

The hypervisor itself isn’t responsible for them. Overall, this fosters hardware compatibility. On the other hand, the extra layer between the hardware and the hypervisor creates more resource overhead, which hurts the VM’s performance. So there is an advantage and there is a disadvantage.

Now, let us move on to the bare-metal hypervisor.

Here, the performance issue is tackled by installing and running the hypervisor directly on the host machine’s hardware. How? It interfaces directly with the hardware, so there is no need for a host operating system; the hypervisor is the first thing installed on the host machine’s server. Unlike a hosted hypervisor, a bare-metal hypervisor has its own device drivers.

Each component interacts directly with the hardware for I/O, processing, and so on. This approach gives better performance than a hosted hypervisor, and it enhances scalability and stability. The disadvantage is limited hardware compatibility: the hypervisor can only have so many device drivers built into it.

But why do we need an additional hypervisor layer between the virtual machine and the host machine?

The virtual machine has its own operating system, right? So here, the hypervisor plays the important role of providing a platform for managing and executing the guest operating systems.

It allows the host computer to share its resources among the virtual machines running as guests. Every new virtual machine literally packages its own virtual hardware, kernel, and userspace.

Containers

As discussed, a virtual machine provides hardware virtualization, while a container provides operating-system-level virtualization by abstracting the userspace. How? We will see that pretty soon!

Containers and virtual machines look similar. Both provide a private processing space, both can execute commands as root, and both have a private network interface and IP address. There are many such similarities.

But what is the difference then? The major difference is that containers share the host system’s kernel with other containers, whereas each virtual machine has a kernel of its own.

In other words, containers include only the user space, not the kernel or virtual hardware. A virtual machine, on the other hand, includes a kernel as well. Each container gets its own isolated user space, which allows multiple containers to run on a single host machine. All the operating-system-level architecture is shared across containers; only the bins and libs are created from scratch. This is what makes containers lightweight.
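
You can see this kernel sharing for yourself. A minimal sketch, assuming Docker is installed and the public alpine image can be pulled:

uname -r # kernel release on the host
docker run --rm alpine uname -r # the same kernel release, printed from inside a container

Both commands print the same kernel version, because the container is just an isolated set of processes running on the host’s kernel.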

Docker Basics

It is time to understand the Docker basics part by part. The Docker architecture is quite simple.

The Docker Engine

Docker runs on top of the Docker Engine. It is a lightweight runtime that manages the containers, images, builds, and more. The Docker Engine runs on Linux systems and is made up of a Docker Daemon (which runs on the host computer), a Docker Client (which communicates with the Docker Daemon to execute commands), and a REST API (for interacting with the Docker Daemon remotely).

The Docker Client is what you communicate with as an end-user of Docker. Think of it as the UI for Docker.

For example, when you execute the command below, you are actually communicating with the Docker Client, which then passes the instruction on to the Docker Daemon. Here is the command:

docker build -t iampeekay/someimage .

(The -t flag tags the resulting image; the trailing dot tells Docker to use the current directory as the build context. Note that image names must be lowercase.)

The Docker Daemon

The Docker Daemon executes the commands sent to it by the Docker Client. All the tasks like building, running, and distributing containers are done by the Docker Daemon. Where does the Docker Daemon run? It runs on the host machine, but as a user you won’t communicate with the Daemon directly. So like we usually say, “It is good to stay away from Daemons!”. Docker takes it seriously.
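
You can see this client/daemon split for yourself: the docker version command reports the two as separate components.

docker version # prints a “Client” section and a “Server” section (the Server is the daemon)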

The Dockerfile

The instructions for building a Docker image are written in a Dockerfile. The instructions can be anything, such as:

RUN apt-get -y install some-package: For installing a software package

EXPOSE 8000: For exposing a port

ENV ANT_HOME /usr/local/apache-ant: For passing an environment variable

The above are just a few examples. Once you have set up the Dockerfile, use the docker build command to build an image from it. Here is a minimal sketch of a Dockerfile that uses the instructions above (the base image, the package, and the final command are illustrative). Please have a look:
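
# A minimal sketch of a Dockerfile; names and versions are illustrative
FROM ubuntu:18.04 # start from a base image
RUN apt-get update && apt-get -y install openjdk-8-jre # install a software package
ENV ANT_HOME /usr/local/apache-ant # pass an environment variable
EXPOSE 8000 # expose a port
CMD ["java", "-version"] # command to run when the container starts

Saved as Dockerfile, this could be built with docker build -t iampeekay/someimage . as shown earlier.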

The Docker Image

Images are nothing but read-only templates. They are built from the instructions written down in a Dockerfile. A Docker image includes the application, its dependencies, and the processes to run when the application launches.

Each and every instruction in the Dockerfile adds a new layer to the image. Each layer represents a portion of the image’s file system that adds to or replaces the layer below it. Layers are key to Docker’s lightweight yet powerful structure. To achieve this, Docker uses a Union File System.
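
You can inspect these layers for any image you have pulled; each row of the output corresponds to a Dockerfile instruction (ubuntu:18.04 here is just an example):

docker pull ubuntu:18.04
docker history ubuntu:18.04 # one row per layer, with the instruction that created it and its size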

The Union File Systems

Docker uses union file systems to build an image. A union file system is a sort of stackable file system: files and directories of different file systems can be transparently overlaid to form a single file system. The contents of directories that share the same path are treated as a single merged directory. This avoids the need to create separate copies of each layer.

Layered systems have two big benefits. First, they are duplication-free: thanks to layers, Docker doesn’t duplicate a complete set of files every time an image is used to create a new container. Second, layers offer precise segregation, so making changes is faster: Docker only propagates the updates to the layer that changed.

Volumes In Docker

Volumes are the “data” part of a container; they are what let you persist and share a container’s data. Data volumes sit outside the default Union File System and exist as normal directories and files on the host. The interesting part is that data volumes remain intact even if you destroy, update, or rebuild a container. To update a volume, you make changes to it directly.
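
Here is a minimal sketch of that persistence (the volume name app-data is illustrative):

docker volume create app-data
docker run --rm -v app-data:/data alpine sh -c 'echo hello > /data/greeting'
docker run --rm -v app-data:/data alpine cat /data/greeting # prints "hello"

The first container is destroyed as soon as it exits (--rm), yet the second container still finds the file, because the data lives in the volume rather than in the container’s own file system.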

The Docker Containers

Docker containers wrap up an application’s software into an invisible box that includes everything the application needs to run: the code, runtime, system tools, system libraries, and so on. Docker containers are built from Docker images. Since images are read-only, Docker adds a read-write file system over the image’s read-only file system; together, they make up a container.

After creating the container, Docker creates a network interface so that the container can talk to the local host, and assigns the container an IP address. Once created, the container can run in any environment without any changes.
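
For example, you can start a container and then ask Docker which IP address it was assigned (nginx is just an illustrative image; the address will vary):

docker run -d --name web nginx
docker inspect -f '{{.NetworkSettings.IPAddress}}' web # e.g. 172.17.0.2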


Why Docker?

[Image: the Docker whale. Source: https://www.docker.com/docker-birthday]

In the middle of this Docker tutorial, I wanted to ask you: isn’t the Docker whale cute? Most developers love the Docker whale, and the technology. Ok, let’s get back to work.

Docker is an open-source project based on Linux containers. It uses Linux kernel features, such as namespaces and control groups, to create containers. But are containers new? No, Google has been using its own container technology for years! There are other container technologies too, like Solaris Zones and LXC.

These container technologies were already around before Docker came into existence. So why Docker? What difference did it make? Why is it on the rise? Ok, I will tell you why!

Number 1: Docker offers ease of use

Taking advantage of containers wasn’t an easy task with the earlier technologies. Docker has made it easy for everyone: developers, system admins, architects, and more. It is easy to build and test portable applications: anyone can package an application on their laptop and then run it unmodified on any public cloud, private cloud, or bare metal. The slogan is, “build once, run anywhere”!

Number 2: Docker offers speed

Being lightweight, containers are fast, and they also consume fewer resources. You can run a Docker container in seconds. Virtual machines, on the other hand, usually take longer because they have to boot a complete virtual operating system every time!

Number 3: The Docker Hub

Docker offers an ecosystem known as Docker Hub. You can consider it an app store for Docker images. It contains many ready-to-use public images created by the community, and you can easily search for images that fit your requirements.
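
For example, finding and downloading a ready-made image takes just two commands (redis here is only an example):

docker search redis # list public images matching "redis"
docker pull redis # download the official redis image from Docker Hub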

Number 4: Docker gives modularity and scalability

Docker gives you the freedom to break an application’s functionality down into individual containers. It is easy to link containers together to create your application, and to scale and update the components independently in the future.
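
As a sketch, a two-container application could be wired together like this (my-web-app is a hypothetical image; postgres is the official database image):

docker network create app-net
docker run -d --network app-net --name db -e POSTGRES_PASSWORD=secret postgres
docker run -d --network app-net --name web my-web-app # "web" can now reach "db" by name

Because each piece runs in its own container, the web tier and the database can be updated or scaled independently.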

Docker Tutorial Conclusion

A lot of people ask me, “Will Docker eat up virtual machines?” I don’t think so! Docker is gaining a lot of momentum, but this won’t kill virtual machines, because virtual machines are still better than Docker under certain circumstances. For example, if there is a requirement to run multiple applications on multiple servers, virtual machines are the better choice. On the other hand, if there is a requirement to run multiple copies of a single application, Docker is the better choice.

Docker containers can also create a security problem: because containers share the same kernel, the barriers between containers are quite thin. But I do believe that security and management improve with experience and exposure. Docker certainly has a great future! I hope that this Docker tutorial has helped you understand the basics of containers, VMs, and Docker. But Docker in itself is an ocean; it isn’t possible to study it all in just one article. For an in-depth study of Docker, I recommend this Docker course.

Author : James Lee

James Lee is a passionate software wizard working at one of the top Silicon Valley-based startups specializing in big data analysis. In the past, he has worked at big companies such as Google and Amazon. In his day job, he works with big data technologies such as Cassandra and Elasticsearch, and he is an absolute Docker technology geek and IntelliJ IDEA lover with a strong focus on efficiency and simplicity.
