Docker, Containerization and Your Imaginary Friend Who Helps You Understand What They Are
To understand the purpose of Docker, one needs to understand what containerization is. And to understand what containerization is, one needs to understand the problem it solves, which is actually something everyone can easily relate to.
Let’s take a look at this particular problem through a real-world example:
Imagine that there’s a popular new game and you have a friend who downloads, installs and runs that game to start playing it on their own computer. Your friend does not have any issues with installing and running this game.
Since your friend enjoys playing this game, they recommend it to you and send you a link so you can (legally) download it. However, when you try to install the same game, you run into all kinds of errors. After some troubleshooting, you find out that you first have to install another program before the game will install. You follow the instructions and the installation succeeds, but the game still doesn’t run. Through some googling, you discover that you have to edit a particular configuration file to be able to run the game. So you do exactly that. Now you don’t get the same errors as before, but the game still won’t run, and this time you don’t even know why. Or, alternatively, it does run, but only after you’ve spent hours getting it to work.
In short, this problem can be summarized as “It works on your computer, why doesn’t it work on mine???” and probably all of us have experienced it at some point in our lives.
Now let’s think of an impractical solution to this problem so we can better understand and appreciate the practical solution that is containerization.
Imagine that your friend did not send you a link for you to download the game because your friend is a bit weird but also very kind and wants to be 100% sure that you are able to play this game. So your friend goes out, buys a new laptop, installs the game on that new laptop, makes sure that the game runs on it and then brings it to your house and gifts the laptop to you so you can play the game for sure. Now this is an awesome solution to the problem mentioned above because not only are you certain that you will be able to run this game on this new laptop, but you also have a new laptop!
However… this is obviously not a practical solution to our problem. Despite all the imagination exercises we have been doing, I cannot even begin to think of the implications of having to receive a new computer for every piece of software that we want to run successfully…
This is where containerization steps in. Containerization is a practical solution to the very common problem of applications failing to run on certain computers for whatever reason. It ensures that a piece of software runs successfully no matter where it is run. Instead of getting a new computer for each new application just to be 100% sure it will run, you run a container on your own computer which contains not only the application, but also all the libraries, dependencies and configurations that the application requires to run.
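To make this concrete, here is a minimal sketch of a Dockerfile, the recipe Docker uses to build such a container. It assumes, purely for illustration, that the game is a Python application whose dependencies are listed in a requirements.txt file; the file names here are hypothetical, not from any real project.

```dockerfile
# Start from a base image with a known Python version, so the container
# does not depend on whatever Python (if any) the host machine has.
FROM python:3.12-slim

# All subsequent commands run inside this directory in the container.
WORKDIR /app

# Install the exact library versions the application needs,
# inside the container's own file system.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy in the application code and any configuration files it expects.
COPY . .

# The command that runs when the container starts.
CMD ["python", "game.py"]
```

Anyone with Docker installed could then build and run the very same container with `docker build -t game .` followed by `docker run game`, without installing Python or any of the game’s libraries on their own machine.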
But don’t get the wrong idea here. Containers are not computers within your computer; they do not have a full operating system of their own. In other words, containers are not virtual machines. Running a separate virtual machine, each with its own guest operating system, for every application would be costly in terms of CPU, memory and start-up time. Instead of having their own operating systems, all containers share the kernel of their host machine’s (your computer’s) operating system, and what makes containers great is that this kernel is the only thing they share with other containers and with the host machine.
This is where Docker steps in. Docker is the tooling that builds and runs containers, allowing each container to use the kernel of the host machine’s operating system while remaining isolated. This way, containers get the benefits of a virtual machine without the costs associated with being a VM.
Thanks to the Docker Engine, containers are able to use the system resources (CPU, disk, network, etc.) of their host machine while being isolated from other containers and from the host machine itself. Isolation here means that the system resources allocated to a particular container are segregated from those of the host machine and of other containers. For example, if an application in a container depends on a certain library, and that library exists on the host machine but not in the container’s file system, the container will not be able to run the application.
Using fewer compute resources is the benefit of choosing containerization over virtualization to achieve isolation, but the isolation itself brings a separate set of benefits. The most important of these is portability, which is exactly what prevents the problem we had when trying to play the game our imaginary friend recommended on our own computer. Other benefits of isolated containers include security, efficiency, fault isolation, ease of management and agility.
To learn more about containerization, Docker and related concepts and technologies, check out this in-depth article from IBM and the incredible Udemy course by Stephen Grider, both of which I used as a reference point while writing this post.