Containerization, Docker, Docker Compose: Step-by-step Guide

Sciforce · Published in Sciforce · Jul 23, 2021 · 9 min read

Docker simplifies developing, testing, and deploying software. Moreover, Docker has become synonymous with containerization today, gaining wide popularity. But what if you are new to this technology and are taking baby steps? Then read on for our close-up on containers, Docker vs. VMs, and Docker Compose.

Containerization explained

Before we delve into the details of Docker and Docker Compose, let us define the principal idea of containerization. Skip ahead to the following sections without further ado if you are eager to get to Docker's nuts and bolts immediately. To the rest of the readers: "Welcome on board, cabin boys and girls!"

Container technology today is mainly associated with Docker, which also helped to accelerate the overall trend of cloud-native development. Meanwhile, the technology is rooted in the 1970s, as Rani Osnat tells in his blog on the container's brief history. Containerization, in essence, packages up code with all its dependencies so that it can run on any infrastructure. It often accompanies virtualization or stands as an alternative to it.

How did we develop software before containers?

Usually, a team of developers working on an application has to install all the needed services directly on their machines. For instance, for some JS application, you would need PostgreSQL v9.3 and Redis v5.0 for messaging. That also means that every developer and tester on your team would have to install these services too.

The installation process differs per OS environment, and there are many steps where something could go wrong. Moreover, the overall process could be even trickier depending on the application’s complexity.

How do containers enhance the deployment process?

To better grasp the idea, let's describe the typical workflow before containers. Development teams produce artifacts accompanied by instructions: you'd have a jar file (or something like it) with a list of configuration instructions, and a database file with a similar list of instructions for server configuration. The development team would hand these files to the operations team, who would set up the environment and deploy the applications.

In this case, the operations team would have to install everything directly on their OS, which could lead to dependency version conflicts. Moreover, misunderstandings between the teams may arise, since all the instructions are in textual form. For example, developers could miss some crucial configuration points, or the operations team could misinterpret something. Consequently, this leads to back-and-forth communication between the teams.

With containers, these processes are genuinely simplified, and the development and operations teams can be on the same page. No environment configuration is needed on the server apart from the Docker runtime, and setting that up is a one-time effort. After that, you just run a docker command to pull the application image from the repository and start it.
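
For example, pulling a service image from Docker Hub and starting it takes just two commands. A minimal sketch (the image tag, container name, password, and port are illustrative):

    docker pull postgres:9.3
    docker run -d --name my-postgres -e POSTGRES_PASSWORD=secret -p 5432:5432 postgres:9.3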

So you no longer need to hunt for packages, libraries, and other software components for your machine: all you need is to pull the specific image to your local machine and start working with the container.

The command is the same regardless of the OS you are using. For example, if you need to build some JS application, you download and run only the containers required for that application, and that is it.
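
For instance, starting the Redis service mentioned earlier is the same single command on Windows, macOS, or Linux (the version tag is illustrative):

    docker run -d -p 6379:6379 redis:5.0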

What is a container technically?

There are two distinct technical terms: Docker image and Docker container. An image is the actual package (configuration + PostgreSQL v9.3 + start script), i.e., an artifact that can be moved around.

An image consists of layers stacked on top of each other. At the bottom, most images use a Linux base image, because it is lightweight; on top of the base image sits the application image.

When you download an image and run it on your machine, Docker creates a container environment and starts the application. The application is not running "in the Docker image": the container is the thing actually running on your machine.
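
A quick way to see this distinction on your own machine, using Redis as an illustrative example:

    docker pull redis:5.0    # downloads the image: a static, movable artifact
    docker run -d redis:5.0  # creates and starts a container from that image
    docker images            # lists the images stored locally
    docker ps                # lists the containers actually running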

Docker Explained

Docker is a containerization platform that packages applications into containers, combining source code with all the operating-system libraries and dependencies they need. With containers, it is possible to run code in any environment. Docker, in essence, provides the toolkit for developers to build, deploy, and run containers using basic commands and automation, which makes containers easy to manage. Docker Inc. provides both an enterprise edition (Docker EE) and an open-source project.

What is the difference between container and image?

A container is a running environment for an image. Application images (PostgreSQL, Redis, MongoDB) may need a file system, log files, or environment configuration, and the container provides all of this. The container also exposes a port that lets you talk to the application running inside it.

What is Docker Hub?
Docker Hub is a public registry that contains images only. Every image on Docker Hub comes in different versions, selected by tags (if you have no version constraints, the latest tag is used by default).
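
For example, you pick a version by appending a tag to the image name (the tags shown are illustrative):

    docker pull mongo        # no tag given: pulls mongo:latest by default
    docker pull mongo:4.2    # pulls a specific version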

Also check out the list of basic Docker commands.

Docker’s Pros and Cons

People often use the terms "Docker" and "container" interchangeably nowadays, so it is hard to pin down the cons, but we have tried to provide a well-balanced list. Let's start with the benefits:

Efficient usage of resources

Docker containers isolate apps not only from each other but also from the underlying OS. This way, you can dictate how each containerized app uses system resources like GPU, CPU, and memory. It also helps to keep the software stack cleaner and to keep code and data separate. Compared to virtual machines (VMs), containers are very lightweight and flexible. And since each process can run in its own container, you can easily update or repair containers when needed.
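
As a sketch of dictating resource usage, Docker lets you cap CPU and memory per container at startup (the limits and image are illustrative):

    docker run -d --cpus="1.0" --memory="512m" redis:5.0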

Moreover, when there is a need to optimize Docker images for the best performance, it can be done with little effort. Check out how we reduced the size of Docker images by over 50% in our blog post Strategies of docker images optimization.

Improved portability

With Docker, you don't have to take care of machine-specific configuration, since applications are not tied to the host OS; both the application and the host environment stay clean and minimal. Of course, containers are built for specific platforms (a container for Windows won't launch on macOS and vice versa), but there is a solution for this case. It is called a manifest and is still in its experimental phase. In essence, it packs images for multiple OSs into one image, so that Docker can be both a cross-environment and cross-platform solution.
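
You can inspect such a multi-OS image yourself; for example, the manifest of an official image lists the OS/architecture variants packed under a single name (depending on your Docker version, you may need to enable experimental CLI features first):

    docker manifest inspect ubuntu:latest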

Enhanced microservices model

In general, software consists of multiple components grouped into a stack: a database, an in-memory cache, a web server. With containers, you can manage these pieces as one functional unit with changeable parts. Each part runs in its own container, so you can easily update, change, or modify any of them. Check out our evergreen blog Microservices: how to stay smart and avoid trendy words for more details.

Since containers are lightweight and portable, you can quickly build and maintain a microservice-based architecture. You can even reuse existing images as bases (i.e., templates) to make new images, as the sketch below shows.
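
A minimal sketch of such reuse is a Dockerfile that builds a new image on top of an existing one (the base image and commands are illustrative for a Node.js service):

    FROM node:14-alpine    # reuse an existing image as the template
    WORKDIR /app
    COPY . .
    RUN npm install        # bake the dependencies into the new image
    CMD ["npm", "start"]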

Orchestration and scaling

Since containers are lightweight and use resources efficiently, it is possible to launch lots of them. You can also tailor the fleet to your needs and the resources available using third-party projects like Kubernetes, which, in essence, provides automatic orchestration and scaling of containers, acting as a scheduler. Docker also provides its own system, Swarm mode, although Kubernetes is the de facto leader. Both solutions (Swarm and Kubernetes) are bundled with Docker Enterprise Edition.
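
As a sketch of scaling under Docker's built-in Swarm mode (the service name and replica counts are illustrative):

    docker swarm init
    docker service create --name web --replicas 3 -p 80:80 nginx
    docker service scale web=5    # scale the service from 3 to 5 containers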

However, Docker is not a silver bullet for all your needs. Thus, consider the following:

Docker is not a VM

Docker and VMs use different virtualization mechanisms: Docker virtualizes only the application layer, while a VM virtualizes the complete OS, both the application layer and the OS kernel. A VM brings its own guest OS, while Docker containers run on the host OS. That is why Docker is smaller and faster than a VM. But Docker is not as universally compatible as a VM, which you can run on any host OS.

Before installing Docker, check whether your OS can host Docker natively. If it cannot, you need to install Docker Toolbox, which creates a bridge between your OS and Docker and enables it to run on your computer.

Docker images are immutable, and containers are not persistent

A Docker image is immutable by default: once you have created it, you cannot change it. Containers, in turn, are not persistent: when you restart a container, you won't have any of the stateful information associated with the old one. This differs from VMs, which persist state across sessions by default since they have their own file system. Containers' statelessness thus makes developers keep the application's data separate from its code.
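
A common way around this statelessness is a named volume, which keeps data outside the container. A minimal sketch for PostgreSQL (the names and password are illustrative):

    docker volume create pgdata
    docker run -d --name db -e POSTGRES_PASSWORD=secret -v pgdata:/var/lib/postgresql/data postgres:9.3

Removing and recreating the db container now leaves the data in the pgdata volume intact.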

Docker installation

The installation differs not only per OS but also per version of the specific OS, so we strongly recommend checking the prerequisites before installing. For Mac and Windows, certain OS and hardware criteria have to be met to support running Docker; for example, on Windows it runs natively only on Windows 10. For Linux, the process differs per distribution.

As we have mentioned, if your OS cannot run Docker natively, you need to install Docker Toolbox, which creates a bridge between your OS and Docker and enables it to run on your computer.

By installing Docker, you get the whole package: Docker Engine, the tool necessary to run Docker containers on your laptop; the Docker CLI client, which lets you execute Docker commands; and Docker Compose, a technology that helps you orchestrate multiple containers, which we cover further below.

For Mac users, it is crucial to know that if you have multiple accounts on your laptop, you could experience errors when running Docker under several accounts. Do not forget to quit Docker in one account before switching to another account that also uses Docker.

Windows users should have virtualization enabled when installing Docker on Windows. Virtualization is always enabled by default unless you have disabled it manually.

Download the Docker installer and follow the wizard to set up Docker. After installation, you need to start Docker manually, since it won't start automatically.
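
To verify the installation, check the version and run the tiny hello-world test image:

    docker --version
    docker run hello-world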

What is Docker Compose?

Docker Compose, in essence, is a superstructure on top of Docker. Plain Docker Engine is fine for a few containers, but managing lots of them that way is impractical. That is where Docker Compose (which is also automatically installed as part of the complete package) comes in handy: it is an orchestration and scheduling tool for managing the application's architecture.

Docker Compose uses a YAML file specifying the services that make up the application and can run and deploy them with one command. With Docker Compose, you can define persistent volumes for storage, configure service dependencies, and specify base nodes.
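
As a minimal sketch, the PostgreSQL and Redis services from the earlier example could be declared in a single docker-compose.yml like this (the service names, versions, ports, and password are illustrative):

    version: "3"
    services:
      app:
        build: .              # build the JS application from a local Dockerfile
        ports:
          - "3000:3000"
        depends_on:           # service dependencies
          - db
          - cache
      db:
        image: postgres:9.3
        environment:
          POSTGRES_PASSWORD: secret
        volumes:
          - pgdata:/var/lib/postgresql/data    # persistent volume for storage
      cache:
        image: redis:5.0
    volumes:
      pgdata:

A single docker-compose up -d then builds and starts all three services together.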

Bottom Line

You did it! A journey of a thousand miles begins with a single step, and here is your first one. Docker will help you develop software simply and efficiently, and with Docker Compose you will orchestrate containers. Moreover, a robust community stands behind this technology, so you won't be alone on your journey of mastering the best DevOps practices.

Sciforce

Ukraine-based IT company specialized in the development of software solutions based on science-driven information technologies #AI #ML #IoT #NLP #Healthcare #DevOps