Do we really need Docker in the future?

Samir
Thinkport Technology Blog
7 min read · Feb 21, 2022

My name is Samir Hamiani, I am 28 years old and a Senior Cloud Engineer at Thinkport in Frankfurt. I love watching anime, playing football, and questioning everything from time to time.

These days, it’s almost impossible not to have heard of Docker. Docker is already almost an integral part of my daily work. Honestly, are there any non-containerized applications being developed these days? For those who have managed to get through life without Docker so far, I’ll try to explain in a few sentences what this is all about.
Docker is open-source software that uses various Linux kernel features to isolate and manage services in containers via the Docker daemon. Each container runs a service, such as a web server, independently of other containers and services. The big advantage of this software is reproducibility: with Docker images, you can reproduce a software installation exactly. Okay, so what’s so cool about that? A few key benefits of using Docker:

  1. Modularity: Docker’s approach to containerization lets you take a single part of an application out of service for repair or upgrade without taking down the entire application.
  2. Layers and image version control: each Docker image consists of a number of layers that are combined into a single image. Every instruction in a Dockerfile, such as RUN or COPY, creates a new layer (see the Dockerfile sketch after this list). Docker reuses these layers when building new images, which speeds up development tremendously. Intermediate changes are shared between images, further improving speed, size, and efficiency. Version control comes built into layering: with each new layer, you effectively get a change log and thus full control over your container images.
  3. Rollback: probably the best thing about layering is rollback, i.e. reverting to a previous version. Because every image is just a stack of layers, rolling back to the last working version after a failed deployment is a simple matter.
  4. Faster deployment: provisioning new hardware and getting it up and running used to take days and involve a huge amount of effort. With Docker-based containers, deployment can be reduced to seconds. By creating a container for each process, you can quickly share similar processes with new apps. And because adding or moving a container doesn’t require booting an operating system, deployment times stay short.
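
To make the layering and caching idea concrete, here is a minimal Dockerfile sketch; the base image, file names, and start command are just illustrative assumptions, not from any particular project:

```dockerfile
# Every instruction below produces its own image layer.
FROM node:16-alpine        # base layer, shared by all images built on it

WORKDIR /app

# Copy the dependency manifest first, so the expensive install layer
# is reused from the build cache as long as package.json is unchanged.
COPY package.json .
RUN npm install

# Application source changes often, so it goes into a late layer:
# a source change only invalidates the layers from here on down.
COPY . .

CMD ["node", "server.js"]
```

Tagged builds also give you the rollback described above: if a hypothetical `my-app:1.1` misbehaves, you simply deploy `my-app:1.0` again.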

Another reason why Docker is used in almost every project is probably its smooth integration with Kubernetes, which simplifies the deployment, scaling, and management of Docker containers. But wait a minute, wasn’t there something? Kubernetes 1.20 deprecated Docker as a container runtime. Oh no, was that it? Do I need to find a quick alternative?

Calm down ;)

Let’s first understand exactly how Docker works in Kubernetes. In a Kubernetes cluster, there is a component called the container runtime, which is responsible for pulling and running container images.
Docker is a whole tech stack, and the part of it that Kubernetes actually needs is called “containerd”. However, Kubernetes has to go through a program called dockershim to reach it. Why does Kubernetes need dockershim? Because Docker itself is not compatible with CRI, the Container Runtime Interface. This is not ideal, because it adds yet another component that has to be maintained and can break.
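
By the way, you can easily check which runtime the nodes of a cluster are using; the version strings in the comment below are just examples of what you might see:

```bash
# The wide output includes a CONTAINER-RUNTIME column per node,
# showing e.g. docker://20.10.12 or containerd://1.5.9
kubectl get nodes -o wide
```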

In fact, it has already been decided that dockershim will be removed from the kubelet (announced for as early as v1.23, now scheduled for v1.24), effectively eliminating support for Docker as a container runtime. Another drawback of using Docker in Kubernetes is that not everything works as well as it would on a virtual machine. An example is docker service logs, the log viewer for Docker Swarm (the cluster orchestration system for Docker): the logs are not displayed in chronological order, which may be caused by clock differences between the individual machines in the cluster.

What does this change mean for us developers?

For us developers, Docker is still as useful as it was before this change was announced. The image that Docker creates isn’t really a Docker-specific image: it’s an OCI (Open Container Initiative) image. Any OCI-compliant image looks the same to Kubernetes, regardless of which tool was used to build it, and both containerd and CRI-O know how to pull and run such images. Depending on how you interact with Kubernetes, this change may mean nothing for you, or it may mean a little work. For example, if you are using a managed Kubernetes service like GKE, EKS, or AKS, you will need to make sure your worker nodes use a supported container runtime before Docker support is removed in a future Kubernetes release. If you have node customizations, you may need to update them based on your environment and runtime requirements.
Likewise, if you rely on the underlying Docker socket (/var/run/docker.sock) as part of a workflow within your cluster today, moving to a different runtime will break that workflow. This pattern is often called Docker in Docker (DinD). DinD is frequently used to build images in CI (Continuous Integration) pipelines, although it has some drawbacks. One concerns LSMs (Linux Security Modules) like AppArmor and SELinux: when starting a container, the “inner Docker” may try to apply security profiles that conflict with or confuse the “outer Docker”. So it is recommended to move away from this pattern anyway.
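
For illustration, this is roughly what the socket pattern looks like in a pod spec; the pod name, image tag, and build path are hypothetical. The pod talks straight to the node’s Docker daemon, so as soon as the node runs containerd or CRI-O instead, nothing is listening on that socket anymore:

```yaml
# Discouraged pattern: a pod that drives the node's Docker daemon directly.
apiVersion: v1
kind: Pod
metadata:
  name: image-builder              # hypothetical example pod
spec:
  containers:
    - name: builder
      image: docker:20.10          # image that ships the docker CLI
      command: ["docker", "build", "-t", "my-app:ci", "/workspace"]
      volumeMounts:
        - name: docker-sock
          mountPath: /var/run/docker.sock
  volumes:
    - name: docker-sock
      hostPath:
        path: /var/run/docker.sock # gone once the node drops the Docker daemon
```

Daemonless build tools such as Buildah (which comes up again below) avoid this dependency entirely.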

As you can see, using Docker brings some risks and effort. But well, we have to live with that. After all, there is nothing else… or is there? Everyone is talking about Docker. Aren’t there any alternatives? The answer is: YES!

Nobody forces us to use Docker! I will briefly introduce what I think is a good alternative: Podman.

More than an alternative — Podman

Podman is a Linux-native open-source tool for developing, managing, and running containers and pods according to the Open Container Initiative (OCI) standards. Developed by Red Hat and presented as a user-friendly container engine, Podman is the default container engine in RHEL 8 and CentOS 8, and it is one of a set of command-line tools designed for different tasks in the containerization process that together can function as a modular framework. Podman works with OCI-compliant container images, including those built by Docker, which makes it easy to transition to Podman or to use it alongside an existing Docker installation. And can Kubernetes use Podman? Yes, it can. In fact, Kubernetes and Podman are similar in some ways, although Podman takes a different conceptual approach to containers. As the name implies, Podman can create container “pods” that work together, a feature similar to Kubernetes pods: pods organize separate containers under a common name so they can be managed as a single unit. The main benefit is that the containers in a pod can share resources.
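
A short sketch of that pod workflow; the pod name, images, and port mapping are made-up examples:

```bash
# Create a pod; containers inside it share the network namespace.
podman pod create --name web-pod -p 8080:80

# Add containers to the pod; they reach each other via localhost.
podman run -d --pod web-pod --name web nginx:alpine
podman run -d --pod web-pod --name cache redis:alpine

# List pods and their containers.
podman pod ps
podman ps --pod

# Podman can even export the pod as a Kubernetes manifest.
podman generate kube web-pod > web-pod.yaml
```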

Other differences from Docker include:

  1. Architecture: to build images and run containers, Docker relies on a program running in the background, the so-called daemon. Podman does not need such a program, which means it can run containers directly under the user who starts them.
  2. Root privileges: because Podman has no daemon managing its activities, its containers do not need root privileges. With Docker it is nowadays also possible to start containers in rootless mode, as this is simply more secure, but Podman took this approach much earlier, which brings us to the next point: security. The Docker daemon runs with root privileges, which makes it extremely attractive to attackers, providing them with a loophole. Podman does not give containers root privileges by default, which makes them more secure than Docker containers (see the sketch after this list).
  3. systemd: without a daemon, Podman needs another tool to manage its services and keep containers running in the background: systemd. Podman can generate systemd unit files for existing containers, and systemd can use them to start new ones (the sketch after this list shows this, too). Via systemd, vendors can install, run, and manage their applications as containers, as most are now packaged and delivered exclusively this way.
  4. Dependencies: as a self-sufficient tool, Docker can build container images on its own. Podman requires the assistance of another tool called Buildah for building, which brings us to our last point:
  5. All-in-one versus modular: Docker is a monolithic, powerful, independent tool with all the benefits and drawbacks that implies, handling all containerization tasks across their entire life cycle. Podman takes a modular approach, relying on specialized tools for specific tasks.
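
To illustrate points 2 and 3, here is a minimal sketch, run entirely as an unprivileged user; the container name and port are made up:

```bash
# Rootless: no daemon, no sudo; the container runs under your own user.
podman run -d --name mywebserver -p 8080:80 nginx:alpine

# Generate a systemd user unit from the running container...
mkdir -p ~/.config/systemd/user
podman generate systemd --new --name mywebserver \
  > ~/.config/systemd/user/mywebserver.service

# ...remove the hand-started container (systemd will create its own)...
podman rm -f mywebserver

# ...and let systemd manage it like any other service.
systemctl --user daemon-reload
systemctl --user enable --now mywebserver.service
```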

Podman is definitely a serious, well-functioning alternative to Docker. It makes me wonder why I should continue to rely so doggedly on Docker.

Think for yourself!

As a user of Docker, should I be concerned? My answer is somewhere in between. The market is growing tremendously, and so are the alternatives.
Docker still has its place, but in my opinion, in the future we should look more closely at the pros and cons of each component when designing our architectures and, most importantly, consider alternatives, even if they are less popular. The most popular solution is not necessarily the best solution for one’s own use case. Docker in combination with Kubernetes is a very good solution. With this article, I just want to encourage people to think for themselves: don’t just take what the majority uses, but make your own decisions.
At Thinkport, we deal with all kinds of technologies and are always moving with the times. We offer various workshops to prepare our customers for the future, among other things a workshop on Kubernetes & Docker.
