Kubernetes said “Goodbye Docker, Hello Containers”. What should developers say?
As the news has been spreading for some time now, starting with the upcoming version 1.20 Kubernetes will no longer support Docker in the same way it did before. We already see some panic in the eyes of admins and developers, but this change is actually not as dramatic as it sounds. At least not for developers.
So what’s in it for developers? We’re already confused, but should we freak out?
First, we need to understand that the name “Docker” actually refers to two different environments: (1) one for running containers (a container runtime), and (2) one for building containers (a development environment). This is the main source of the confusion.
Now for the good news: this change does not mean you should stop using Docker as a development tool. Docker is, and most probably will remain for the foreseeable future, a useful tool for building containers. The container images we create by running docker build will still run in our Kubernetes clusters. So yes, we will still write Dockerfiles, and we will still build containers using Docker.
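In other words, the familiar workflow stays the same. A minimal sketch (the registry and image name are placeholders, not anything prescribed by Kubernetes):

```shell
# Build an image from the Dockerfile in the current directory, tag it,
# and push it to a registry, exactly as before the dockershim change.
docker build -t registry.example.com/myapp:1.0 .
docker push registry.example.com/myapp:1.0
```

Any CRI-compliant runtime in the cluster can then pull and run that image.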
This change actually relates only to the Docker container runtime. Inside a Kubernetes cluster there is a component called the container runtime that is responsible for pulling and running your container images. Docker has always been a popular choice for that runtime, but it was not designed to be embedded inside Kubernetes. And that is where the problem has been hiding all this time.
In fact, since version 1.11.0 Docker is no longer a single monolithic thing: it is an entire stack, and only one part of it, called containerd, is responsible for actually running containers. Most of the rest of the stack is about building containers and about user-experience enhancements that make Docker easy for humans to interact with during development work.
As a result of this human-friendly abstraction layer, your Kubernetes cluster (which is not human, after all) must go through another tool called dockershim to reach what it really needs: containerd. And that is not good, because we don’t like extra complexity; it is one more thing to maintain and one more thing that can break. Now back to the news from Kubernetes. The recent update is mostly about dockershim being removed in version 1.23 from the kubelet, the component responsible for running containers on each node, after which it will no longer be supported. But if containerd is a part of Docker, why do we need dockershim in Kubernetes at all?
The issue here is that Docker is not compliant with the Container Runtime Interface (CRI). If it were, we wouldn’t need the shim.
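You can check which runtime your nodes currently use, since each node reports it in its status. A quick sketch (assumes kubectl access to your cluster):

```shell
# Print each node's name and its container runtime, e.g.
# "docker://19.3.13" before the switch or "containerd://1.4.3" after it.
kubectl get nodes \
  -o custom-columns=NAME:.metadata.name,RUNTIME:.status.nodeInfo.containerRuntimeVersion
```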
In a nutshell, there is only one thing developers really need to note: if you rely on the underlying Docker socket (/var/run/docker.sock) as part of a workflow within your cluster today, moving to a different runtime will break that workflow. This pattern is often called Docker-in-Docker. There are lots of options out there for this specific use case, including tools like kaniko, img, and buildah.
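For example, an in-cluster image build that today mounts the Docker socket could instead run kaniko, which needs no Docker daemon at all. A hedged sketch (the Git context URL and destination image are placeholders):

```shell
# Run a one-off kaniko pod that clones a Git repo, builds the image from its
# Dockerfile, and pushes the result to a registry, without touching docker.sock.
kubectl run kaniko-build --restart=Never \
  --image=gcr.io/kaniko-project/executor:latest \
  -- --context=git://github.com/example/app.git \
     --destination=registry.example.com/myapp:1.0
```

(In a real setup the pod would also need registry credentials, typically mounted from a Kubernetes secret.)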
To be frank, this is good news for developers
In case you missed it, the container images we’ve been building with Docker are not actually Docker-specific images: they are OCI images (the Open Container Initiative is a Linux Foundation project that designs open standards for operating-system-level virtualization, most importantly Linux containers). The best part is that any OCI-compliant image, regardless of the tool we used to build it, looks the same to Kubernetes. Both containerd and CRI-O know how to pull and run those images. This is exactly why we have a standard for what containers should look like.
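You can see this interchangeability with containerd’s own ctr CLI, which pulls and runs the very same images Docker builds. A sketch (assumes containerd is installed; alpine is just an example image):

```shell
# Pull a standard OCI image and run a container from it directly under
# containerd; no Docker daemon is involved.
ctr images pull docker.io/library/alpine:latest
ctr run --rm docker.io/library/alpine:latest demo echo "hello from containerd"
```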
So, this change is going to cause issues for some of us, but it will not be catastrophic, and it will generally be a good thing for most: it’s going to make things simpler. Depending on how you interact with Kubernetes, this could mean nothing to you, or it could mean a bit of work. Basically, we will just need to change our container runtime from Docker to another supported container runtime, and that’s it.
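On the node side, that switch mostly comes down to pointing the kubelet at a CRI-compliant endpoint. A sketch of the relevant kubelet flags (the socket path shown is containerd’s common default; your distribution may differ):

```shell
# Tell the kubelet to talk to a remote CRI runtime (containerd here)
# instead of going through dockershim.
kubelet --container-runtime=remote \
        --container-runtime-endpoint=unix:///run/containerd/containerd.sock
```

Managed Kubernetes services typically perform this switch for you during node upgrades.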
Container runtimes we’ll now need to choose from
So, in a nutshell, a container runtime performs some or all of the following tasks:
- Container image management
- Container lifecycle management
- Container creation
- Container resource management
Some of the most popular OCI-compliant container runtimes are the following:
- containerd is a Cloud Native Computing Foundation (CNCF) project. It manages the complete container lifecycle, including image management and storage, container execution and supervision, and networking. Docker donated containerd to the CNCF with the support of the five largest cloud providers (AWS, Google Cloud Platform, Microsoft Azure, IBM SoftLayer and Alibaba Cloud), with a charter of being a core container runtime for multiple container platforms and orchestration systems.
- LXC provides OS-level virtualization through a virtual environment that has its own process and network space; it uses Linux cgroups and namespaces to provide the isolation.
- runc is a CLI tool for spawning and running containers according to the OCI specification. It was originally developed as part of Docker and was later extracted as a separate open-source tool and library, becoming the first reference implementation compliant with the OCI runtime spec.
- CRI-O is Red Hat’s lightweight implementation of the Kubernetes CRI that enables the use of OCI-compatible runtimes. It is one of the most popular alternatives to Docker as the runtime for Kubernetes. Like containerd, it is built around runc, but CRI-O positions itself as “just enough” of a runtime for Kubernetes and nothing more, adding only the functions necessary on top of the base runc binary to implement the Kubernetes CRI. The rivalry is perhaps not unlike the one we now see between Intel and AMD.
- rkt is a CoreOS application container engine developed for modern production cloud-native environments.
So, if all of the above is new to you, and especially if you work in DevOps, I encourage you to start getting familiar with these OCI-compliant container runtimes. Get ready for the changes, just don’t panic!
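A good first step is crictl, a CLI that speaks CRI directly and therefore works the same against containerd or CRI-O. A sketch (assumes crictl is installed and containerd’s default socket; adjust the endpoint for your runtime):

```shell
# List images and running containers through the CRI, much like
# "docker images" and "docker ps" but independent of Docker.
crictl --runtime-endpoint unix:///run/containerd/containerd.sock images
crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps
```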
Useful links for further reading:
- Demystifying the Open Container Initiative (OCI) Specifications — https://www.docker.com/blog/demystifying-open-container-initiative-oci-specifications/
- Open Container Initiative — https://opencontainers.org
- Kubernetes v1.20: The Raddest Release — https://kubernetes.io/blog/2020/12/08/kubernetes-1-20-release-announcement/