A brief history of container virtualization

How it started, how it changed the way we distribute software, and what the next step could be

Stefan Dillenburg
An Idea (by Ingenious Piece)
May 3, 2020


Not so long ago, container virtualization wasn’t even a thing. Nowadays it’s hard to find a tutorial where Docker, Kubernetes, and Co. aren’t mentioned. These tools have been adopted and used widely across the open-source community, and we have seen a lot of improvement since then. I don’t need to list every single benefit container virtualization brought into the game. Instead, let me condense my own experience into these three words:

patterns, automation and reliability

A rusty shipping container
Photo by Justus Menke on Unsplash

Back to the (ch)roots

It all started in 1979, when the concept behind container virtualization was born. UNIX introduced a system call named chroot, which paved the first way toward providing an isolated disk space for processes. Three years later, this feature was added to BSD (Berkeley Software Distribution).
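
To make this concrete, here is a minimal sketch in Go of what chroot does. The path /tmp/newroot is a made-up example: the call needs root privileges, and the new root has to contain everything the process wants to use (binaries, libraries, data).

```go
package main

import (
	"fmt"
	"os"
	"syscall"
)

func main() {
	// Confine this process to a new root directory. From here on,
	// "/" resolves to /tmp/newroot and nothing outside it is reachable.
	// (/tmp/newroot is a hypothetical, pre-populated directory.)
	if err := syscall.Chroot("/tmp/newroot"); err != nil {
		fmt.Fprintln(os.Stderr, "chroot failed:", err)
		os.Exit(1)
	}

	// Move the working directory inside the new root; otherwise the
	// process could still reference its old location.
	if err := os.Chdir("/"); err != nil {
		fmt.Fprintln(os.Stderr, "chdir failed:", err)
		os.Exit(1)
	}

	// List "/" to show that the process now sees only the new root.
	entries, err := os.ReadDir("/")
	if err != nil {
		fmt.Fprintln(os.Stderr, "readdir failed:", err)
		os.Exit(1)
	}
	for _, entry := range entries {
		fmt.Println(entry.Name())
	}
}
```

That single system call is the disk-space isolation mentioned above; everything that came later added isolation for other resources on top of it.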

In 2000, FreeBSD jails hit the road. Conceptually they worked like chroot, but every jail could hold its own software installations and configurations. Beyond that, jails isolated networking as well. These sandbox features laid the foundation for testing applications in identical environments.

Until 2008 there were many solutions built around the concept of sandboxing. Then LXC (Linux Containers) was released. It provided new features like resource limitation and prioritization, which until then were only known from classic virtualization. Applications also got an isolated view of the environment they were running in, which was very unusual compared to other solutions and hardened the isolation against potential breakouts.
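
The resource limitation LXC introduced is built on the kernel’s control groups (cgroups). As an illustration of that underlying mechanism, here is a small Go sketch, assuming a cgroup v1 memory hierarchy mounted at /sys/fs/cgroup/memory and root privileges; the group name demo is just an example.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strconv"
)

func main() {
	// Create a new control group; the kernel fills the directory
	// with control files automatically. (cgroup v1 layout assumed.)
	group := "/sys/fs/cgroup/memory/demo"
	if err := os.MkdirAll(group, 0o755); err != nil {
		panic(err)
	}

	// Cap the group's memory usage at 100 MiB.
	limit := strconv.Itoa(100 * 1024 * 1024)
	if err := os.WriteFile(filepath.Join(group, "memory.limit_in_bytes"), []byte(limit), 0o644); err != nil {
		panic(err)
	}

	// Move the current process into the group; all of its future
	// allocations now count against the 100 MiB limit.
	pid := strconv.Itoa(os.Getpid())
	if err := os.WriteFile(filepath.Join(group, "cgroup.procs"), []byte(pid), 0o644); err != nil {
		panic(err)
	}

	fmt.Println("this process is now limited to 100 MiB of memory")
}
```

LXC hides this bookkeeping behind its tooling and combines it with kernel namespaces, which provide the isolated view of the environment mentioned above.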

Fun fact — Early versions of Docker used LXC as their container execution driver (support was dropped in v1.10).

What does a whale have to do with it?

Two whales swimming side by side
Photo by guille pozzi on Unsplash

When Docker entered the market in 2013, its popularity began to grow. It was accepted by a wide range of interest groups, from software developers to service operators. Everyone could profit from the fast deployments and identical environments Docker provided. Phrases like “Oh — on my machine it worked” were supposed to belong to the past.

Okay, phrases like that can still happen, even with Docker; addressing that issue could fill a whole story of its own.

All these changes are also the success story of Docker. The company crafted a whole ecosystem of easy-to-use tools; think of Docker Hub, which works like a library of software solutions. It wasn’t the range of features that made Docker so successful, but the way the tools work together. Everything is well thought out and well connected, and people could use it without sinking hours into video lectures.

How the helmsman took over

While Docker was still pushing container virtualization onto our localhosts, big companies like Google were developing their own environments behind the scenes. Google realized early on that managing containers at scale can be quite challenging. That is why they worked on Borg.

In 2014, Google released Kubernetes, an open-source container orchestration system that grew out of its experience with Borg. Although Docker Swarm exists as a counterpart, it never struck the same chord. Google needed something that fit its newly created demands on microservice infrastructures, a topic Docker’s ecosystem had missed. The more companies used Kubernetes, the more entrenched it became.

Microsoft is on board

With Satya Nadella in the driving seat, Microsoft started to think more about open source. It did its part on container virtualization with Windows Containers, and in 2016 it also helped bring Docker to Windows. From this point on, Microsoft cooperated with Docker in many ways and was quite interested in pushing it.

What get’s shipped these days?

Container virtualization now comes in many forms. You can install it on any type of machine, even on IoT devices. Companies deploy full container stacks to run applications behind the scenes, and cloud providers offer a wide range of services, from containers on demand to fully functional clusters.

The way we deploy has changed. Most on-premises installation instructions now include a “how to run it in a container” section. Companies want to share their products in an easy manner and improve them quickly without downtime. All these aspects made container virtualization such a strong topic.

Docker as a company did not have such a happy outcome. It ran into financial issues, ultimately suffering from how little money it could make from its products. Even under the hood there seem to be problems: many products have removed Docker as their container runtime. It looks like there is a need for an independent method, though this may take time given the slow progress of the CNCF (Cloud Native Computing Foundation).

The ecosystem around containers in general has expanded a lot, and the thicket of tools has become difficult to see through. Every tool addresses a different issue or follows a different concept, which is why you can build infrastructures exactly the way you need them. The ecosystem has become very flexible, but at the same time very complex.

A look at what could be next

Since almost everything is running in a container nowadays, what could be the next step?

In many eyes, a combination of serverless execution and containers could be the next big hit. There are already some solutions out there, like Kubeless, which provides all the operational benefits of Kubernetes and extends the deployment routine with its serverless approach.

From my perspective, operating systems could leverage containers even more. Running containers is one thing; being composed of containers is another. Why not work on an operating system that provides itself through containers? Apple’s iOS, for example, provides a sandbox for every app, which gives developers a consistent platform to develop on, and from a security point of view it makes the underlying components much harder to compromise.
Projects like RancherOS are already experimenting with this idea. These experiments are still focused on server operating systems, but who knows how far they will take them.

One day, the browser you are reading this in could well be running inside a container.
