Comparison of Two Different Approaches Towards Container Management

Next Visions
Sep 4, 2018 · 6 min read


Askin Askin, a DevOps Engineer at Porsche Digital, explains the concept of containerization and deploys an AI-API architecture using two different methods: an off-the-shelf container orchestration solution and a custom-made orchestration tool she has written herself.

Containerization has gained a lot of attention recently, and many projects have grown up around it. One category of these projects deals with container orchestration. Different approaches have emerged: proprietary, cloud-only solutions such as Amazon ECS, and open-source ones like Kubernetes. They all target the same goal: easier container orchestration. But do they really deliver what they claim, easier management and simpler deployments?

In this post, after a brief look at what containerization is, I will deploy an AI-API architecture (an AI chatbot behind a simple API) using two different orchestration methods: one using Kubernetes, and one using nothing but a hand-written, container-based control plane.

Containers and Virtual Machines

Since Amazon Web Services started offering Virtual Machines (VMs) in a cloud environment, most server deployments around the world have come to use some sort of virtualization. If you could utilize 100% of your resources all the time, Virtual Machines would be a bit more expensive in terms of price vs. performance. But they are easier and faster to deploy, easier to manage, and require less maintenance if you buy managed service layers built on top of them (like Amazon RDS). Just as Virtual Machines did compared to physical servers, containerization made it even easier to manage and deploy servers and services.

Image 1: Virtual Machines

While Virtual Machines can share physical resources, each has to bring its own operating system on board, and of course its own kernel. While this provides strong isolation, it also creates problems of its own: resources wasted on running multiple kernels and operating systems, plus the updates and maintenance each of them needs for security reasons. What containerization does is create isolated workspaces with their own filesystems by utilizing kernel namespaces. So multiple server applications running on a single operating system, sharing the same kernel, is what containers are all about.

Image 2: Containers

Containers also have a layered image system. Something similar exists for Virtual Machine solutions, but it was never utilized as much as it is with containerization. Most applications that would sometimes require hours to build and install have an image you can download and run within seconds. If you need a running containerized WordPress installation, all you need is `docker run wordpress`. And if you have previously downloaded layer `4cbb58b1ec3d`, the second layer in the WordPress image, you do not need to download it again.
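The layer reuse described above comes down to content addressing: each layer is identified by a digest of its contents, and a layer already present in the local store is never fetched twice. A toy sketch of the idea in plain Python (a simplified model, not Docker's actual implementation):

```python
import hashlib

class LayerCache:
    """Toy model of a content-addressed image layer store."""
    def __init__(self):
        self.store = {}        # digest -> layer bytes
        self.downloads = 0     # how many layers we actually "pulled"

    def digest(self, data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()[:12]

    def pull_layer(self, data: bytes) -> str:
        d = self.digest(data)
        if d not in self.store:        # only fetch layers we don't have yet
            self.store[d] = data
            self.downloads += 1
        return d

    def pull_image(self, layers):
        return [self.pull_layer(layer) for layer in layers]

cache = LayerCache()
base = b"debian base layer"
cache.pull_image([base, b"apache + php"])      # both layers are new: 2 downloads
cache.pull_image([base, b"wordpress files"])   # base layer is reused from the cache
print(cache.downloads)  # 3, not 4: the shared base layer was downloaded only once
```

Two images with four layers between them cost only three downloads, which is exactly why the second `docker run` of a related image starts so much faster.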

So let’s now move on to container orchestration.

Container Orchestration

The ease of creating and managing containers enabled a lot of automated workflows. Initially, container-based deployments used various proprietary stacks to orchestrate and run them. But after Docker was open-sourced and started dominating the field, it became the de facto standard for running containers, and Docker images became the standard way of distributing them. So Docker became the common basis on which custom orchestration projects emerged.

This is the first type of orchestration I wanted to create. One important issue I needed to tackle was boot time: our software took a long time to boot, so we wanted to have one container ready at all times to serve the next request. In my architecture, a controller container would act as a load balancer and HTTP server, relaying requests to the correct AI containers while creating new containers that stand ready for the next request.

Image 3: Controller Container

I achieved this with the docker-py library and used flask for serving HTTP requests. docker-py was well documented and easy to use, with a single Dockerfile created for both the controller and the AI. The process was straightforward, and during development I learned even more about Docker. So this was a very primitive, proprietary container orchestration solution, but it did its job.
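The heart of such a controller is a warm pool: hand out a worker that is already booted, then immediately start a replacement so the next request never waits for a cold start. A minimal sketch of that logic, where `start_worker` is an illustrative stand-in for the docker-py call (e.g. `client.containers.run(...)`) that the real controller would make:

```python
import itertools
from collections import deque

class WarmPool:
    """Keep `size` pre-booted workers on hand; replace each one as it is handed out."""
    def __init__(self, start_worker, size=1):
        self.start_worker = start_worker        # in reality: boots an AI container
        self.pool = deque(start_worker() for _ in range(size))

    def acquire(self):
        worker = self.pool.popleft()            # this worker is already booted
        self.pool.append(self.start_worker())   # immediately warm up a replacement
        return worker

# Stand-in for booting an AI container (slow in reality, instant here):
counter = itertools.count()
pool = WarmPool(lambda: f"ai-container-{next(counter)}", size=1)

print(pool.acquire())  # ai-container-0 was waiting, ready to serve
print(pool.acquire())  # ai-container-1 was started in the background
```

In the real controller, `acquire` would run inside the flask request handler, and the slow container boot would overlap with serving the current request instead of delaying the next one.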

This is where Kubernetes came in. It would essentially serve the same orchestration purpose, and I created my Kubernetes-based solution to reduce the amount of code I had to write.

To apply the same idea in Kubernetes, I had to re-think my architecture from the start, because Kubernetes simply asks you for a deployment scheme (much like Amazon ECS) and tries to keep that scheme alive in a stable state. Where my custom solution created the containers for the next request itself, the off-the-shelf solution needed to offer something for a procedure like this. After some searching, I noticed I could use Kubernetes' labels feature for my purpose.

Image 4: Kubernetes

The idea was to label every newly created AI container with assigned: not_assigned (the values false and no gave me headaches that I did not really dig into). I would declare that I want 3 of them with the label assigned: not_assigned. When a new request comes in, my controller container changes this label to assigned: assigned. Changing the label breaks the desired state: only 2 of the 3 deployed containers still carry assigned: not_assigned. When Kubernetes notices the state is broken, it starts one more container with the assigned: not_assigned label.
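A Deployment declaring this desired state might look roughly as follows (the deployment name, image name, and `app` label are illustrative assumptions, not taken from the article):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ai-pool
spec:
  replicas: 3                      # keep 3 not-yet-assigned AI pods warm
  selector:
    matchLabels:
      app: ai
      assigned: not_assigned       # the Deployment only counts unassigned pods
  template:
    metadata:
      labels:
        app: ai
        assigned: not_assigned     # every new pod starts out unassigned
    spec:
      containers:
      - name: ai
        image: example/ai-chatbot:latest   # illustrative image name
```

Relabeling a pod to assigned: assigned takes it out of the selector's match, so the Deployment's ReplicaSet no longer counts it and spins up a fresh, unassigned replacement.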

Therefore, I wrote another class, just to manage the Kubernetes cluster. It did not need to implement functionality like creating or managing containers; it only needed to relay messages and change labels. This removed a lot of code, leaving far fewer lines to maintain and therefore a smaller attack surface. Creating a connection to the Kubernetes host from a pod was easy and simple. I spent some more time on creating a service and routing requests to the correct containers.
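Marking a pod as taken comes down to a small JSON merge patch on its labels. A sketch, assuming the official `kubernetes` Python client (the client calls shown in the comment are real API functions, but the pod name and namespace are illustrative):

```python
def assignment_patch(assigned: bool) -> dict:
    """Build the JSON merge patch that flips a pod's `assigned` label."""
    value = "assigned" if assigned else "not_assigned"
    return {"metadata": {"labels": {"assigned": value}}}

# Against a real cluster this patch would be applied with the official client,
# which requires the `kubernetes` package and in-cluster credentials:
#
#   from kubernetes import client, config
#   config.load_incluster_config()            # we run inside a pod
#   v1 = client.CoreV1Api()
#   v1.patch_namespaced_pod("ai-pod-xyz",     # illustrative pod name
#                           "default",        # illustrative namespace
#                           body=assignment_patch(True))

print(assignment_patch(True))   # {'metadata': {'labels': {'assigned': 'assigned'}}}
```

Because the patch only touches labels, the controller never has to create or delete pods itself; flipping the label is enough to make Kubernetes restore the declared count of unassigned pods.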


In this experiment, I tried both an off-the-shelf container orchestration solution and a custom-made orchestration tool I wrote myself. Writing my own orchestration solution was fast: the concepts were not foreign, and there were plenty of how-to articles. Kubernetes was a completely different story. Knowledge about containers is not enough; I had to learn new concepts and a new way of thinking (e.g., deployments and services, rather than containers, as first-class citizens) to use it for my purposes. But in the end, using Kubernetes for container orchestration made my setup safer and more stable, because the trickiest parts of my software, like maintaining a stable number of containers on hold, were handled by an open-source project that is used and promoted by Google.


Askin Askin is a DevOps Engineer at Porsche Digital. Please find more about inspiring men & women on Twitter, LinkedIn and Instagram.



Next Visions

There’s more to Porsche than sports cars // #NextVisions is a platform about smart technologies and the people that drive our digital journey.