Container Orchestration: Conductor of Your Containers
“It works on your machine but not on mine! Why?”
Have you ever experienced this in your development team? It usually happens when your dev team finishes an application and wants to hand it over to another division, such as Quality Assurance or User Acceptance. When the other team does not have the same environment as yours, they will most likely hit an error when running the application.
It is important to make sure your application runs smoothly on other machines. If you are building a small app, you and your development team may be able to manage the environment of every other team on their own devices. But what about your customers? And what happens when the app grows and your company needs more people? Managing all those devices becomes very frustrating. Why not manage the application itself, so that it is compatible and runs properly on every device?
In this article, I will introduce virtual machines and containers as two ways to pack your app. I have only ever worked with containers, and in my opinion it is better to use containers instead of virtual machines. So I will go deeper into containers and explain how to manage them with container orchestration.
What are Virtual Machines and Containers?
Both virtual machines and containers help you make the most of the available computer hardware and software resources. First, let me introduce them.
A virtual machine (VM) is an emulation of a computer system. Put simply, it makes it possible to run what appear to be many separate computers on hardware that is actually only one computer.
The operating systems (OS) and their applications share hardware resources from a single host server, or from a pool of host servers. Each VM requires its own underlying OS, and the hardware is virtualized. A hypervisor, or a virtual machine monitor, is software, firmware, or hardware that creates and runs VMs. It sits between the hardware and the virtual machine and is necessary to virtualize the server.
With containers, instead of virtualizing the underlying computer like a virtual machine (VM), just the OS is virtualized.
Containers sit on top of a physical server and its host OS — typically Linux or Windows. Each container shares the host OS kernel and, usually, the binaries and libraries, too. Shared components are read-only. Sharing OS resources such as libraries significantly reduces the need to reproduce operating system code, and means that a server can run multiple workloads with a single operating system installation. With containers you can create a portable, consistent operating environment for development, testing, and deployment. Three common container technologies are Linux Containers (LXC), Docker, and Windows Server Containers.
When Should You Use Each?
VMs are a better choice when your apps require all of the operating system’s resources and functionality, when you need to run multiple applications on the same servers, or when you have a wide variety of operating systems to manage. Containers are a better choice when your biggest priority is maximizing the number of applications running on a minimal number of servers.
Now, how can you manage all those containers? If you have ten containers and four applications, it’s not that difficult to manage the deployment and maintenance of your containers. If, on the other hand, you have 1,000 containers and 400 services, management gets much more complicated. When you’re operating at scale, container orchestration becomes essential.
Container orchestration is all about managing the life cycles of containers, especially in large, dynamic environments. Simply put, it automates the deployment, management, scaling, networking, and availability of your containers. When you use a container orchestration tool, like Kubernetes or Docker Swarm, you typically describe the configuration of your application in a YAML or JSON file, depending on the tool. These configuration files (for example, docker-compose.yml) are where you tell the orchestration tool where to gather container images.
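As a minimal sketch of such a configuration file, a hypothetical docker-compose.yml for a web service and its database might look like this (the service names and image tags are illustrative, not from the project):

```yaml
version: "3.8"
services:
  web:
    image: my-django-app:latest   # tells the tool where to gather the image
    ports:
      - "8000:8000"               # expose the app on port 8000
    depends_on:
      - db                        # start the database container first
  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: example  # never hard-code real secrets like this
```

Running `docker-compose up` against such a file starts both containers and wires them onto a shared network.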
Containers are deployed onto hosts. When it’s time to deploy a new container into a cluster, the container orchestration tool schedules the deployment and looks for the most appropriate host to place the container based on predefined constraints (for example, CPU or memory availability). You can even place containers according to labels or metadata, or according to their proximity in relation to other hosts — all kinds of constraints can be used.
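In Kubernetes, for instance, those placement constraints can be expressed as resource requests and node labels. A sketch, with all names illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: backend
spec:
  nodeSelector:
    disktype: ssd            # only schedule onto hosts labeled disktype=ssd
  containers:
    - name: backend
      image: my-django-app:latest
      resources:
        requests:
          cpu: "250m"        # the scheduler places the pod only on a host
          memory: "256Mi"    # with this much CPU and memory available
```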
Container Orchestration in My Project
Before I explain the implementation of container orchestration in my project, here is the architecture. The project is called SmartCRM, a customer membership application: it can register and identify a customer as a member using only his or her face. I use Heroku as the deployment platform and GitLab for collaboration. The application uses Vue.js for the frontend and Python (Django) for the backend. In addition, we connect to our partner company’s API (XQInformatics), which is responsible for the machine learning.
First, the user interacts with our frontend UI, built with Vue.js. When the user clicks something or makes a request, it passes through to the backend. There, our Python logic processes the request; if needed, the backend calls XQ’s API for a response. Once all processing is finished, the backend sends the response to the frontend, which displays it to the user.
Here I will explain container orchestration for the backend only, because Vue.js does not need it (Vue builds the app before deployment, and the resulting static build runs properly on its own).
All Heroku applications run in a collection of lightweight Linux containers called dynos. Dyno configurations are defined in the Procfile.
The Procfile tells the dyno what to do: before the app runs, all migrations must be applied by running the deployment.sh script.
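A Procfile along those lines could look like the sketch below. The deployment.sh script name comes from the project; the gunicorn command and the smartcrm.wsgi module path are assumptions for illustration:

```
release: bash deployment.sh
web: gunicorn smartcrm.wsgi --log-file -
```

Heroku runs the `release` process after each deploy and before any new dynos start, which makes it a natural place for migrations; the `web` line defines the process type that receives HTTP traffic.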
After that, run the “web” dyno configuration. Web dynos are dynos of the “web” process type that is defined in the Procfile. Only web dynos receive HTTP traffic from the routers. Once the web dyno is started, the dyno formation of the app will change (the number of running dynos of each process type) — and subject to dyno lifecycle, Heroku will continue to maintain that dyno formation until you change it.
The Python backend is built and deployed using WSGI, the Web Server Gateway Interface. It is a specification that describes how a web server communicates with web applications, and how web applications can be chained together to process one request.
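To make the interface concrete, here is a minimal, framework-free WSGI application. Django generates an equivalent callable in its wsgi.py; this stripped-down sketch just shows the contract the web server relies on:

```python
def application(environ, start_response):
    """A WSGI app is any callable taking the request environ and a
    start_response callback, and returning an iterable of byte strings."""
    body = b"Hello from WSGI"
    status = "200 OK"
    headers = [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ]
    start_response(status, headers)  # send status line and headers
    return [body]                    # response body as an iterable of bytes
```

A server such as gunicorn imports this callable and invokes it once per incoming request, which is why the Procfile only needs to point at the module that defines it.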
That concludes my writing about container orchestration. I hope what I have written can be applied in your own project, helping you pack your application into containers and manage them.
Gusti Ngurah Yama Adi Putra
Computer Science, University of Indonesia, 2017