Let’s Talk: Containers and Kubernetes In Google Cloud Platform — Part 1
In recent years, containers have become essential to the proper management of applications and to large-scale orchestration in the cloud.
By definition, a container is a process executed at runtime within its own namespaces, with its resource usage managed by cgroups and its isolation hardened by Linux Security Modules (LSMs) and other kernel security features, ensuring the process stays fully isolated while it runs.
Let us do a little exercise in imagination. Picture a shipping container fitted with automated conveniences: an automatic door, an automatic cook, automated toilets, an automated buffet system, a smart kitchen, a smart coffee maker, and any other automated actions you may prefer. Hold that picture in your mind. Now place the container somewhere in the skies, with a source of power and an internet connection to keep its operations running. Cool, right? That is exactly how it works when containers and Kubernetes run your applications: the container is analogous to Kubernetes (or a similar orchestrator), the sky is analogous to your public cloud (Amazon Web Services, Google Cloud Platform, DigitalOcean, Azure and the rest), the people inside are your applications, the internet and power supply are the underlying infrastructure, and the automated conveniences are analogous to automated healing of pods, application maintenance, downtime intervention, dependency updates and many others.
Google says that “Containers allow you to package your application and its dependencies together into one succinct manifest that can be version controlled, allowing for easy replication of your application across developers on your team and machines in your cluster. Just as how software libraries package bits of code together, allowing developers to abstract away logic like user authentication and session management, containers allow your application as a whole to be packaged, abstracting away the operating system, the machine, and even the code itself”.
A Light History of Applications and Containers
Get a cup of coffee and let's stroll down memory lane a bit, following some major highlights in the journey of container orchestration and application management.
1. Recall the early days of building home servers, running each application on a separate physical machine with its own operating system. (P.S.: this had its disadvantages: poor speed, more machines, excessive heat and several other uncomfortable situations.)
2. Next, recall the era of virtual machines, where you could deploy an application and its dependencies on the same physical machine. (P.S.: this had its own disadvantage: applications had to squeeze their operations into one virtual machine, and if one app's dependencies needed an update, it could affect the other applications running in the same virtual machine. Quite disheartening.)
3. Furthermore, the industry evolved to the era of the container, which carries the application code and its dependencies, is scalable and lightweight, and is not bothered about the hardware or the hypervisor. This pattern was quite innovative because it lets a single operating system kernel run everything at once on the hardware. Taking containers in detail: the application and its dependencies packaged together are called an image, and a container is simply a running instance of an image. Containers use several Linux technologies, and some of them include the following:
- Process: An instance of a running program is called a process. Every time you run a shell command, a program runs and a process is created for it. Each process in Linux has a process ID (PID) and is associated with a particular user and group account.
- Linux Namespaces: These control what a container can see, such as process IDs, IP addresses and more. They are a feature of the Linux kernel that partitions kernel resources so that one set of processes sees one set of resources while another set of processes sees a different set. Popular examples include process isolation (PID namespace), network interfaces (net namespace), hostname and domain name (UTS namespace, from UNIX Time-Sharing system), the user namespace, mount points (mnt namespace) and interprocess communication (IPC namespace).
- Control Groups: This is another technology, used to control what an application can utilise, such as CPU time, memory and bandwidth, and it can be managed extensively with systemd. Notably, control groups are essential in managing Kubernetes workloads and in turn foster better container orchestration. Other functions of control groups include deciding priorities when containers contend for resources, controlling read/write access to devices, and providing accounting information about the processes running on a given system.
- Union File Systems: These encapsulate applications and their dependencies into a set of clean, minimal layers. In layman's terms, a union file system is the building block of any container image. Over the years, it has been used to solve two major problems of conventional file systems: inefficient disk space utilisation and bootstrap latency. Its distinctive properties include the logical merge of multiple layers, read-only lower layers beneath a single writable upper layer, reads served from the upper layer before falling back to the lower ones, the Copy on Write (CoW) function for modifications, and the simulation of file removal from a lower layer through a whiteout file.
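The first three of those technologies can be poked at directly from a shell on any Linux machine. Here is a short sketch (it assumes the `/proc` filesystem is mounted, as it is on virtually every Linux host; the exact IDs and cgroup paths will differ on yours):

```shell
# 1. Processes: $$ expands to the PID of the current shell, and
#    /proc/<pid>/status shows its name and owning user (Uid)
echo "current shell PID: $$"
grep -E '^(Name|Pid|Uid)' "/proc/$$/status"

# 2. Namespaces: each symlink here names a namespace type and its
#    inode ID; two processes share a namespace iff these IDs match
ls -l /proc/self/ns

# 3. Control groups: which cgroup(s) this shell is accounted under
cat /proc/self/cgroup
```

Union file systems are harder to demo this way, since `mount -t overlay` needs root privileges; I leave that one to the documentation.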
In building these containers, the base layer, such as an Ubuntu layer, can be downloaded from a registry just as shown in Fig 3.0 (e.g. gcr.io, Google's registry for base images). Other ways of building a container include using Cloud Build, building locally with Docker and pushing the image, or using buildpacks. Since this is not a tutorial or how-to article, I suggest you look up the documentation for details on how to implement the process, and trust me, it's quite simple: just follow the steps. The structure of Cloud Build and how it runs is displayed in Fig 4.0 below. By definition, it is a service that executes builds on GCP infrastructure. It can import source code from Cloud Storage, Cloud Source Repositories, GitHub, or any other version control space, execute a build to the specified preferences, and produce artefacts such as Java archives or, as required in this piece, Docker containers.
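As an illustration of that layering, here is a minimal, hypothetical Dockerfile: the `FROM` line pulls the base layer (Ubuntu in this sketch), and each subsequent instruction commits a new read-only layer on top, exactly the union-file-system stacking described earlier. The image tag and application file are assumptions for the example.

```dockerfile
# Base layer: pulled once and shared by every image built on it
FROM ubuntu:22.04

# Each RUN/COPY instruction commits a new read-only layer
RUN apt-get update && apt-get install -y --no-install-recommends python3 \
    && rm -rf /var/lib/apt/lists/*

# Application code layered on top of the base
COPY app.py /opt/app/app.py

# A running container adds a thin writable layer over all of these
CMD ["python3", "/opt/app/app.py"]
```

Building this with `docker build` and then `docker push`-ing it to a registry is the "local build" route mentioned above; Cloud Build can run the same Dockerfile remotely.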
Cloud Build executes your build as a series of build steps, where each build step runs in a Docker container. The beauty of Cloud Build is that you can use the supported build steps or write your own custom build steps depending on the project. You can check out the documentation for the how-to steps, and trust me, it's pretty easy.
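For a flavour of what those build steps look like, here is a minimal, hypothetical `cloudbuild.yaml` (the image name is a placeholder; `$PROJECT_ID` is substituted by Cloud Build at run time). Each entry under `steps` runs in its own container, in this case the standard `docker` builder:

```yaml
# Each step runs inside the named builder container, in order
steps:
  # Build the image from the Dockerfile in the repo root
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app:latest', '.']

# Push the finished artefact to the registry when the build succeeds
images:
  - 'gcr.io/$PROJECT_ID/my-app:latest'
```

Submitting it with `gcloud builds submit` kicks off the whole pipeline in GCP, no local Docker daemon required.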
Well, after embracing the idea of containers, Docker, and the other terminology, it all has to be managed and scaled on good infrastructure to meet product needs and maintain elasticity. At this point, the containers have to “hang their boots” (in my community, this means they hand over the baton and retire) and let the KUBERNETES domain perform its magic in container and application orchestration. Kubernetes is a container-centric, open-source environment. It manages infrastructure on-premises or in the cloud regardless of whether there is an engineer on site, because it is built to control operations automatically. It automates deployment, scaling, logging, and other aspects of running an application, and it offers IaaS-like flexibility in user preferences and configuration. You will usually use declarative configuration to keep applications in your desired state at all times, but the imperative style is never set aside either: it is handy for temporary fixes and other light engagements. Kubernetes also has some key features, and they include:
- Supporting several workload types
- Supporting stateful/stateless applications
- Enforcing resource limits
- Extensibility of resources to fit product needs
- Extreme portability for workloads, whether on-premises or in the cloud, because it is open source; it can be deployed anywhere without being tied to any particular vendor.
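The declarative style mentioned above looks like this in practice: a minimal, hypothetical Deployment manifest (the name and image are placeholders for this sketch). You declare the desired state, three replicas of the image, and Kubernetes works continuously to keep reality matching it, restarting pods that die:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # hypothetical name for this sketch
spec:
  replicas: 3                   # desired state: keep three pods running
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: gcr.io/PROJECT_ID/my-app:latest   # placeholder image
          resources:
            limits:             # enforced via cgroups on the node
              cpu: "500m"
              memory: "256Mi"
```

You would apply it with `kubectl apply -f deployment.yaml` (declarative); the imperative equivalent for a quick fix would be something like `kubectl create deployment my-app --image=...`.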
I understand you may be asking about security and networking in containers; well, I am as alive to those concerns as you are, but stick around: I'd cook up articles on how those work, as they are very important in container orchestration.
Well, after enjoying the privileges of Kubernetes detailed above, there is still a need for some extra automation, and that is where Google Kubernetes Engine comes in handy. I will not talk about it in depth today; stay around for the deep gist on how Google Kubernetes Engine works in infrastructure building (foundations), workload orchestration and, most importantly, production. But in a nutshell: since Kubernetes can be difficult to maintain, Google Kubernetes Engine is the way out. It helps you deploy and manage a Kubernetes environment for applications on GCP, and it makes it easy to bring apps to the cloud. It is fully managed, optimised, and controlled by Google; it handles upgrades of clusters (the systems being managed), and it auto-repairs unhealthy nodes (the VMs that make up the cluster) by deleting the node and creating a new one, among other marvellous features. Like I said, stick around, I've got a lot of content for you.
Now, remember, this article is not only for experts in the cloud space; even newbies can hop in and learn a lot, and that is why I make everything clear in both layman's and professional terms. So if you have any questions, shoot, or you can also reach out to me on Twitter or find me on GitHub.
Thanks for reading ❤️
Please leave a comment if you have any thoughts about the topic — I am open to learning and knowledge explorations.