Let’s Talk: Containers and Kubernetes In Google Cloud Platform — Part 1

Samuel Arogbonlo
Mar 4

In recent years, containers have become central to managing applications properly and running large-scale orchestrations in the cloud.

By definition, a container is a process that runs inside its own Linux namespaces, with its resource usage managed by cgroups and additional hardening from Linux Security Modules (LSMs) and other kernel security features, ensuring complete process isolation at runtime.

Let us do a little exercise of imagination and picturing. Looking at the container above, imagine it has automated processes embedded in it: an automatic door, an automatic cook, automated toilets, an automated buffet system, a smart kitchen, a smart coffee maker and any other automated actions you may prefer. Just hold that picture in your mind. Then obviously you are the human being, and the container is placed somewhere in the skies, operating with some source of power and internet to run its operations. Cool, right? Now that is exactly how it is when containers and Kubernetes run your applications: the container is analogous to Kubernetes, the sky is analogous to your public cloud (Amazon Web Services, Google Cloud Platform, DigitalOcean, Azure and the rest), the people, or rather you, are analogous to the applications plus the internet and power supply, and finally the automated operations are analogous to the automated healing of pods, application maintenance, downtime intervention, dependency updates and many others.

Google says that “Containers allow you to package your application and its dependencies together into one succinct manifest that can be version controlled, allowing for easy replication of your application across developers on your team and machines in your cluster. Just as how software libraries package bits of code together, allowing developers to abstract away logic like user authentication and session management, containers allow your application as a whole to be packaged, abstracting away the operating system, the machine, and even the code itself”.

Light History Of Applications and Containers

Get a cup of coffee and let’s stroll down memory lane a bit and follow some major highlights in the journey of container orchestrations and application management.

1. Recall the early days of building home servers, running each application on a separate physical machine with its own operating system (P.S.: this had its disadvantages: poor speed, more machines, excessive heat and several other uncomfortable situations).
Fig 1.0 Image of home application servers in the “Silicon Valley” series

2. Next, also recall the era of virtual machines, where you could deploy an application and its dependencies on the same physical machine (P.S.: its disadvantage was that several applications had to squeeze their operations into one virtual machine, and peradventure the dependency of one app needed an update, it could affect the other applications running in that virtual machine; quite disheartening).

Fig 2.0 Sort of VM in the cloud

3. Furthermore, the industry evolved to the era of the container, which carries the application code and its dependencies, and is scalable and lightweight without being tied to the hardware or the hypervisor. This pattern was quite innovative because it lets the same operating system kernel serve everything at once on the hardware. Taking containers in detail: the application and its dependencies packaged together are called an image, and in simple terms, a container is simply a running instance of an image. Containers use several Linux technologies, and some of them include the following:

  • Process: An instance of a running program is called a process. Every time you run a shell command, a program is run and a process is created for it. Each process in Linux has a process ID (PID) and is associated with a particular user and group account.
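To make the point above concrete, every running program really does get its own PID and a parent process that spawned it, which you can inspect from any language. A minimal Python sketch (assuming a Unix-like system):

```python
import os

# Every process has a numeric process ID (PID) assigned by the kernel,
# along with the user and group account it runs as.
pid = os.getpid()    # this interpreter's own PID
ppid = os.getppid()  # PID of the parent process that spawned it

print(f"PID={pid}, parent PID={ppid}")
```

Running the same script twice prints two different PIDs, since each run is a fresh process.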

Moving forward…

Fig 3.0 Container view with the base image glance

In building these containers, the base layer can be downloaded, like the Ubuntu layer shown in Fig 3.0 (e.g. one registry is gcr.io, Google's registry for base images). Other ways of building the container include using Cloud Build, building locally with Docker and pushing the image, and using buildpacks. Since this is not a tutorial or how-to article, I suggest you look up the documentation for more details on how to implement the process, and trust me, it's quite simple; just follow the steps. The structure of Cloud Build and how it runs is displayed in Fig 4.0 below. By definition, it is a service that executes builds on GCP infrastructure. It can import source code from Cloud Storage, Cloud Source Repositories, GitHub, or any other version control space, execute a build to the specified preferences, and produce artefacts such as Java archives and even Docker containers, as required in this piece.
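To make the base-layer idea concrete, here is a minimal, hypothetical Dockerfile that starts from an Ubuntu base image (as in Fig 3.0) and layers an application on top; the paths and start command are illustrative placeholders, not from any real project:

```dockerfile
# Base layer: pulled from a registry such as gcr.io or Docker Hub
FROM ubuntu:22.04

# Application layer: copy the app and its dependencies on top of the base
COPY ./app /opt/app
WORKDIR /opt/app

# The command a container runs when an instance of this image starts
CMD ["./start.sh"]
```

Each instruction adds a layer on top of the previous one, which is exactly the stacked structure the figure sketches.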

Cloud Build executes your build as a series of build steps, where each build step is run in a Docker container. The beauty of Cloud Build is that you can use the supported build steps or write your own custom build steps, depending on the project. You can check out the documentation for some how-to steps, and trust me, it's pretty easy.
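As a sketch of what those steps look like in practice, a minimal, hypothetical cloudbuild.yaml could build and push a container image; the project ID and image name here are placeholders, not real resources:

```yaml
# Each entry in `steps` runs inside its own Docker container.
steps:
  # Build the image using the supported docker builder step
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/my-project/my-app:latest', '.']

# Artefacts: push the built image to the registry when the build succeeds
images:
  - 'gcr.io/my-project/my-app:latest'
```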

Fig 4.0 Structure of cloud build and its running structure

Well, after embracing the idea of containers, Docker, and the other terminology, it all has to be managed and scaled on good infrastructure to meet product needs and maintain elasticity. At this point, the containers have to “hang their boots” (in my community, this means they have to hand over the baton and retire) and let the KUBERNETES domain perform its magic in container and application orchestration. Kubernetes is a container-centric, open-source environment. It manages the infrastructure on-premises or in the cloud regardless of whether there is an engineer on-site; it is built to control operations automatically. It automates deployment, scaling, logging, and other features of the application, and it offers IaaS-like flexibility in user preferences and configuration. You can use declarative configuration to have the applications do your desired wish at all times, but the imperative configuration ability is never put aside: it is used in temporary fixes and other light engagements. Kubernetes also has some key features, and they include

  • Supporting several workload types
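To give a flavour of the declarative configuration mentioned above, here is a minimal, hypothetical Kubernetes Deployment manifest; the names and image are placeholders, not a real workload:

```yaml
# Declarative config: you state the desired end state, and Kubernetes
# continuously works to keep the cluster matching it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3            # desired pod count; Kubernetes self-heals back to it
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: gcr.io/my-project/my-app:latest
```

You would apply it with `kubectl apply -f deployment.yaml`, while the imperative style (e.g. `kubectl scale deployment my-app --replicas=3`) suits the quick, temporary fixes mentioned above.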

I understand you may be asking about security and networking in containers. Well, I am as awake to those concepts as you are, but stick around; I'd cook up articles on how those work, as they are very important in container orchestration.

Well, after enjoying the privileges of Kubernetes as detailed above, there is a need for some extra automation, and that is where Google Kubernetes Engine comes in handy. I will not talk about it today, so stay around for the deep gist on how Google Kubernetes Engine works in infrastructure building (foundations), workload orchestration and, most importantly, production. But in a nutshell: since Kubernetes can be difficult to maintain, Google Kubernetes Engine is the way out. It helps deploy and manage the Kubernetes environment for applications on GCP and makes it easy to bring apps to the cloud. It is fully managed, optimized, and controlled by Google. It helps with cluster upgrades (the cluster being the system under management), and a node (a VM resource that takes part in the cluster) is auto-repaired in case of unhealthiness (which means it will delete the node and create a new one), among other marvellous features. Like I said, stick around, I have a lot of content for you.

Fig 5.0 Image of Kubernetes Logo

Now, remember, this article is not only for experts in the cloud space; even newbies can hop in and learn a lot, and that is why I make everything clear in both layman and professional terms. So if you have any questions, shoot, or you can also reach out to me on Twitter or find me on GitHub.

Thanks for reading ❤️

Please leave a comment if you have any thoughts about the topic — I am open to learning and knowledge explorations.

If this post has been helpful, do leave a clap 👏 below a few times to show your support for the author!

Nerd For Tech

From Confusion to Clarification


NFT is an Educational Media House. Our mission is to bring the invaluable knowledge and experiences of experts from all over the world to the novice. To stay up to date on other topics, follow us on LinkedIn. https://www.linkedin.com/company/nerdfortech

Samuel Arogbonlo

Written by

A writer for Cloud and DevOps with a sprinkle of other interesting software concepts.

