Kubernetes for dummies: introduction
Part 1 — What is Kubernetes after all?
Cheers to another successful KubeCon! As I celebrated how far this community has come, I started thinking back to when I first met Mr k8s (Kubernetes).
I have been playing with Kubernetes for a while (since 2016), back when Minikube was still taking its baby steps. There weren't a lot of good resources out there then. Today learners are in a much better place, with several good resources at hand. Even so, after giving a few talks last year, I realized there are still things we could improve.
Fear not, young padawan! With this blog series, I aim to give you enough information to start wielding the force, so you can make your way to Jedi Master by deepening your understanding of the unseen. (Disclaimer: as a fellow disciple, I will share what I know.)
Here we will start from zero. In this series, I plan to touch on the key concepts for understanding Kubernetes, and we will also talk about some of the adjacent technologies and patterns. First, let's talk about what Kubernetes is and what it's not.
Kubernetes is:
- Platform (or as people like to say: “it’s an Orchestration Layer”)
- Distributed system framework (when done right)
- Extensible & Resilient
- Open Source
Kubernetes is not:
- Simple ( … goes without swaying … *pun intended*)
- A hypervisor (but it can run on top of one)
- Containers (it runs them for you)
Also, note that there are several cool things that Kubernetes enables you to do. For example, when combined with the right adjacent tools and processes, canary releases, auto-scaling, and auto-healing become possible. We will talk about these in future posts.
In the realm of IaaS, PaaS, and SaaS, Kubernetes classifies as a hybrid. When you use Kubernetes from hyperscalers (aka "the cloud"), some responsibilities fall under the cloud provider (infrastructure) and others fall under the customer (workloads). Running a Kubernetes cluster well is difficult, since it's a complex platform with security and operational concerns. If there is interest in exploring this topic further, let me know.
We can't talk about Kubernetes without talking about containers. Back in the day, we used to run stuff on servers directly: you installed the Operating System (OS), then added your libraries and your software, et voilà. Today we call that running your applications on bare metal. Sometimes the business needed to cut costs, so multiple applications had to share the same server; it wasn't pretty.
Applications would often clash and compete for resources. To solve that problem (among other things), someone thought about sharing the hardware while enforcing isolation with multiple operating systems. Thus hypervisors became a thing, giving rise to virtual machines. Further down the road, we ran into dependency management issues. "But it runs on my machine!" was the complaint Docker set out to answer when it came into the world and made containers mainstream. Containers share the host kernel but package their own dependencies and application runtime, creating a little ~container~ inside the machine. They can be deployed on a VM or on bare metal; the container won't know the difference.
As the saying goes, sometimes a picture is worth a thousand words:
Containers use Linux features such as namespaces and cgroups to achieve a certain level of separation in compute, storage, and network, and to limit resource usage. To run a container, you first need to build an image as defined by the OCI specification. Then you need a container runtime; this is where dockerd and containerd fit into the picture.
When you build a container image, you have to tell the packaging software what to include and which configuration steps to perform. Normally that is done via a Dockerfile and a build tool that packages the image and pushes it to a registry. Recently, alternatives such as buildpacks have become available, but Dockerfiles remain the most widely used option. A registry is a sort of online catalogue for container images, such as Docker Hub.
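As a minimal sketch (the base image, file names, and tag below are hypothetical, just for illustration), a Dockerfile might look like this:

```dockerfile
# Start from a small base image (hypothetical Python example app)
FROM python:3.12-slim
WORKDIR /app
# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Add the application itself and define its start command
COPY app.py .
CMD ["python", "app.py"]
```

You would then build and publish it with something like `docker build -t myuser/hello-app:1.0 .` followed by `docker push myuser/hello-app:1.0` (the image name is made up; yours would point at your own registry account).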
After your container is packaged and ready to roll, Kubernetes gets into the scene.
At the same time all this was happening with the way we run our applications, people started realizing that separation of concerns is a great thing to have! Heck, in 2008 I was designing systems that would today be considered "microservices", though the term didn't exist back then.
Conceptually, both monoliths and microservices are valid system design patterns, and both have their place, with pros and cons. The monolith served the bare-metal/VM era well, since everything was packaged as a unit and meant to run on the same machine. Microservices, on the other hand, split the logical services into separate packages, achieving a degree of separation of concerns that serves the container era pretty well. Nonetheless, don't be fooled: everything can be misused. Just as monoliths have historically been abused, so can microservices. They solve many of the monolith's issues, but they come with their own bundle of challenges. Remember, both patterns are equally valid, but we will focus on microservices here.
One such challenge: how do you efficiently and reliably run many containers in a large-scale production environment? Things like resiliency, observability, and upgradability are top concerns. Those are the problems Kubernetes set out to solve.
If you are thinking of Kubernetes (or k8s for short), it's likely because you want to provide a production-grade runtime for your containers, as we just talked about above, and to simplify maintenance and operational responsibilities.
Kubernetes will take care of the shared infrastructure for your containers. For a very high-level view, think of a k8s cluster as composed of control plane nodes and worker nodes. For high availability, the control plane runs on multiple nodes.
The control plane is the brain of the cluster, and it manages the worker nodes accordingly. There are key components within the control plane that are responsible for different aspects of a node's lifecycle, and we will talk more about those in a future post.
The nodes are typically responsible for running the workloads, and there are several constructs wrapped around a container that enable Kubernetes to run it. One such construct is the Pod, which we will deep dive into soon.
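To give a taste of what's coming (the names here are illustrative, not from any real cluster), a minimal Pod manifest looks something like this:

```yaml
# A minimal Pod: a single container wrapped in the smallest
# deployable unit Kubernetes knows how to schedule
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
spec:
  containers:
    - name: hello
      image: nginx:1.27   # any OCI image from a registry works here
```

Handing this file to the cluster with `kubectl apply -f pod.yaml` asks the control plane to schedule the Pod onto one of the worker nodes.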
Stay tuned for the next post, where we will look under the hood of the control plane and the nodes to see how they work. Follow me on Twitter (@fawix) or LinkedIn so you won't miss the update! May the containers be with you!