What Is Kubernetes (or K8s)?
Learn about the architecture of Kubernetes, how to install it, and how to deploy your first pod to K8s.
Before I talk about what Kubernetes (aka K8s) is, let’s go back in time a little. In the 1990s and earlier, the only way to deploy an application was on a physical server. But what if we wanted to scale horizontally? We would have to purchase more physical servers.
Later came the virtual machine (VM), which lives inside a physical server. With the help of a hypervisor, a single server can run what appear to be multiple machines, each with its own operating system, letting us scale our app horizontally without buying more physical servers. Technology keeps evolving, and we now have containers (often equated with Docker, although Docker is just one of several container runtimes out there).
Containers help solve (at least) two important problems in software development: the classic “it works on my machine, so why not on yours?”, and letting new developers on the team set up their development environment quickly, for example via docker-compose.
Container technology is great, but what if we want features such as auto-scaling, self-healing and zero-downtime deployments with ease? How far can we get with containers alone? That is where container orchestration comes into play. There are many container orchestrators on the market, such as Docker Swarm, OpenShift, Kubernetes and plenty more, and Kubernetes has emerged as the clear winner.
Kubernetes Architecture
Master Node (Control Plane): the brain of the Kubernetes cluster, because it implements the important features such as auto-scaling, self-healing and zero-downtime deployments. Performing those tasks requires several components:
- API Server: the frontend interface to the control plane, for both external clients and components within K8s. It exposes a RESTful API (on port 6443 by default, commonly 443 on managed clusters) to which we can post YAML files describing the desired state of the application: which container image to use, how many pod replicas to run, which port to expose, and so on. These YAML files are also called manifest files.
- Cluster Store: stores the entire configuration and state of the cluster. K8s uses etcd as its distributed key-value database.
- Controller Manager: monitors the cluster and responds to events, ensuring the current state matches the desired state. For example, if we want 5 replicas and 1 pod suddenly crashes, it automatically spins up a new pod to get back to the desired count of 5.
- Scheduler: watches the API server for new tasks and assigns them to healthy nodes. In a nutshell, it finds a healthy node to run each pod on.
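To make “desired state” concrete, here is a minimal sketch of the kind of manifest the controllers reconcile against. It uses a Deployment (covered properly in the next article); the name and labels are hypothetical, chosen just for this illustration. If one of the 5 pods crashes, the controllers notice the mismatch and create a replacement:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy          # hypothetical name for this sketch
spec:
  replicas: 5               # the desired state the controllers reconcile toward
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web-ctr
        image: nginx:latest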
Worker Node: a worker of the Kubernetes cluster; it is where the containers actually run. Here are the components required on each node:
- Kubelet: responsible for registering the node with the K8s cluster after we install it. It watches the API server for new tasks assigned to its node and executes them (spinning up pods, for example), then reports back to the control plane whether it could execute each task. The terms node and kubelet are sometimes used interchangeably.
- Container Runtime: for K8s to run containers, each node needs a container runtime, such as containerd or Docker.
- Kube-proxy: runs on every node in the cluster and is responsible for local cluster networking. Each pod gets its own unique IP address on the Pod network, and kube-proxy implements local iptables or IPVS rules to handle the routing and load-balancing of traffic to those pods.
- Pod: we cannot run containers directly in K8s; they must run inside a pod. A pod wraps shared kernel namespaces, a network stack, memory and volumes. The good practice is to run a single container per pod, but in special cases, such as a sidecar pattern, a service mesh or a helper container, we can inject those containers into the same pod alongside the actual business container. When a pod unexpectedly dies, K8s spins up a shiny new pod with a shiny new ID and IP address.
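As an illustration of the sidecar idea mentioned above, a pod can simply list more than one container; all of them share the pod’s network namespace and volumes. The names, images and command below are assumptions for the sketch, not a real service mesh setup:

apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar    # hypothetical name for this sketch
spec:
  containers:
  - name: app-ctr           # the actual business container
    image: nginx:latest
  - name: log-sidecar       # helper container sharing the pod's network and volumes
    image: busybox:latest
    command: ["sh", "-c", "sleep infinity"]  # placeholder; a real sidecar would ship logs, proxy traffic, etc.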
Installing Kubernetes
There are a ton of ways to install Kubernetes:
- Local: we can install Kubernetes locally using Minikube, or, the easiest way, enable Kubernetes in Docker Desktop by going to Settings -> Kubernetes -> tick the Enable Kubernetes checkbox -> Apply & Restart.
- Cloud: Google Kubernetes Engine (GKE), Azure Kubernetes Service (AKS), Amazon Elastic Kubernetes Service (EKS), Linode Kubernetes Engine (LKE).
- Playground: we can leverage the play-with-k8s platform without having to install Kubernetes on our computer.
Deploying your first pod to K8s
Usually, we never want to deploy pods directly to Kubernetes; we leverage Kubernetes objects such as Deployment, ReplicaSet, DaemonSet or StatefulSet. But in today's lesson, we will just deploy a plain pod to Kubernetes.
pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    zone: dev
    version: v1
spec:
  containers:
  - name: nginx-ctr
    image: nginx:latest
    ports:
    - containerPort: 80   # nginx listens on port 80 by default
deploying pod
$ kubectl apply -f pod.yaml
checking pod
$ kubectl get pods
deleting pod
$ kubectl delete pod nginx-pod
Now you know how to deploy a pod to Kubernetes. We will dive deeper into Kubernetes objects in the next article, Kubernetes 102.
Conclusion
My take is that if you are building an application with a microservices architecture, then Kubernetes is the right choice for deployment; but if your application is a monolith, you might want to look into serverless infrastructure such as Cloud Run or AWS Fargate instead of Kubernetes. I hope you enjoyed reading this, and happy learning.