Kubernetes — A Ship to Sail

Sarthak · Published in CodeX · Sep 2, 2023 · 4 min read

We have discussed containerization and Docker, and I hope you all got something out of it. Now let us start something more advanced and more interesting: yes, Kubernetes! Also known as K8s, a ship for all your containers.

Now, let us understand something here: Docker is used to build and run containers, while Kubernetes is used to manage them. Suppose you work in an organization that runs multiple containers, say around 50. How will you manage all of them? How will they communicate with each other? How will users reach the services running on them? So many questions, and the answer to all of them is one: Kubernetes. Kubernetes is a little complex, but it is not hard; once you get a particular concept, the practical for it is very easy to perform. Do not worry, I will be beside you through everything and will try my best to explain it in a way that makes you understand and love K8s. So, let's dive right in.

K8s is an open-source container management tool that automates container deployment, scaling, and load balancing, and manages all your containers. It schedules and runs isolated containers on virtual, physical, or cloud machines, and all the top cloud providers support Kubernetes. Google developed internal systems called Borg (and later Omega) to deploy and manage thousands of Google applications and services on its clusters. In 2014, Google introduced Kubernetes, an open-source platform written in Golang, and later donated it to the CNCF (Cloud Native Computing Foundation). I guess that is enough theory about K8s' past. There are many features that K8s gives us (we will try a few of them out with kubectl right after this list):

  • Orchestration (clustering any number of containers running on different networks)
  • Autoscaling (vertical & horizontal)
  • Load Balancing
  • Platform Independent (cloud/virtual/physical)
  • Fault Tolerance (node/pod failure)
  • Rollback (going back to the previous version)
  • Health Monitoring of containers
  • Batch Execution (one-time, sequential, parallel)
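
To make a few of these features concrete, here is a tiny, hypothetical kubectl session. The deployment name web and the nginx image are just placeholders, it assumes you already have a cluster with kubectl configured, and the --replicas flag on create deployment needs a reasonably recent kubectl:

    # create a deployment with 3 replicas (scheduling + orchestration)
    kubectl create deployment web --image=nginx --replicas=3

    # scale it manually, or let K8s autoscale it based on CPU usage
    kubectl scale deployment web --replicas=5
    kubectl autoscale deployment web --min=3 --max=10 --cpu-percent=80

    # roll out a new image, then roll back to the previous version
    kubectl set image deployment/web nginx=nginx:1.25
    kubectl rollout undo deployment/web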

K8s works on a simple model, a master-worker relationship, and this relationship exists in the form of a cluster. Let me step aside from K8s for a moment: I want you all to know about containerd too, because Kubernetes recently removed the dockershim, so it no longer talks to the Docker daemon directly and instead uses container runtimes such as containerd natively. It is easy, and I will cover it in this series of blogs on K8s. So, coming back, let us now discuss the architecture of Kubernetes.
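
If you are curious which runtime your own cluster uses, one quick way to check (assuming kubectl is already configured against a running cluster) is:

    # the CONTAINER-RUNTIME column shows something like containerd://1.6.x
    kubectl get nodes -o wide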

K8s Architecture. The black boxes represent containers running inside the pods. Don't worry, we will see what pods are in future blogs.

This link will take you to the architecture of K8s; you may have to zoom in a little. Let us start with the master node.

Master Node/Control Plane

The control plane is the set of processes that run on the master node and manage the cluster. The components of the master node/control plane are (we will peek at them on a running cluster right after this list):

  • Kube-API Server
  • Kube-Scheduler
  • etcd
  • Kube-Controller-Manager
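
On a kubeadm-based cluster, these components run as static pods in the kube-system namespace, so you can list them like this (your pod names will differ, since they are suffixed with the master node's hostname):

    # shows kube-apiserver-*, kube-scheduler-*, kube-controller-manager-* and etcd-* pods
    kubectl get pods -n kube-system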

We will learn about each of them, but first, let us discuss the roles of the master node. The master node is responsible for:

  • Scheduling containers to worker nodes
  • Monitoring the health of the cluster
  • Providing the interface (the Kube-API server) through which users and tools manage the cluster

The worker nodes, on the other hand, are responsible for running the containers that the master node schedules onto them.
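
You can see this split on any cluster with a quick kubectl call (assuming kubectl is already configured for your cluster):

    kubectl get nodes
    # the ROLES column shows 'control-plane' for master nodes;
    # worker nodes show <none> by default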

Now let us discuss these components.

Kube-API Server

  • This is used for all the communications that take place in the cluster.
  • It interacts directly with the user.
  • It is meant to scale automatically as per the load.
  • It also directly interacts with etcd.
  • Kubeadm deploys the kube-apiserver as a pod in the kube-system namespace on the master node.
  • Kubeadm is a tool that helps you create and manage Kubernetes clusters. It is a command-line tool that can be used to install and configure Kubernetes on a single node or on a multi-node cluster.
  • We can use kubectl, or we can make direct API calls to the kube-apiserver like so:
    — curl -X POST /api/v1/namespaces/default/pods
  • If you have deployed the cluster manually, i.e. not using kubeadm, then you can view the kube-apiserver service definition at
    — /etc/systemd/system/kube-apiserver.service
  • You can also see the running process and the effective options by listing the processes.
    — ps -aux | grep kube-apiserver
  • We can think of the API server as the front end of the control plane.
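
A raw curl like the one above needs the API server's address and credentials, so a convenient trick while learning is to let kubectl open an authenticated local proxy for you (port 8001 is just an example):

    # open a local, already-authenticated tunnel to the kube-apiserver
    kubectl proxy --port=8001 &

    # now plain HTTP calls hit the real API paths
    curl http://localhost:8001/api/v1/namespaces/default/pods
    curl http://localhost:8001/healthz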

Kubectl

kubectl is a command-line tool for managing Kubernetes clusters. It allows you to create, delete, and manage resources in your cluster, such as pods, services, and deployments. It also provides a way to interact with the Kubernetes API.
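
To give you a feel for it, here are a few everyday commands (resource and file names are placeholders; this assumes your kubeconfig already points at a cluster):

    kubectl get pods -A                  # list pods in all namespaces
    kubectl describe pod <pod-name>      # detailed state and recent events
    kubectl apply -f my-app.yaml         # create/update resources from a manifest
    kubectl logs <pod-name>              # print a container's logs
    kubectl delete deployment <name>     # remove a resource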

etcd

etcd is the key-value store of Kubernetes. It is a consistent and highly available store that keeps the metadata and status of the cluster. In other words, it stores the state of the cluster: which nodes exist, which pods are running and on which nodes, which services expose them, and so on. etcd has the following features:

  • Fully replicated: The entire state is available on every node in the cluster. This means that if one node fails, the other nodes will still have the data
  • Secure: Implements automatic TLS with optional client-certificate authentication. This ensures that data is encrypted and that only authorized users can access it
  • Fast: Benchmarked at 10,000 writes per second. This means that it can handle a lot of traffic and is very responsive

Etcd is a critical component of Kubernetes. It provides the foundation for all other components in the cluster.
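
If you ever want to peek inside it: on a kubeadm cluster, etcd runs as a static pod and every Kubernetes object lives under the /registry prefix. A rough sketch, assuming the kubeadm default pod name and certificate paths (replace <master-node-name> with your control-plane node's name; paths may differ on your setup):

    kubectl -n kube-system exec etcd-<master-node-name> -- etcdctl \
      --cacert=/etc/kubernetes/pki/etcd/ca.crt \
      --cert=/etc/kubernetes/pki/etcd/server.crt \
      --key=/etc/kubernetes/pki/etcd/server.key \
      get /registry --prefix --keys-only | head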

Let us stop here, because I guess this is already quite overwhelming; starting something new always is. So, let's continue this further in another part.

Until next time! And remember to always stay curious!
