Kubernetes Architecture Explained 🥰

Mr Mostafizur Rahman
Coinmonks
8 min read · Feb 28, 2023


Kubernetes is popularly known as K8s

Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.

The name Kubernetes originates from Greek, meaning helmsman or pilot. The abbreviation K8s results from counting the eight letters between the “K” and the “s”. Google open-sourced the Kubernetes project in 2014. Kubernetes combines over 15 years of Google’s experience running production workloads at scale with best-of-breed ideas and practices from the community.

Why is K8s so popular?

One of the primary reasons why K8s became so popular is the ever-growing demand for businesses to support their micro-service-driven architectural needs.

How Kubernetes supports microservice architectures:

  1. Scalability: Microservices architectures consist of many small, independent services that can be scaled independently. Kubernetes provides a scalable platform for managing and scaling these services based on demand.
  2. Portability: Microservices architectures require a platform that can be easily deployed and managed across different environments. Kubernetes provides a portable platform that can run on public and private clouds, on-premises data centers, and hybrid environments, making it easy to move microservices between different environments.
  3. Resource utilization: Microservices architectures require efficient resource utilization to ensure that resources are used effectively and not wasted. Kubernetes provides features such as automatic scaling, resource quotas, and resource limits, which help organizations optimize resource utilization.
  4. Service discovery and load balancing: Microservices architectures require a mechanism for services to discover and communicate with each other across the network. Kubernetes provides built-in service discovery and load balancing, making it easy to manage network traffic and distribute it evenly across all service instances.

Kubernetes provides a powerful platform for managing microservices at scale. Its scalability, portability, resource utilization, service discovery, and load balancing make it an ideal platform for organizations adopting a microservices architecture.
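
To make the scalability and resource-utilization points concrete, here is a minimal Deployment manifest. This is an illustrative sketch, not from the original article; the name `my-app` and image `my-app:1.0` are placeholders.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app               # placeholder name
spec:
  replicas: 3                # this service scales independently of others
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:1.0    # placeholder image
        resources:
          requests:          # what the scheduler reserves for the pod
            cpu: 100m
            memory: 128Mi
          limits:            # hard caps enforced at runtime
            cpu: 500m
            memory: 256Mi
```

Changing `replicas` (manually or via a HorizontalPodAutoscaler) is all it takes to scale this one microservice without touching any other.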

Fundamental Architecture Of Kubernetes Cluster:

Kubernetes follows a master-worker architecture with two types of nodes:

  • Master Nodes (Control Plane)
  • Worker Nodes or Slave Nodes

Worker Node

As a developer or K8s administrator, most of the time, you will deal with worker nodes. Whether you have to deploy your containerized app, autoscale it, or roll out any new app update on your production-grade server, you will often deal with worker nodes.

The role of the worker node is to execute the application workloads defined by the Kubernetes control plane. When a new workload is created or scaled up, the control plane schedules the workload to run on one or more worker nodes, based on available resources and other constraints.

Every worker node runs these key components:

  • Container Runtime
  • kubelet
  • kube-proxy
  • Pods

Container Runtime:

Every microservice module (micro-app) you deploy is packaged into a pod, and its containers need a runtime to execute. Therefore, a container runtime must be installed on each worker node in the cluster so Pods can run there.

Some examples of container runtimes are:

  • containerd
  • CRI-O
  • Docker (via cri-dockerd, since built-in Docker support was removed in Kubernetes 1.24)

Kubelet:

kubelet is the primary node agent of the worker node. It interacts with both the node itself and the containers running on that node.

The main functions of the kubelet service are:

  1. Registers the node it’s running on by creating a node resource in the API server.
  2. Continuously monitors the API server for pods scheduled to the node.
  3. Starts the Pod’s containers using the configured container runtime.
  4. Continuously monitors running containers and reports their status, events, and resource consumption to the API server.
  5. Runs container liveness probes, restarts containers when the probes fail, terminates containers when their Pod is deleted from the API server, and notifies the server that the Pod has terminated.

The kubelet is the primary and most important agent in Kubernetes. It is responsible for driving the container execution layer, typically through a CRI runtime such as containerd.
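
The liveness-probe behavior described above can be sketched in a Pod spec. This is an illustrative fragment; the image and health-check path are placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probed-pod            # placeholder name
spec:
  containers:
  - name: web
    image: nginx:1.25         # example image
    livenessProbe:
      httpGet:                # the kubelet performs this GET itself
        path: /healthz        # assumed health endpoint
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
      failureThreshold: 3     # restart after 3 consecutive failures
```

When the probe fails `failureThreshold` times in a row, the kubelet kills the container and restarts it according to the Pod’s restart policy.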

Kube-proxy:

A K8s cluster can have multiple worker nodes, and each node can run multiple pods. Traffic destined for these pods is routed through kube-proxy.

kube-proxy is a network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept.

To access Pods via K8s Services, there are network rules that allow network communication to your Pods from network sessions inside or outside of your cluster. These rules are maintained by kube-proxy.

kube-proxy forwards network traffic efficiently, typically by programming iptables or IPVS rules rather than proxying packets itself, which minimizes overhead and makes service communication more performant.
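
As an illustration, here is a minimal Service manifest that kube-proxy implements on every node; the names and ports are placeholders.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-svc            # placeholder name
spec:
  selector:
    app: my-app               # matches the labels on the backend pods
  ports:
  - port: 80                  # stable virtual port clients connect to
    targetPort: 8080          # assumed container port on each pod
```

Clients talk to the Service’s stable cluster IP on port 80; kube-proxy’s rules load-balance each connection across whichever pods currently match the `app: my-app` selector.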

Pods

A pod is one or more containers that logically belong together. Pods run on nodes and act as a single logical unit, so their containers share the same context. Containers in a pod share the same IP address and can reach one another via localhost, and they can also share storage volumes. A pod always runs on a single node, but one node can run multiple pods.
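
A multi-container pod sharing storage can be sketched as follows. This is an illustrative example; the names, images, and commands are placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-pod            # placeholder name
spec:
  volumes:
  - name: shared-data
    emptyDir: {}              # scratch volume visible to both containers
  containers:
  - name: writer
    image: busybox:1.36
    command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: reader
    image: busybox:1.36
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data        # same files; same IP; reachable via localhost
```

Both containers see the same files under `/data` and share one network namespace, which is exactly the “logical unit” behavior described above.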

So far, we have seen that the above components must be installed and running on your worker nodes in order to manage your containerized applications efficiently.

But then

  • Who manages these worker nodes, to ensure they are always up and running?
  • How does the K8s cluster know which pods should be scheduled and which ones should be dropped or restarted?
  • How does the k8s cluster know the resource level requirements of each container app?

You rightly guessed it: the Master Node.

Master Node in K8s cluster:

The master node, also known as the control plane, is responsible for managing the worker nodes efficiently. It interacts with the worker nodes to:

  • Schedule the pods
  • Monitor the worker nodes/Pods
  • Start/restart the pods
  • Manage the new worker nodes joining the cluster

Master Node Services

Every master node in the K8s cluster runs the following key processes:

  • API Server
  • kube-controller-manager
  • Scheduler
  • etcd

API Server

It is the main gateway to access the k8s cluster. It acts as the primary gatekeeper for client-level authentication; in other words, the kube-apiserver is the front end for the Kubernetes control plane.

So whenever you want to

  • Deploy any new app
  • Schedule any pods
  • Create any new services
  • Query the status or health of your worker nodes

You need to request the API server of the master node, which in turn validates your requests before you get access to the processes in worker nodes.

The kube-apiserver is designed to scale horizontally; that is, it scales by deploying more instances. You can run several instances of kube-apiserver and balance traffic between them.
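
Every kubectl command is ultimately a REST call against the API server. The sketch below shows how a few resource kinds map to API paths; it is an illustration of the URL scheme only (real clients discover these mappings from the server), not part of any Kubernetes client library.

```python
def api_path(kind: str, namespace: str = "default", name: str = "") -> str:
    """Build the REST path the API server exposes for a few well-known resources.

    Illustrative only: covers just core/v1 and apps/v1 resources.
    """
    # Core (v1) resources live under /api/v1; apps resources under /apis/apps/v1.
    groups = {
        "pods": "/api/v1",
        "services": "/api/v1",
        "deployments": "/apis/apps/v1",
    }
    base = f"{groups[kind]}/namespaces/{namespace}/{kind}"
    return f"{base}/{name}" if name else base

# `kubectl get pods` in the default namespace becomes:
print(api_path("pods"))                           # /api/v1/namespaces/default/pods
# `kubectl get deployment my-app -n prod` becomes:
print(api_path("deployments", "prod", "my-app"))  # /apis/apps/v1/namespaces/prod/deployments/my-app
```

Because every request funnels through these well-defined paths, the API server is the single place where authentication, authorization, and validation happen.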

Scheduler

In Kubernetes, the scheduler assigns workloads, or “pods,” to worker nodes based on available resources and other constraints. The scheduler is responsible for ensuring that pods are scheduled to run on nodes that can provide the resources needed for the workload, such as CPU and memory.

The scheduler operates on a continuous loop, constantly evaluating the state of the cluster and the availability of resources. It uses various algorithms to determine the best node to assign a workload to, such as bin packing or spread.
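
The difference between bin packing and spread can be shown with a toy scoring function. This is a simplified model for intuition, not the kube-scheduler’s actual algorithm, which filters and scores nodes with many plugins.

```python
def pick_node(nodes: dict, cpu_request: int, policy: str = "spread") -> str:
    """Choose a node for a pod requesting `cpu_request` millicores of CPU.

    `nodes` maps node name -> free CPU in millicores. Toy model of two
    scheduler scoring policies.
    """
    # Filtering step: drop nodes that cannot fit the request at all.
    feasible = {n: free for n, free in nodes.items() if free >= cpu_request}
    if not feasible:
        raise RuntimeError("pod is unschedulable: no node has enough free CPU")
    if policy == "spread":
        # Spread: prefer the emptiest node, balancing load across the cluster.
        return max(feasible, key=feasible.get)
    # Bin packing: prefer the fullest node that still fits, packing work
    # tightly so other nodes stay free (or can be scaled down).
    return min(feasible, key=feasible.get)

nodes = {"node-a": 900, "node-b": 300, "node-c": 500}
print(pick_node(nodes, 400, "spread"))   # node-a (most free CPU)
print(pick_node(nodes, 400, "binpack"))  # node-c (tightest feasible fit)
```

Note that `node-b` is filtered out in both cases: with only 300m free it cannot satisfy the 400m request at all.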

The scheduler takes into account various factors when assigning workloads to nodes, such as:

  1. Resource requests and limits: The scheduler checks a pod’s resource requests and limits, and schedules it to a node with enough available resources to meet those requirements.
  2. Node affinity and anti-affinity: The scheduler considers node affinity and anti-affinity rules when assigning a pod to a node. Node affinity rules specify that a pod should be scheduled on a node with certain labels or attributes, while anti-affinity rules specify that a pod should not be scheduled on a node that has certain labels or attributes.
  3. Pod priority: The scheduler considers pod priority when assigning workloads to nodes. Pods with higher priority are scheduled before those with lower priority.

Overall, the scheduler is a critical component of the Kubernetes control plane, ensuring that workloads are efficiently scheduled to nodes based on available resources and other constraints. This helps optimize resource utilization and ensures that workloads run effectively and efficiently.
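
The three factors above map directly onto fields of a Pod spec. This hypothetical manifest assumes a node label `disktype=ssd` and a PriorityClass named `high-priority` already exist in the cluster.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: constrained-pod              # placeholder name
spec:
  priorityClassName: high-priority   # factor 3: assumed PriorityClass
  affinity:
    nodeAffinity:                    # factor 2: hard node-affinity rule
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values: ["ssd"]          # only nodes labeled disktype=ssd qualify
  containers:
  - name: app
    image: my-app:1.0                # placeholder image
    resources:
      requests:                      # factor 1: scheduler needs a node with
        cpu: 250m                    # at least this much unreserved capacity
        memory: 256Mi
```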

kube-controller-manager:

The kube-controller-manager is a component of the Kubernetes control plane that manages various controllers responsible for maintaining the system’s desired state. Some of the critical controllers managed by the kube-controller-manager include:

  1. Node Controller: The Node Controller monitors the cluster nodes’ state and takes action if a node becomes unavailable. For example, if a node fails, the Node Controller will mark the node as “unhealthy” and reschedule the pods running on that node to other healthy nodes.
  2. Replication Controller: The Replication Controller ensures that the desired number of pod replicas are always running. If a pod fails or is deleted, the Replication Controller creates a new replica to maintain the desired state.
  3. Endpoint Controller: The Endpoint Controller ensures that Kubernetes services are properly configured by updating the Endpoints object, which specifies the IP addresses of the pods running the service.
  4. Service Account and Token Controllers: The Service Account and Token Controllers manage the creation and deletion of service accounts and their associated authentication tokens.
  5. Namespace Controller: The Namespace Controller is responsible for creating and deleting namespaces and ensuring that the resources in each namespace are properly isolated.

In addition to these controllers, the kube-controller-manager also performs other important tasks, such as monitoring the overall health of the control plane and detecting and responding to changes in the cluster’s configuration.

Overall, the kube-controller-manager plays a critical role in maintaining the desired state of the Kubernetes cluster, ensuring that workloads are running effectively and efficiently, and helping to optimize resource utilization.
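
All of these controllers follow the same reconciliation pattern: observe the actual state, compare it to the desired state, and act to close the gap. The sketch below is a toy model of one pass of a replication controller; the real controller watches the API server and creates or deletes Pod objects rather than mutating a list.

```python
def reconcile_replicas(desired: int, running: list) -> list:
    """One pass of a toy replication-controller reconcile loop.

    `running` is the list of live pod names; returns the corrected list.
    """
    pods = list(running)
    new_idx = 0
    while len(pods) < desired:
        # Scale up: create replacements for failed or missing replicas.
        pods.append(f"pod-new-{new_idx}")
        new_idx += 1
    while len(pods) > desired:
        # Scale down: remove surplus replicas.
        pods.pop()
    return pods

# A replica died; the controller restores the desired count of 3:
print(reconcile_replicas(3, ["pod-a", "pod-b"]))  # ['pod-a', 'pod-b', 'pod-new-0']
# Desired count was lowered to 1; surplus replicas are removed:
print(reconcile_replicas(1, ["pod-a", "pod-b"]))  # ['pod-a']
```

Running this comparison continuously is what lets Kubernetes converge on the declared state without any operator intervention.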

etcd

etcd is a distributed key-value store that is used as the primary data store for Kubernetes. It serves as the “brain” of the Kubernetes cluster, storing the configuration data and state information for all of the resources in the system. Some of the key roles of etcd in Kubernetes include:

  1. Configuration data storage: etcd stores all of the configuration data for Kubernetes resources, including pods, services, deployments, and more. This data is stored under path-like keys and is accessible to all components of the Kubernetes control plane.
  2. Cluster coordination: etcd ensures that the configuration data for the cluster is consistent and up-to-date across all members. It provides a distributed consensus mechanism (Raft) that allows all members to agree on the system’s current state.
  3. High availability: etcd is designed to be highly available, ensuring that the configuration data remains accessible and up-to-date even during node failures or network disruptions. It achieves this through distributed replication and failover mechanisms.
  4. API server communication: The Kubernetes API server communicates with etcd to retrieve and update the configuration data for the resources in the system. The API server reads from etcd to retrieve the current state of the resources and writes to etcd when changes are made.

etcd plays a critical role in the functioning of the Kubernetes cluster, serving as the primary data store and ensuring that the configuration data is consistent and up-to-date across all nodes in the system.
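
Kubernetes stores its objects in etcd under path-like keys such as `/registry/pods/<namespace>/<name>`. The toy key-value store below illustrates how a prefix read lets the API server list, say, all pods in one namespace; it is a sketch of the idea, not the etcd client API.

```python
class ToyKV:
    """Flat key-value store with prefix reads, in the spirit of etcd v3.

    etcd keys are flat byte strings; the "hierarchy" is only a naming
    convention used by Kubernetes (e.g. /registry/pods/<namespace>/<name>).
    """
    def __init__(self):
        self.data = {}

    def put(self, key: str, value: str) -> None:
        self.data[key] = value

    def get_prefix(self, prefix: str) -> dict:
        # Range read over a key prefix, like etcd's range queries.
        return {k: v for k, v in self.data.items() if k.startswith(prefix)}

store = ToyKV()
store.put("/registry/pods/default/web-1", "Running")
store.put("/registry/pods/default/web-2", "Pending")
store.put("/registry/pods/kube-system/dns-1", "Running")

# "List all pods in the default namespace" is a single prefix read:
print(sorted(store.get_prefix("/registry/pods/default/")))
# ['/registry/pods/default/web-1', '/registry/pods/default/web-2']
```

This naming convention is why the API server can serve namespaced list and watch operations efficiently: each one translates to a single contiguous key-range read against etcd.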

Overall, Kubernetes is a powerful and flexible platform for deploying and managing containerized applications, making it easier for developers to build and deploy applications in any environment. It provides scalability, portability, automation, flexibility, and an open-source ecosystem, making it a popular choice for container orchestration and management.
