
Kubernetes

Avindu Dharmawardhana
9 min read · Jan 31, 2024


What is Kubernetes?

Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation.

As discussed in previous articles, containerization is handled with the help of Docker.

Before moving further, let’s get familiar with some key terms in Kubernetes.

Every application system is built on some architecture. In Kubernetes, the basic architecture is the cluster architecture.

Cluster architecture

Cluster architecture helps a system handle a large number of varied requests. The application is organized across multiple nodes that execute tasks to achieve the desired outcome.

In cluster architecture, requests, or parts of a user request, are divided among two or more computer systems, so that a single user request is handled and delivered by two or more nodes. The benefits are unquestionably load balancing and high availability.

According to the above explanation, we can see that building a multi-component application system on a clustered architecture improves the efficiency of the whole system.

figure: Kubernetes architecture

In the image shown, there are two worker nodes in the system, each containing multiple containers, a kubelet and a kube-proxy.

Let’s look at the components in the image above in more detail.

In a cluster there are two main components:

1. Control plane

2. Worker nodes

The control plane is responsible for managing the state of the cluster. A single control plane will manage multiple worker nodes.

figure: Control plane

When a client interacts with the application, the control plane receives the request through a REST API, and the API server then coordinates the rest of the cluster to fulfil it.

etcd is a distributed key-value store that stores the cluster’s persistent state. It is accessed by the API server and the other components of the control plane.

figure: handling multiple etcd instances

The scheduler is responsible for scheduling pods onto worker nodes in the cluster. It manages work distribution by analyzing the resources each pod requests and the capacity available on each node.
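As a sketch of what the scheduler looks at, a container can declare resource requests in its Pod spec; the values below are illustrative, not from the article:

```yaml
# Fragment of a Pod spec. The scheduler only places the Pod on a node
# that still has at least this much unreserved CPU and memory.
resources:
  requests:
    cpu: "250m"      # 250 millicores, i.e. a quarter of one CPU core
    memory: "64Mi"
```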

The controller manager is responsible for running the controllers that manage the state of the cluster.

Worker nodes run the containerized application workloads. Each container runs inside a pod.

Pods are the smallest deployable units in Kubernetes; they provide shared storage and network configuration to their containers. All pods are managed by the control plane.
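For reference, a minimal Pod manifest looks roughly like this (the name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: web
      image: nginx:1.25      # illustrative container image
      ports:
        - containerPort: 80
```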

Each worker node runs pods and their containers, a kubelet, a container runtime and a kube-proxy.

The kubelet is responsible for communicating with the control plane. It receives instructions from the control plane about which pods need to run on the node.

The container runtime runs the containers on the worker node. It is responsible for pulling container images from a registry, starting and stopping containers and managing the containers’ resources.

figure: Container runtime

Kube-proxy is a network proxy that runs on each worker node. It is responsible for routing traffic to the correct pod. It also provides load balancing and ensures that traffic is distributed evenly across the pods.
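Kube-proxy implements this routing for Service objects. A minimal Service that spreads traffic across a set of pods might look like this (label and port values are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web            # traffic is balanced across all Pods carrying this label
  ports:
    - port: 80          # port the Service listens on
      targetPort: 8080  # port the containers serve on
```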

In the information discussed above, there is an important term: “load balancing”.

Let’s talk about the concept of load balancing in more detail, to understand more about Kubernetes.

In a network, multiple requests come in from clients while multiple replies go out from servers at the same time.

A distributed network carries a large load at times like that.

“Load balancing is the method of distributing network traffic equally across a pool of resources that support an application.” Modern applications must process millions of users simultaneously and return the correct text, video, image or whatever file is needed.

Load balancers are the components used to perform load balancing.

A load balancer is a device that sits between the user and the server group and acts as an invisible facilitator, ensuring that all resource servers are used equally.

figure: load balancer

There are several types of load balancers.

1. Application load balancing

Application load balancers look at the request content, such as HTTP headers or SSL session IDs, to redirect traffic.

figure: Application load balancer

2. Network load balancing

Network load balancers examine IP addresses and other network information to redirect traffic optimally.

They track the source of the application traffic and can assign a static IP address to several servers. These load balancers use static and dynamic load balancing algorithms to balance server load.

figure: Network load balancer

3. Global server load balancing

Global server load balancing occurs across several geographically distributed servers. They attempt to redirect traffic to a server destination that is geographically closer to the client. They might redirect traffic to servers outside the client’s geographic zone only in case of server failure.

figure: Global load balancer

4. DNS load balancing

In DNS load balancing, you configure your domain to route network requests across a pool of resources on your domain. A domain can correspond to a website, a mail system, a print server, or another service that is made accessible through the internet.

DNS load balancing is helpful for maintaining application availability and balancing network traffic across a globally distributed pool of resources.

figure: DNS load balancer
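As a sketch, DNS load balancing can be as simple as publishing several A records for the same name; resolvers then rotate through the answers (the addresses below are from the reserved documentation range, purely illustrative):

```dns
; round-robin: two A records for the same hostname
www  IN  A  192.0.2.10
www  IN  A  192.0.2.11
```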

Now let’s look at some benefits of load balancing.

* Application availability
* Application scalability
* Application security
* Improved application performance

minikube deployment

minikube is a tool that facilitates the setup of a single-node Kubernetes cluster on a local machine.

minikube is designed for developing applications with Kubernetes without the need for a full-scale, multi-node cluster.

Step 1
Install minikube on the local machine. The installation depends on the operating system of the target machine.
minikube can be installed using the link provided or the PowerShell command given below (make sure that PowerShell runs as administrator).

New-Item -Path 'c:\' -Name 'minikube' -ItemType Directory -Force
Invoke-WebRequest -OutFile 'c:\minikube\minikube.exe' -Uri 'https://github.com/kubernetes/minikube/releases/latest/download/minikube-windows-amd64.exe' -UseBasicParsing

Step 2
Add the directory where minikube.exe is installed to the PATH.

$oldPath = [Environment]::GetEnvironmentVariable('Path', [EnvironmentVariableTarget]::Machine)
if ($oldPath.Split(';') -inotcontains 'C:\minikube') {
  [Environment]::SetEnvironmentVariable('Path', $('{0};C:\minikube' -f $oldPath), [EnvironmentVariableTarget]::Machine)
}

Step 3

Starting the cluster

minikube start

Step 4

Starting the deployment

# Run a test container image that includes a webserver

kubectl create deployment hello-node --image=registry.k8s.io/e2e-test-images/agnhost:2.39 -- /agnhost netexec --http-port=8080

kubectl create will create a Deployment that manages the Pod.

As mentioned at the start, the Pod runs the container based on the provided image.

Step 5

kubectl get deployments

Step 6

kubectl get pods
Above command output shows the details of the pods.

Step 7

kubectl get events

Above command output shows the cluster events.

Step 8

kubectl config view

Above command output shows the kubectl configuration.

Step 9

kubectl logs hello-node-5f76cf6ccf-br9b5

Above command output shows the logs of the container included in the pod.

Config maps and secrets

In Kubernetes, ConfigMaps and Secrets are two essential resources used to manage configuration data and sensitive information, respectively.
They allow you to decouple configuration details and sensitive data from the application code, making it easier to manage, update and secure.

Create configmaps

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  key1: value1
  key2: value2

The above YAML definition is used to create a ConfigMap.

Create a configmap from a local directory

mkdir -p configure-pod-container/configmap/
# Download the sample files into `configure-pod-container/configmap/` directory
wget https://kubernetes.io/examples/configmap/game.properties -O configure-pod-container/configmap/game.properties
wget https://kubernetes.io/examples/configmap/ui.properties -O configure-pod-container/configmap/ui.properties

# Create the ConfigMap
kubectl create configmap game-config --from-file=configure-pod-container/configmap/

The official documentation shows various other ways to create a ConfigMap.

Display details of a configmap

kubectl describe configmaps game-config

The output of the above command will look like this:

Name:         game-config
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
game.properties:
----
enemies=aliens
lives=3
enemies.cheat=true
enemies.cheat.level=noGoodRotten
secret.code.passphrase=UUDDLRLRBABAS
secret.code.allowed=true
secret.code.lives=30
ui.properties:
----
color.good=purple
color.bad=yellow
allow.textmode=true
how.nice.to.look=fairlyNice

Consuming configmaps

ConfigMaps can be consumed in many different ways in a pod, such as environment variables, command-line arguments, or as files in a volume.

# configmap consuming
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: mycontainer
      image: myimage
      envFrom:
        - configMapRef:
            name: my-configmap
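The same ConfigMap can instead be mounted as files in a volume, as mentioned above; each key becomes a file under the mount path (the pod and image names are illustrative):

```yaml
# consuming a configmap as a volume
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: mycontainer
      image: myimage
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config   # key1 and key2 appear as files here
  volumes:
    - name: config-volume
      configMap:
        name: my-configmap
```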

Create secrets

Unlike ConfigMaps, Secrets are base64-encoded and can be encrypted at rest to provide an additional layer of security.

# secret creation
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
type: Opaque
data:
  username: <base64-encoded-username>
  password: <base64-encoded-password>
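The base64 values in the data section can be generated from any shell; the credentials below are illustrative:

```shell
# Encode example credentials for use in a Secret manifest.
# printf (not echo) avoids including a trailing newline in the encoding.
printf 'admin' | base64    # YWRtaW4=
printf 's3cr3t' | base64   # czNjcjN0
```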

Consuming secrets

Secrets can also be consumed similarly to ConfigMaps.

# consuming secrets
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: mycontainer
      image: myimage
      env:
        - name: MY_USERNAME
          valueFrom:
            secretKeyRef:
              name: my-secret
              key: username
        - name: MY_PASSWORD
          valueFrom:
            secretKeyRef:
              name: my-secret
              key: password

The definition above shows how individual keys of a Secret are exposed to a container.

Monitoring, Logging and Debugging

Monitoring, logging and debugging are crucial aspects of managing applications in a Kubernetes cluster.
Each plays a distinct role in ensuring the health, performance and reliability of the containerized workloads.
Monitoring is the process of tracking the performance of the application and of the Kubernetes infrastructure.
It helps with detecting issues, optimizing resource usage and ensuring that applications run as expected.

Prometheus and Grafana

Prometheus is a widely used monitoring system in the Kubernetes ecosystem.
It collects metrics from various sources, including applications and the Kubernetes API server.
Grafana is a tool used in conjunction with Prometheus to create dashboards and visualize monitoring data.
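As a sketch of how Prometheus is pointed at a cluster, a minimal scrape configuration using Kubernetes service discovery might look like this (in practice, setups often use the Prometheus Operator or a Helm chart instead):

```yaml
# prometheus.yml fragment: discover and scrape the Kubernetes API server
scrape_configs:
  - job_name: 'kubernetes-apiservers'
    scheme: https
    kubernetes_sd_configs:
      - role: endpoints     # discover targets from Endpoints objects
    tls_config:
      ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
```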

figure: Monitoring clusters using grafana
figure: Grafana working
figure: Monitoring clusters

Networking in Kubernetes

Networking in Kubernetes plays a major role in managing and deploying containerized applications.

The networking model involves pods, services, the container network interface (CNI), overlay networks, network policies and DNS resolution.

Networking in Kubernetes assigns a unique IP address to each pod, uses services as a communication abstraction, employs CNI plugins and overlay networks for inter-node communication, relies on kube-proxy to manage networking rules, and leverages ingress for external access.

These components work together to create a flexible and scalable networking model for containerized applications in a Kubernetes cluster.
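As an example of one of those components, a NetworkPolicy restricts which pods may talk to which; the sketch below (labels and port are illustrative) allows only frontend pods to reach backend pods:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
spec:
  podSelector:
    matchLabels:
      app: backend            # the policy protects backend pods
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend   # only frontend pods may connect
      ports:
        - protocol: TCP
          port: 8080
```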

Helm

Helm is a package manager for Kubernetes applications that simplifies the process of defining, installing and upgrading even the most complex Kubernetes applications.

figure: Helm architecture

Helm provides Helm charts to define, install and upgrade Kubernetes applications using a collection of pre-configured resources packaged together.

A Helm chart encapsulates all the information needed to deploy a specific application, including services, deployments, ConfigMaps and other resources.
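Every chart starts with a Chart.yaml describing its metadata; a minimal example (names and versions are illustrative):

```yaml
apiVersion: v2               # Helm 3 chart format
name: my-app
description: A Helm chart for deploying my-app
version: 0.1.0               # version of the chart itself
appVersion: "1.0.0"          # version of the application it deploys
```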

Helm charts are stored in a unit called a ‘repository’, which is an HTTP server serving an index.yaml file that describes the available Helm charts.

figure: Contrast between with and without Helm

Each Helm chart can be instantiated as a ‘release’, which is an instance of the chart deployed in a given cluster.

figure: Helm instance

Chart hooks allow the user to execute custom logic at various points in the deployment process such as pre-install, post-install, pre-upgrade, and post-upgrade.
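Hooks are declared as annotations on ordinary resources. A sketch of a pre-install hook (the Job name and image are illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
  annotations:
    "helm.sh/hook": pre-install          # run before the release is installed
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: my-app-migrations:1.0.0   # illustrative migration image
```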

Helm is mainly used to simplify complex configurations in deployments.
It enhances the reusability of application configurations, making it easier to share and distribute applications across different environments and clusters.

figure: structure of a Helm chart
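The typical layout produced by `helm create` is:

```
mychart/
  Chart.yaml        # chart metadata (name, version, description)
  values.yaml       # default configuration values
  charts/           # chart dependencies
  templates/        # templated Kubernetes manifests
    deployment.yaml
    service.yaml
    _helpers.tpl    # reusable template snippets
```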
