Kubernetes Clusters: Understanding Nodes, Pods, and Services

Prateek Kumar
4 min read · Jan 26, 2023


Welcome to the second post in our series “Mastering Kubernetes: A Comprehensive Guide to Container Orchestration”. In the previous post, “Getting started with Kubernetes: An Introduction to Container Orchestration”, we introduced the basic concepts of Kubernetes and how it manages and scales containerized applications. In this post, we dive deeper into the components that make up a Kubernetes cluster: nodes, pods, and services. By the end of this post, you will have a better understanding of how these components work together to form a functioning Kubernetes cluster.

Nodes:

A node is a worker machine in Kubernetes, either a virtual or a physical machine, depending on the cluster. Each node runs the services needed to host pods (such as the kubelet and a container runtime) and is managed by the control plane (master) components. A node can have one or more pods running on it.
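
Assuming kubectl is already configured against a running cluster, a quick way to see the nodes and what is scheduled on them is:

kubectl get nodes -o wide          # node names, status, roles, and internal IPs
kubectl describe node <node-name>  # capacity, allocatable resources, and the pods running on the node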

Pods:

A pod is the smallest and simplest unit in the Kubernetes object model and represents a single instance of a running process in the cluster. The containers in a pod always run together on the same host and share the same network namespace. Pods can contain one or more containers, and all containers in a pod share the same IP address and port space.
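
To make the “one or more containers” point concrete, here is a minimal sketch of a two-container pod; the pod name, the sidecar, and the busybox image are illustrative assumptions and not part of the example used later in this post. Both containers share the pod’s IP address and can talk to each other over localhost.

apiVersion: v1
kind: Pod
metadata:
  name: multi-container-pod        # hypothetical name, for illustration only
spec:
  containers:
  - name: app                      # main application container
    image: myapp:latest
    ports:
    - containerPort: 8080
  - name: sidecar                  # helper container sharing the same network namespace
    image: busybox:1.36
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:8080/ || true; sleep 30; done"]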

Services:

A service is an abstraction that defines a logical set of pods and a policy by which to access them. Services enable loose coupling between pods and their consumers and provide load balancing and service discovery. A service can be reached by other pods within the same cluster via a stable IP address and DNS name, and, depending on its type (for example NodePort or LoadBalancer), by external clients as well.
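
For example, inside the cluster a service is resolvable through the cluster DNS as <service-name>.<namespace>.svc.cluster.local. As a rough sketch, assuming a service named my-service in the default namespace and a pod whose image includes nslookup and wget (such as busybox), you could check name resolution and connectivity like this:

kubectl exec -it <pod-name> -- nslookup my-service.default.svc.cluster.local
kubectl exec -it <pod-name> -- wget -qO- http://my-service.default.svc.cluster.local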

Let’s take a look at how these components work together in a real-world scenario.

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: myapp:latest
    ports:
    - containerPort: 8080

The above code defines a Pod object called “myapp-pod” that runs a container based on the “myapp:latest” image and exposes port 8080. This pod object is then created on the cluster using the kubectl create command.

kubectl create -f myapp-pod.yaml
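
To confirm that the pod was scheduled and is running (assuming the myapp:latest image is available to the cluster), you can check its status and events:

kubectl get pod myapp-pod -o wide   # shows the pod's status, node, and IP
kubectl describe pod myapp-pod      # shows events, useful if the pod is stuck in Pending or ImagePullBackOff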

This single pod can then be exposed as a service using a Service object:

apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp
  ports:
  - name: http
    port: 80
    targetPort: 8080

The above code defines a Service object called “myapp-service” that selects the pods with the label “app: myapp” and exposes port 80 on a cluster-internal IP, forwarding traffic to the containers’ port 8080. This service object can then be created on the cluster using the kubectl create command.

kubectl create -f myapp-service.yaml
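
After the service is created, you can inspect the cluster IP and port it was assigned and, as one quick way to test it from your workstation, temporarily forward a local port to it:

kubectl get service myapp-service                    # shows the assigned ClusterIP and port 80
kubectl port-forward service/myapp-service 8080:80   # then try http://localhost:8080 locally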

In this example, the pod “myapp-pod” is running on a node in the cluster. The service “myapp-service” is then created and selects “myapp-pod” based on the label “app: myapp”. The service exposes port 80 on its cluster IP, which can be accessed by other pods within the same cluster; exposing it to external clients requires a service type such as NodePort or LoadBalancer (or an Ingress).

It’s important to note that clients do not access the pod directly; instead, the service provides a stable virtual IP address and DNS name and forwards traffic to one of the available pods that match the selector criteria. This allows for load balancing and automatic failover, as the service will automatically route traffic to a different pod if one becomes unavailable.
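
You can see exactly which pod IPs the service is currently routing to by inspecting its endpoints; with the single pod above there will be one entry, and adding more pods with the label app: myapp would add more:

kubectl get endpoints myapp-service     # lists the pod IP:port pairs backing the service
kubectl get pods -l app=myapp -o wide   # the pods selected by the service's label selector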

In addition to creating and managing pods and services, it’s also important to understand how to view and troubleshoot the state of a Kubernetes cluster. The kubectl command-line tool provides a number of useful commands for this purpose, such as:

  • kubectl get nodes: Lists all the nodes in the cluster
  • kubectl get pods: Lists all the pods in the cluster
  • kubectl get services: Lists all the services in the cluster
  • kubectl describe pod <pod-name>: Provides detailed information about a specific pod
  • kubectl logs <pod-name>: Shows the logs for a specific pod
  • kubectl exec -it <pod-name> -- <command>: Executes a command in a running container of a specific pod
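
Applied to the example from this post, a short troubleshooting session might look like the following; the presence of a shell in the myapp image is an assumption:

kubectl describe pod myapp-pod              # events, container states, and restart counts
kubectl logs myapp-pod -c myapp-container   # logs from the named container
kubectl exec -it myapp-pod -- sh            # open a shell inside the container, if the image provides one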

It’s also important to monitor the health of a Kubernetes cluster, and there are several tools available for this purpose, such as Prometheus and Grafana for collecting and visualizing metrics (for example CPU and memory usage) and Elasticsearch for aggregating logs. These tools can also raise alerts when certain thresholds are exceeded.
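
As a lightweight starting point before setting up a full monitoring stack, kubectl itself can report basic resource usage, provided the metrics-server add-on is installed in the cluster (an assumption here):

kubectl top nodes   # CPU and memory usage per node
kubectl top pods    # CPU and memory usage per pod in the current namespace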

In conclusion, Kubernetes clusters are made up of several core components: nodes, pods, and services. Understanding how these components work together and how to manage and troubleshoot them is essential for effectively using Kubernetes to deploy and manage containerized applications. In the next post of this series, we will discuss “Deploying Applications on Kubernetes: Best Practices and Tips” in detail.

As always, if you have any questions or feedback, please feel free to reach out.

Note: In this blog, I have used the kubectl create command to create objects, and apiVersion: v1 is used in the manifests; the apiVersion may vary based on the kind of object and the version of Kubernetes you are using.

Thanks for reading and I hope you enjoyed it.
