The Layman’s Guide to Kubernetes: Understanding Containerization and More — Part 2

Varsha Das
Published in Javarevisited
Apr 24, 2023 · 14 min read

Welcome to Part 2 of “The Layman’s Guide to Kubernetes: Understanding Containerization and More.” In our previous article, we introduced the concept of containerization and how it has transformed modern software development.

Link to Part 1 — https://medium.com/javarevisited/the-laymans-guide-to-kubernetes-understanding-containerization-and-more-f48ef16d3f8f

In Part 2, we take a deep dive into Kubernetes and break down its key components, such as pods, deployments, and services, in simple terms. We’ll demystify these concepts and explain how they fit into the larger picture of container management. We’ll also cover more advanced topics like rolling updates, rollbacks, replica sets, and liveness and readiness probes in Kubernetes deployments.

Whether you’re a software developer, DevOps enthusiast, project manager, or just someone curious about containerization and Kubernetes, this article is for you. Our goal is to make Kubernetes accessible to everyone, regardless of technical expertise.

Join us as we continue our journey to demystify Kubernetes and unlock its potential for managing containerized applications with ease. So let’s jump right in!

Quick glance at what we have covered in this article:

Explain Stateful sets in Kubernetes.

What are the differences between a pod, a deployment, and a service in Kubernetes?

What is the difference between a replica set and a replication controller?

What are the main features of Kubernetes as an orchestration tool?

Explain the concept of rolling updates and rollbacks in Kubernetes.

What do you understand by “load balancer” in Kubernetes?

How do you deploy an application to a Kubernetes cluster?

What are liveness and readiness probes in Kubernetes and why are they important?

What is the difference between a DaemonSet and a Deployment in Kubernetes?

Let’s begin:

What are the main features of Kubernetes as an orchestration tool?

Kubernetes is like a conductor who manages a group of musicians, but instead of musicians, it manages a bunch of computer programs called “containers.” Here are some things that Kubernetes can do:

  • It can automatically add or remove containers depending on how many people are using your app. This helps make sure your app is always fast enough.
  • It can distribute the work evenly among the containers, so none of them get too overloaded. This helps make sure your app doesn’t slow down or crash.
  • If one of the containers stops working, Kubernetes can automatically replace it with a new one. This helps make sure your app keeps running even if something goes wrong.
  • It makes it easy to put your app in different places, like testing or production, without having to do a lot of work. This helps make sure your app works well in all those different places.
  • It can run on different types of computers, so you can put your app wherever you want. This helps make sure your app can be used by lots of people.
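The first point, automatic scaling, is typically expressed as a HorizontalPodAutoscaler. Here is a minimal sketch; the deployment name `my-app` is a placeholder for illustration:

```yaml
# Hypothetical autoscaler: keeps between 2 and 10 replicas of "my-app",
# adding or removing pods to hold average CPU usage near 50%.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
```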

Overall, Kubernetes makes it easier for developers to make sure their apps work well and stay running, so people can use them without any problems.

How do you deploy an application to a Kubernetes cluster?

To deploy an application to a Kubernetes cluster, you typically follow these steps:

  1. Create a Docker image of your application and push it to a Docker registry, such as Docker Hub or Google Container Registry.
  2. Write a Kubernetes deployment configuration file in YAML format that specifies the details of your application deployment, including the Docker image, the number of replicas to run, and any required environment variables or volumes (we will see this in upcoming articles on k8s commands).
  3. Apply the deployment configuration file to the Kubernetes cluster using the kubectl command line tool. This will create a Kubernetes deployment object in the cluster, which will manage the deployment of your application.
  4. Optionally, create a Kubernetes service object to expose your application to the network. The service object can be used to load balance traffic to your application and provide a stable IP address and DNS name.
  5. Monitor the deployment using Kubernetes tools such as kubectl or the Kubernetes dashboard. You can view logs, check the status of running pods, and troubleshoot any issues that arise during deployment.
  6. Once your application is deployed, you can make changes to the deployment configuration file to update the application, or use rolling updates to deploy new versions of the application without downtime.
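As a minimal sketch of steps 2 and 3, here is what such a deployment configuration file might look like; the image name `my-registry/my-app:1.0` is a placeholder for the image you pushed in step 1:

```yaml
# deployment.yaml — runs 3 replicas of the application image.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-registry/my-app:1.0   # placeholder: image pushed in step 1
          ports:
            - containerPort: 8080
```

You would then apply it with `kubectl apply -f deployment.yaml` and check progress with `kubectl get pods`.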

Overall, deploying an application to a Kubernetes cluster involves creating Docker images, writing Kubernetes configuration files, and using Kubernetes tools to manage and monitor the deployment.

With these tools and processes in place, Kubernetes provides a powerful platform for deploying and managing containerized applications at scale.

What do you understand by load balancer in Kubernetes?

In Kubernetes, a load balancer is a way to distribute network traffic across multiple instances of an application running in a cluster. Load balancing is important for scaling applications and ensuring high availability by distributing traffic evenly across the available instances.

Imagine you’re at a restaurant and the waiter is serving food to many tables at the same time. If the waiter tries to carry too many plates at once, they might drop them or spill something.

A load balancer in Kubernetes is like a smart waiter who helps manage the flow of traffic between different computer programs called “containers.” Just like a waiter needs to balance the number of plates they carry, a load balancer balances the number of users or requests going to each container.

So, if lots of people are trying to use an app at once, a load balancer can make sure that the requests are evenly distributed to all the containers that are running the app. This helps make sure that no one container gets too overloaded and slows down or crashes.
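As a sketch, a Service of type LoadBalancer spreads incoming traffic across all matching pods; the names and ports here are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-lb
spec:
  type: LoadBalancer    # asks the cloud provider for an external load balancer
  selector:
    app: my-app         # traffic is spread across all pods carrying this label
  ports:
    - port: 80          # port exposed to the outside world
      targetPort: 8080  # port the containers listen on
```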

How does Kubernetes service help in load balancing — example?

A Kubernetes service is like a traffic cop for your computer programs called “containers.” It makes sure that when someone wants to use your app, they get connected to the right container that can handle their request.

Imagine you have a pizza shop with three chefs working in the kitchen. You don’t want all the orders to go to one chef and leave the others with nothing to do, so you use a service to evenly distribute the orders between all three chefs.

This is how a Kubernetes service works too. It helps make sure that requests to your app are evenly distributed between all the containers that are running it. This helps make sure that none of the containers get too busy and slow down or crash.

For example, let’s say you have three containers running your app on Kubernetes. When someone wants to use your app, the service will send their request to one of the containers. If that container gets too busy, the service can send new requests to a different container instead. This way, all the containers share the workload and your app runs smoothly.

Overall, a Kubernetes service is like a traffic cop that helps manage the flow of requests to your app’s containers, making sure that everyone gets served quickly and efficiently.

What are the roles of kubelet and kube-proxy in worker nodes?

The kubelet and kube-proxy are two key components that run on worker nodes in a Kubernetes cluster, and they play important roles in managing containerized workloads and network traffic.

Think of a Kubernetes worker node like a construction site where different workers are building a house. The kubelet and kube-proxy are like two important workers who help make sure everything runs smoothly.

The kubelet is like a foreman who manages the work of each worker. It makes sure that each container in a worker node is running as it should be, and that the containers are communicating with the other nodes. The kubelet also communicates with the Kubernetes master node to get instructions and updates on what needs to be done.

The kube-proxy is like a security guard who controls access to the construction site. It makes sure that requests to the containers in the worker node are directed to the right place, and that no unauthorized requests get through.

Technically:

  • Kubelet: The kubelet is an agent that runs on each worker node and is responsible for managing the lifecycle of containers. It communicates with the API server on the master node to receive instructions for running containers, and uses the container runtime (such as Docker) to start, stop, and monitor them. The kubelet also monitors the health of containers and can take action to restart them if necessary.
  • Kube-proxy: The kube-proxy is a network proxy that runs on each worker node and is responsible for managing network connectivity between pods and services. It maintains network rules that direct traffic to the appropriate destination and provides load balancing for TCP and UDP traffic. The kube-proxy communicates with the API server on the master node to receive instructions for managing network connectivity, and then updates network rules accordingly.

What is a Kubernetes namespace?

A Kubernetes namespace is like a separate room in a big house where you can store different things. In the same way, a Kubernetes namespace is like a separate virtual environment where you can store different parts of your application.

Think of it like this:

you have a big toy collection, and you want to keep the Lego blocks separate from the action figures. So you put the Lego blocks in one room and the action figures in another room. Each room is like a namespace, and it helps you keep things organized and separate.

In Kubernetes, a namespace is a way to organize your application by dividing it into different sections or parts. This can be helpful when you have a lot of different components to your application, or when you want to separate different environments like development, testing, and production.

For example, let’s say you have a web application with a front-end and a back-end. You can create two different namespaces, one for the front-end and one for the back-end, and keep them separate. This can help you manage the components of your application more easily and reduce the chance of something going wrong.

Overall, a Kubernetes namespace is like a separate virtual environment where you can store different parts of your application, helping you keep things organized and separate.
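The front-end/back-end split above could be sketched like this; the namespace names are illustrative:

```yaml
# Two namespaces keeping the front-end and back-end separate.
apiVersion: v1
kind: Namespace
metadata:
  name: frontend
---
apiVersion: v1
kind: Namespace
metadata:
  name: backend
```

Equivalently, `kubectl create namespace frontend` creates one from the command line, and `kubectl get pods -n frontend` lists pods in just that namespace.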

What is the difference between a replica set and a replication controller?

A replica set and a replication controller are two different ways to make sure that your application has the right number of copies, or “replicas”, running at all times.

Think of it like a pizza shop where you need to make sure that you always have enough pizzas ready for customers. A replica is like a pizza, and you need to have a certain number of pizzas ready to serve at all times. A replica set and a replication controller are like two different ways of making sure that you always have enough pizzas ready.

A replication controller is an older method for managing replicas in Kubernetes. It helps you make sure that a specific number of replicas are running at all times. For example, if you want to have 5 replicas of your application running, the replication controller will make sure that there are always 5 replicas running, and it will create new replicas if any of them fail.

A replica set is a newer, more advanced method for managing replicas in Kubernetes. It does the same job of keeping a specified number of replicas running, but it supports more expressive, set-based label selectors (for example, selecting pods whose label matches any value in a list), which gives you more flexibility and control over which pods it manages. Replica sets are also what Deployments use under the hood, which is how Kubernetes rolls out new versions of your application.

Overall, a replica set and a replication controller are two different ways to manage replicas in Kubernetes. While they both help you make sure that a specific number of replicas are running, a replica set is a more advanced method that gives you more flexibility and control over how replicas are managed.
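To make the difference concrete, here is a sketch of a ReplicaSet using a set-based selector, something a replication controller (which only supports equality-based selectors) cannot express; the labels and image are placeholders:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-app-rs
spec:
  replicas: 5
  selector:
    matchExpressions:              # set-based selector: not possible with a ReplicationController
      - key: app
        operator: In
        values: [my-app, my-app-canary]
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-registry/my-app:1.0   # placeholder image
```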

What is Kubectl?

Kubectl is a command-line interface tool that is used to interact with and manage Kubernetes clusters.

Think of it like a remote control that allows you to control and manage a group of computers that are working together to run your application.

With Kubectl, you can use simple commands to perform a variety of tasks on your Kubernetes cluster, such as creating and managing deployments, scaling replicas, checking logs, and managing Kubernetes resources like pods, services, and namespaces.

For example, if you want to create a new deployment for your application, you can use the “kubectl create deployment” command to do that. If you want to see the logs for a specific pod in your cluster, you can use the “kubectl logs” command to do that.

Overall, Kubectl is a powerful tool that allows you to manage and interact with your Kubernetes cluster through the command line, making it easier to deploy and manage containerized applications at scale.
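A few everyday kubectl commands, with placeholder resource names, look like this:

```shell
# Create a deployment from an image (placeholder names).
kubectl create deployment my-app --image=my-registry/my-app:1.0

# Scale it to 5 replicas.
kubectl scale deployment my-app --replicas=5

# List pods and stream the logs of one of them.
kubectl get pods
kubectl logs my-app-7d9f8-abcde   # pod name is a placeholder

# Inspect services and namespaces.
kubectl get services
kubectl get namespaces
```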

Explain Stateful sets in Kubernetes.

StatefulSets in Kubernetes are a way to manage the deployment and scaling of stateful applications, such as databases, in a distributed system.

A stateful workload is one where each instance has a unique identity, such as a database that stores data. In contrast, a stateless workload is one where each instance is interchangeable, such as a web server that serves requests.

Imagine you have a video game that you play with your friends online. The game has multiple servers, and each server needs to store information about the players, their progress, and other game-related data. This data is called “state” because it changes over time as players interact with the game.

StatefulSets assign a unique and stable hostname to each pod they create, based on the StatefulSet’s name plus an ordinal index.

For example, if you create a StatefulSet called “game-servers”, it may create pods with hostnames like “game-servers-0”, “game-servers-1”, and so on. These hostnames remain stable even if pods are rescheduled to different nodes or fail and are replaced, which ensures that the game data stored in those pods remains accessible and consistent.
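A sketch of that game-server example follows; the image name is a placeholder, and the pod names “game-servers-0”, “game-servers-1”, “game-servers-2” come from the StatefulSet name plus the ordinal:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: game-servers
spec:
  serviceName: game-servers        # headless service giving each pod a stable DNS name
  replicas: 3
  selector:
    matchLabels:
      app: game-server
  template:
    metadata:
      labels:
        app: game-server
    spec:
      containers:
        - name: game-server
          image: my-registry/game-server:1.0   # placeholder image
          volumeMounts:
            - name: data
              mountPath: /var/lib/game
  volumeClaimTemplates:            # each pod gets its own persistent volume
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```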

What are the differences between a pod, a deployment, and a service in Kubernetes?

A pod is like a single container that runs an application in Kubernetes. It can contain one or more containers that share the same resources and run on the same node. A pod is like a single unit of deployment that can be scaled up or down.

A deployment is like a set of pods that are managed by Kubernetes. It makes it easy to create, update, and delete groups of pods, and it can help to ensure that a specified number of pods is always running. Deployments also provide features like rolling updates and rollbacks, which help to update applications with minimal downtime.

A service is like a way to access a set of pods from the network. It provides a stable IP address and DNS name that can be used to communicate with the pods, even if they are deleted or recreated. Services can also provide features like load balancing and session affinity, which help to distribute traffic evenly and make sure that requests are sent to the same pod every time.

Explain the concept of rolling updates and rollbacks in Kubernetes.

When you have an application running in a Kubernetes cluster, you may need to update the application to a newer version. Kubernetes provides a way to do this with rolling updates.

A rolling update is a process that updates a Kubernetes Deployment in a controlled, incremental way. This means that the update is applied to a small number of Pods at a time, while the other Pods continue to serve traffic.

This process ensures that the application remains available during the update, and also provides a way to roll back the update if there are any issues.

For example, let’s say you have a web application running in a Kubernetes cluster, and you want to update it to a new version. You could use a rolling update to update the Pods in the Deployment one by one, while the other Pods continue to handle requests. This way, the application remains available during the update.

Now, let’s say that the new version of the application has a bug that causes it to crash. You can use a rollback to revert the Deployment to the previous version. This process is also done in a controlled, incremental way, so that you can ensure that the application remains available and that the rollback is successful.

So, rolling updates and rollbacks are important concepts in Kubernetes because they allow you to update and manage your application in a controlled and predictable way, while ensuring that the application remains available and stable.
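Inside a Deployment manifest, the rolling-update behaviour can be tuned with a strategy block; the values here are illustrative:

```yaml
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most 1 extra pod above the desired count during the update
      maxUnavailable: 1    # at most 1 pod may be unavailable at any time
```

You can then watch an update with `kubectl rollout status deployment/my-app` and revert it with `kubectl rollout undo deployment/my-app` (the deployment name is a placeholder).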

What are liveness and readiness probes in Kubernetes and why are they important?

When you have an application running in a Kubernetes cluster, you want to ensure that it is available and responding to requests. Kubernetes provides two types of probes, liveness and readiness probes, to help you check the health of your application.

Let’s say you have a web application running in a Kubernetes cluster, and the application becomes unresponsive due to a bug or other issue.

Without a liveness probe, Kubernetes would not be able to detect that the application is not running and would continue to send traffic to the container, resulting in downtime for your application. With a liveness probe, Kubernetes can detect that the application is not running and restart the container to ensure that it is available again.

Similarly, without a readiness probe, Kubernetes may send traffic to containers that are not ready to handle requests, resulting in errors or timeouts for your users. With a readiness probe, Kubernetes can ensure that traffic is only sent to containers that are ready to handle requests.

A liveness probe is used to check if the application is running and responding to requests. If the liveness probe fails, Kubernetes will restart the container. This ensures that the application is always running and available.

A readiness probe is used to check if the application is ready to serve requests. If the readiness probe fails, Kubernetes will stop sending traffic to the container until the probe succeeds again. This ensures that traffic is only sent to containers that are ready to handle requests.
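Both probes are configured on the container in the pod spec. A sketch, assuming the application exposes `/healthz` and `/ready` HTTP endpoints (these paths and the image are assumptions):

```yaml
containers:
  - name: my-app
    image: my-registry/my-app:1.0   # placeholder image
    livenessProbe:
      httpGet:
        path: /healthz    # assumed health endpoint; failure -> container is restarted
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 5
    readinessProbe:
      httpGet:
        path: /ready      # assumed readiness endpoint; failure -> pod removed from service traffic
        port: 8080
      periodSeconds: 5
```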

In summary, liveness and readiness probes are important in Kubernetes because they help ensure that your application is available and responsive to requests, and can help you detect and recover from failures quickly.

What is the difference between a DaemonSet and a Deployment in Kubernetes?

A Deployment in Kubernetes is like a group of identical soldiers that are trained to do the same thing. They work together to complete a task, such as building a wall or protecting a castle. If you need more soldiers to complete the task, you can easily add more, and if you have too many, you can easily remove some.

A DaemonSet in Kubernetes is like a group of messengers that need to deliver a message to every single person in a town. Each messenger has a specific route to follow and a specific message to deliver. You need to make sure that each messenger reaches every person in the town, so you need to send a messenger to every single house in the town.

In Kubernetes, a Deployment is used to manage a group of identical “soldiers” (Pods) that can easily be scaled up or down as needed, while a DaemonSet is used to make sure that there is a “messenger” (Pod) running on every single node in the cluster.

So, to summarize, a Deployment is used to manage a group of identical soldiers (Pods) that work together to complete a task, while a DaemonSet is used to make sure that there is a messenger (Pod) running on every single node in the cluster to complete a specific task.
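A common DaemonSet use case is running one log-collecting pod per node; here is a sketch using the open-source Fluentd image as an example:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector
spec:
  selector:
    matchLabels:
      app: log-collector
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      containers:
        - name: fluentd
          image: fluent/fluentd:v1.16   # example log collector; one pod runs on every node
```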

That’s all for today.

If you found this format helpful, kindly share your feedback in the comments section below. We would love to hear from you if you would like us to publish more articles in this style.

Thanks for reading.

If you liked this article, please click the “clap” button 👏 a few times.

It gives me enough motivation to put out more content like this. Please share it with a friend who you think this article might help.

Subscribe here to receive alerts whenever I publish an article.

If you enjoyed reading this, you could buy me a coffee here.

Connect with me — Varsha Das | LinkedIn

Follow my YouTube channel — Code With Ease — By Varsha, where we discuss Data Structures & Algorithms.

Happy learning! 😁
