“Getting started with Kubernetes: An Introduction to Container Orchestration”

Prateek Kumar
5 min read · Jan 25, 2023


Welcome to the first post in our series “Mastering Kubernetes: A Comprehensive Guide to Container Orchestration”. Kubernetes is an open-source container orchestration system that automates the deployment, scaling, and management of containerized applications. In this post, we will introduce the basic concepts of Kubernetes and how it works. By the end, you will have a good understanding of the core components of Kubernetes and how they work together to manage and scale containerized applications.

Before diving into Kubernetes, it’s important to understand the basics of containerization. A container is a lightweight, standalone, and executable package of software that includes everything needed to run the software, including the code, runtime, system tools, and libraries. Containers are isolated from each other and from the host system, which makes them portable and easy to deploy.
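To make this concrete, here is a minimal sketch of running a container directly with Docker (this assumes Docker is installed; `nginx:latest` is simply a convenient public image, not part of the example that follows):

```shell
# Run an nginx web server as an isolated container. The image bundles
# the nginx binary, its libraries, and its default configuration.
docker run --rm -d --name web -p 8080:80 nginx:latest

# The container runs in its own process and network namespaces,
# isolated from the host and from other containers.
docker ps --filter name=web

# Stop the container (it is removed automatically because of --rm).
docker stop web
```

Kubernetes builds on exactly this kind of container, but manages many of them across many machines.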

Kubernetes is a container orchestration system that automates the deployment, scaling, and management of containerized applications. It does this by using a number of core components, such as:

  • Nodes: The physical or virtual machines that run containerized applications.
  • Pods: The smallest and simplest unit in the Kubernetes object model, representing a single instance of a running process on a node.
  • Services: An abstraction that defines a logical set of pods and a policy by which to access them.
  • Replication Controllers: Ensure that a specified number of replicas of a pod are running at any one time.

Let’s take a closer look at each of these components.

Nodes: A node is a worker machine in Kubernetes, either a virtual or a physical machine, depending on the cluster. Each node has the necessary services to run pods and is managed by the master components.
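You can inspect the nodes in a cluster with `kubectl` (a quick sketch; `node-1` below is a placeholder, and your cluster’s nodes will have their own names):

```shell
# List the worker machines (nodes) in the cluster and their status.
kubectl get nodes

# Show detailed information about one node, including its capacity,
# conditions, and the pods currently scheduled on it.
kubectl describe node node-1
```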

Pods:

A pod is the smallest and simplest unit in the Kubernetes object model, representing a single instance of a running process on a node. A pod groups one or more containers that run on the same host and share the same network namespace and storage.

Services:

A service is an abstraction that defines a logical set of pods and a policy by which to access them. Services enable loose coupling between pods and consumers and provide load balancing and service discovery.

Replication Controllers:

A replication controller ensures that a specified number of replicas of a pod are running at any one time. If there are too few replicas, the replication controller creates more pods. If there are too many, it deletes pods.

To give an idea of how these components work together in practice, let’s take a look at a simple example of deploying a containerized application on Kubernetes.

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: myapp:latest
    ports:
    - containerPort: 8080

The above code defines a Pod object called “myapp-pod” that runs a container based on the “myapp:latest” image and exposes container port 8080. The pod is then created on the cluster using the `kubectl create` command.

kubectl create -f myapp-pod.yaml
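Once the pod is created, a few `kubectl` commands let you verify that it is actually running (a sketch using the pod name from the manifest above):

```shell
# Check that the pod was scheduled and has reached the Running state.
kubectl get pods

# Inspect events and status, useful when a pod is stuck in
# Pending or CrashLoopBackOff.
kubectl describe pod myapp-pod

# Stream the container's logs.
kubectl logs myapp-pod
```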

This single pod can then be exposed as a service using a Service object:

apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp
  ports:
  - name: http
    port: 80
    targetPort: 8080

The above code defines a Service object called “myapp-service” that selects the pods with the label “app: myapp” and exposes port 80 on the service’s cluster IP, forwarding traffic to the container’s port 8080. This service object can then be created on the cluster using the `kubectl create` command.

kubectl create -f myapp-service.yaml
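To check that the service is wired up correctly, a few commands are worth knowing (a sketch based on the service defined above):

```shell
# Confirm the service exists and note its cluster IP and ports.
kubectl get service myapp-service

# Verify the service's label selector actually matched some pods;
# an empty endpoints list usually means a selector/label mismatch.
kubectl get endpoints myapp-service

# Forward a local port to the service for a quick test from your machine.
kubectl port-forward service/myapp-service 8080:80
```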

With this basic setup, we now have a containerized application running on Kubernetes and exposed as a service. But what happens when we want to scale our application, or if one of the pods fails? This is where Replication Controllers come in. A Replication Controller ensures that a specified number of replicas of a pod are running at any one time. By creating a Replication Controller object, we define the number of replicas we want, and Kubernetes automatically ensures that that many replicas are running.

apiVersion: v1
kind: ReplicationController
metadata:
  name: myapp-rc
spec:
  replicas: 3
  selector:
    app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp-container
        image: myapp:latest
        ports:
        - containerPort: 8080

The above code defines a Replication Controller object called “myapp-rc” that ensures that three replicas of the pod defined in the template section are running at any one time. This replication controller object can then be created on the cluster using the `kubectl create` command.

kubectl create -f myapp-rc.yaml
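Once the replication controller is running, you can observe and scale it with `kubectl` (a sketch using the names from the manifest above):

```shell
# Check the replication controller and how many replicas are ready.
kubectl get rc myapp-rc

# Scale to five replicas without editing the YAML file.
kubectl scale rc myapp-rc --replicas=5

# Delete one pod and list the pods again: the controller
# immediately creates a replacement to maintain the replica count.
kubectl get pods -l app=myapp
```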

With this setup, Kubernetes will automatically ensure that three replicas of our containerized application are running and available. If a pod fails, Kubernetes automatically creates a new one to replace it. If we want to scale our application, we can simply update the `replicas` field in the replication controller and Kubernetes will take care of the rest.

This is just a basic introduction to the core components of Kubernetes and how they work together to manage and scale containerized applications. In the next posts of this series, we will dive deeper into the various features of Kubernetes and learn how to deploy and manage more complex applications.

In summary, Kubernetes is an open-source container orchestration system that automates the deployment, scaling, and management of containerized applications. It does this through a number of core components: nodes, pods, services, and replication controllers. Understanding these components and how they work together is essential for using Kubernetes effectively to manage and scale your applications.

Note: In this blog, I have used the `kubectl create` command to create objects, and `apiVersion: v1` is used throughout; API versions may vary based on the version of Kubernetes you are using. Additionally, while the example used in this blog post is relatively simple, deploying and managing applications on Kubernetes in real-world scenarios can be more complex. There are many other Kubernetes objects and resources that can be used to configure and manage your applications, such as Deployments, StatefulSets, ConfigMaps, Secrets, and many more. In fact, in current Kubernetes versions, Deployments (which manage ReplicaSets) are generally preferred over Replication Controllers. In future blog posts, we will be diving deeper into these and other advanced features of Kubernetes, so stay tuned!

Another important aspect of using Kubernetes in production is security. Kubernetes provides several built-in security features such as role-based access control (RBAC), network policies, and pod security policies. However, it’s important to understand the security risks and potential attack vectors when deploying applications on Kubernetes and take the necessary steps to secure your cluster.
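As a small illustration of RBAC (the namespace, user name, and role names below are all hypothetical, chosen only for this sketch), a Role granting read-only access to pods can be bound to a user like this:

```yaml
# A Role that allows read-only access to pods in the "default" namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]          # "" refers to the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
# A RoleBinding that grants the Role above to a (hypothetical) user.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane               # hypothetical user name
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

This only scratches the surface of cluster security, but it shows the general pattern: permissions are defined once in a Role and then attached to users or service accounts via bindings.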

In the next post of this series, we will be discussing more about Kubernetes Clusters: Understanding Nodes, Pods, and Services in detail. We will also learn how to create and manage these objects, how they work together, and how to troubleshoot any issues that may arise.

Thanks for reading and I hope you enjoyed it.
