Kubernetes 101: Introduction to Container Orchestration 🎵 🐳

Momal Ijaz
Published in AIGuys
8 min read · Apr 15, 2023

If you are reading this article, you are most likely familiar with the concepts of containerization, images, and Docker…

If not, I would recommend checking out at least part 1 of my “Docker for dummies” series, to help you familiarize yourself with the basics of container-based shipping.

In this article, we will cover:

  • Basics of Kubernetes
  • Why do we need Kubernetes in the first place?
  • What is a Kubernetes cluster?
  • What are Kubernetes objects?
  • Setting up your first Kubernetes cluster locally with minikube :-)

With that, let’s dive into the world of Kubernetes….

When we want to deploy an application to production, suddenly a lot of components start to matter that are not really important in a limited local dev environment. So why can we not just deploy a Docker container to a simple AWS EC2 instance (a virtual computer in the cloud) and let it run and serve our users like that forever? Well, because:

a. There is no container monitoring in a simple EC2 instance. You cannot just spin up a Docker container on your remote instance and hope it never crashes or runs into an error, nor can you sit in front of your computer forever, monitoring your live production application.

b. If you have built something really gooood… the traffic and number of users of your application might shoot up or decrease with time. There is no automatic traffic management in EC2 instances, so ultimately you will have to keep requesting bigger or smaller EC2 instances to support the incoming traffic load.

c. One final drawback of this entirely self-managed, EC2-based Docker setup is that there is no load balancing or traffic distribution across nodes. Say, to handle more traffic, you spin up two containers hosting your application: deciding which node to redirect traffic to based on its load would be a decision you need to make manually, as would keeping both nodes in the same application state and sharing volumes.

The aforementioned reasons are exactly why we need Kubernetes to level up the game of Docker container deployment in production!

Why Kubernetes? 🧐

With all the good stuff said, Kubernetes is not the only service available out there to help us manage containers in deployment… there are many other services available from cloud providers, like AWS ECS (Elastic Container Service), Azure Container Instances, and more.

These services provide solutions to all the above-mentioned problems without using Kubernetes… but then we need to understand their custom architectures, philosophies, and terms to be able to use these tools for productionizing our application.

Kubernetes, on the other hand, is cloud-agnostic… we write the same YAML configuration files to tell Kubernetes how to manage our containers, and exactly the same files can be used with any cloud-provided managed Kubernetes service, like AWS EKS (Elastic Kubernetes Service), Azure Kubernetes Service (AKS), etc.

What is Kubernetes? 🤓

Kubernetes is an open-source collection of tools and concepts for orchestrating Docker containers in production.

It’s not a Docker substitute; rather, it allows us to manage Docker containers in production easily.

It’s not a cloud service or a single piece of software either; it is an open-source system that any cloud provider (or you yourself) can run.

Kubernetes Architecture Core Components

A Kubernetes cluster comprises the following four main components, in addition to other smaller parts that become useful once you dive deeper into it.

A depiction of Kubernetes cluster in the cloud

a. Pod: 🔸

It is the smallest unit in the Kubernetes framework, responsible for holding one or more containers and executing them.

Just like a stove holds a kettle on top of it and executes the recipe inside it!

b. Worker Node: 🔸🔸

It is a simple remote machine, like an EC2 instance, and is responsible for running pods. These are the actual machines on which our application will run in the cloud!

c. Proxy/Config: 🔸🔸🔸

A proxy/config component on each worker node is responsible for managing the network traffic of the pods running on that worker node.

d. Master Node: 🔸🔸🔸🔸

The master node is yet another computer in the cloud; it is responsible for interacting with the pods and worker nodes, and for performing the auto-scaling of the pods based on the incoming traffic.

This node is responsible for spinning up new pods, or shutting redundant/idle ones down, on given worker nodes based on incoming traffic.

Kubernetes Objects

Before we dive into setting up our own Kubernetes cluster, we need to know the terminology that Kubernetes uses internally.

So Kubernetes works with objects. It has a lot of different objects, like pods, deployments, services, volumes, etc. If we want to create new pods, we communicate this to Kubernetes by creating these objects and sending them to Kubernetes through commands.

We will cover only the three most important and basic objects here, but remember, Kubernetes has a lot more objects to offer based on your DevOps needs.

a. The Pod Object 📦

A pod object is the smallest unit of a Kubernetes cluster and is basically just a thin wrapper around containers.

  • It can hold and run one or more containers.
  • It is also responsible for holding the containers’ resources, like volumes.
  • Each pod has a cluster-internal IP address.
  • Containers within the same pod are on the same worker node and hence can communicate with each other via “localhost”.
  • Pods are ephemeral (if they crash or shut down, all their data is lost).
  • Most fields of a pod object cannot be changed after deployment; instead, the pod is replaced (usually by a deployment).
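To make this concrete, here is a minimal sketch of what a pod object’s YAML configuration can look like. The object name is hypothetical; the image and port are the ones used later in this article.

```yaml
# pod.yaml — a minimal, hypothetical pod definition
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
    - name: demo-app
      image: momil56/kubernetes101-demo-image
      ports:
        - containerPort: 3000   # the port the Node.js app listens on
```

You would rarely create bare pods like this in production; as the next section explains, a deployment object usually creates and manages pods for you.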

b. The Deployment Object 🏢

A deployment object in a Kubernetes cluster manages pod creation for us.

  • We specify the pod replicas, container specifications, and other important configurations, and the deployment object is responsible for achieving and maintaining that state.
  • For deploying a machine learning model, a database service, or any other microservice of your system, we create a deployment object and specify the service image and other configurations of the pods associated with this deployment. (This will become a lot clearer when we set up our minikube cluster.)
  • We can make changes to deployment objects after deploying them.
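As a quick illustration, here is a minimal sketch of a deployment object’s YAML. The names and labels are hypothetical; the image is the one we build later in this article.

```yaml
# deployment.yaml — a minimal, hypothetical deployment with two pod replicas
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deployment
spec:
  replicas: 2                  # desired number of identical pods
  selector:
    matchLabels:
      app: demo-app            # which pods this deployment manages
  template:                    # the pod template used to create replicas
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: momil56/kubernetes101-demo-image
          ports:
            - containerPort: 3000
```

If a pod crashes, the deployment notices that the actual state (1 pod) no longer matches the desired state (2 replicas) and spins up a replacement automatically.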

c. The Service Object 🌎

Whenever we create a pod object in our Kubernetes cluster, it is spun up with an auto-assigned cluster-internal IP address. But there are two issues with this IP address:

a. It changes every time the pod crashes and is restarted.

b. It only exposes the pod to the cluster’s internal resources and not the outside world.

So, if your web app or algorithm needs to talk to the outside world, that is not possible with the pod’s default IP address. That is where the service object comes into the picture: as the name suggests, the service object exposes our pods by assigning them a stable IP address, making them accessible to the outside world.
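A minimal sketch of a service object’s YAML could look as follows. It assumes the pods carry a hypothetical `app: demo-app` label; the names and ports are illustrative.

```yaml
# service.yaml — a minimal, hypothetical service exposing labeled pods
apiVersion: v1
kind: Service
metadata:
  name: demo-service
spec:
  type: LoadBalancer     # request an externally reachable address
  selector:
    app: demo-app        # route traffic to pods carrying this label
  ports:
    - port: 80           # the port the service exposes
      targetPort: 3000   # the port the container listens on
```

Because the service matches pods by label rather than by IP, it keeps working even as pods crash, restart, and receive new internal IP addresses.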

Setting up our First Kubernetes Cluster I 🚀

Enough with the theory… all these concepts will make a lot more sense once you set up your own Kubernetes (k8s) cluster.

We will set up our first Kubernetes cluster using minikube. In this part, we will:

  • Containerize our app and push it to Docker Hub.
  • Set up our minikube cluster.

In the next part, we will deploy this containerized app in our minikube cluster.

Now, ideally, Kubernetes clusters are set up in the cloud or in your custom data center, where you want to deploy/host your applications. But to get our hands dirty with Kubernetes setup, we have a tool called minikube, which lets us set up a small, single-node Kubernetes cluster in a virtual machine on our local computer and provides exactly the same functionality as a big remote k8s cluster in the cloud.

To follow along, you need to have the following things on your system:

a. minikube: CLI tool for setting up a local k8s cluster

b. kubectl: CLI tool that lets developers talk to their k8s cluster

c. A container or virtual machine manager: Docker, QEMU, VirtualBox, etc.

d. docker: CLI tool for creating images and pushing them

In addition to this, you also need a simple application that you want to deploy in this Kubernetes cluster. I am providing a simple Node.js web app that prints a simple message and listens on port 3000. The app folder comprises three files:

a. app.js — the main application file

b. package.json — the app’s dependencies

c. Dockerfile — a simple Dockerfile for baking the application’s code into an image

You can download this demo application from here.
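In case you want to write the Dockerfile yourself, a minimal version for a Node.js app like this one could look as follows. This is a sketch assuming a typical npm-based setup; the demo repo’s actual file may differ slightly.

```dockerfile
# Dockerfile — a minimal sketch for a Node.js app listening on port 3000
FROM node:18-alpine

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY package.json ./
RUN npm install

# Copy the application code and document the port it listens on
COPY . .
EXPOSE 3000

CMD ["node", "app.js"]
```

Copying `package.json` and installing dependencies before copying the rest of the code means Docker can reuse the cached install layer whenever only your application code changes.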

With the initial setup done, let’s spin up our cluster. Open a terminal and type:

minikube start --driver=<your driver name>
The output of the above command

You can check the dashboard by typing:

minikube dashboard
Minikube dashboard (no deployments/pods/services are created yet)

Minikube will basically set up a single-node cluster in a virtual machine on your system. In this dummy k8s cluster, there is only one node, which serves as both the master and the worker node. Once your cluster is set up, we need to create and push our app’s image to Docker Hub.

For that, first go to Docker Hub and sign up for an account.

Next up, you need to log in to Docker from your CLI by typing:

docker login -u <username> -p <password>

Once you are logged in to your account, you need to create an image from the given application. From the folder containing the Dockerfile, type:

docker build -t my_image .

Once your image is built successfully, we need to push it to Docker Hub so that our cluster can pull it. For that, go to Docker Hub, click on Create a repository, make it public, and once it is created, copy its name.

In my case, the name is momil56/kubernetes101-demo-image. Copy your user_name/image_name and retag your image with this name; this is important to tell Docker which repo to push your image to.

docker tag my_image momil56/kubernetes101-demo-image

Now push your image to the repo:

docker push momil56/kubernetes101-demo-image

Great job! So far, you have set up your first Kubernetes cluster with minikube, and created and pushed your app’s image to Docker Hub.

The last and most important step is to deploy this app into our k8s cluster by creating the deployment, pod, and service objects that we discussed earlier, and to see our app in action in our minikube cluster… we will implement this bit in part II… stay tuned!

Happy Learning ❤️


Momal Ijaz
Writer for AIGuys
Machine Learning Engineer @ Super.ai | ML Researcher | Fulbright Scholar ’22 | Sitar Player