A Practical Step-by-Step Guide to Understanding Kubernetes

Deploy a distributed application and understand key underlying concepts

Ram Rai
Jan 7 · 11 min read

Quick introduction

Kubernetes, or K8S for short, is an open-source platform that runs containerized applications. Its wide array of capabilities has made it the container orchestrator of choice. In this post, we’ll attempt to learn Kubernetes from a practical standpoint by actually deploying an app. In my opinion, such an exercise is rewarding only with a sound grasp of the key concepts behind it; therefore, I’ll walk you through several concepts and explain how they relate to the application. Although the focus of this blog is application architecture, the practical introduction should be helpful to software architects, software developers, and DevOps engineers as well.

We’ll deploy the TinyURL application described in the last piece, where we touched on Kubernetes briefly. It was essential to cover the basics of our TinyURL app to establish the context; now, with that out of the way, we can focus entirely on Kubernetes. Here is the git repo for this tutorial.

For this tutorial, I am going to assume you have some understanding of Docker images, containers, and microservices architecture. If that is not the case, please read that piece.

Here, we will cover the key concepts of Kubernetes, skipping details that are not essential at this point.

Kubernetes: An Administrative Perspective

First, a quick introduction to the mechanics of Kubernetes: a K8S cluster consists of a master node and one or more worker nodes (also called minions). As application developers, we communicate with the Kubernetes cluster using a command-line tool called kubectl, which executes requests via an API server located in the master node. The cluster also includes etcd, a distributed, resilient key-value store that holds the cluster state and other information required for its operation.

A Kubernetes cluster (source: Wikipedia)

Note: To communicate with a Kubernetes cluster, the kubectl command-line tool looks for a file named config in the $HOME/.kube directory or in a file pointed to by the KUBECONFIG environment variable. This config file contains your credentials and the cluster endpoints to talk to. You should be able to download this file after logging into your Kubernetes provider. We will be using the Google Cloud Shell, where credentials are pre-configured.

While we’ll use Google Kubernetes Engine (GKE), you should be able to deploy the application just as easily on other providers such as Microsoft Azure and AWS.

Mapping Your Application to Kubernetes Resources

We want to deploy the TinyURL distributed application on Kubernetes. It consists of three microservices: a Django TinyURL front end serving end users, a PostgreSQL database, and a Redis cache. If you followed the last piece, we ended up with three Docker images in the Docker registry (or equivalent images of your own):

  • irnlogic/djangotinyurl
  • irnlogic/postgres
  • irnlogic/redis

By the end of this exercise, these three Docker images, each containing code plus an operating system layer, will be wrapped in Kubernetes objects and deployed.

Containers

We have made a good beginning by containerizing our application (i.e., we made Docker images out of the code for each service and pushed them to the Docker registry). We were able to use Docker Compose to run those images together as a single unit on a laptop. Likewise, Kubernetes can run images as networked containers in a cloud setting. Remember, a container is essentially an operating system process running the code in a Docker image.

Pods

Containers have to run on Kubernetes nodes. But the smallest unit of deployment in Kubernetes is a pod, which behaves like a virtual host and can have one or more containers (a fixed number). Containers in a pod share the pod’s network and can share its volumes; hence they can address each other as localhost.

For example, one can imagine a pod with two containers: one for the application and a second sidecar container for collecting and pushing logs to a central location.
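To make this concrete, here is a minimal sketch of such a pod. The names, the busybox log-tailer, and the shared emptyDir volume are all hypothetical and assume the app writes its logs to /var/log/app/app.log; our application does not actually use a sidecar.

apiVersion: v1
kind: Pod
metadata:
  name: frontend-with-sidecar      # hypothetical example pod
spec:
  volumes:
    - name: app-logs               # scratch volume shared by both containers
      emptyDir: {}
  containers:
    - name: app
      image: irnlogic/djangotinyurl:1.0
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app
    - name: log-tailer             # sidecar: reads logs written by the app container
      image: busybox
      command: ["sh", "-c", "tail -F /var/log/app/app.log"]
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app

In a real setup, the sidecar would typically be a log shipper such as Fluentd pushing logs to a central store.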

Our services will have just one container per pod.

Deployment

We can run a single instance of a Docker image as a pod like so:

kubectl run <pod_name> --image=<docker_image_name>

To support a larger user base, we will need to run multiple instances of pods. How would we do that? Do we start multiple pods one by one? What happens when one of them crashes? Does that mean we have to monitor each pod? That clearly isn’t practical. And so a Kubernetes Deployment helps run multiple pods together. Here, the number of pods requested is called the desired state. The current state at a point in time may differ (for example, some pods may have failed to start or have crashed), but Kubernetes will continuously monitor the state of the deployment and attempt to keep the current state as close to the desired state (specified in the deployment descriptor) as possible.

Finally, a deployment is just one of the workload types supported in Kubernetes, and it works well for stateless workloads such as our frontend.

Services

Next, the Django web server has to communicate with the Redis cache. There are typically multiple pods behind a deployment, so it would be impractical to address pods individually, say, by maintaining a list of pod IPs. On top of that, pods can be short-lived and replaced by new pods, which invalidates previously known IPs.

Therefore, Kubernetes has the concept of a service, which establishes a reliable endpoint over a deployment and handles routing of incoming requests to the pods running your code.

We’ll create Kubernetes services of type ClusterIP, which expose the Redis and PostgreSQL deployments internally within the cluster at their respective host/port. Thus, a Kubernetes service provides a stable endpoint for consumers to send requests to.
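For example, a ClusterIP service for Redis could look roughly like this. This is a sketch, not necessarily identical to redis-service.yaml in the repo; the selector labels are assumptions that must match the pod labels in the Redis deployment.

apiVersion: v1
kind: Service
metadata:
  name: redis               # the service name also becomes its internal DNS name
spec:
  type: ClusterIP           # the default service type, shown here for clarity
  selector:
    app: tinywebsite        # assumed labels; must match the Redis deployment's pod template
    tier: redis
  ports:
    - port: 6379            # port the service exposes inside the cluster
      targetPort: 6379      # port the Redis container listens on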

Next, we’ll also expose the Django web server on the public web by layering a service of type LoadBalancer on top of it, which will then create a public IP. Be aware that the LoadBalancer type may not be available with all cloud providers. Fortunately, it is on Google Cloud.

With this, our services can scale and communicate with each other, and the application is accessible through a public IP.

Persistence

Kubernetes pods are ephemeral, and data on their file systems will not survive beyond the lifetime of a pod. Hence, the data must be saved to external storage. We could directly mount a file system on the pods, but then the pods would need to manage storage endpoints and credentials directly. That makes the pods cluster-dependent, costing us portability. The Kubernetes persistent volume subsystem helps by separating the provisioning and administration of volumes from their consumption in pods. Here is how that works.

  • A persistent volume (PV) is a piece of storage that has been provisioned and has a life cycle independent of any pod. It also abstracts away the details of the underlying storage. It is a cluster resource, just like a node, except it offers storage instead of CPUs.
  • A PersistentVolumeClaim (PVC) is a request for a specific amount of storage from a PV, just like a pod requests CPU and memory from a cluster node.

PVs can be provisioned manually by administrators or dynamically via a StorageClass specified in the PVC. PVCs are mounted as volumes on pods. When a pod starts up, a qualifying PV is exclusively bound to its PVC (no other claim can bind to that PV after that). In our case, we will take advantage of Google Cloud’s default storage class to automatically provision a PV for our PVC.

ConfigMap

Finally, services must be able to discover each other and be configurable. For instance, the Django web server needs the host/port of the PostgreSQL database, which itself can be configured with a specific user and password. This is done by creating a ConfigMap containing those parameters and attaching it to the relevant deployment. The parameters are then accessible as environment variables in the relevant pods. It’s worth noting that a ConfigMap is stored in the cluster’s etcd store; therefore, it can be seen as a form of persistence.

There is another, very similar name-value store in Kubernetes called a Secret, which is better suited for storing passwords. I will leave that for you to explore for the moment.

The Deployment Artifacts

With this information, we are now ready to start deploying our application on Kubernetes. This will require the following:

  • Three deployments (Django web server, PostgreSQL, and Redis), one for each of our microservices
  • A persistent volume and persistent volume claim for the PostgreSQL database
  • Two services of type ClusterIP to make PostgreSQL and Redis internally visible to the Django web server
  • One service of type LoadBalancer to make the front-end Django web server accessible from the internet
  • One ConfigMap declaring the username and password used to access PostgreSQL

If you are wondering where the pods and containers are, the deployments implicitly create pods and containers.

The deployments and services in Kubernetes are instantiated from separate YAML files, a few of which we will examine together.

Front-end deployment

The deployment can be declared in a YAML file.
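Here is a sketch approximating frontend-deployment.yaml from the repo; the line numbers cited in the discussion below refer to the file in the repo and may not line up exactly with this sketch.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  selector:
    matchLabels:            # pods with these labels belong to this deployment
      app: tinywebsite
      tier: frontend
  replicas: 3               # the desired state: three pods
  template:                 # structure of the pods in this deployment
    metadata:
      labels:
        app: tinywebsite
        tier: frontend
    spec:
      containers:
        - name: tinywebsite
          image: irnlogic/djangotinyurl:1.0
          ports:
            - containerPort: 8001
          resources:
            requests:
              cpu: "1"
              memory: 128Mi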

The kind: Deployment indicates this is a Deployment. The matchLabels under selector (line 7) tells the deployment to look for all pods with the labels app=tinywebsite and tier=frontend and treat those as belonging to this deployment. The line replicas: 3 requests three pods for the deployment (the desired state). The template section declares the structure of the pods that make up the deployment. Here, a single container with the name tinywebsite is requested (line 18). Note that the items below the containers tag form an array; we just happen to have one item and, therefore, one container. The line image: irnlogic/djangotinyurl:1.0 specifies the Docker image for the container. Likewise, this container listens on port 8001 and requests 1 CPU and 128 MB of memory.

The template’s metadata section (line 12) assigns labels to the pods; they exactly match the pod selector of the deployment (line 7).

You may be wondering why we couldn’t declare the labels for the pods and the deployment together all at once. There are a couple of reasons for this. First, under the template/metadata section, you may assign more labels than the deployment’s pod selector needs (e.g., language=python for observability reasons). Second, Kubernetes follows a principle of loose coupling between pods and other resources such as deployments and services, which provides some design advantages that we won’t delve into quite yet.

Another piece of information before we move on: the front-end service will use the Redis and PostgreSQL services. For instance, in the Django settings (see the repo), the connection to Redis uses the name redis as its hostname.

How does that hostname get resolved? That is the beauty of Kubernetes services: creating a service establishes an internal DNS entry with the service name, which here happens to be redis (we could have used a ConfigMap to configure and read the Redis service name, but I didn’t for the sake of an easier read).

Front-end service

The frontend-service.yaml is located here and creates a service over the front-end deployment.
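Here is a sketch of that service definition; the labels are assumptions that must match the ones used in the front-end deployment.

apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: tinywebsite
    tier: frontend
spec:
  type: LoadBalancer        # provisions an external IP on supported clouds
  selector:
    app: tinywebsite        # must match the labels on the front-end pods
    tier: frontend
  ports:
    - port: 8001            # port exposed on the external IP
      targetPort: 8001      # port the Django container listens on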

Here are the key elements of this service definition:

  • kind: Service establishes this to be a service resource request
  • metadata/name is the name of the service, and metadata/labels assigns labels to the service (you can provide any labels here)
  • spec/type is LoadBalancer; this causes an external IP to be created for the service, making it publicly accessible
  • spec/selector declares selectors that identify the pods behind the service; they exactly match the pod labels specified in the deployment

After the service deploys, our application is accessible publicly.

Front-end microservice: Pod/Deployment/Service

With that, we have fully specified our front-end microservice and now have to do the same for the other services.

PVC for PostgreSQL Storage

This is the persistent volume claim (PVC) that requests storage for our PostgreSQL database.
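A sketch roughly matching postgres-pvc.yaml in the repo; no storageClassName is set, so the cluster’s default storage class (mentioned earlier) provisions the volume.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-disk
spec:
  accessModes:
    - ReadWriteOnce         # mountable read-write by a single node at a time
  resources:
    requests:
      storage: 10Gi         # the 10 GB requested for the database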

The kind: PersistentVolumeClaim indicates the Kubernetes resource type, and ReadWriteOnce means the volume can be mounted read-write by only a single node at a time, so effectively a single pod will use it. This is sufficient for now; in the future, we’ll improve the situation using StatefulSets. As is evident, 10 GB of storage is requested, and going by the metadata section, the name of the PVC will be postgres-disk.

Next, we’ll mount the requested storage in the PostgreSQL pod.

PostgreSQL Deployment

The PostgreSQL deployment here is similar to the front-end deployment we saw earlier, except this one mounts a volume.
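Here is a sketch approximating postgres-deployment.yaml from the repo; the pod labels and image tag are assumptions.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  selector:
    matchLabels:
      app: tinywebsite      # assumed labels
      tier: postgres
  replicas: 1               # a single database instance for now
  template:
    metadata:
      labels:
        app: tinywebsite
        tier: postgres
    spec:
      containers:
        - name: postgres
          image: irnlogic/postgres:1.0
          ports:
            - containerPort: 5432
          envFrom:
            - configMapRef:
                name: postgres-config     # injects the ConfigMap entries as environment variables
          volumeMounts:
            - name: postgres-storage
              mountPath: /var/lib/postgresql/
      volumes:
        - name: postgres-storage
          persistentVolumeClaim:
            claimName: postgres-disk      # the PVC requested above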

  • The volumes section at the very bottom references the postgres-disk PVC we created earlier.
  • The volume is then mounted via the volumeMounts section at /var/lib/postgresql/. Postgres is configured to persist data at /var/lib/postgresql/data and so will create a data folder at the mounted location as needed.
  • configMapRef loads the name-value pairs in the postgres-config ConfigMap into environment variables of the postgres pod, establishing the default database name, username, and password.
  • See the definition of that ConfigMap below; note the ConfigMap name postgres-config and the name-value pairs under the data section.
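A minimal sketch of that ConfigMap; the keys assume the image follows the official postgres image’s environment variable conventions, and the values here are placeholders (the actual values are in postgres-configmap.yaml in the repo).

apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
data:
  POSTGRES_DB: tinyurl            # placeholder database name
  POSTGRES_USER: postgres         # placeholder username
  POSTGRES_PASSWORD: postgres     # placeholder password; a Secret would be a better home for this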

The picture below illustrates the intended PostgreSQL service together with its deployment and persistent volume (PV).

PostgreSQL microservice: Pod/Deployment/Service

Other services and deployments are declared likewise here.

After all of the application services and workloads are deployed, we end up with an arrangement like the one below.

Deployed topology

Users can access the application front end via a public IP address. The LoadBalancer service creates a public IP and routes incoming requests to one of the front-end pods. A front-end pod, in turn, communicates with Redis and PostgreSQL as <service name>:<port>, e.g. redis:6379. Recall that the Redis and PostgreSQL services are of type ClusterIP (the default), which establishes internal DNS entries identical to their service names. They also load balance requests to the pods behind them.

In the end, all three application workloads shown are hosted among the cluster nodes. Likewise, the PersistentVolumeClaim (PVC) mounted on the PostgreSQL pod is bound to a PersistentVolume (PV) in the cluster. It is worth reiterating that application workloads like pods are short-lived in relation to cluster resources like nodes and PVs.

There is another way to view the topology illustrated above: your code in the containers (blue boxes) has been embedded in Kubernetes artifacts so it can be deployed and scaled!

Deploying the Application

Once a Kubernetes cluster is provisioned, we are ready to deploy. See this doc to review the prerequisites, clone https://github.com/irnlogic/tiny.git, then go to the kubernetes folder.

The kubectl apply command used below creates the resource specified in the file if it does not exist, and otherwise applies any changed configuration to it. We will deploy our three services one by one.

Deploy redis

kubectl apply -f redis-deployment.yaml
kubectl apply -f redis-service.yaml

Deploy postgres

kubectl apply -f postgres-pvc.yaml
kubectl apply -f postgres-configmap.yaml
kubectl apply -f postgres-deployment.yaml
kubectl apply -f postgres-service.yaml

Deploy frontend

kubectl apply -f frontend-deployment.yaml
kubectl apply -f frontend-service.yaml

The requested deployments and services will be created in a few minutes, including a load balancer service that exposes the TinyURL front end to the public internet.

Now list services deployed in Kubernetes:

kubectl get services

You should see something like the following:

NAME         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
frontend     LoadBalancer   10.38.14.217   37.230.9.73   8001:30080/TCP   58s
kubernetes   ClusterIP      10.98.2.1      <none>        443/TCP          12m
postgres     ClusterIP      10.98.8.269    <none>        5432/TCP         2m8s
redis        ClusterIP      10.96.11.24    <none>        6379/TCP         3m57s

Open the TinyURL application using the following public address: http://EXTERNAL-IP:8001 (replace EXTERNAL-IP with the external IP you got above). Hurray! We are done!

Seen here is an example browser screenshot of the application.

Tinyurl application output

Later, I will post some tips for basic troubleshooting and further exploration of the Kubernetes cluster.

Conclusion

With this, we have reviewed various Kubernetes resources, connected our application services to those resources, and successfully deployed the TinyURL application!

Some bottlenecks are evident in our application; for instance, we have a single instance of the PostgreSQL database. Going forward, we should plan a performance test and eliminate the bottlenecks one after another.

Thanks to Trinoy Hazarika, Aadit Rai, and Zack Shapiro
