How To Deploy A Django App Over A Kubernetes Cluster (With Video)

Tech with Mike
13 min read · Jul 21, 2022



Before starting the tutorial, I want to mention that I have also created a YouTube video for this article. You may find it easier to learn the concepts by watching the video.

Hello there, I hope you’re doing well. In this article I want to share my knowledge about how we can deploy a Django app over a Kubernetes cluster. Kubernetes is a buzzword in the backend and DevOps fields, and you usually see it in job descriptions, so it’s valuable knowledge to gain. Be aware that it involves many different concepts, so you should come well prepared; it’s probably a good idea to pour yourself a warm coffee before starting the tutorial.

One more thing before we start: I’m assuming you already know the basic concepts of Docker, like building images and Docker volumes. Before learning Kubernetes, it’s better to have learned Docker and how it works. In the past I wrote tutorials about how to dockerize a Django app and how to optimize a Django Docker image; you can check them out to get a better understanding of Docker in the Django world.

This tutorial has two sections. In the first section, we will learn the basic Kubernetes concepts that we will use later; the second is the code section, where we will implement the deployment of a Django app over a Kubernetes cluster. Okay, without further delay, let’s dive into the tutorial.

Concepts Presentation Section

What is Kubernetes?

So the first question we should answer is: how did Kubernetes come into the game?

Containers are a good way to bundle and run your applications, but in the production environment, you need to manage the containers that run the applications and ensure that there is no downtime. For example, if a container goes down, another container needs to start. Wouldn’t it be easier if this behavior was handled by a system? That’s how Kubernetes comes to the rescue! It takes care of scaling and fail-over for your application.

Kubernetes is a powerful open-source container orchestrator that automates the deployment, scaling, and management of containerized applications. It was originally developed by Google and open-sourced in 2014. It can manage containers across different environments: virtual machines, physical machines, and hybrid setups.

What is the difference between Kubernetes and Docker?

New learners often think Kubernetes is similar to Docker and come away with the wrong understanding of these two technologies. They are fundamentally different, and you should think of them as follows.

· Docker is a container technology, while K8s is a container-management technology.
· Docker is about automated building, while K8s is about automated management.

Why should we learn Kubernetes?

Okay, so why should we learn Kubernetes? Why do we need it? What benefits does it provide?

Kubernetes is a container-management tool that handles management automatically. It gives us minimal downtime by automatically replacing failed containers, and it can easily scale our application whenever we need it. For example, online shops have many more users on certain days of the year, like the Christmas holidays; with Kubernetes you can easily increase your application’s capacity on those days.

Another reason is that Kubernetes has a great data-snapshot tool named K10 (by Kasten). It can back up your entire K8s cluster, whether it runs on your local machine or on a remote one, and it can take snapshots of your components’ state.

Kubernetes also has a bigger community than its competitors, so if you run into a problem, you can quickly find a solution from the community.

Basic Architecture and Main Components of Kubernetes

Okay, now I want to give you a picture of the Kubernetes architecture. Kubernetes runs our workload by placing containers into Pods that run on Nodes. Depending on the cluster, a node may be a virtual or physical machine. Each node is managed by the control plane and contains the services necessary to run Pods.

Kubernetes follows a master/worker architecture: it has a master (control-plane) node and worker nodes. In the cluster, the control plane talks to each node through the kubelet process.

Each worker runs a different number of containers, depending on how you distribute the workload between them and how many resources each worker has.

A cluster can have different components in it. But the common components are Pod, Service, ConfigMap, Secret, Volume, and Deployment.

Basic Components In The Kubernetes Cluster

Okay, here are the basic and common Kubernetes components that we will use in the code section.

Pod: the smallest unit that we configure and interact with; a pod is a wrapper around a container on each worker node. A pod can actually hold multiple containers, but usually you have one pod per application; the only time you have more than one container in a pod is when your application needs helper containers.

So, for example, the database will be one pod and the message broker another. The virtual network assigns each pod its own IP address, and pods talk to each other using these internal IPs. Note that in a Kubernetes cluster we don’t create containers directly; we create pods, which are an abstraction layer over containers. So we only work with pods.

A pod manages the containers running inside it without our intervention; for example, if a container stops or dies inside a pod, it is automatically restarted. However, pods are ephemeral components: they can die frequently, and when a pod dies, a new one is created.

Service: here is where the notion of a Service comes into play. Whenever a pod is restarted or recreated, it gets a new IP address. So if your app talks to a database and the database pod is recreated, you would have to update its IP in your app; constantly changing IP addresses is inconvenient. For that reason, another component, the Service, is used, which is basically a stable alternative to those IP addresses.

ConfigMap: where we store environment variables in the cluster. Be aware that the values are stored in raw (plain-text) format in a ConfigMap and are visible to other users in the cluster, so we will not store things like database passwords in it; we only store non-confidential data.

Secret: the place where we store confidential data, such as the database password or the Django secret key. Kubernetes stores this data Base64-encoded, which makes it somewhat harder for other users to read (note that Base64 is an encoding, not encryption).

Volume: similar to volumes in docker-compose. As you may know, in docker-compose we specify volume mappings to store data permanently; without them, if your container goes down, your data is lost. In Kubernetes, volumes are managed differently: there are two concepts here, PersistentVolume (PV) and PersistentVolumeClaim (PVC). With a PV we declare the volume resources available in the cluster, and with a PVC we declare the storage we need.

So, in a nutshell, to persist data in a Kubernetes cluster, we first declare the resource, and then the pods that are going to use it claim that resource.

Deployment: we describe our pods with a Deployment blueprint. In practice you don’t create pods directly; you create a Deployment, where you specify how many replicas you want, which volume claims you have, and which ports should be open. So a Deployment is another abstraction layer over pods.

Code Section

Okay, so let’s dive into the code and see how we can deploy our Django app over a Kubernetes cluster. By the way, the code for this project is available on GitHub. First you have to install Minikube on your PC; it provides a virtual cluster on your machine.

After installing Minikube, start your Docker app, and then start the Minikube service from the command prompt by running:

minikube start

If it succeeds, you will see the messages below from Minikube. As you can see, it says that kubectl is configured; now we can talk to the cluster using the kubectl command.

Kubectl Is Configured And Ready To Deploy Our App To The Cluster

If you’re tired, you can take a break. In the rest of the article we are going to deploy the three parts of our Django app: Django, Postgres, and Celery.

Django Deployment

Okay, first I want to create our ConfigMap object. As I mentioned earlier, this is where we put environment values that are not confidential. Go ahead and create a file named configMap.yml, then copy the content below into it:
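The original post showed this file as an image; here is a minimal sketch of what a matching configMap.yml could look like. The object name shop-config comes from the article’s apply output; every variable name and value below is an illustrative assumption, so replace them with your own settings:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: shop-config
data:
  # Illustrative, non-confidential settings (assumed names and values)
  DEBUG: "0"
  ALLOWED_HOSTS: "*"
  DB_HOST: "postgres-svc"
  DB_PORT: "5432"
```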

So what’s going on in this file?

apiVersion: Which version of the Kubernetes API you’re using to create this object.
kind: What kind of object you want to create.
metadata: Data that helps uniquely identify the object, including a name string, UID, and optional namespace.
data: As you can see, we define our variables as key-value pairs in it.

Now, to apply this configuration file, we use the apply command with the -f option, like below:

kubectl apply -f configMap.yml

If it succeeds, you will see the response:

configmap/shop-config created

Next, let me create our Secret object, where we store our confidential data. To do that, I will create a file named .env-kuber and store my confidential values in it, like below:
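The confidential values themselves are not reproduced in the article; a .env-kuber sketch with placeholder values might look like the following. Every key and value here is an assumption, so substitute your own secrets:

```
# Placeholder secrets (assumed key names; never commit real values)
POSTGRES_USER=shop
POSTGRES_PASSWORD=change-me
SECRET_KEY=django-insecure-change-me
```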

Then I will add this file to .gitignore so it won’t be published to my repository. Now I’m going to create a Secret object from this file with the command below:

kubectl create secret generic shop-secret --from-env-file=.env-kuber

You will get a “secret/shop-secret created” response from kubectl. You can inspect secrets with the commands below:

kubectl get secret
kubectl describe secret shop-secret

Now, let’s create a Deployment (and thereby a pod) for our Django app. Note that I have already built the image and pushed it to Docker Hub. Let’s go ahead and create a file named django-deployment.yml and copy the content below into it.
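The deployment manifest was shown as an image in the original post; based on the fields described below, a sketch might look like this. The app: shop label, port 80, and the media-pvc claim come from the article, while the deployment name, image name, and mount path are assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shop-app
  labels:
    app: shop
spec:
  replicas: 1
  selector:
    matchLabels:
      app: shop
  template:
    metadata:
      labels:
        app: shop
    spec:
      containers:
        - name: shop
          image: techwithmike/shop:latest   # assumed image name; use your own
          envFrom:
            - configMapRef:
                name: shop-config
            - secretRef:
                name: shop-secret
          ports:
            - containerPort: 80             # gunicorn listens here
          volumeMounts:
            - name: media-volume-mount
              mountPath: /app/shop/media    # assumed media path inside the image
      volumes:
        - name: media-volume-mount
          persistentVolumeClaim:
            claimName: media-pvc
```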

Okay, So let me go through new things we have here.

Labels: We give the pods created by this file a key-value label; here we assign them the app: shop pair. We will use these labels when we want to select specific pods.
Replicas: Specifies how many pods of this deployment should exist in the cluster. We can scale our app up or down easily by changing this setting.
Selector: Specifies which pods this spec applies to.

The following metadata is not very important, so I’ll skip it.

Image: We specify which Docker image to use in this pod. I used the latest image that I built and pushed to Docker Hub.
EnvFrom: We specify where the pod reads its environment variables from. Here we tell the pod to read them from a Secret object and a ConfigMap object.
Ports: In the ports section we specify which ports of the pod should be open. Our gunicorn app will listen on port 80.
VolumeMounts: Here we mount the path where we store our media files to a volume named media-volume-mount.
PersistentVolumeClaim: To store media-volume-mount, we claim a volume named media-pvc. Later we will create the media-pvc claim, in which we specify what volume we need.

I know there are a lot of concepts here, but we have reached the middle of the path, so please keep going. I won’t repeat concepts we have already covered.

In the next step I want to create a persistent volume in our cluster. Go ahead and create a file named media-pv.yml, put the content below in it, and then apply it.
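A sketch of media-pv.yml matching the fields described below might look like this. The manual storage class and the /data/media-pv host path come from the article; the 1Gi capacity is an assumed value:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: media-pv
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi          # assumed size; adjust to your needs
  accessModes:
    - ReadWriteOnce       # mountable read-write by a single node
  hostPath:
    path: /data/media-pv  # directory on the cluster node
```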

So lets see what new things we have in here:

StorageClassName: We set storageClassName to manual for this PersistentVolume; it will be used to bind PersistentVolumeClaim requests to this PersistentVolume.
Capacity: Guess what it specifies? :)
AccessModes: ReadWriteOnce means the volume can be mounted as read-write by a single node.
HostPath: We specify that the volume data lives at /data/media-pv on the cluster’s node.

So, as I mentioned before, with a PersistentVolume we create a resource in our cluster. Now that the resource exists, the next step is to create a PersistentVolumeClaim in order to use it. Go ahead and create a new file named media-pvc.yml, put the content below in it, and then apply it.
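A media-pvc.yml sketch that matches the manual PV above might look like this. The claim name media-pvc comes from the article; the requested size is an assumption and must fit within the PV’s capacity:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: media-pvc
spec:
  storageClassName: manual  # matches the PV so the claim binds to it
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi          # assumed; must not exceed the PV's capacity
```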

As you can see, there is nothing new here: we defined a claim that matches the resource we created before. Now pods can use the volume from the previous step through the media-pvc claim.

In the next step I want to create a service for our Django app on port 80, so it will be accessible from outside the cluster. Go ahead and create a new file named django-svc.yml, put the content below in it, and then apply it.
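A django-svc.yml sketch could look like the following. The LoadBalancer type, the 8000-to-80 port mapping, and the app: shop selector come from the article; the service name is an assumption:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: django-svc
spec:
  type: LoadBalancer
  selector:
    app: shop        # routes traffic to pods carrying this label
  ports:
    - port: 8000     # port exposed outside the cluster
      targetPort: 80 # port gunicorn listens on inside the pods
```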

The only thing I want to mention here is type: LoadBalancer. There are different service types you can use, but for simplicity I chose LoadBalancer. We mapped port 8000 to port 80 of the pods that carry the app: shop label, so our Django APIs will be accessible on port 8000. Before you test it, you should run the command below:

minikube tunnel

We need this command to access LoadBalancer services from our Windows machine. Okay, now we can test our APIs on port 8000; you can do that with the Postman application. Next I want to set up our Postgres database, so let’s start the second coding section. From here on I will give fewer explanations, because we already know most of the concepts.

Postgres Deployment

Okay, first let me create a volume for our database, so our data won’t be lost if its pod goes down. Go ahead and create a file named postgres-pv.yml, copy the content below into it, and then apply it.
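A postgres-pv.yml sketch, mirroring the media PV, might look like this. The PV name, host path, and capacity are all assumptions patterned after the media volume:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi              # assumed size
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/postgres-pv   # assumed path on the cluster node
```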

There is nothing new in it, so I will jump right to the next step. Now we should create a PersistentVolumeClaim so we can use the volume resource from the previous step. Go ahead and create a new file named postgres-pvc.yml, copy the content below into it, and then apply it.
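A postgres-pvc.yml sketch, again patterned after the media claim, might look like this (the claim name follows the file name; the requested size is an assumption):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
spec:
  storageClassName: manual  # binds this claim to the manual PV
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi          # assumed; within the PV's capacity
```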

Now let’s create a service for the database so that other pods in the cluster can reach it. Go ahead and create a file named postgres-svc.yml, put the content below in it, and then apply it.
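A postgres-svc.yml sketch might look like the following. Port 5432 comes from the article; the service name and the app: postgres selector are assumptions. Omitting type here defaults to ClusterIP, which is enough for in-cluster access:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres-svc
spec:
  selector:
    app: postgres     # assumed label on the database pods
  ports:
    - port: 5432
      targetPort: 5432
```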

Now other pods can reach the database on port 5432. Okay, the last step in this section: we will create the deployment file that makes our database pod. Go ahead and create a new file named postgres-deployment.yml, copy the content below into it, and then apply it.
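A postgres-deployment.yml sketch might look like this. The image tag, labels, and subPath are assumptions; the credentials are read from the shop-secret object created earlier:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:14-alpine   # assumed version
          envFrom:
            - secretRef:
                name: shop-secret     # POSTGRES_USER/POSTGRES_PASSWORD live here
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: postgres-data
              mountPath: /var/lib/postgresql/data
              subPath: pgdata         # avoids initdb issues on a hostPath volume
      volumes:
        - name: postgres-data
          persistentVolumeClaim:
            claimName: postgres-pvc
```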

We have touched on all the concepts in it before, so I won’t go through it. Congratulations, you have deployed your database on Kubernetes, and now you can migrate your database. To do that, we go inside our Django app pod and run the migrations using manage.py:

kubectl exec POD_NAME -- python /app/shop/manage.py migrate

So you need the pod name, which you can get from the output of the command below:

kubectl get pod

By doing so, you will get something similar to the picture below, and from there you can read the Django pod’s name (here it has a shop-app-7b* name).

Okay, at the end of this section our database is ready and our data is saved permanently to the hard disk. In the next section, we will set up the Celery module so we can run asynchronous tasks in our application.

Celery Deployment

As you know, we use Celery for asynchronous tasks and cron jobs. In our Django project, we use it to create thumbnails from product images. So, without further delay, let’s jump into the code.

As you know, Celery needs a message broker to connect its different components (here: Django, Beat, and the Celery worker). We will use Redis as the message broker, so the first step is to deploy a Redis app on the cluster. Since I don’t need permanent storage for Redis, I won’t bother with a PV or PVC. Okay, go ahead and create a new file named redis-deployment.yml, put the content below in it, and then apply it.
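A redis-deployment.yml sketch might look like this. The resource requests match the concept introduced below; the image tag, labels, and request values are assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  labels:
    app: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis:7-alpine   # assumed image tag
          ports:
            - containerPort: 6379
          resources:
            requests:             # assumed values; used by the scheduler
              cpu: 100m
              memory: 128Mi
```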

We have new concepts here: resources and requests. With them we specify how many resources a deployment needs. When you specify resource requests for the containers in a pod, the kube-scheduler uses this information to decide which node to place the pod on.

Okay, now we need a service in order to reach Redis inside the cluster. Go ahead and create a new file named redis-svc.yml, put the content below in it, and then apply it.
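A redis-svc.yml sketch could look like the following (the service name and selector label are assumptions; the default ClusterIP type is enough for in-cluster access):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-svc
spec:
  selector:
    app: redis        # assumed label on the Redis pod
  ports:
    - port: 6379
      targetPort: 6379
```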

Okay, now let’s create our Beat and worker deployments. First create a new file named beat-deployment.yml, copy the content below into it, and then apply it.
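A beat-deployment.yml sketch might look like this. It reuses the app image with an overridden command; the image name and the "shop" Celery app module in the command are assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: beat
  labels:
    app: beat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: beat
  template:
    metadata:
      labels:
        app: beat
    spec:
      containers:
        - name: beat
          image: techwithmike/shop:latest   # assumed; same app image as Django
          # overridden command; "shop" is an assumed Celery app module name
          command: ["celery", "-A", "shop", "beat", "-l", "info"]
          envFrom:
            - configMapRef:
                name: shop-config
            - secretRef:
                name: shop-secret
```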

Okay, as you know, we use the same image for the app, Beat, and worker modules, but they run different commands, so above you can see that I override the command. There is nothing else I would emphasize, so let’s create our worker pod. Create a new file named worker-deployment.yml, put the content below in it, and then apply it.
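A worker-deployment.yml sketch might look like this: the same image with a worker command, plus the media volume mount that the next paragraph explains. The image name, command module, and mount path are assumptions; the media-pvc claim comes from the article:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker
  labels:
    app: worker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: worker
  template:
    metadata:
      labels:
        app: worker
    spec:
      containers:
        - name: worker
          image: techwithmike/shop:latest   # assumed; same app image
          # overridden command; "shop" is an assumed Celery app module name
          command: ["celery", "-A", "shop", "worker", "-l", "info"]
          envFrom:
            - configMapRef:
                name: shop-config
            - secretRef:
                name: shop-secret
          volumeMounts:
            - name: media-volume-mount
              mountPath: /app/shop/media    # assumed media path
      volumes:
        - name: media-volume-mount
          persistentVolumeClaim:
            claimName: media-pvc
```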

As you can see, it’s similar to the Beat deployment with just two differences. One is the command; you already know the worker runs a different command. The other is the volume. Why do we mount a volume here but not in Beat? Worker pods need access to the media files: for example, in our Django app we create thumbnails from product images in an asynchronous task, so the worker must be able to reach the media files.

Now the Celery module should be properly set up, and your tasks should work correctly.

Congratulations, you have successfully deployed a simple Django app over a virtual Kubernetes cluster.


Written by Tech with Mike

Python/Django Developer, DevOps Engineer, Youtuber.
