Hands-on Day 1 and Day 2 Operations in Kubernetes using Django and AKS — Part 1

Ousama Esbel
Published in COMPREDICT
7 min read · Feb 26, 2021

Kubernetes has become the de facto container orchestrator thanks to its rich functionality and flexibility. Although the Kubernetes documentation is thorough and provides many examples, it is not straightforward to combine these tutorials into an end-to-end deployment of a real-life application with several services. To that end, I will demonstrate how to deploy a realistic application on Azure Kubernetes Service (AKS) as the production platform. I will also discuss day 1 and day 2 operations of the application life cycle in Kubernetes in a series of articles. Here are the main headlines:

  • Discuss the application and set up the cluster, container registry and the production namespace. (part 1)
  • Deploy Config Maps, Secrets and Persistent Volumes. (part 2)
  • Deploy, monitor and define update strategies for the services including setting up Traefik as Ingress Controller. (part 3)
  • DevOps and auto-deployment using GitHub Actions. (part 4)

You don’t have to use Azure Kubernetes Service per se; you can easily re-configure the manifests to be compatible with any Kubernetes installation, such as AWS EKS or Linode. However, as a prerequisite, you need a basic knowledge of Kubernetes, Docker, YAML and shell scripting. In addition, if you want to run the application along with the tutorial, you need an Azure account and a local clone of the project repository.

In this article, I will explain the application, then set up the production cluster and the private container registry.

Application Overview

The application is a web service developed in Python using the Django framework. Whenever you start a new project, there is a lot of boilerplate to take care of before you can develop the actual application or translate your business model: preparing the development environment, integrating Docker, installing Python libraries, creating the database, building the continuous integration pipeline, and so on. To jump-start development and get straight to the application’s logic, I used Cookiecutter Django. If you use Django but have never heard of Cookiecutter Django, I encourage you to give it a look, as it provides various functionalities for all stages out of the box.

At the time of writing, Cookiecutter Django does not provide Kubernetes integration, so I extended it and added the necessary Kubernetes manifests to deploy to an Azure Kubernetes cluster.

The web application consists of the following services:

  • Django: Web framework.
  • Redis: Message broker for offloading heavy tasks to workers.
  • Celeryworker: Processes background tasks.
  • Celerybeat: Processes scheduled tasks.
  • Postgres: Database.
  • Flower: Task monitoring tool for Celery.

I have implemented a simple use case in the application to demonstrate how the services communicate and are deployed in the cluster. Basically, the Django web app provides a form in the admin panel where a user can upload an image and specify its title and caption. The application then pushes a task onto the queue to resize the image. The first free worker picks up the task, resizes the image and stores it.

With that, Django, Postgres, Redis and the celeryworker need to communicate with each other to save and resize the image. The communication works as follows:

  1. Upon form submission, Django stores the record in the database.
  2. Django uploads the image to a shared volume.
  3. Django notifies the queue that a new record has been stored in the database with a given ID.
  4. An unoccupied celeryworker checks the queue and picks up the task.
  5. The celeryworker queries the record from the database.
  6. The celeryworker gets the image from the shared volume, resizes it and overwrites it in the shared volume.
  7. The celeryworker notifies Redis about the result of the task.

The following graph illustrates the previous steps:

Step by step: how the resize-image task is carried out in the application.

With the aforementioned scenario, we need three different communication mechanisms in Kubernetes to perform the task:

  • Kubernetes LoadBalancer resource to access the admin panel.
  • ClusterIP to communicate between the services and pass the record’s ID.
  • Kubernetes shared Persistent Volume to access and process the image.

We could convert the image to a base64 string and pass it through the internal network without using a persistent volume. However, this would hurt performance and add communication overhead. Also, for the sake of learning, it is great to utilize different Kubernetes resources.

During development, I prefer to work with docker-compose, since Kubernetes is mainly used in production to manage services across multiple nodes, while local development happens on a single node. However, you can still develop with Kubernetes locally on one node using Docker Desktop or Minikube.

The specification of the services can be found in local.yml. With that, you can run the application using:

docker-compose -f local.yml up --build

Azure Kubernetes Cluster

If you don’t already have an Azure account, you can create one for free. You will get free credits that should be more than sufficient to run this tutorial.

In the Azure portal, it is better to group the resources of your app in a logical group, called a Resource Group. Then, within your newly created group, click Add and search for Kubernetes Service. The form is straightforward; just make sure to remember the resource group and cluster name, and enable the following:

  • In the Authentication tab, enable AKS-managed Azure Active Directory. This lets you connect to the cluster with the Azure CLI from your device (or from your CI runner) and control it locally using kubectl commands.
  • In the Networking tab, set the network configuration to Azure CNI. This creates an Azure Virtual Network instead of using the default Kubernetes network plugin, kubenet.
  • Create a new container registry and enable the admin user. You could use an external private registry such as Docker Hub or Harbor; however, in this tutorial I will use Azure Container Registry (ACR).
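The portal steps above can also be sketched with the Azure CLI. The resource names and location below are placeholders, and flags can vary between CLI versions, so treat this as a rough equivalent of the form rather than a drop-in script:

```shell
# Placeholder names -- substitute your own.
RESOURCE_GROUP=myapp-rg
CLUSTER_NAME=myapp-aks
ACR_NAME=myappacr

# Resource group to hold everything, as in the portal.
az group create --name "$RESOURCE_GROUP" --location westeurope

# Container registry with the admin user enabled.
az acr create --resource-group "$RESOURCE_GROUP" --name "$ACR_NAME" \
  --sku Basic --admin-enabled true

# AKS cluster with AKS-managed Azure AD and Azure CNI networking,
# attached to the registry so the cluster can pull images from it.
az aks create --resource-group "$RESOURCE_GROUP" --name "$CLUSTER_NAME" \
  --enable-aad --network-plugin azure --attach-acr "$ACR_NAME"
```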

Once the cluster is created, open your local terminal and run the following commands:

SUBSCRIPTION_ID=  # your Azure subscription ID
RESOURCE_GROUP=   # the resource group you created
CLUSTER_NAME=     # the name of your Kubernetes service
az account set --subscription ${SUBSCRIPTION_ID}
az aks get-credentials --resource-group ${RESOURCE_GROUP} --name ${CLUSTER_NAME}

The command will then prompt you to log in by providing a URL and a code that you need to enter in a browser. Once authenticated, you can apply changes to the cluster from your command line. To test it, run the following command:

kubectl get all --all-namespaces

With that, you are actually getting all the information from your Azure cluster.

Azure Container Registry

If you go to your resource group in the portal, you will find two additional resources created along with your cluster: a Virtual Network and a Container Registry. Now we need to build the production images and push them to ACR so the cluster can find and pull them.

For starters, click on the container registry and navigate to Access Keys. There you can find the login server and the credentials you need to log in. In the cloned repository, modify the .env file and set CR_URL to the value of the login server.
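To make this concrete, here is a hypothetical sketch of how the registry URL ends up prefixing the image names that docker-compose builds and pushes. The host and image name are placeholders; check production.yml for the actual naming scheme:

```shell
# Placeholder -- use the login server shown under Access Keys.
CR_URL=myregistry.azurecr.io

# docker-compose prefixes image names with this host, so a push goes
# to ACR instead of Docker Hub, e.g.:
DJANGO_IMAGE="${CR_URL}/production_django:latest"
echo "$DJANGO_IMAGE"   # myregistry.azurecr.io/production_django:latest
```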

Then, log Docker in to the registry by running the following command in your terminal:

docker login -u ${ACR_USERNAME} -p ${ACR_PASSWORD} https://${ACR_URL}

Once successfully logged in, you can then build your production images and push them to the Container Registry using the following commands:

docker-compose -f production.yml build
docker-compose -f production.yml push

You can verify that they are pushed by navigating to Repositories tab in your Container Registry dashboard.
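You can also do this from the command line. Assuming the Azure CLI is installed and you are signed in, `az acr login` authenticates Docker without copying credentials from the portal, and `az acr repository list` shows what has been pushed (the registry name below is a placeholder):

```shell
# Placeholder -- your registry's name, without the azurecr.io domain.
ACR_NAME=myappacr

# Authenticate the local Docker daemon against the registry.
az acr login --name "$ACR_NAME"

# List the repositories that have been pushed so far.
az acr repository list --name "$ACR_NAME" --output table
```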

Production Namespace

Namespaces in Kubernetes are a way to logically isolate services or applications from each other. They are not strictly necessary, but they offer several advantages and are recommended by Kubernetes best practices. Namespaces help with:

  • Enforcing roles and responsibilities between teams, by specifying who can access what in the cluster.
  • Partitioning landscapes, for example logically separating staging from production services, or app1 from app2.

In almost every Kubernetes distribution, the cluster comes with several namespaces out of the box. An important one, kube-system, contains the Kubernetes system controllers and API. There are a few cases where you need to modify the services inside it, but it is rarely necessary. Another namespace that Kubernetes distributions usually provide is one for you to create your services in. It is called, well, the default namespace.

As per best practices, it is better to organize your application in its own namespace and assign the correct roles and responsibilities so that a team can access and modify it. For our use case, I will create a namespace called production. The following snippet shows how to create a namespace:

Kubernetes Namespace manifest
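A namespace manifest is minimal; it only needs an `apiVersion`, a `kind` and a name:

```yaml
# compose/kubernetes/namespaces/production.yml
apiVersion: v1
kind: Namespace
metadata:
  name: production
```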

To deploy it, run the following command:

kubectl apply -f compose/kubernetes/namespaces/production.yml
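Once the namespace exists, it can be convenient to make it the default for the current kubectl context, so that subsequent commands don't each need an explicit `-n production` flag:

```shell
NS=production

# Confirm the namespace was created.
kubectl get namespace "$NS"

# Make it the default namespace for the current context.
kubectl config set-context --current --namespace="$NS"
```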

Next

In the next article, I will cover setting up all the requirements prior to releasing and launching the application. Moreover, I will cover configurations, secrets and shared volumes.

Clean Up

If you ran the tutorial, go ahead and delete the resource group; Azure will then delete every resource in that group. Additionally, delete the service principal that was created along with the resource group.


Ousama Esbel
Head of IT at COMPREDICT GmbH; previously worked as a full-stack machine learning engineer. Enthusiastic about AI.