Hands-on Day 1 and Day 2 Operations in Kubernetes using Django and AKS — Part 2

Ousama Esbel
COMPREDICT
Published in
7 min read · Mar 8, 2021

Kubernetes has become the de facto container orchestrator thanks to its rich functionality and flexibility. Although the Kubernetes documentation is thorough and provides many examples, it is not straightforward to combine all these tutorials and use them to deploy a real-life application with several services end-to-end. To that end, I will demonstrate how to deploy a real-world application on Azure Kubernetes Service (AKS) as the production platform. Moreover, I will discuss day 1 and day 2 operations of the application lifecycle in Kubernetes in a series of articles. Here are the main headlines:

  • Discuss the application and set up the cluster, container registry and the production namespace. (part 1)
  • Deploy Config Maps, Secrets and Persistent Volumes. (part 2)
  • Deploy, monitor and define update strategies for the services including setting up Traefik as Ingress Controller. (part 3)
  • DevOps and Auto deployment using Github Action. (part 4)

You don’t have to use Azure Kubernetes Service per se; you can easily re-configure the manifests to be compatible with any Kubernetes installation, such as AWS EKS or Linode. However, as a prerequisite, you need a basic knowledge of Kubernetes, Docker, YAML and shell scripting. In addition, if you want to run the application along with the tutorial, you need to complete the cluster, container registry and namespace setup described in part 1.

This article discusses how to manage the application’s environment variables and configurations as well as how to set up the volume profiles that will be mounted onto the containers.

ConfigMaps and Secrets

Before we start deploying our services, we need to configure the environment variables and secrets that the services need to function properly. To that end, I have translated all the cookiecutter .envs files into ConfigMaps and Secrets. You can add ConfigMaps and Secrets manually to your cluster through the Configuration tab in your Azure cluster dashboard. However, it is better to automate these steps through manifests.

You can find the configuration files in compose/kubernetes/configmaps and compose/kubernetes/secrets. To deploy them to your cluster, run the following commands:

kubectl create -f compose/kubernetes/configmaps/.
kubectl create -f compose/kubernetes/secrets/.
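
As an illustration, a .env file translated into a ConfigMap and a Secret might look like the following minimal sketch. The resource names, keys and values here are hypothetical placeholders, not the repository’s actual files:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: django-config          # hypothetical name
  namespace: production
data:
  # Non-sensitive settings go into the ConfigMap as plain key/value pairs.
  DJANGO_SETTINGS_MODULE: config.settings.production
  DJANGO_ALLOWED_HOSTS: example.com
---
apiVersion: v1
kind: Secret
metadata:
  name: django-secrets         # hypothetical name
  namespace: production
type: Opaque
stringData:
  # stringData accepts plain text; Kubernetes stores it base64-encoded.
  DJANGO_SECRET_KEY: change-me
  POSTGRES_PASSWORD: change-me
```

Services can then consume these resources via envFrom or individual env entries in their Deployment manifests.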

A Kubernetes Secret resource is not the best way to store sensitive data, as the values are only base64-encoded, not encrypted. Instead, we can use Azure’s Key Vault resource to store these values securely and provide them to the services. However, Key Vault is more complicated to configure and integrate than Secrets.

Persistent Volumes

Persistent Volumes (PVs) are crucial resources for making an application stateful. For our application, we need three different volumes:

  • Volume for Postgres data.
  • Volume for backing up Postgres.
  • Shared volume for the media directory that contains the uploaded images.

Kubernetes provides two methods to create and attach PVs: dynamically or statically. As a developer using Kubernetes, you are only allowed to create volumes based on profiles determined by the cluster administrator, called Storage Classes. Out of the box, Azure provides four different Storage Classes. However, none of them retains the underlying storage when the volume is deleted, so you would lose all your data. To tackle this issue, you can take one of the following approaches:

  • Create a custom Storage Class and set the reclaim policy to retain the data. Then, you can dynamically create volumes with this class.
  • Create a static persistent volume from the default storage class and override the reclaim policy.
  • Dynamically create a Persistent Volume and then patch the created volume to retain the data.

In this tutorial, I will go through all of the above approaches, although the last option is the least favorable because it cannot be reproduced from manifests. Here are the strategies that will be used:

  • For the shared media, a static Persistent Volume will be created from azurefile Storage Class.
  • For Postgres data, I will create a custom Storage Class that defines an Azure disk which retains its data, then create a dynamic Persistent Volume from it.
  • For Postgres backups, I will create a dynamic Persistent Volume from Azure’s default Storage Class and manually patch the Persistent Volume to retain the data.

Media Shared Volume

In static volume provisioning, we need to create a Persistent Volume that describes the volume characteristics, such as the access mode, capacity and the backing storage. Then, we need to create a Persistent Volume Claim (PVC) that services use to attach to the PV.

To do that, we first have to create an Azure Storage Account. Head to your resource group, click “Add” and search for “storage account”. The form is straightforward; just make sure to remember the storage account name, to set the location to the same as the cluster’s location, and to keep everything else at its default (for real production, you might want to enable soft-delete for restoration purposes).

Once created, click on the storage account and, in the left menu, click on “File shares”. Click “+ File share” and set the name and quota as you see fit (in my code I named the file share aksshare).

Now, we need to create a Persistent Volume manifest that will attach the newly created file share to our services (namely Django, Celeryworker and Celerybeat). The following snippet shows how to create the resource:

Persistent Volume manifest of media volume
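
A minimal sketch of such a static azureFile PV and its claim, assuming the share is named aksshare and the credentials Secret is called pm as in the steps above (the resource names and the 5Gi size are illustrative choices):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: media-pv               # illustrative name
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany            # shared by Django, Celeryworker and Celerybeat
  persistentVolumeReclaimPolicy: Retain
  azureFile:
    secretName: pm             # the Secret holding the storage account credentials
    shareName: aksshare
    readOnly: false
  mountOptions:
    # Mount with permission 0777 for owner ID 200, matching the service user
    # created in each Dockerfile.
    - dir_mode=0777
    - file_mode=0777
    - uid=200
    - gid=200
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: media-pvc              # illustrative name
  namespace: production
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""         # empty class prevents dynamic provisioning
  volumeName: media-pv         # bind statically to the PV above
  resources:
    requests:
      storage: 5Gi
```

Setting storageClassName to an empty string and pinning volumeName ensures the claim binds to this specific PV rather than triggering dynamic provisioning.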

Bear in mind that the volume type used is azureFile, which requires credentials in order to mount. The credentials are passed to the volume using a Secret. In the cloned repository, I have provided an example Secret manifest in compose/kubernetes/secrets/pm.yaml.example; rename it to pm.yaml and fill in the values. You can find the values in the dashboard of the Azure Storage Account you created: go to “Access keys” in the menu and click “Show keys”. You need to copy the storage account name and either of the provided keys.

Additionally, the volume will be mounted with permission 0777 for owner ID 200. In the Dockerfile of each service, I created a user with ID 200 to run that service.

To deploy the volume, run the following commands:

kubectl create -f compose/kubernetes/secrets/pm.yaml
kubectl create -f compose/kubernetes/persistent_volumes/media.yaml

To verify that the PV and PVC have been created, run the following:

kubectl get pv -n production
kubectl get pvc -n production

You should see that the PV has been created and that the PVC is bound to it.

Postgres Data Volume

First of all, we need to create a custom Storage Class that can retain the data. The following snippet shows the configuration needed to create the Storage Class:

Custom StorageClass manifest to retain the data within the volume.
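
A sketch of such a Storage Class, assuming it is named retain-azure-disk to match the file name used below (the disk SKU in parameters is an illustrative choice):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: retain-azure-disk      # assumed name, matching retain_azure_disk.yaml
provisioner: kubernetes.io/azure-disk
reclaimPolicy: Retain          # keep the Azure disk when the PV is released
allowVolumeExpansion: true     # allow growing volumes on the fly
parameters:
  storageaccounttype: StandardSSD_LRS   # illustrative disk SKU
  kind: Managed
```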

What is important is to set the reclaim policy to Retain. Additionally, allowVolumeExpansion is very useful for expanding volumes on the fly. To create the custom Storage Class, run the following command from the persistent_volumes folder:

kubectl create -f storage_classes/retain_azure_disk.yaml

In dynamic volume provisioning, we don’t need to create a PV; we only need to create a PVC that uses the custom Storage Class:

Persistent Volume Claim that uses the already created storage class.
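
A sketch of that claim, assuming the Storage Class from the previous step is named retain-azure-disk (the claim name and 10Gi size are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data          # illustrative name, matching postgres-data.yaml
  namespace: production
spec:
  accessModes:
    - ReadWriteOnce            # an Azure disk attaches to a single node
  storageClassName: retain-azure-disk   # the custom class created above
  resources:
    requests:
      storage: 10Gi
```

When the claim is created, Kubernetes provisions a matching Azure disk and binds a PV to it automatically.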

As before, we deploy the PVC using kubectl apply -f postgres-data.yaml

Postgres Backups Volume

For the Postgres backups, we will dynamically create a Persistent Volume. However, we will not use the custom Storage Class created earlier. As mentioned before, in dynamic volume provisioning we only create the claim. The PVC is very similar to the one used for Postgres data; the only difference is that we set storageClassName to default, to use the default Storage Class provided by Azure. When you create the PVC, you will notice that its reclaim policy is Delete.
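
Under that assumption, the backups claim can be sketched as follows (the claim name and size are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-backups       # illustrative name
  namespace: production
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: default    # Azure's default Storage Class
  resources:
    requests:
      storage: 10Gi
```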

We can update the reclaim policy of this volume through the Kubernetes API as follows:

kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

where <pv-name> in the above case is pvc-65e37164-058b-4698-bd6a-908a953e1e27. As convenient as this is, it is hard to trace and document, so it is advisable to prefer manifests over patching.

Next

In the next tutorial, the microservices will be launched and exposed to the outside world using Traefik and a LoadBalancer. Both Django and Flower will be secured with TLS. In addition, I will discuss day 2 operations, specifically monitoring the services using health checks and strategies for updating the application.

Further Improvements

  • Use Helm to automate the deployment to Kubernetes and make the manifests more customizable.
  • Integrate Key Vault instead of the Kubernetes Secret resource.
  • Use the same Virtual Network for the Azure Storage Account and the Kubernetes cluster, and enable soft-delete.

Clean Up

If you ran the tutorial, go ahead and delete the resource group; Azure will then delete every resource in that group. Additionally, delete the service principal that was created along with the resource group.


Head of IT at COMPREDICT GmbH, worked as full stack machine learning engineer. Enthusiastic about AI.