Dockerizing Django Application with PM2 and Nginx

A detailed guide on creating a production-ready Docker image for your Django application.

Avi Khandelwal
DATA PEACE AI
13 min read · Aug 24, 2020


You are a professional software developer writing some awesome code for your company. Your application is ready to be tested, so you open up your terminal, fire up the engines, and start the application. It works perfectly on your local machine. Finally, you push your tested code to a version control system, maybe GitHub. One of your team members runs the same code on their machine, and boom, something breaks! If this sounds familiar, you have run into the classic “But, it works on my machine” situation.

There are many possible reasons why these things keep happening in software development. Some of them are:

  1. Different environment configurations.
  2. Missing libraries and dependencies.

There are quite a few ways to prevent these development pitfalls, and one of them is to dockerize the application.

In this blog post, I will show you how to configure a Django application to run on Docker. To make this production-ready, we’ll use PM2, a daemon process manager that helps you manage your application and keep it online 24/7. As a bonus, we’ll also take a look at how to integrate Docker containers with AWS CloudWatch for logging.

Enough overview. Let’s jump right into Docker.

Docker

Why Docker?

Docker is a tool designed to make it easier for development teams to ship applications using containers. It helps package all the parts of an application, such as libraries and other dependencies, and deploy it as one unit. When the Docker image is shared among team members, anyone can run a container from that image in a matter of seconds. The image can also be pushed to Docker Hub so that anyone can pull it and use it. It’s that simple.

Unlike virtual machines, Docker does not create a whole virtual operating system; containers share the kernel of the host they run on. This makes it easy to ship applications with dependencies that are not already installed on the host computer.

The development team can now spend more time writing awesome code without worrying about the machines it will eventually run on. Docker gives developers isolated environments for their applications, which eliminates the risk of one application breaking another.

Prerequisites

Before we begin, it is assumed that you have:

  1. Created a Django web application.
  2. Installed Docker and Docker Compose on your machine.
  3. An existing AWS account with its credentials stored securely.
  4. Access to AWS CloudWatch, with the IAM policies required to write logs attached.

The Dockerfile

We’ll begin by creating a Dockerfile at the project’s root. A Dockerfile is a text document (without any extension) that contains all the commands you would otherwise run on the command line to build an image. During the build, Docker reads the instructions in it one by one to assemble the image.

Add a Dockerfile at the project’s root with the following content:
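
A minimal sketch of such a Dockerfile, following the steps described below; the Node.js installation (PM2 is an npm package), the /app working directory, and the log directory path are assumptions:

```dockerfile
# Sketch, not the exact original file; paths and package names are assumptions.
FROM python:3.8.5-slim

# PM2 is an npm package, so install Node.js and npm first, then PM2,
# and clean up the apt lists to keep this layer small.
RUN apt-get update \
    && apt-get install -y --no-install-recommends nodejs npm \
    && npm install -g pm2 \
    && rm -rf /var/lib/apt/lists/*

# Send Python output straight to the terminal and define the app directory.
ENV PYTHONUNBUFFERED=1 \
    APP_HOME=/app
WORKDIR $APP_HOME

# Copy and install the requirements first, so this layer stays cached
# until requirements.txt changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the source code, create a directory for Django's log files,
# and make the entrypoint script executable.
COPY . .
RUN mkdir -p /var/log/app && chmod +x docker-entrypoint.sh

ENTRYPOINT ["./docker-entrypoint.sh"]
CMD ["run", "api"]
```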

Let’s break down the content of this Dockerfile:

The first line of the Dockerfile starts with the FROM instruction. This sets the base image for our Django application for all subsequent instructions. We are using a slim-based Docker image for Python 3.8.5, which keeps the final image small and its attack surface limited while still including the libraries most Python applications need.

Note: I would not recommend using an Alpine-based Docker image, especially with Python. It can result in slower build times and obscure runtime bugs. For more information, you can check this article.

The RUN instruction tells Docker to execute, in a new layer, any command that is required to build the image. We use it to install PM2 as the process manager for our Django application. Lastly, we do some housekeeping and clean up all unnecessary files. The RUN statement can be used multiple times for installing dependencies, but remember that every RUN statement adds a layer; to keep things efficient, minimize the number of layers.

The ENV instruction sets environment variables for our application using a key and a value. ENV variables are available while building the image as well as when a container is started from it. Here we can set the environment for our Django application, such as development, staging, or production. We can also store the project’s working directory in an ENV variable and then point WORKDIR at that path.

PYTHONUNBUFFERED instructs Python not to buffer its standard output but to send it straight to the terminal, so container logs appear immediately.

The COPY instruction copies the requirements.txt file from the project’s root into the working directory we set earlier through WORKDIR.

Next, we’ll install all our project libraries from requirements.txt using pip. Did you notice that we install the libraries before copying the source code into the image? Since requirements.txt rarely changes, Docker can reuse the cached layers up to the point where the source code is copied in, so dependencies are not reinstalled on every rebuild.

Next, copy the complete source code to the project’s working directory inside the container. Additionally, a log directory can be created if your Django application is configured for logging through settings.py.

The ENTRYPOINT instruction allows you to configure a container that will run as an executable. It has two forms:

  • ENTRYPOINT [“executable”, “param1”, “param2”] (exec form, which is the preferred form)
  • ENTRYPOINT command param1 param2 (shell form)

The shell form invokes /bin/sh -c <command>, so normal shell processing takes place, whereas the exec form does not invoke a command shell, which means no shell processing happens.

Inside the exec form of the entrypoint instruction, we tell Docker to always execute a bash script named docker-entrypoint.sh. We’ll look at the docker-entrypoint.sh script later in this post.

Lastly, we define the CMD instruction and pass the commands to run the Django API application.

The CMD instruction has three forms:

  • CMD [“executable”, “param1”, “param2”] (exec form, this is the preferred form)
  • CMD [“param1”, “param2”] (as default parameters to ENTRYPOINT)
  • CMD command param1 param2 (shell form)

When you specify more than one CMD instruction, only the last one takes effect. Remember that you can use ENTRYPOINT and CMD together: the ENTRYPOINT specifies the default executable for your image, while CMD supplies the default arguments to that executable. To understand more deeply how our ENTRYPOINT and CMD instructions work together, we’ll look at the docker-entrypoint.sh script next.

The docker-entrypoint.sh script

Remember that we have added a docker-entrypoint.sh script to the ENTRYPOINT instruction of the Dockerfile to run as an executable. Go ahead and create a docker-entrypoint.sh file at the project’s root and populate it with the following content:
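
A minimal sketch of the script, matching the walkthrough below; the exact usage text and the PM2 ecosystem file name are assumptions:

```bash
#!/bin/bash
# Sketch of the entrypoint; function names follow the walkthrough below.
set -euo pipefail

run_db_migrations() {
    python manage.py migrate --noinput
}

run_collect_static_files() {
    python manage.py collectstatic --noinput
}

show_usage() {
    echo "Usage: <command> <target>"
    echo "  run api    apply migrations, collect static files, start the API via PM2"
    echo "  help       show this message"
}

# Default to "help" when no argument is passed through CMD or docker run.
if [ "$#" -eq 0 ]; then
    set -- help
fi

case "$1" in
    run)
        case "${2:-}" in
            api)
                run_db_migrations
                run_collect_static_files
                # pm2-runtime keeps PM2 in the foreground, as containers require.
                exec pm2-runtime start ecosystem.config.js
                ;;
            *)
                show_usage
                exit 1
                ;;
        esac
        ;;
    *)
        show_usage
        ;;
esac
```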

For simplicity, let me break it down:

At the beginning of any bash script, it is a good idea to include set -euo pipefail: it makes the script exit whenever any command fails (-e), treats unset variables as errors (-u), and returns the exit code of the first failing command in a pipeline (-o pipefail).

Next, we check the argument passed to the docker command. If nothing is passed, the default argument is assumed to be help and the usage is shown. Otherwise, the argument is set to whatever was passed through the docker command.

The next two functions apply the Django migrations for any changes you make to your models and collect the static files into a specified directory.

The show_usage() function, when executed, displays how the image is meant to be invoked, including how to start the Django API application.

This is the docker-entrypoint.sh main block. A case statement processes the passed arguments: if the first argument passed through CMD in the Dockerfile is run and the second argument is api, it invokes the run_db_migrations and run_collect_static_files functions and then starts the Django API application; otherwise, it defaults to showing the usage and exiting.

The main advantage of this approach is that it gives you more control over the default arguments passed to the ENTRYPOINT executable. You can override the CMD arguments with your own choice when running a container, without touching the Dockerfile. Isn’t that amazing?
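
For example, assuming the image is tagged app:latest as in the Compose file below:

```bash
# Default CMD from the Dockerfile is "run api", so this starts the API:
docker run app:latest

# Override the CMD arguments without touching the Dockerfile:
docker run app:latest help
```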

Did you notice that we are using PM2? It’s the process manager that keeps the application online 24/7. We can easily wrap Gunicorn by putting it in the script field of a PM2 ecosystem configuration file. You can learn more about the PM2 ecosystem file here.
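
A minimal sketch of such an ecosystem file; the app name, Gunicorn path, WSGI module, and bind address are assumptions:

```javascript
// ecosystem.config.js — sketch; adjust names and paths to your project.
module.exports = {
  apps: [
    {
      name: "app-api",
      // Wrap Gunicorn via the script field, as described above.
      script: "/usr/local/bin/gunicorn",
      args: "config.wsgi:application --bind 0.0.0.0:8000",
      interpreter: "python3",
    },
  ],
};
```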

The Docker Compose file

Since we are not only going to run the Django API application but also need to add support for Nginx and the AWS CloudWatch logging driver, we need a way to run multiple containers for the Django application.

Docker Compose is a tool for running multi-container Docker applications. It uses a YAML file in which we define all the services our Django application needs. The main advantage of Compose is that a single command can build and run every service defined in the configuration file.

We’ll be creating two Compose files, one for the development environment and another for the production environment.

With that, in your project’s root, create a docker-compose.yml file, and populate it with the following content:
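
A minimal sketch of the Compose file described below; the service names, image versions, ports, and volume paths are assumptions:

```yaml
version: "3.8"

services:
  api:
    # Build from the current directory and tag the image app:latest.
    build: .
    image: app:latest
    # Persist the Django static files in a named volume.
    volumes:
      - static_volume:/app/static
    env_file:
      - .env
    secrets:
      - app_env

  nginx:
    image: nginx:1.19
    container_name: app-nginx
    ports:
      - "80:80"
    volumes:
      # Share the static files and mount the Nginx configuration file.
      - static_volume:/app/static
      - ./nginx/app-api.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - api

volumes:
  static_volume:

secrets:
  app_env:
    file: ./.env
```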

Let’s break down the snippet:

The first line of the Compose file specifies the file format version. You must choose the version that matches the Docker Engine release you are using.

Next, we define the services; each service contains the configuration that is applied to every container started for that service.

The first service we define is the Django API application, which builds from the current directory and tags the image app:latest. Next, we define volumes for persisting the Django static files. We’ll discuss secrets in a minute. The environment for the Django application can be set dynamically using a .env file, which I’ll show later in this post.

Next we define the Nginx service for the Django application, using the Nginx image and assigning a custom container name. Again, we use volumes to persist the static files and to mount the Nginx configuration file at the expected path inside the container. depends_on expresses that the Nginx service depends on the API service.

Our Django application uses an env file to store sensitive information such as database passwords and authentication tokens, and you must not transmit these secrets over the network or store them unencrypted in a Dockerfile or in the application’s source code. A good approach is to use Docker secrets, which manage this data centrally and transmit it securely to only those containers that need access to it.

For setting up Nginx for the Django application, a configuration file is required. In the project’s root create nginx/ directory and inside that create an app-api.conf file with the following content:
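
A minimal sketch; the upstream service name, Gunicorn port, and static files path are assumptions that must match the Compose file and your Django settings:

```nginx
# nginx/app-api.conf — sketch; "api" is the Compose service name.
upstream app_api {
    server api:8000;
}

server {
    listen 80;

    # Serve the collected static files directly from the shared volume.
    location /static/ {
        alias /app/static/;
    }

    # Proxy everything else to Gunicorn via PM2.
    location / {
        proxy_pass http://app_api;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```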

Production Docker Compose

Create a docker-compose.prod.yml file in the project’s root with the following content:
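
A minimal sketch of the production override; the variable names are assumptions that map to the .env file shown in the next section:

```yaml
version: "3.8"

services:
  api:
    logging:
      driver: awslogs
      options:
        awslogs-region: ${AWS_REGION}
        awslogs-group: ${API_LOG_GROUP}
        awslogs-create-group: "true"

  nginx:
    logging:
      driver: awslogs
      options:
        awslogs-region: ${AWS_REGION}
        awslogs-group: ${NGINX_LOG_GROUP}
        awslogs-create-group: "true"
```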

This production docker-compose file is only meant for configuring the AWS CloudWatch logging driver that sends the container logs to CloudWatch. Notice that all the required logging options are set dynamically from the env file, which is the next topic of discussion. Make sure you create the log group through the AWS CloudWatch console beforehand, or set awslogs-create-group to true to create the log group automatically as needed.

The .env file

The Docker environment variable file (.env) is very useful for setting the applications’ environment variables. It lets you set keys and values dynamically instead of hardcoding them in the Docker files, and it can be reused for different environments and containers with just a quick edit.

It has to be created in the same directory where the docker-compose command is executed.

Create a .env file in your project’s root with the following content:
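
A minimal sketch; every variable name and value here is a placeholder to adapt to your own project:

```
# Django settings (placeholders)
DJANGO_SETTINGS_MODULE=config.settings.production
DJANGO_SECRET_KEY=change-me

# AWS CloudWatch logging options used by docker-compose.prod.yml
AWS_REGION=us-east-1
API_LOG_GROUP=app-api-logs
NGINX_LOG_GROUP=app-nginx-logs
```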

With that, you can set your own environment variables as needed for the Django application.

Use a .dockerignore file to improve Docker images

When working in a production environment, it is necessary to avoid bulky Docker images that contain unnecessary files and directories. We have to focus on keeping the images small, so they build fast and are more secure. The .dockerignore file is a plain text file where we list the files and directories that should not be part of the final image.

At the project’s root, create a .dockerignore file with the following content:
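
A typical example for a Django project; adjust the entries to your own layout:

```
# Version control and local tooling
.git
.gitignore

# Python artifacts
__pycache__/
*.pyc
*.pyo
venv/

# Secrets and Docker files that don't belong in the image
.env
Dockerfile
docker-compose*.yml
.dockerignore
README.md
```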

Time to test things

Finally! It’s time to test what we’ve done so far. Open up your terminal to get the containers running. First, build the images, and then start the services:
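
```bash
# Build the images defined in docker-compose.yml, then start the services.
docker-compose build
docker-compose up -d
```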

Or you can combine the commands into one:
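
```bash
# Build (if needed) and start the services in one step.
docker-compose up --build -d
```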

This single command builds the image and then starts the services. The first build will take some time, so sit back and relax. Subsequent builds will be much faster thanks to Docker’s layer caching.

(Screenshot: Docker build output)

Next, run:
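
```bash
docker ps
```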

to check the running containers.

(Screenshot: the running containers)

Visit your server’s IP address and you’ll see the Django administration page:

(Screenshot: Django administration page)

Since we’ve made another Compose file for logging out the logs to AWS CloudWatch, go ahead and run:
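
Assuming the file names used above, the production override is layered on top of the base file:

```bash
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d --build
```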

(Screenshot: Docker Compose in production)

To see the logs, navigate to the AWS Management Console and open the AWS CloudWatch service. In the Logs section of the navigation pane, you’ll see that two log groups have been created: one for the Django API application and another for Nginx. Inside the log groups, you’ll find the log streams where all the logs are collected.

(Screenshots: CloudWatch log groups and log streams)

Conclusion

In this post, we learned how to dockerize a Django application using PM2 and Nginx. We also created a production-ready Docker Compose file to collect logs and send them to AWS CloudWatch. Additionally, we walked through how to set up a Docker entrypoint script that can be reused for Django applications other than this API.

Although we didn’t set up any database service such as Postgres, I suggest using a fully managed database service such as AWS RDS for the production deployment.

If you enjoyed this post, I’d be very grateful if you’d spread the word by emailing it to a friend or colleague. Thank you!


Avi Khandelwal
DATA PEACE AI

A DevOps enthusiast who loves to automate repetitive tasks, saving some time and energy.