Django-React App — Part 3: Production Deployment

Roman Kosanovic · Published in ascaliaio · Jan 10, 2023 · 11 min read

TL;DR: Using the same single Dockerfile per repository that served local development in Part 1 and Part 2, build production images of the Django backend and React frontend and deploy them to a Kubernetes cluster.

Introduction & Motivation

As seen in the previous articles, a single Dockerfile is written per repository and meant to be used for both local development and production builds. Having only one Dockerfile reduces the number of files in a repository and makes it easier to manage how an image is built for all purposes.

A few things to keep in mind about a production-grade image are its size, security, and stability.

Size is influenced by the number of dependencies and their own sizes, but also by how the Dockerfile is written. In production the goal is to keep the number of OS and language dependencies to a bare minimum: basically only what is needed to successfully run the image. A dependency list such as requirements.txt in the Python world or package.json in the JS world should therefore be revised and maintained regularly by the software engineers responsible for the app.

The Dockerfile itself can greatly influence the image size. If the whole Dockerfile is written as a single stage for both building and running the app, the image will be noticeably bigger than one produced by a multi-stage Dockerfile. The reason is that building packages, such as pip wheels or node modules, often requires additional build tools and libraries to be present in the base image. On top of that, the build process creates not only the built packages but also documentation and other metadata files that aren't needed in a running production image. That's why there should be a separate build stage and a separate runtime stage into which only the built package binaries are copied. There are other "tricks" as well, such as having as few "RUN" lines as possible, etc.
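As a minimal sketch of the idea (a generic example, not this project's actual Dockerfile, which follows later in the article):

# Build stage: compilers and headers exist only here
FROM python:3.9-slim-buster as builder
RUN apt-get update && apt-get install -y --no-install-recommends build-essential
COPY requirements.txt .
RUN pip3 install --prefix=/install -r requirements.txt

# Runtime stage: only the built packages are copied over; the build tools stay behind
FROM python:3.9-slim-buster
COPY --from=builder /install /usr/local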

The security level of an image is influenced by what it contains and a few other configuration settings. The more software it contains, the greater the possible attack surface, so the dependency list influences not only size but security as well. Package versions should be updated regularly in line with their own security updates, and the OS dependencies present in the base image, plus whatever we specify in the install command within the Dockerfile, should be upgraded at build time for the same reason. There are other tweaks one can configure within the Dockerfile, but among the more important ones is creating a dedicated user and limiting its permissions to the app directory; Bitnami, for example, does this for all its images.

When talking about stability, in the case of web development, it's about handling production load in a reliable manner. The development servers used in local development environments are no good here: they were never meant to handle heavy traffic or be reliable, and they aren't designed to be customer-facing. For Django and other Python frameworks there are multiple production-grade servers to choose from, such as gunicorn, uWSGI and uvicorn, each with its own advantages and disadvantages. They are excellent API webservers, but they were meant to serve only application code; Django also has static files, which these servers weren't meant to serve.
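For illustration, starting a Django app with gunicorn (the module path here is a placeholder) is a single command that serves only the WSGI application, with no provision for static files:

# Placeholder module path; gunicorn serves the application code, not /static/ files
gunicorn myproject.wsgi:application --bind 0.0.0.0:8080 --workers 3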

A production ReactJS app, on the other hand, is built into static files that are most commonly served by Nginx.

Nginx can be leveraged to serve the React files as well as Django’s static files. This approach is discussed in more detail in this article.

Kubernetes & Production Deployment

Kubernetes is a container orchestration platform used to manage services across multiple nodes. It has practically become the industry standard despite its relatively steep learning curve compared to other orchestrators like Docker Swarm. With almost every application aspect configurable, it feels like an "application OS" and will cover practically any use-case you may have for your production environment.

Main problem

In the introduction, I touched upon the main problem — serving Django static files.

Deciding how to solve it will impact how both parts of the app are built and deployed, and possibly performance as well.

There are many options for serving static assets; two of the most common are using a CDN (Content Delivery Network) such as Amazon CloudFront, or using Kubernetes storage options such as volumes, in case your files aren't too big. The focus of this article will be on the latter, as it applies to cloud and on-premise environments alike.

Solution

In order for Nginx to serve Django's static files, it needs access to them. This is easily achieved with a Kubernetes volume mounted into both the frontend and backend containers. The entire process divides into these steps:

  1. Create the static files and copy them to a folder that will back the volume.
  2. Ensure the volume is mounted into each container of the pod.
  3. Write a route in the Nginx configuration of the frontend container to serve the files.

The next question is what kind of volume to use. Do we tie the volume, and hence the files, to the lifetime of the pod? In that case a regular Kubernetes volume should be used. Or should we use a persistent volume, in which case the files persist on the host disk after the pod is terminated?

This decision seems trivial but has serious repercussions for your infrastructure. Suppose we opt for a persistent volume. The files will now be stored on the underlying host. If the Kubernetes cluster runs in a cloud environment, this means that for these measly static files alone, a storage driver, a storage class and at least a persistent volume claim have to be configured. On AWS that's usually the AWS EBS CSI driver, which means that for this volume alone an EBS disk will be created in the cloud, increasing costs. Disks are usually cheap, but it's a good thing to keep in mind. Also, in a multi-node cluster configured for high availability, i.e. with instances in multiple AZs (data centers within an AWS region physically at least 80 km apart), an EBS disk is zonal, so the pod becomes tied to the AZ where the disk was created, complicating scheduling across instances. A bit of a mess for sure.
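For comparison, persisting these files would mean maintaining at least an object like this (a hypothetical claim against an EBS-backed storage class, with a real disk created behind it):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: staticfiles-pvc
  namespace: my-app
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2   # EBS-backed class; the disk is zonal, tying the pod to one AZ
  resources:
    requests:
      storage: 1Gi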

On the other hand, a regular Kubernetes volume doesn't depend on the instance; it's tied to the pod and its lifecycle.

From the docs: "Pods are the smallest deployable units of computing that you can create and manage in Kubernetes. A Pod (as in a pod of whales or pea pod) is a group of one or more containers, with shared storage and network resources, and a specification for how to run the containers."

Because containers in a pod share storage and network resources, static files can be served quickly and reliably when both containers are managed by a single pod. This is also known as the sidecar pattern. There is no extra disk creation or anything like that, but the deployment file will now couple both parts of the app together, which raises the question: in which repository should the deployment file live? That question will be addressed in Part 4, when talking about Continuous Deployment pipelines.

For now, the game plan is clear — create the static files at backend build time and copy them to a folder kept in a Kubernetes volume that will also be mounted into the frontend container.
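One assumption worth spelling out: collectstatic needs to know where to put the files, which is configured in the Django settings. A minimal sketch with conventional names (the article's actual settings may differ):

# settings.py (sketch): collectstatic gathers every app's static files into STATIC_ROOT
STATIC_URL = "/static/"
STATIC_ROOT = BASE_DIR / "static"   # the "static" folder the Dockerfile later copies to /var/www/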

The Django backend Dockerfile’s production stages look like this:

# syntax = docker/dockerfile:1.4
# (the syntax directive must be the very first line of the Dockerfile to take effect)

###################################################################################
#                                   Base stage                                    #
###################################################################################
ARG PYTHON_VERSION=3.9

FROM python:${PYTHON_VERSION}-slim-buster as base

###################################################################################
#                          Local Development Build stage                          #
###################################################################################
FROM base as local-builder
... (seen in Part 1)

###################################################################################
#                         Local Development Runtime stage                         #
###################################################################################
FROM base as local
... (seen in Part 1)

###################################################################################
#                             Production Build stage                              #
###################################################################################
FROM base as builder

ARG ENVDIR

ENV ENVDIR=.prodenv \
    DEBIAN_FRONTEND=noninteractive

WORKDIR /usr/src/app

RUN apt-get update && apt-get install -y --no-install-recommends build-essential \
    <packages needed to build your python deps>

COPY requirements.txt ./

RUN --mount=type=cache,mode=0777,target=/root/.cache/pip \
    pip3 install --upgrade pip && \
    pip3 install --no-warn-script-location \
        --prefix=/install \
        -r requirements.txt

###################################################################################
#                            Production Runtime stage                             #
###################################################################################
FROM base as prod

EXPOSE 8080

ARG ENVDIR=.prodenv

ENV ENVDIR=${ENVDIR} \
    DEBIAN_FRONTEND=noninteractive \
    # Keeps Python from generating .pyc files in the container
    PYTHONDONTWRITEBYTECODE=1 \
    # Turns off buffering for easier container logging
    PYTHONUNBUFFERED=1 \
    env=PROD

WORKDIR /usr/src/app

RUN --mount=type=cache,target=/var/cache/apt,id=apt \
    apt-get update && apt-get install -y --no-install-recommends dumb-init \
    <other OS deps needed by python packages at runtime> \
    && apt-get clean \
    && (rm -f /var/cache/apt/archives/*.deb \
        /var/cache/apt/archives/partial/*.deb /var/cache/apt/*.bin /var/lib/apt/lists/* || true)

# Copy over the python packages built in the previous stage
COPY --from=builder /install /usr/local
COPY . .

# Collect static files, copy them to the folder that will back the kubernetes volume and
# create a non-root user with an explicit UID and permission to access the /usr/src/app folder
RUN cd /usr/src/app/ascalia/ && mkdir -p static \
    && envdir ${ENVDIR} python manage.py collectstatic --noinput \
    && mkdir -p /var/www && cp -R static /var/www/ \
    && adduser -u 5678 --disabled-password --gecos "" appuser && chown -R appuser /usr/src/app

USER appuser

ENTRYPOINT [ "/usr/bin/dumb-init", "--" ]
CMD [ "./start_server.sh" ]

The last "RUN" line is where the magic of static file creation and copying happens. As the Docker docs say: "The RUN instruction will execute any commands in a new layer on top of the current image and commit the results. The resulting committed image will be used for the next step in the Dockerfile". Collecting the static files at build time bakes them into the image, so the work is done once per build rather than on every container start. One caveat: a Kubernetes emptyDir volume starts out empty and mounts over whatever the image has at that path, so the startup script still has to copy the baked files into the mounted folder before launching the server. Also, in accordance with security best practices, root won't be the default container user; a brand new user without root privileges is created instead.
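start_server.sh itself isn't shown in the article. A plausible sketch, assuming gunicorn and the paths used above (the real script may differ), that handles the emptyDir caveat:

#!/bin/sh
# Hypothetical start_server.sh; the module path and worker count are assumptions.
# The emptyDir mounted at /var/www/static starts empty, so first copy the files
# baked into the image at build time into the shared volume.
cp -R /usr/src/app/ascalia/static/. /var/www/static/
# exec replaces the shell so dumb-init forwards signals straight to gunicorn
exec gunicorn ascalia.wsgi:application --bind 0.0.0.0:8080 --workers 3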

The React Dockerfile’s production stages look like this:

# syntax = docker/dockerfile:1.4
# (the syntax directive must be the very first line of the Dockerfile to take effect)

##########################################################################
#                               Base Stage                               #
#          This gets our dependencies installed and out of the way       #
##########################################################################
ARG NODE_VERSION=16.11

FROM node:${NODE_VERSION}-slim as base

# need to put here all the environment variables
ENV NODE_ENV=production \
    BABEL_ENV=production

RUN --mount=type=cache,target=/var/cache/apt,id=apt \
    apt-get update && apt-get upgrade -y && apt-get install -y --no-install-recommends git build-essential \
    <OS deps needed for building node-modules> \
    && apt-get clean \
    && (rm -f /var/cache/apt/archives/*.deb \
        /var/cache/apt/archives/partial/*.deb /var/cache/apt/*.bin /var/lib/apt/lists/* || true)

WORKDIR /app

COPY package*.json ./

RUN --mount=type=cache,mode=0777,target=/root/.yarn YARN_CACHE_FOLDER=/root/.yarn \
    yarn install --production --non-interactive --ignore-optional

##########################################################################
#                         Local Development Stage                        #
#    don't COPY in this stage because for dev you'll bind-mount anyway   #
##########################################################################
FROM base as local
... (seen in Part 2)

##########################################################################
#                         Prod intermediate Stage                        #
##########################################################################
FROM base as prod-intermediate

ENV PATH=/app/node_modules/.bin:$PATH \
    NODE_ENV=production \
    BABEL_ENV=production

COPY . .

# Creating a production build
RUN yarn build

##########################################################################
#                            Production Stage                            #
##########################################################################
FROM nginx:1.23.2-alpine AS prod

# ARG takes one variable per instruction
ARG REACT_APP_API_URL=<YOUR APP URL>
ARG NGINX_CONF=nginx.conf

# need to put here all the environment variables
ENV NODE_ENV=production \
    BABEL_ENV=production \
    REACT_APP_API_URL=${REACT_APP_API_URL} \
    NGINX_CONF=${NGINX_CONF}

RUN rm /etc/nginx/conf.d/default.conf && rm /etc/nginx/nginx.conf

# NGINX_CONF makes it possible to specify different configs per environment
COPY ${NGINX_CONF} /etc/nginx/nginx.conf
COPY --from=prod-intermediate /app/build /usr/share/nginx/html

RUN touch /tmp/nginx.pid \
    && chown -R nginx:nginx /tmp/nginx.pid \
    && chown -R nginx:nginx /var/cache/nginx \
    && chown -R nginx:nginx /var/log/nginx \
    && chown -R nginx:nginx /etc/nginx/nginx.conf \
    && chown -R nginx:nginx /usr/share/nginx/html

USER nginx

CMD [ "nginx", "-g", "daemon off;" ]

Dependencies (node modules) are installed in the "base" stage; the "prod-intermediate" stage builds on it and produces the production-grade ReactJS build. Those files are copied into the final "prod" stage to be served by Nginx. The existing nginx user is set as the default container user to avoid running as root.

It's good to point out that the prod stage uses an Alpine variant of the nginx image as its base. Unlike Python applications, for which Alpine-based images should be avoided (see here), React production files are static; Nginx only ships them to the browser, where the magic happens. The image therefore doesn't have to be Debian-based, so why not make it as small as possible and use an Alpine-based one instead?

We need to tell Nginx where to expect Django's static files too, so a basic configuration could look something like this:

...
pid /tmp/nginx.pid;

http {
    ...bunch of lines not relevant for the topic of the article...

    upstream django_app {
        # All containers in the same pod are reachable on 127.0.0.1
        server 127.0.0.1:8080;
    }

    server {
        listen 80;

        server_tokens off;
        server_name _;

        # Serves the built React static files
        location / {
            root /usr/share/nginx/html;
            index index.html;
            try_files $uri $uri/ /index.html;
        }

        # Route requests to the backend Django API
        location /api {
            add_header Access-Control-Allow-Origin *;
            add_header Access-Control-Max-Age 3600;
            add_header Access-Control-Expose-Headers Content-Length;
            add_header Access-Control-Allow-Headers Range;
            try_files $uri @django;
        }

        ... other django routes ...

        # Django static files - routes beginning with /static/
        location /static/ {
            add_header Access-Control-Allow-Origin *;
            add_header Cache-Control public;
            add_header Pragma public;
            add_header Vary Accept-Encoding;
            root /var/www/;
        }

        # Use a named location so no internal rewrite gets performed
        location @django {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
            proxy_pass http://django_app;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /var/www/static;
        }
    }
}

Now for the final touch: the k8s deployment file. I'll keep it as simple as possible to get the point across, but of course you'll add the environment variables your app needs, as well as lifecycle and readiness probes, resource specifications, etc. (a sketch of those follows below the manifest).

apiVersion: apps/v1
kind: Deployment
metadata:
  name: django
  labels:
    app: django-react
  namespace: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: django-react
  template:
    metadata:
      labels:
        app: django-react
    spec:
      terminationGracePeriodSeconds: 30
      containers:
        # Backend container definition
        - name: django-backend
          image: django-backend:v1
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
              # Port names must be unique across all containers in a pod
              name: http-api
          # Mounts the volume in the backend container
          volumeMounts:
            - name: staticfiles
              mountPath: /var/www/static
        # Frontend container definition
        - name: react-frontend
          image: react-frontend:v1
          imagePullPolicy: Always
          env:
            - name: REACT_APP_API_URL
              value: <MY APP URL>
          ports:
            - containerPort: 80
              name: http
          lifecycle:
            preStop:
              exec:
                # Gracefully shut down nginx
                command: ["sh", "-c", "sleep 5 && /usr/sbin/nginx -s quit"]
          # Mounts the volume in the frontend container
          volumeMounts:
            - name: staticfiles
              mountPath: /var/www/static
      # Creates the kubernetes volume
      volumes:
        - name: staticfiles
          emptyDir: {}

First, an empty volume named "staticfiles" is created and then mounted into each container at /var/www/static, exactly where Nginx expects the files according to its configuration.

You may notice a lifecycle hook for the frontend container. Nginx doesn't treat SIGTERM as a graceful shutdown signal: when a pod terminates, Kubernetes sends SIGTERM, on which nginx exits quickly and can drop in-flight requests, which in some situations manifests as downtime for users. To avoid that, we make use of Kubernetes mechanisms such as lifecycle hooks: the preStop hook runs nginx -s quit, a graceful stop, before the SIGTERM arrives.
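As mentioned above the manifest, a real deployment would also carry probes and resource specifications. A sketch of what the backend container entry might gain (the health endpoint and the numbers are placeholders, not values from the article):

          readinessProbe:
            httpGet:
              path: /api/health/   # placeholder endpoint; use whatever your app exposes
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 15
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
            limits:
              memory: 512Mi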

With these finishing touches, we can now successfully deploy a Django and React web app to our production environment by hand.
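Concretely, a manual deploy comes down to kubectl apply. One piece not shown above is a Service to make the pod reachable inside the cluster; a minimal sketch, with names assumed to match the manifest above:

apiVersion: v1
kind: Service
metadata:
  name: django-react
  namespace: my-app
spec:
  selector:
    app: django-react
  ports:
    - name: http
      port: 80
      targetPort: 80

Then kubectl apply -f deployment.yaml -f service.yaml rolls everything out, and the app is exposed to the outside world through whatever ingress your cluster uses.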

Summary

This article demonstrated how to build a production image for both the backend and frontend part of the app and how to deploy them in a kubernetes cluster.

The images are created to be as small as possible, employ a non-root user and have the latest security updates.

In this case, both containers are deployed within a single pod, known as the sidecar pattern, in which they share Django’s static files through the use of a kubernetes volume.

While this is a solid setup, manual deployment is definitely not a long-term solution, especially when working in a team. Deploying tested code as often as possible makes the app better and brings more value to your clients in less time. To address this issue in more detail, I'll present a possible solution in Part 4.

Even though I've given a lot of information, there could have been a lot more, and writing about the deployment files in even more detail would've made the article far too long. That's why your questions and comments are more than welcome, and I'll try to answer them promptly.

Thank you very much for reading, hope it was clear and useful, don’t forget to clap if it was 🙂

Roman Kosanovic · Senior DevOps/SRE Engineer and former physicist · ascaliaio