Django-React App — Part 1: Django Development Environment

Roman Kosanovic · Published in ascaliaio · 10 min read · Dec 29, 2022

TL;DR: Docker and docker-compose make it easy to create an excellent approximation of a production environment. In this example, both Django and Node have “hot reload” capabilities, making it possible to develop an application and see changes reflected immediately. Because everything runs inside Docker containers, there is no fear of OS-specific packages making the app behave differently once it is deployed to a production environment. The principles described here apply just as easily to other frameworks with a “hot reload” feature, such as Flask, FastAPI and many others.

Introduction & Motivation

Traditional code development implies having the programming language and code dependencies installed on your local machine. For interpreted languages, e.g. Python or JavaScript, this usually means downloading the dependencies as packages and building them locally. Sometimes those packages, or modules in Node terminology, rely on underlying OS binaries and therefore behave differently depending on the OS of your local machine. Once the app is deployed into a production environment, usually Kubernetes, Docker Swarm or a similar managed cloud solution, developers can be unpleasantly surprised by its behavior.

To avoid this problem, a good approximation of the production environment is required. To achieve this, the following tools will be used:

  • Docker & docker-compose
  • Your favorite editor, here VScode
  • Make — optional

I won't go into the details or installation of each listed tool because the post would get far too long. In short, installing Docker Desktop, now available for Linux as well, brings in everything required.

Docker & docker-compose

In this day and age, Docker shouldn't need much of an introduction. It's a containerization platform that uses Linux namespaces to isolate the processes it manages (and their resources) from the rest of the OS; these isolated processes are called containers. Unlike a VM, a container is lightweight because it shares the underlying OS, which makes its startup much faster than that of a VM, which has to boot its own OS first.

Nowadays there are alternatives to Docker, such as Podman (a very good one), rkt and others. Podman strives to be more efficient than Docker, and with a simple alias docker=podman you can keep using the same docker commands with no learning curve.

The reason I choose Docker, besides being used to it, is its relatively new build engine, BuildKit, whose features are impressive and not all of which are available in Podman. One example is its new and superior caching:

“When you’re running pip install (or Pipenv or Poetry) normally on your computer, it caches downloads in your home directory, so that later installs don’t require redownloading the same package. That doesn’t work in Docker builds because each build is its own self-contained little filesystem, starting at best from a previously cached layer.

And since the unit of caching is the RUN command, either you have all the packages downloaded, or none.

To solve this category of problem, BuildKit adds a new kind of caching: you can cache a directory across builds. It should be presumed to get deleted at any point, and in that sense it is quite similar to the directory caching provided by online CI systems.” — from an excellent blog I heartily recommend.

This new caching, in combination with multi-stage builds, can considerably speed up image building. Both features will be used in setting up our development environment to make the experience smooth and fast.
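The cache mounts shown later require BuildKit. If your Docker version doesn't enable it by default, it can be switched on with two environment variables, the same ones exported in the Makefile at the end of this post:

# Enable BuildKit for plain docker builds and for docker-compose builds
export DOCKER_BUILDKIT=1
export COMPOSE_DOCKER_CLI_BUILD=1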

The docker-compose binary makes it easier to orchestrate and manage multiple Docker containers on a single host via YAML files and a few simple commands. Any developer, not just DevOps engineers, should know its basics.

VScode

VScode is Microsoft's impressive IDE that keeps gaining features, making our development lives easier. One could do an entire course on its capabilities, but suffice it to say that it can automatically create Dockerfiles and docker-compose.yml files for a specified framework. One of its most useful features is attaching to a container or a remote host. With incredible support for a wide array of languages and frameworks, it's my editor of choice.

Make

A tool nearly 50 years old has made a comeback in recent years, though not for what it was originally used for, like compiling C code. My love-hate relationship with it comes from its incredible usefulness and automation capabilities on one hand, and its ugly and somewhat illogical syntax on the other.

Make is, at its core, a task runner. Unlike a shell script, in which each command is executed one after the other until the script finishes or errors out, Make is organized into tasks called targets that can be invoked from the terminal at will. This makes it possible to bundle multiple steps and long commands into a single task executed with one line, e.g. make build, as in the sketch below. I realize it's not everybody's favorite, but for this purpose writing it isn't complicated and it saves a lot of time in managing the development environment.
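As a minimal, hypothetical sketch (the project's full Makefile comes later in this post), a target that bundles one long build command and is run with make build could look like:

# Build the backend dev image; invoked with: make build
build:
	docker image build --build-arg PYTHON_VERSION=3.9 --target local -t django-backend:dev .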

Django

In this example, our app uses Django as the backend and its code lives in its own repository. The dependencies used in production, plus the extra ones needed for development, are split into two requirements files, where the dev file includes the production one.
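For illustration, the split might look like the following; the package names are placeholders, only the -r include is the actual mechanism:

# requirements.txt - production dependencies (placeholder names)
Django
psycopg2-binary

# dev-requirements.txt - pulls in the production dependencies and adds dev-only tools
-r requirements.txt
pytest-django
black

This is what the Dockerfile looks like: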

# syntax = docker/dockerfile:1.4
###################################################################################
# Base stage #
###################################################################################
ARG PYTHON_VERSION=3.9

# In the CLI: docker image build --build-arg PYTHON_VERSION=<VALUE> .
FROM python:${PYTHON_VERSION}-slim-buster as base


###################################################################################
# Local Development Build stage #
###################################################################################
FROM base as local-builder

ARG ENVDIR=.devenv
# The value should be either "requirements" or "dev-requirements"
# In the CLI: docker image build --build-arg REQUIREMENTS_FILE=<VALUE> .
# Using docker-compose: docker-compose build --build-arg REQUIREMENTS_FILE=<VALUE>
# Afterwards bringing the containers up: docker-compose up -d
ARG REQUIREMENTS_FILE=dev-requirements.txt

ENV ENVDIR=${ENVDIR} \
    DEBIAN_FRONTEND=noninteractive

WORKDIR /usr/src/app

RUN apt-get update && apt-get install -y --no-install-recommends build-essential \
    <other OS packages needed to build out pip wheels> \
    && apt-get clean \
    && (rm -f /var/cache/apt/archives/*.deb \
        /var/cache/apt/archives/partial/*.deb /var/cache/apt/*.bin /var/lib/apt/lists/* || true)

COPY ${REQUIREMENTS_FILE} requirements.txt ./

RUN --mount=type=cache,mode=0777,target=/root/.cache/pip \
    pip3 install --upgrade pip && \
    pip3 install \
        --no-warn-script-location \
        --prefix=/install \
        -r ${REQUIREMENTS_FILE}

###################################################################################
# Local Development Runtime stage #
###################################################################################
FROM base as local

EXPOSE 8000

ARG ENVDIR=.devenv

ENV ENVDIR=${ENVDIR} \
    DEBIAN_FRONTEND=noninteractive \
    # Turns off buffering for easier container logging
    PYTHONUNBUFFERED=1 \
    PYTHONFAULTHANDLER=1 \
    env=DEV

WORKDIR /usr/src/app

RUN --mount=type=cache,target=/var/cache/apt,id=apt \
    apt-get update && apt-get -y upgrade && apt-get install -y --no-install-recommends build-essential \
    <OS dependencies needed to run our backend app> \
    && apt-get clean \
    && (rm -f /var/cache/apt/archives/*.deb \
        /var/cache/apt/archives/partial/*.deb /var/cache/apt/*.bin /var/lib/apt/lists/* || true)

# Copy over the python packages built in the previous stage
COPY --from=local-builder /install /usr/local

# TODO we can pass variables here for production deployments, e.g. uwsgi
CMD [ "./start_server.sh" ]

###################################################################################
# Production Build stage #
###################################################################################
FROM base as builder

... (will be shown in the following articles)

###################################################################################
# Production Runtime stage #
###################################################################################
FROM base as prod

... (will be shown in the following articles)

ENTRYPOINT [ "/usr/bin/dumb-init", "--" ]
CMD [ "./start_server.sh"]

At first glance it looks scary, but it simply repeats one pattern: build the packages in one stage, then copy them into the corresponding "runtime" stage. This keeps the final image as small as possible. The main focus here is the Local Development Build & Runtime stages.

Using build arguments we can change parts of the setup, such as the Python version or the requirements file, and test the app with the new settings. The important parts here are:
RUN --mount=type=cache,target=/var/cache/apt,id=apt and RUN --mount=type=cache,mode=0777,target=/root/.cache/pip, which direct BuildKit to cache those directories across builds, making each subsequent build faster. If, for example, a pip package changes, BuildKit is smart enough to reuse all the already-built packages stored in the cache. In contrast, the old docker build engine would rebuild the entire layer from scratch, i.e. all the packages regardless of whether their versions changed or whether they were added or removed, which made builds slower.
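For instance, overriding the defaults at build time could look like this (the image tag is just an example):

# Plain docker build, targeting the local dev stage
docker image build --build-arg PYTHON_VERSION=3.10 --target local -t django-backend:dev .

# Or through docker-compose, as the Makefile at the end of this post does
docker-compose build --build-arg PYTHON_VERSION=3.10 --build-arg REQUIREMENTS_FILE=dev-requirements.txt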

For our local dev environment, the dev-requirements file is used to install the packages in the dev build stage, and those packages are then copied over into the dev runtime stage.

Notice that the dev runtime stage doesn't copy any code into the container. The reason is that the code will change constantly during development and we don't want to rebuild the image on every change. Instead, using docker-compose, the code will be mounted into the container, which is started with the "hot-reload" feature of Django's runserver. Many other frameworks offer the same option, like Flask, FastAPI and even NodeJS with its Webpack dev server. This way, every time a change is saved, the server reloads it and it's immediately visible.

The next step is writing the docker-compose.yml:

version: "3.8"

services:
  db:
    image: timescale/timescaledb:latest-pg12
    volumes:
      - postgres_data:/var/lib/postgresql/data/
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
    environment:
      - POSTGRES_DB=local
      - POSTGRES_USER=local
      - POSTGRES_PASSWORD=local
    ports:
      - "5432:5432"
    healthcheck:
      test: "pg_isready --username=local --dbname=local && psql --username=local --list"
      timeout: 1s
      retries: 10

  django-backend:
    build:
      context: .
      target: local
    container_name: django-backend
    volumes:
      - .:/usr/src/app
    ports:
      - 8000:8000
    environment:
      - DEBUG=True
      - SECRET_KEY=<some-secret>
    depends_on:
      db:
        condition: service_healthy

volumes:
  postgres_data:

Here a local PostgreSQL container is spun up first, followed by the backend Django app itself. The local db container can hold some test data. If the team has a development db somewhere else, that can be used instead; it's enough to point to it in the .devenv folder and the app will immediately try to connect to it. The code is bind-mounted into the backend container, meaning any change made on the host is reflected inside the container (and vice-versa).
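The .devenv folder is read by envdir, which exports each file in the directory as an environment variable named after the file, with the file's contents as the value. A minimal, hypothetical layout (the variable names below are placeholders, not from this project) could be created like this:

mkdir -p .devenv
echo "db"    > .devenv/DATABASE_HOST   # the docker-compose service name from above
echo "5432"  > .devenv/DATABASE_PORT
echo "local" > .devenv/DATABASE_NAME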

There is only one problem with editing code on the host: because the packages are installed inside a container, the IntelliSense feature won't be of much use. To fix that, use VScode to attach to the running container and install your favorite extensions there; VScode will remember them for any future development within that container. Everything is described in more detail here.

The last thing to mention here is the script that starts the app, start_server.sh:

#!/bin/bash

function migrate_collectstatic(){
    echo "Migrating..."
    envdir ${ENVDIR} python manage.py migrate
    echo "Migration complete."
}

if [[ "${env}" == "PROD" ]]; then
    mkdir static || true
    migrate_collectstatic
    cp -R static /var/www/
    echo "Starting production web server..."
    envdir ${ENVDIR} gunicorn -b 0.0.0.0:8080 --forwarded-allow-ips="*" --timeout 120 --workers=10 app.wsgi
else
    migrate_collectstatic
    echo "Starting django web server..."
    envdir ${ENVDIR} python manage.py runserver 0.0.0.0:8000
fi

Static file collection should be considered carefully and isn't the topic of this article. The important part is how the app is started, which depends on the env environment variable. For local development we want the Django development server because of its reload capability.

This is already enough for a development environment. The commands needed to run it look something like this:

# Build the dev image using default build values
docker-compose build --build-arg BUILDKIT_INLINE_CACHE=1 --progress=plain

# Start the dev container
docker-compose up -d

# When finished, stop the container
docker-compose down

All that's needed now is to start the dev container and develop your app with an editor of your choice against the files on your local machine, without installing Python, pip packages or OS dependencies locally. This setup also guarantees that the same environment we use in development will be used in production, as you can see in the Dockerfile. The only time the dev container needs rebuilding is when a pip package changes, and even that is quick thanks to BuildKit.
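For example, after editing a requirements file, the rebuild cycle can be as short as:

# Rebuild only the backend service and restart it
docker-compose build django-backend
docker-compose up -d django-backend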

However, we can automate things even further. Enter the Makefile. The following section is purely optional and can be skipped.

More automation — Make (Optional)

The Makefile encapsulates the above commands together into tasks, or targets in make terminology. The following Makefile demonstrates this:

.PHONY: help local_dev_env clean_local

PWD ?= pwd_unknown
# Defaults; override with make -e as shown in the target comments below
PYTHON_VERSION=3.9
REQUIREMENTS_FILE=dev-requirements.txt
ENVDIR=./.devenv

# Use the new and stable docker image build engine, BuildKit
export DOCKER_BUILDKIT := 1
export COMPOSE_DOCKER_CLI_BUILD := 1

#########################################################################################
# TARGETS/TASKS #
#########################################################################################

# Displays this menu
help:
	@awk 'BEGIN{print "\nMakefile usage:\n"};/^[^#[:space:]\.].*:/&&$$0!~/=/{split($$0,t,":");printf("%8s %-16s %s\n","make",t[1],x);x=""};/^#/{gsub(/^# /,"");x=$$0;if(x!="")x="- "x};END{printf "\n"}' Makefile

# Run local dev containers. To use a different requirements file and/or env folder and/or python version, run: make -e PYTHON_VERSION=3.10 -e REQUIREMENTS_FILE=other-requirements.txt -e ENVDIR=/path/to/folder local_dev_env
local_dev_env:
	@echo "Setting up a local development environment..."
	@echo "Consists of a local db and a django container with egress access."
	@echo -e "Using $(REQUIREMENTS_FILE) in build and python version $(PYTHON_VERSION)\n"
	@docker-compose build --build-arg REQUIREMENTS_FILE=$(REQUIREMENTS_FILE) --build-arg PYTHON_VERSION=$(PYTHON_VERSION) --build-arg BUILDKIT_INLINE_CACHE=1 --progress=plain
	@echo ""
	@echo "*** Starting development containers, don't forget to clean up after yourself ***"
	@echo "***                     by running: make clean_local                         ***"
	@echo ""
	@docker-compose up -d
	@echo ""
	@echo "backend is exposed on: http://localhost:8000"
	@echo "*** Execute the same command in the frontend repo and then ***"
	@echo "In the browser type: http://localhost:3000 and enjoy!"

# Remove the local development containers and dangling images
clean_local:
	@echo "Starting local cleanup..."
	@docker-compose down
	@rm -f .env
ifeq ($(shell docker images --filter "dangling=true" -q),)
	@echo "No dangling images."
else
	@docker images --filter "dangling=true" -q | xargs docker rmi
endif

Running make help will display the targets with the comments above them as their descriptions. The commands now look like this:

# Display targets
make help

# Build and start the dev container using the defaults
make local_dev_env

# Build and start the dev container using a different python version, e.g. 3.10 and requirements file
make -e PYTHON_VERSION=3.10 -e REQUIREMENTS_FILE=new-requirements.txt local_dev_env

# Stop container and clean all the untagged build layers after multiple rebuilds
make clean_local
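For reference, the output of make help looks roughly like this (the exact spacing depends on the awk one-liner in the help target):

Makefile usage:

    make help             - Displays this menu
    make local_dev_env    - Run local dev containers. ...
    make clean_local      - Remove the local development containers and dangling images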

As unit tests and other tooling are added, targets can be written to manage those as well, and the Makefile becomes even handier.

Summary

Just by installing Docker, and without installing the language, framework or app dependencies, a developer can create a production-like environment and easily see the app code in action.

Just by changing two build arguments, the app code can be run against a different Python version or package list.

Any change in the code is immediately reflected, so bugs can be fixed before a commit is even pushed.

Rebuilds are fast thanks to BuildKit caching, which also makes this Dockerfile a good candidate for use within a CI/CD pipeline.

This article is the first in a series describing the local development setup for the backend part of the app. A future article will demonstrate how to use OpenVPN to connect your local Docker backend container to a database in a private subnet.

The next article (Part 2) will explore how to set up a similar development environment for the frontend React app, and to close the series off, a production deployment will be covered in a separate article.
