Setting up a Django project like a pro

Francisco Ceruti · 13 min read · May 27, 2019


In this article I’m going to show you how I lay out a typical medium-to-complex Django project: git for source control, pyenv and pipenv to handle Python versions and packages, celery with redis to run background tasks, pytest for testing, flake8 and js-beautify for code linting, sentry to manage logs, webpack to compile static files, docker compose to serve everything locally and postgresql to store your data, all while trying to follow the twelve-factor app configuration methodology.

If you prefer to jump straight to the conclusion, check out this repo, where I have everything ready for you to clone and start coding your new project immediately!

Note: I’m not claiming to have the final say in any of this, and it’s an ever-evolving subject, so please let me know if you think I could be doing things better :)

Python version

With every new release of Python, certain features are added, deprecated or removed, so you want to make sure you know which version of Python you are working with. The best way is to use pyenv, a Python version manager that lets you install and pin a different version of Python for each of your projects. To pin a version, run this at the root of your project

pyenv local 3.7.2

and a file called .python-version will be created with the text 3.7.2 in it. Now when we type python --version we’ll get Python 3.7.2, and if we modify the file to say 3.6.8, the same command will output Python 3.6.8.

Package manager & virtual environment

Now that we can trust the version of our interpreter, we can worry about how to download libraries and where to put them. A couple of years ago I’d be talking about pip, virtualenv or even the standard library’s venv module, but now the coolest kid in town is pipenv, or at least pypa says so. Pipenv handles both the virtual environment and package management. Simply run

pipenv install

and pipenv will create a virtual environment (venv) and two files: Pipfile and Pipfile.lock. If you are coming from JavaScript, these are very similar to package.json and package-lock.json. To see where the venv was created, type pipenv --venv; this gives you a clue about how pipenv works: it automatically maps project directories to their specific venvs.

Because we are creating a Django project, we’ll download the package and add it to our venv with the command pipenv install django. To access the venv’s libraries, you have to prepend pipenv run to every command you run. For instance, to check the installed Django version, type pipenv run python -c "import django; print(django.__version__)".
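
At this point your Pipfile should look roughly like this (exact versions and sections will vary):

[[source]]
name = "pypi"
url = "https://pypi.org/simple"
verify_ssl = true

[dev-packages]

[packages]
django = "*"

[requires]
python_version = "3.7"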

Project layout

Below I show how I like to lay out my directory structure.


I don’t like the default Django layout: we are prompted to give our project a name like awesome, which also happens to be the name of our repo, and we end up typing things like ~/code/awesome/awesome/settings.py, which is simply awful. It makes much more sense to me to put all the configuration files in a directory called conf; your project’s name is already the name of the root directory. So let’s start our project with

pipenv run django-admin startproject conf .
mkdir {apps,assets,logs,static,media,templates,tests}
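
After those two commands, the initial layout looks roughly like this (.python-version, Pipfile and Pipfile.lock come from the earlier steps):

awesome/
├── apps/
├── assets/
├── conf/
│   ├── __init__.py
│   ├── settings.py
│   ├── urls.py
│   └── wsgi.py
├── logs/
├── media/
├── static/
├── templates/
├── tests/
├── .python-version
├── Pipfile
├── Pipfile.lock
└── manage.py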

The development server should be working after we type pipenv run python manage.py runserver, and it does, but something is off: that command was super painful to write. Luckily, we can create aliases for pipenv in our Pipfile like so

[scripts]
server = "python manage.py runserver"

Try again, this time with pipenv run server. Much better, right?

Source control

Now that we have the skeleton of our app ready, it’s a good time to start tracking our changes, while making sure we don’t track secrets, development media, log files, etc. Create a file called .gitignore with the following content, and you’ll be safe to initialize your repo with git init and create your first commit with git add . && git commit -m "Initialize project".

# Python bytecode
__pycache__
# Django dynamic directories
logs
media
# Database files
*.sqlite3
# Environment variables
.env
.env.*
# Node dependencies
node_modules
# Webpack
assets/webpack-bundle.dev.json
assets/bundles/style-dev-main.css
assets/bundles/style-dev-main.css.map
assets/bundles/script-dev-main.js
assets/bundles/script-dev-main.js.map

Settings and environment variables

Have you read The Twelve-Factor App? Please go ahead and do so. Too lazy? Let me summarize it for you, at least the part about configuration: you should store connection information for external services such as databases, external storage and APIs, plus credentials, in your environment. This makes it very easy to transition between the different environments your code will run in, such as dev, staging, CI, testing & production.

There's an excellent package called django-environ that will help us here. Go ahead and install it with pipenv install django-environ and modify conf/settings.py to read all external-service configuration and secret values from the environment

import environ

env = environ.Env()
root_path = environ.Path(__file__) - 2

ENV = env('DJANGO_ENV')
DEBUG = env.bool('DEBUG', default=False)
SECRET_KEY = env('SECRET_KEY')
DATABASES = {'default': env.db('DATABASE_URL')}
...

You can choose to load your environment variables however you want, but I recommend you create a file called .env, which will be picked up automatically by pipenv.

DJANGO_ENV=dev
DEBUG=on
SECRET_KEY=my-secret-key
DATABASE_URL=postgres://localhost:5432/awesome-db

If, like in this example, we decide to use postgresql, we need to make sure we install an adapter like psycopg2, with pipenv install psycopg2-binary.

Logging

Logging is one of those things that no one pays much attention to in a project, but it’s really nice when it’s well done and it’s there. Let’s start off on the right foot and modify our conf/settings.py file

import os

LOGS_ROOT = env('LOGS_ROOT', default=root_path('logs'))

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'console_format': {
            'format': '%(name)-12s %(levelname)-8s %(message)s'
        },
        'file_format': {
            'format': '%(asctime)s %(name)-12s %(levelname)-8s %(message)s'
        }
    },
    'handlers': {
        'console': {
            'level': 'DEBUG',
            'class': 'logging.StreamHandler',
            'formatter': 'console_format'
        },
        'file': {
            'level': 'INFO',
            'class': 'logging.handlers.RotatingFileHandler',
            'filename': os.path.join(LOGS_ROOT, 'django.log'),
            'maxBytes': 1024 * 1024 * 15,  # 15MB
            'backupCount': 10,
            'formatter': 'file_format',
        },
    },
    'loggers': {
        'django': {
            'level': 'INFO',
            'handlers': ['console', 'file'],
            'propagate': False,
        },
        'apps': {
            'level': 'DEBUG',
            'handlers': ['console', 'file'],
            'propagate': False,
        }
    }
}
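
With the apps logger in place, any module under the apps directory can grab a namespaced logger and its records will reach both handlers; a minimal sketch (the module path is just an example):

# e.g. apps/users/views.py
import logging

# __name__ resolves to something like 'apps.users.views', which is a
# child of the 'apps' logger configured above
logger = logging.getLogger(__name__)

logger.debug('This reaches the console handler')
logger.info('This reaches the console and the rotating file handler')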

All we need to do now is to add the required environment variables to our .env file

...
LOGS_ROOT=./logs
USE_SENTRY=on
SENTRY_DSN=https://<project-key>@sentry.io/<project-id>

Notice that if we decide to turn on sentry with USE_SENTRY=on, we first need to pipenv install sentry-sdk and get our secret DSN from sentry.io after creating a new project there.
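
The settings hook for that flag isn’t shown above, so here’s a minimal sketch of how it could be wired up in conf/settings.py, using the official sentry-sdk Django integration (the exact options are up to you):

# conf/settings.py
USE_SENTRY = env.bool('USE_SENTRY', default=False)

if USE_SENTRY:
    import sentry_sdk
    from sentry_sdk.integrations.django import DjangoIntegration

    sentry_sdk.init(
        dsn=env('SENTRY_DSN'),
        integrations=[DjangoIntegration()],
        environment=ENV,  # 'dev', 'staging', 'production', ...
    )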

Testing

Testing code is a good idea. Found a bug? Write a test that fails, fix the code, watch the test pass and commit. This way you’ll never have to worry about that particular bug appearing again, even when someone else (which may well be future you) touches something completely unrelated that, through a Rube Goldberg-like chain of events, would make the bug reappear. Not if you write some tests.

You can use Django’s TestCase or other frameworks, but I like pytest, and one of my preferred features is parametrized tests. Let’s go ahead and

pipenv install pytest pytest-django --dev

Notice the --dev flag: it tells pipenv to keep track of these dependencies as development-only.

There are many names for files where we can configure pytest, but I prefer to do it in one called setup.cfg, since that filename is shared with other tools and this helps keep the file count low. The following is a possible configuration

[tool:pytest]
DJANGO_SETTINGS_MODULE = conf.settings
testpaths = tests
addopts = -p no:warnings

Now you can create tests inside the tests directory and run them with

pipenv run pytest
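
To give you a taste of parametrization, here’s a minimal sketch of a test file; slugify is just an arbitrary target to exercise:

# tests/test_example.py
import pytest
from django.utils.text import slugify


@pytest.mark.parametrize('raw, expected', [
    ('Hello World', 'hello-world'),
    ('  Hello   World  ', 'hello-world'),
    ('Django 2.2', 'django-22'),
])
def test_slugify(raw, expected):
    # One test function, three test cases in the report
    assert slugify(raw) == expected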

Code linting

Code linting means running software that analyzes your code in some fashion. I’m only going to focus on style-consistency tools, but you should know there are plenty of others to consider, like Microsoft’s pyright.

The first tool I’m going to mention is your IDE itself. Whether it’s Vim, VSCode, Sublime or something else, there’s a project called EditorConfig, a standard specification for telling your IDE how big your indents should be, which string quotes you prefer and that kind of stuff. Just add a file called .editorconfig at the root of your project with something like this

root = true

[*]
charset = utf-8
end_of_line = lf
indent_style = space
insert_final_newline = true
trim_trailing_whitespace = true
[*.py]
indent_size = 4
combine_as_imports = true
max_line_length = 79
multi_line_output = 4
quote_type = single
[*.js]
indent_size = 2
[*.{sass,scss,less}]
indent_size = 2
[*.yml]
indent_size = 2
[*.html]
indent_size = 2

Next, we’ll add flake8 to enforce PEP8 rules and isort to have a standard way of sorting imports.

pipenv install flake8 isort --dev

Both can be configured in the same setup.cfg file we created for pytest. This is the way I like it, but you may choose to configure them as you wish

[flake8]
exclude = static,assets,logs,media,tests,node_modules,templates,*/migrations/*.py,urls.py,settings.py
max-line-length = 79
# W503/W504: line break before/after a binary operator (they contradict each other)
ignore = W503,W504
[isort]
skip = static,assets,logs,media,tests,node_modules,templates,docs,migrations
not_skip = __init__.py
multi_line_output = 4

We can run both of these programs with pipenv run flake8 and pipenv run isort -rc . respectively.

The final tool we are going to use is JS Beautifier, which will help us maintain order in our HTML, JS and stylesheets. For this one, you can either pipenv install jsbeautifier or just install a plugin for your IDE and let it show you error messages (this is how I do it). To configure it, create a .jsbeautifyrc file in the root of your project with content like this (too large for this post).

Static files

All our JavaScript, stylesheets, images, fonts and other static files live inside the assets directory. We will use webpack to compile Sass stylesheets into CSS and ES6+ into browser-compatible JavaScript. Webpack will pick up assets/index.js and use it as the entry point to all our static files. From this file we will import all our JavaScript and stylesheets, and webpack will compile, minimize and put them in a nice bundle for us. My typical assets/index.js looks like this

import './sass/main.sass'
import './js/main.js'

Naturally, we need to install at least webpack, babel and a Sass compiler. We need to create a file called package.json with the following content

{
  "scripts": {
    "dev": "webpack --mode development --watch",
    "build": "webpack --mode production"
  },
  "devDependencies": {
    "@babel/core": "^7.4.4",
    "@babel/preset-env": "^7.4.4",
    "babel-loader": "^8.0.6",
    "css-loader": "^2.1.1",
    "file-loader": "^3.0.1",
    "mini-css-extract-plugin": "^0.6.0",
    "node-sass": "^4.12.0",
    "sass-loader": "^7.1.0",
    "webpack": "^4.32.0",
    "webpack-bundle-tracker": "^0.4.2-beta",
    "webpack-cli": "^3.3.2"
  }
}

Then run npm install, which will download all the dependencies specified in package.json into a directory called node_modules.

It’s time to configure webpack. Here’s a link to the configuration I use (too long for this post); long story short, it defines a series of rules of the form “when you see this kind of file, do this, and for that other kind, do that”. Put this configuration in a file called webpack.config.js in the root of your project.

Now it’s time to connect webpack’s output to Django templates with the best tool I’ve found for this: django-webpack-loader. As you may be used to by now, we need to pipenv install django-webpack-loader and then modify conf/settings.py

INSTALLED_APPS = [
    ...
    'webpack_loader',
]

filename = f'webpack-bundle.{ENV}.json'
stats_file = os.path.join(root_path('assets/'), filename)

WEBPACK_LOADER = {
    'DEFAULT': {
        'CACHE': not DEBUG,
        'BUNDLE_DIR_NAME': 'bundles/',  # must end with slash
        'STATS_FILE': stats_file,
        'POLL_INTERVAL': 0.1,
        'TIMEOUT': None,
        'IGNORE': [r'.+\.hot-update.js', r'.+\.map'],
    }
}

Notice that what we are doing is reading a file that contains information about where to find webpack’s compiled files (bundles).
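
For reference, the stats file written by webpack-bundle-tracker looks roughly like this (names and paths will vary):

{
  "status": "done",
  "chunks": {
    "main": [
      {
        "name": "script-dev-main.js",
        "path": "/code/assets/bundles/script-dev-main.js"
      }
    ]
  }
}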

We can now add webpack’s compiled scripts and stylesheets to our templates using a template tag

{% load render_bundle from webpack_loader %}
<!doctype html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>{% block page_title %}{% endblock %}</title>
    <meta name="description" content="{% block page_description %}{% endblock %}">
    {% block page_extra_meta %}{% endblock %}
    {% render_bundle 'main' 'css' %}
  </head>
  <body>
    {% block body %}{% endblock %}
    {% render_bundle 'main' 'js' %}
  </body>
</html>

Celery

Celery is a task queue that’s quite handy for offloading work that shouldn’t block a user request, like sending an email. Install celery with pipenv install celery and add the following code

conf/celery.py

import os

from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'conf.settings')

app = Celery('conf')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()

conf/settings.py

...
CELERY_BROKER_URL = env('CELERY_BROKER_URL')

.env

...
CELERY_BROKER_URL=redis://localhost:6379/0

You can now run code asynchronously by starting a worker with pipenv run celery -A conf worker -l info, which should either work or tell you that redis is not available. Don’t worry if it isn’t; we are not gonna use your system’s redis anyway.
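
To see it in action, here’s a minimal sketch of a task; the module and addresses are hypothetical:

# apps/users/tasks.py
from celery import shared_task
from django.core.mail import send_mail


@shared_task
def send_welcome_email(address):
    # Runs in the worker process, outside the request/response cycle
    send_mail(
        subject='Welcome!',
        message='Thanks for signing up.',
        from_email='noreply@example.com',
        recipient_list=[address],
    )

Calling send_welcome_email.delay(user.email) from a view returns immediately and hands the work to the worker through redis.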

Docker compose

To bring all the pieces of the project online, we have to turn on the web server, celery, redis, postgres & webpack. It can be quite annoying to open several terminals and type all the needed commands. To fix this problem we’ll use docker compose, a docker container-orchestration tool.

In a nutshell, docker works by “compiling” your code and all its requirements into what’s called an image. To run our app, we instantiate this image as a container, which is a running version of the image. We will also mount our code into our containers, so that we can make changes on our local machine and see them live in the containers without having to rebuild the images.

To build the images we need, we’ll create two files called Dockerfile.web and Dockerfile.worker, which will serve as scripts for building our web and worker images.

Dockerfile.web & Dockerfile.worker

# Pull base image
FROM python:3.7.2-slim
# Install system dependencies
RUN apt-get update
RUN apt-get install git -y
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Create dynamic directories
RUN mkdir /logs /uploads
# Set work directory
WORKDIR /code
# Install pipenv
RUN pip install --upgrade pip
RUN pip install pipenv
# Install project dependencies
COPY Pipfile Pipfile.lock ./
RUN pipenv install --dev --ignore-pipfile --system
# Expose port 8000 in the container (skip these two lines in Dockerfile.worker)
EXPOSE 8000

Now we’ll create a file called docker-compose.yml where we’ll set up all the services our project uses

version: '3'

services:
  web:
    image: dev_server
    build:
      context: .
      dockerfile: Dockerfile.web
    volumes:
      - .:/code
      - ./logs/web:/logs
      - ./media:/uploads
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - 8000:8000
    env_file:
      - .env
      - .env.docker
    environment:
      - CELERY_BROKER_URL=redis://cache:6379/0
      - MEDIA_ROOT=/uploads
      - LOGS_ROOT=/logs
    links:
      - redis:cache
  worker:
    image: dev_worker
    build:
      context: .
      dockerfile: Dockerfile.worker
    volumes:
      - .:/code
      - ./logs/worker:/logs
      - ./media:/uploads
    command: celery -A conf worker -l info
    env_file:
      - .env
      - .env.docker
    environment:
      - CELERY_BROKER_URL=redis://cache:6379/0
      - MEDIA_ROOT=/uploads
      - LOGS_ROOT=/logs
    links:
      - redis:cache
  redis:
    image: redis
    expose:
      - 6379
  webpack:
    image: dev_webpack
    build:
      context: .
      dockerfile: Dockerfile.webpack
    volumes:
      - ./assets:/code/assets
    command: npm run dev

A couple of things about this config file:

  • Notice that the web service uses Dockerfile.web, and the worker service uses Dockerfile.worker. The webpack service uses its own node-based Dockerfile.webpack (not shown here).
  • Notice that we are passing two environment-variable files to the web and worker services. We do it this way so we can override certain things when running inside Docker. In particular, the database address must point to the host machine (your machine) and not localhost (which, inside a container, is the container itself). To do this, create .env.docker and add the line
DATABASE_URL=postgres://<user>@host.docker.internal:5432/<db-name>
  • We mount ./media and ./logs inside the containers so that you can easily read the logs and check uploaded files on your local machine. Don’t worry, they are git-ignored.

The moment we have been building for … tum tum tum

docker-compose up

Yey! Everything should be online at this point, and while it’s not yet time to drink that beer, you can start thinking about it.

When we change code on our machine, the server will autoreload, as expected with Django’s development server. But when we add a new library with pipenv install, we need to rebuild the images with

docker-compose build

One final thing: we’ll keep our images light by excluding some files from the docker build context with a file called .dockerignore, like this

.env
.env.*
.git
.gitignore
.dockerignore
.editorconfig
.vscode
Dockerfile*
docker-compose*
node_modules
logs
media
static
README.md

Extra: Custom user model

I’ve yet to work on a project where I don’t need to modify Django’s built-in auth app, whether to add or modify fields, add custom behavior or rename the URLs. To have everything at the tip of our fingers, and not floating in the depths of our Django dependency, we’ll create our first app, called users, and add a custom user model.

apps/users/models.py

from django.contrib.auth.models import AbstractBaseUser, PermissionsMixin
from django.db import models
from django.utils import timezone

from apps.users.managers import UserManager


class User(AbstractBaseUser, PermissionsMixin):
    email = models.EmailField(unique=True, null=True, db_index=True)
    is_active = models.BooleanField(default=True)
    is_staff = models.BooleanField(default=False)
    date_joined = models.DateTimeField(default=timezone.now)

    REQUIRED_FIELDS = []
    USERNAME_FIELD = 'email'

    objects = UserManager()

apps/users/managers.py

from django.contrib.auth.models import BaseUserManager


class UserManager(BaseUserManager):
    def create_user(self, email, password, **extra_fields):
        if not email:
            raise ValueError('The Email must be set')
        email = self.normalize_email(email)
        user = self.model(email=email, **extra_fields)
        user.set_password(password)
        user.save()
        return user

    def create_superuser(self, email, password, **extra_fields):
        extra_fields.setdefault('is_superuser', True)
        extra_fields.setdefault('is_staff', True)
        extra_fields.setdefault('is_active', True)
        if extra_fields.get('is_superuser') is not True:
            raise ValueError('Superuser must have is_superuser=True.')
        return self.create_user(email, password, **extra_fields)

apps/users/urls.py

from django.contrib.auth import views as auth_views
from django.urls import path

urlpatterns = [
    path('login/',
         auth_views.LoginView.as_view(),
         name='login'),
    path('logout/',
         auth_views.LogoutView.as_view(),
         name='logout'),
    path('password-change/',
         auth_views.PasswordChangeView.as_view(),
         name='password_change'),
    path('password-change/done/',
         auth_views.PasswordChangeDoneView.as_view(),
         name='password_change_done'),
    path('password-reset/',
         auth_views.PasswordResetView.as_view(),
         name='password_reset'),
    path('password-reset/done/',
         auth_views.PasswordResetDoneView.as_view(),
         name='password_reset_done'),
    path('reset/<uidb64>/<token>/',
         auth_views.PasswordResetConfirmView.as_view(),
         name='password_reset_confirm'),
    path('reset/done/',
         auth_views.PasswordResetCompleteView.as_view(),
         name='password_reset_complete'),
]

conf/settings.py

AUTH_USER_MODEL = 'users.User'

INSTALLED_APPS = [
    ...
    'apps.users',
]

conf/urls.py

from django.contrib import admin
from django.urls import include, path

urlpatterns = [
    path('', include('apps.users.urls')),
    path('admin/', admin.site.urls),
]

With all this in place, we are now in full control of our users’ auth process. If you want to add registration and custom templates to the mix, check out my repo, where I have everything set up and working with django-registration-redux.

Final thoughts

I hope this guide was as helpful for you to read as it was for me to write. I encourage you to clone my repo on GitHub and make pull requests if you feel anything can be done better.

Happy Coding!
