Setting up Nginx, Gunicorn, Celery, Redis, Supervisor, and Postgres with Django to run your Python application
Ahoy fellow software adventurers! Today we have a chance to look into setting up some of the moving parts commonly used in production Python applications.
Here’s an overview of the major components we’ll be looking at:
Nginx: An HTTP and Reverse Proxy Server
Gunicorn: A WSGI HTTP server
Celery: A tool for asynchronous processing with Python
Redis: A message broker
Supervisor: A process control system for unix
PostgreSQL: A sweet open-source relational database management system
Off we go!
Setting Up Postgres
Create a database and a database user with the credentials referenced in the Django settings file, and grant the Django application appropriate access.
$ psql
# create user USER with password 'PASS';
# create database database_name;
# \q
To grant all privileges to the Django app, use:
# grant all privileges on database database_name to USER;
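On the Django side, these credentials live in the DATABASES setting. A minimal sketch using the placeholder names from the psql commands above (substitute your real values):

```python
# proj/settings.py (fragment) -- the NAME, USER, and PASSWORD values
# below are the placeholders from the psql commands; replace them with
# your real credentials.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "database_name",
        "USER": "USER",
        "PASSWORD": "PASS",
        "HOST": "localhost",
        "PORT": "5432",
    }
}
```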
Run migrations against the Postgres database:
python manage.py migrate
If you have not yet created migration files from your Django models, run:
python manage.py makemigrations
python manage.py migrate
Set up Celery
Start a Celery worker for the project (replace proj with your project name):
celery -A proj worker
For Celery logs to be displayed, run:
celery -A proj worker --loglevel=info
Set up Redis
Start the Redis server:
redis-server
Check to be sure redis is working with:
$ redis-cli ping
# PONG
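If you drive Celery's configuration from Django settings (a common convention, assumed here rather than shown above), pointing it at Redis looks like:

```python
# proj/settings.py (fragment) -- broker and result-backend URLs for
# Celery, assuming Redis on its default host and port.
CELERY_BROKER_URL = "redis://localhost:6379/0"
CELERY_RESULT_BACKEND = "redis://localhost:6379/0"
```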
Set Up Gunicorn
sudo vi /etc/systemd/system/gunicorn.service
Edit the service file:
[Unit]
Description=gunicorn daemon
After=network.target

[Service]
User=ubuntu
Group=www-data
Environment=ENVIRONMENT_VARIABLE=var
WorkingDirectory=/home/ubuntu/proj
ExecStart=/home/ubuntu/proj/virtualenv/bin/gunicorn \
    --access-logfile - \
    --log-level debug \
    --workers 3 \
    --bind unix:/home/ubuntu/proj.sock \
    proj.wsgi:application

[Install]
WantedBy=multi-user.target
Start gunicorn with:
sudo systemctl daemon-reload
sudo systemctl start gunicorn
If adjusting the service file after Gunicorn has been started, reload systemd and restart Gunicorn with:
sudo systemctl daemon-reload
sudo systemctl restart gunicorn
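The Gunicorn docs suggest (2 × number_of_cores) + 1 as a starting point for the --workers flag. A quick sketch of that rule of thumb (the function name here is illustrative):

```python
# Rule of thumb from the Gunicorn documentation for choosing --workers:
# (2 x number_of_cores) + 1.
import multiprocessing

def suggested_workers(cores=None):
    # Default to the machine's core count when none is given.
    if cores is None:
        cores = multiprocessing.cpu_count()
    return 2 * cores + 1
```

On a single-core box this gives 3, matching the --workers 3 used in the service file above.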
Set up Nginx
sudo vi /etc/nginx/sites-available/proj
Edit the file:
server {
    listen 80;
    server_name SERVER_IP;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        root /home/ubuntu/proj/app;
    }

    location / {
        include proxy_params;
        proxy_pass http://unix:/home/ubuntu/proj.sock;
    }
}
Link the project into the sites-enabled directory:
sudo ln -s /etc/nginx/sites-available/proj /etc/nginx/sites-enabled/
Test the Nginx configuration for syntax errors:
sudo nginx -t
Allow Nginx traffic through the firewall:
sudo ufw allow 'Nginx Full'
Start Nginx server:
sudo systemctl start nginx
Daemonize Celery and Redis with Supervisor
At the time of writing, Supervisor's stable releases run only on Python 2, with Python 3 support available in development forks. This is fine in production: Supervisor runs as an independent process, so its Python version does not constrain your application's. (Supervisor 4.0 and later do support Python 3.)
echo_supervisord_conf > supervisord.conf
vi supervisord.conf
Edit config file:
[program:celeryd]
command=/home/ubuntu/proj/virtualenv/bin/celery worker --app=proj -l info
stdout_logfile=/home/ubuntu/proj/celeryd.log
stderr_logfile=/home/ubuntu/proj/celeryd.log
autostart=true
autorestart=true
startsecs=10
stopwaitsecs=600
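This section daemonizes both Celery and Redis, but the config above only covers Celery. A matching program section for Redis (the log paths here are assumptions, mirroring the Celery entry; redis-server is assumed to be on the PATH) might look like:

```ini
[program:redis]
command=redis-server
stdout_logfile=/home/ubuntu/proj/redis.log
stderr_logfile=/home/ubuntu/proj/redis.log
autostart=true
autorestart=true
```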
Run Celery and Redis as daemon processes with:
supervisord
If adjustments are made to the configuration, the processes can be restarted with:
supervisorctl restart celeryd
All set!
Celery logs can be found in the proj/celeryd.log file for monitoring. For a visual approach to celery monitoring, check out flower: https://github.com/mher/flower
For a further look at asynchronous setup and programming with the tools in this reading, be sure to check out my ebook:
Thanks for reading!