Getting a React + Django App to Production on DigitalOcean with Travis, Caddy and Gunicorn

In 2018, the way to do web app deployment seems to be containers. Containers, all the time, and everywhere. But with containers comes the learning curve of Docker, and Kubernetes, or Rancher, or maybe even something else on top. Especially for smaller projects, configuring and hosting a container-based solution might have a worse-than-expected return on investment. Instead, we can use tried and true tools to deploy the project onto our server, without containers. In this guide, I will walk you through the steps to get a project up and running on your production server.

Our final tech stack will look like this:

  • A cloud server on DigitalOcean
  • Systemd (For process management)
  • Gunicorn (For Django load balancing and running the app)
  • Caddy (For HTTPS, reverse proxy and file serving)
  • Travis CI (For Continuous Deployment and test automation)

Let’s go!

About this tutorial

This tutorial is aimed at people who are unsure how to run their code in production, but are still well versed in programming and understand some system administration.

The solution provided in this example only uses one production host and doesn’t implement much separation between services, so it is not the best choice for running heavy-load projects or multiple projects on the same host. That being said, it is completely fine for running e.g. hobby projects, internal applications or proof-of-concept type projects.

Many steps in this tutorial are very project-specific. In these cases, you will need to make adjustments to the commands used to make them fit your use case.

Things that you might not use in your project that I take for granted in this tutorial:

  • Python 3
  • Node.js v10 or React
  • Systemd
  • Ubuntu 18.04
  • Bash
  • a remote Git repository
  • … and many more.

All of the aforementioned things can be replaced with a solution of your own choosing.

Things that this tutorial won’t cover:

  • Server Side Rendering for React
  • Server administration (other than simple user management)
  • DNS and networking
  • Version control

Setting up our server


If you don’t already own a Linux-based server capable of hosting a production application, I recommend looking into DigitalOcean. Other cloud providers work as well, but I have found DO to be the simplest, in terms of initial setup time and pricing model. If you already have a server with a distribution and installed software, you may skip a fair amount of this tutorial.

Navigate to DigitalOcean’s site and create an account. Select a Droplet (VPS instance) to suit your needs. Small projects can manage with either of the two least expensive instances. Choose Ubuntu 18.04 as the distribution. Follow DigitalOcean’s own tutorials to get your server up and running.

From now on we will refer to our application as cool-new-app.

SSH into your server. We will start by setting up a user and installing the required software.

Creating a new user

For separation, we will create a new user for running the application. It can be done like this:

$ sudo adduser cool-new-app

Give the user a password and smash enter on the other prompts. Do note that this user won’t be given sudo rights, which means that system-wide installations must be done with a sudo-capable user instead.

Log on to the user with su cool-new-app. Edit .bashrc with your favourite editor and remove these lines (if they exist):

# If not running interactively, don't do anything
case $- in
    *i*) ;;
      *) return;;
esac

We do this because we will be needing our .bashrc when logging in non-interactively.
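To see why this matters, here is a minimal sketch of how that guard behaves. The special variable $- contains the letter i only in interactive shells, so when a script or an SSH command runs non-interactively, the guard would return before anything below it in .bashrc executes:

```shell
# $- contains "i" only in interactive shells. Run this file with
# `bash check.sh` (or over `ssh host 'command'`) and it prints
# "non-interactive" -- exactly the case in which the removed guard
# would have stopped .bashrc from running any further.
if [[ $- == *i* ]]; then
  mode="interactive"
else
  mode="non-interactive"
fi
echo "$mode"
```

Since our deployment scripts will log in non-interactively and still need nvm and friends from .bashrc, the guard has to go.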

Run the default Git setup for the user. We don’t want to see the config prompt later during a non-interactive login.

$ git config --global user.email ""
$ git config --global user.name "Your Name"

Log out of your app user.

Next, we will give the new user the privileges to restart the cool-new-app service (which we will set up later) with superuser permissions. To do this, we have to edit sudoers.

$ sudo -E visudo

Add the following line somewhere at the bottom of the file.

cool-new-app ALL = (root) NOPASSWD: /bin/systemctl restart cool-new-app

Save and quit. Now your app user can restart the app during deployment, which normally requires superuser permissions.

Installing Caddy web server

Caddy is a modern web server with reverse proxy features and automatic HTTPS. We will use it to serve the static build artefacts from the production build of your React app, and to route traffic to your backend and frontend.

Caddy can be downloaded as a prebuilt binary from the website, but the binary comes with special terms when used in a commercial setting. If your budget is tight, you can compile it from source, which is really easy.

Log out of your cool-new-app user with exit. Get the Caddy binary and throw it into your /usr/bin.

$ sudo mv caddy /usr/bin/caddy

You can also use the installer script from their download page.

Installing Node.js

Node.js is a JavaScript runtime, commonly used for running backend code or building frontend projects. We will use it to run the build script for the React application. To install Node.js, we will use the powerful NVM project.

Log on to your application user and run the NVM installer. Source your .bashrc again and install Node.js version 10 (or whichever version you need).

$ su cool-new-app
$ curl -o- <nvm-install-script-url> | bash   # copy the exact URL from the NVM README
$ . .bashrc
$ nvm install 10

After that’s done, log out of cool-new-app.

Installing Gunicorn

Next we will install gunicorn. Gunicorn will be used to run the Django application. It will also act as a load balancer: it will distribute requests to multiple parallel instances of Django for high availability.

I like to keep things out of the global installation, so I won’t install Gunicorn as a global package. The nicest way is to use pip from the Python 3 package to install virtualenv globally, and use that to install our package dependencies locally. If you don’t have Python 3 installed, do this:

$ sudo apt install python3

Next, install virtualenv:

$ python3 -m pip install virtualenv

Now, log in as your new cool-new-app user and use virtualenv to create a new Python environment with Gunicorn installed.

$ su cool-new-app
$ virtualenv -p python3 ~/venv
$ source ~/venv/bin/activate
$ pip install gunicorn

The virtualenv command creates a new Python 3 environment in the venv subdirectory of our home directory. After activating it with source, pip installs packages (such as Gunicorn) into that environment instead of system-wide.
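As a side note, if you would rather not install anything globally at all, the standard library venv module (Python 3.3+) can replace virtualenv. A small self-contained sketch (venv-demo is a throwaway name; --without-pip only skips pip bootstrapping for the demo, drop it in real use):

```shell
# Create and activate a stdlib venv, then confirm the interpreter
# really points inside it.
python3 -m venv --without-pip "$HOME/venv-demo"
. "$HOME/venv-demo/bin/activate"
prefix=$(python -c 'import sys; print(sys.prefix)')
echo "$prefix"
deactivate
```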

Running the project in production

To run the project in production we need to do the following things:

  • Clone the project to the server
  • Set up Caddy with a Caddyfile
  • Set up Gunicorn for Django
  • Set up Systemd services for Gunicorn and Caddy
  • Add deployment scripts to the project.

Note: For this tutorial I will simply use the SQLite database with Django for simplicity. If you use a more involved database, such as PostgreSQL, remember to edit the scripts accordingly.

Clone your project to a subdirectory in the home directory as the app user. You might need to put in some public keys if the project is private.

Next, we will create a configuration file for Caddy called the Caddyfile. This file will be shared between all users, so we will store it in /etc/Caddyfile. Open it as superuser and add the following (replace example.com with your own domain throughout):

example.com {
    root /home/cool-new-app/cool-new-app/frontend/dist
}

api.example.com {
    proxy / localhost:8000 {
        transparent
    }
}

api.example.com/static {
    root /home/cool-new-app/cool-new-app/backend/staticfiles
}

Simple, huh. The first block serves the build artefacts of your frontend app from example.com, which is a DNS name you should of course first purchase from a registrar. The next one proxies requests made to the api. subdomain of your domain to the local port 8000, which is the port Django will be running on. And because Django won't serve static assets itself, we will also need to serve Django's collected static files with the last configuration block (the /static path here assumes the default STATIC_URL; adjust it to your setup).

If you’re unsure what is meant by DNS names and such, take a minute to study what they mean and how to configure DNS records.

That’s Caddy for now. Let’s set up Django.

To run Gunicorn, you need to first cd into the backend directory, install your pip dependencies and run the following command (note that the WSGI module name uses underscores, since hyphens aren't valid in Python module names):

$ gunicorn -w 4 --access-logfile - --error-logfile - cool_new_app.wsgi

This will run 4 instances of the Django app and load-balance requests. To make this bullet-proof, I usually create a launch script in the repository that looks like this.

#!/usr/bin/env bash
# Find out the location of the script, not the working directory
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null && pwd )"
# Navigate to the backend directory
cd "${DIR}/../backend"
# Activate virtualenv
. ~/venv/bin/activate
# Start Django with Gunicorn on local port 8000 (the port Caddy proxies to)
gunicorn --bind -w 4 --access-logfile - --error-logfile - cool_new_app.wsgi

For this tutorial I will call this script (the name is up to you) and store it in a directory called deploy in the root of the repository. I recommend storing all deployment-related files in this directory.

Check that your Django backend is working by running the script. If it works, great!
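A note on the worker count: -w 4 is a reasonable fixed default, but the Gunicorn documentation suggests (2 x CPU cores) + 1 as a starting point. A quick way to compute that on your server:

```shell
# Compute the suggested Gunicorn worker count for this machine.
workers=$(python3 -c 'import multiprocessing as mp; print(mp.cpu_count() * 2 + 1)')
echo "$workers"
```

You can then substitute the result for the 4 in the launch script.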

Next, we will write a deployment script that will update and restart both applications when run. The script can then be run either manually or by an automation system (like Travis) to update the app.

Note: This step can be implemented in many ways. Some prefer to run this code as a post-receive hook on the server. I like to include the script in the repository to benefit from version control, and run it over SSH.

Create a new script in the deploy directory of your repository; I will call it (again, the name is up to you). Write down the steps to update the whole app after logging in with SSH. That should look something like this:

#!/usr/bin/env bash
. .bashrc
# Adjust APP_DIR to wherever you cloned the project.
APP_DIR="$HOME/cool-new-app"
# LC_COMMIT_HASH is passed over SSH by Travis CI (see below).
TARGET="${LC_COMMIT_HASH:-origin/production}"
cd "$APP_DIR"
echo "# Starting deployment."
echo "# Target commit: ${TARGET}"
set -e # Fail the script on any errors.
nvm use 10
printf "# Node version: %s\n" "$(node --version)"
printf "# NPM version: %s\n" "$(npm --version)"
echo "# Stashing local changes to tracked files."
git stash
echo "# Fetching remote."
git fetch --all
echo "# Checking out the specified commit."
git checkout "${TARGET}"
echo "# Navigating to the backend directory."
cd backend
echo "# Activating virtualenv."
set +e # The activate script might return non-zero even on success.
. ~/venv/bin/activate
set -e
echo "# Installing pip requirements."
pip install -r requirements.txt
echo "# Collecting static files."
python manage.py collectstatic --noinput
echo "# Taking a database backup."
mkdir -p backups
cp db.sqlite3 "backups/db.sqlite3.bak_$(date "+%Y-%m-%d")"
echo "# Running database migrations."
python manage.py migrate --noinput
echo "# Restarting the backend service."
sudo systemctl restart cool-new-app
echo "# Navigating to the frontend directory."
cd ../frontend
echo "# Installing Node.js dependencies."
npm ci
echo "# Building the frontend project."
npx react-scripts build
echo "# Setting new build as the active build."
rm -rf "$APP_DIR/frontend/dist"
mv "$APP_DIR/frontend/build" "$APP_DIR/frontend/dist"
set +e
echo "# Deployment done!"

Read through the script carefully, since your application structure can differ a lot from mine. Some curious lines:

  • The LC_COMMIT_HASH environment variable will be passed over SSH by Travis CI and will default to origin/production. This is the commit hash we will fetch when updating.
  • We “double buffer” the frontend build directory in such a way that Caddy will serve the dist directory while npm will output the app to build. This way the frontend won’t go down during the build process.
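The script above also keeps every dated database backup forever. A hypothetical companion snippet to prune them, keeping only the 7 newest (the demo below runs in a temp directory with fabricated file names matching the deploy script's pattern; point it at your real backups directory instead):

```shell
# Create 10 fake dated backups in a scratch directory, then prune to 7.
demo=$(mktemp -d)
cd "$demo"
for day in 01 02 03 04 05 06 07 08 09 10; do
  touch "db.sqlite3.bak_2018-09-$day"
done
# The dates sort lexicographically, so reverse name order is newest-first.
ls -1 db.sqlite3.bak_* | sort -r | tail -n +8 | xargs -r rm --
remaining=$(ls -1 db.sqlite3.bak_* | wc -l | tr -d ' ')
echo "$remaining"
```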

The last step is to create Systemd service files for Caddy and Gunicorn. The files will reside in /etc/systemd/system/. Here’s the service configuration for the Caddy server, called caddy.service. Create it with sudo and add the following content:

[Unit]
Description=Caddy server
After=network.target

[Service]
ExecStart=/usr/bin/caddy -conf /etc/Caddyfile -agree -email ""
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s TERM $MAINPID

[Install]
WantedBy=multi-user.target

Please note that your Caddy installation might be located in a different directory, depending on your system and how you installed Caddy. You should also replace the email with your personal email address.

Here’s the configuration file cool-new-app.service for Gunicorn. Point ExecStart at the Gunicorn launch script you created earlier (shown here as deploy/; adjust to your actual filename):

[Unit]
Description=Cool new app Django
After=network.target

[Service]
User=cool-new-app
ExecStart=/bin/bash /home/cool-new-app/cool-new-app/deploy/

[Install]
WantedBy=multi-user.target

Enable your new services with sudo systemctl enable caddy cool-new-app. Start them with the start command: sudo systemctl start caddy cool-new-app.

When that’s done, make sure that they are running correctly with systemctl status caddy and systemctl status cool-new-app, and by checking if they’re working with your browser :)

Your application is now running in production!

Setting up Travis CI

Travis CI is a very easy-to-use automation service. To start using Travis CI, go to their site and log in with your GitHub account. Enable your repository in the Travis CI repository settings. Your project is now ready for CI! 🙌

Note: Travis CI is free only for open source projects. If you’re developing a private project, consider using a self-hosted GitLab CI, which is very similar to Travis.

Creating .travis.yml

To get Travis to actually run something, we will need to add an instructions file called .travis.yml to our project repository. Here’s a sample configuration:

jobs:
  include:
    - stage: test
      language: python
      python:
        - 3.6
      install:
        - cd backend
        - pip install -r requirements.txt
      script:
        - python manage.py test
    - stage: test
      language: node_js
      node_js:
        - '10'
      install:
        - cd frontend
        - npm ci
      script:
        - npm run build
        - npm run test
    - stage: deploy
      if: branch = production
      language: node_js
      node_js:
        - '10'
      script: bash deploy/
notifications:
  email: false

There’s a lot to go through here. Let’s start.

  1. We define three jobs: Python test, Node.js test and Deploy. The first will be used to run the Django tests on the backend, the second to run tests on the React frontend. The last is used to deploy our code to the server, but more about that later.
  2. Every job comes with a couple of statements. language defines the programming language environment to use for the job. before_install, install, before_script and script are pretty self-explanatory: they run commands in order.
  3. The notifications statement can be used to send status notifications to your email, or Slack, or whatever you like to use.

If you commit and push this file, Travis will automatically start running the first two jobs, assuming you’re not on the production branch. The deploy stage, however, needs some more configuration.

Setting up SSH keys

First, we need a way to make the Travis instance pull the latest changes in your production environment. For this, we will set up a simple SSH + Bash script configuration.

Navigate to your project directory on your development machine. Run the following command to create a new key pair:

$ ssh-keygen -t rsa -f deploy_key

Enter an empty passphrase when prompted. Now you will have two files in your directory: deploy_key and The first one is the private key: this is needed to SSH into your production server. The second one is the public key: this must be added to the authorized_keys of your production server. Copy the contents of and log in to your server. Add the contents to the file /home/cool-new-app/.ssh/authorized_keys. If you’re unsure how to do this, look up instructions on the web.
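For reference, a typical sequence on the server, run as the cool-new-app user. PUBKEY below is a placeholder standing in for the actual contents of your file:

```shell
# Append the deploy public key to authorized_keys with the
# permissions sshd insists on.
PUBKEY="ssh-rsa AAAAB3Nza...demo travis-deploy"
mkdir -p "$HOME/.ssh"
chmod 700 "$HOME/.ssh"
printf '%s\n' "$PUBKEY" >> "$HOME/.ssh/authorized_keys"
chmod 600 "$HOME/.ssh/authorized_keys"
```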

Now we have the public key in place. Next up is getting the private key to Travis. We cannot just commit the file, because then anyone with rights to view the repository would be able to log into our server, which is a bad, bad thing. Instead, we will commit an encrypted version of the file.

Adding the private key to Travis

Install the Travis command line tool. Use it to log in to your Travis account with travis login, using your GitHub credentials.

Note: Travis is running a migration from .org to .com during 2018. Be sure to use the --org and --com command line flags when running commands to ensure you’re managing the right version of the repository.

Run the following command to encrypt your private key:

$ travis encrypt-file deploy_key deploy/deploy_key.enc

This will produce a deploy_key.enc in the deploy directory which can be safely committed to your repository. You will also be given a command to decrypt the file in the Travis instance. Add that command to the before_script statement of your deploy stage, like this:

    - stage: deploy
      if: branch = production
      language: node_js
      node_js:
        - '10'
      before_script:
        - openssl aes-256-cbc -K $encrypted_something_key -iv $encrypted_something_iv -in deploy/deploy_key.enc -out deploy/deploy_key -d
      script: bash deploy/

You can also make sure that Travis actually remembers the necessary variables for decryption by running travis env list . If the encrypted_ variables are there, everything’s fine. You can now delete or move the original key pair somewhere else, so that you won’t accidentally commit the private key.
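If you are curious, this is roughly what the encrypt/decrypt pair does under the hood: symmetric AES-256-CBC with a random key and IV. A self-contained round trip with demo values and temp files:

```shell
# Encrypt a file with a random key and IV, decrypt it again, and
# confirm the content survives the round trip.
workdir=$(mktemp -d)
cd "$workdir"
key=$(openssl rand -hex 32)   # 256-bit key, hex-encoded
iv=$(openssl rand -hex 16)    # 128-bit IV, hex-encoded
echo "secret-material" > demo_key
openssl aes-256-cbc -K "$key" -iv "$iv" -in demo_key -out demo_key.enc
openssl aes-256-cbc -K "$key" -iv "$iv" -in demo_key.enc -out demo_key.dec -d
roundtrip=$(cat demo_key.dec)
echo "$roundtrip"
```

Travis simply stores the key and IV as the encrypted_ environment variables so your build can run the decryption half.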

Adding the deployment scripts

Next, we will add the script that runs the SSH command to the repository. Add the file deploy/ to the repository with the following contents:

#!/usr/bin/env bash
# Change permissions to something that SSH accepts
chmod 600 deploy/deploy_key
# Set the commit hash env variable (TRAVIS_COMMIT is provided by Travis CI);
# SendEnv below passes it over SSH so the server knows which commit to check out
export LC_COMMIT_HASH="$TRAVIS_COMMIT"
# Pipe the update script over SSH to the production server
cat deploy/ | ssh -o StrictHostKeyChecking=no -o BatchMode=yes -o SendEnv=LC_COMMIT_HASH -i deploy/deploy_key "$DEPLOY_USER@$DEPLOY_SERVER"

This command will use the decrypted private key to SSH into your server and run the update script from the deploy directory, which was created in an earlier section!

The DEPLOY_USER and DEPLOY_SERVER variables are still not defined, so let’s do that first with the Travis command line tool:

$ travis env set DEPLOY_USER cool-new-app
$ travis env set DEPLOY_SERVER <your-server-address>

Travis CI is now set up for production deployment :)

Pushing changes to the production branch of your repository should now trigger a stage in Travis which runs the deployment script on your server, updating both the backend and the frontend successfully. If something isn’t working, look for error messages in the Travis console output.


Your Django and React application is now running on your production server. Great job!

Next steps you can take:

  • Add better backups that aren’t stored on the server
  • Implement server security (firewall, etc.)
  • Add a step for staging environment deployment in Travis
  • Implement system monitoring
  • Refine the logging system
  • …and a lot more!

Happy deployment!