Noah Zoschke
Jun 9, 2015

Developing a Twelve-factor Python App with Docker

In a previous post I called out ways in which Docker is awesome for developing twelve-factor apps. In this post we will deep dive into a code base that demonstrates everything working together: convox-examples/flask.

While this app doesn’t do much, it demonstrates best practices for Docker, Python, Flask, Celery, Postgres and Redis. Docker is rapidly evolving, so watch the flask repo for occasional updates as those practices change.

To begin:

$ git clone https://github.com/convox-examples/flask.git
$ cd flask
$ boot2docker up
$ make dev
docker-compose up
Creating flask_redis_1…
Creating flask_postgres_1…
Recreating flask_worker_1…
Creating flask_web_1…
Attaching to flask_redis_1, flask_postgres_1, flask_worker_1, flask_web_1

postgres_1 | LOG: database system is ready to accept connections
redis_1 | [1] 27 May 17:22:44.052 * The server is now ready to accept connections on port 6379
worker_1 | [2015-05-27 17:22:48,134: WARNING/MainProcess] celery@c9d274ea9bc6 ready.
web_1 | * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
web_1 | * Restarting with stat

Now browse to port 3000 on your `boot2docker ip`, e.g. http://192.168.59.103:3000/, to see Hello World!

Design Goals

  • Develop solely in Docker environment — no local Homebrew, Virtualenv, Postgres, etc. required
  • Use Docker Language Stacks — no base image building or updates required
  • Follow twelve-factor — isolate processes, treat Postgres and Redis as backing services, etc.

Development with Docker Compose

A single `docker-compose up` command manages the entire development environment. Directives in docker-compose.yml make this possible:

Build, Command and Environment

web:
  build: .
  command: python hello.py
  environment:
    - ENVIRONMENT=development
    - PYTHONPATH=/usr/src/app

This says that our web process builds from the local Dockerfile, which is nothing more than a `FROM python:2-onbuild` directive. We set PYTHONPATH to the workdir so we can load all our code as Python modules. Finally we set a development flag that configures Flask to auto-reload source code changes in development but not in production.
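As a minimal sketch, assuming the reloader is driven by that flag (the exact code in hello.py may differ), the development toggle can look like this:

import os

from flask import Flask

app = Flask(__name__)

# Assumption: enable the debugger and auto-reloader only when docker-compose
# sets ENVIRONMENT=development; production images simply omit the flag.
DEBUG = os.environ.get('ENVIRONMENT') == 'development'

@app.route('/')
def hello():
    return 'Hello World!'

if __name__ == '__main__':
    # Bind to 0.0.0.0 so the server is reachable from outside the container.
    app.run(host='0.0.0.0', port=5000, debug=DEBUG)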

Linked Containers

web:
  links:
    - postgres
    - redis

This says that our web process is “linked” to two other containers: postgres and redis. Docker Compose now knows to start postgres and redis before web and worker, and to set environment variables in web and worker with connection information:

$ docker-compose run web bash
root@fcf12564f8a8:/usr/src/app# env | grep -e "^POSTGRES_[A-Z]"
POSTGRES_PORT=tcp://172.17.0.20:5432
POSTGRES_ENV_POSTGRES_PASSWORD=password
POSTGRES_ENV_POSTGRES_USERNAME=postgres
POSTGRES_ENV_LANG=en_US.utf8
POSTGRES_PORT_5432_TCP_PORT=5432
POSTGRES_PORT_5432_TCP_ADDR=172.17.0.20
POSTGRES_ENV_PGDATA=/var/lib/postgresql/data
POSTGRES_NAME=/flask_web_run_4/postgres
POSTGRES_ENV_POSTGRES_DATABASE=app
POSTGRES_PORT_5432_TCP=tcp://172.17.0.20:5432
POSTGRES_PORT_5432_TCP_PROTO=tcp
root@fcf12564f8a8:/usr/src/app# env | grep -e "^REDIS_[A-Z]"
REDIS_PORT_6379_TCP_PROTO=tcp
REDIS_ENV_REDIS_PASSWORD=password
REDIS_NAME=/flask_web_run_4/redis
REDIS_PORT_6379_TCP_ADDR=172.17.0.19
REDIS_PORT_6379_TCP_PORT=6379
REDIS_PORT_6379_TCP=tcp://172.17.0.19:6379
REDIS_PORT=tcp://172.17.0.19:6379
REDIS_ENV_REDIS_DATABASE=0

We then configure Celery and SQLAlchemy to connect based on these environment variables:

import os

from flask import Flask

app = Flask(__name__)

app.config['CELERY_BROKER_URL'] = 'redis://u:%s@%s:%s' % (
    os.environ['REDIS_ENV_REDIS_PASSWORD'],
    os.environ['REDIS_PORT_6379_TCP_ADDR'],
    os.environ['REDIS_PORT_6379_TCP_PORT']
)
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql://%s:%s@%s:%s' % (
    os.environ['POSTGRES_ENV_POSTGRES_USERNAME'],
    os.environ['POSTGRES_ENV_POSTGRES_PASSWORD'],
    os.environ['POSTGRES_PORT_5432_TCP_ADDR'],
    os.environ['POSTGRES_PORT_5432_TCP_PORT']
)

Note that we are using `convox/postgres` and `convox/redis`, which, unlike the official images, parameterize the database username and password with environment variables.
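To round out the Celery side, here is a hedged sketch of how the worker could attach a task to that same broker; the `add` task is hypothetical and only illustrates the wiring, not code from the repo:

import os

from celery import Celery

# Build the broker URL from the same Redis link variables shown above.
broker_url = 'redis://u:%s@%s:%s' % (
    os.environ['REDIS_ENV_REDIS_PASSWORD'],
    os.environ['REDIS_PORT_6379_TCP_ADDR'],
    os.environ['REDIS_PORT_6379_TCP_PORT']
)

celery = Celery('hello', broker=broker_url)

@celery.task
def add(x, y):
    # Hypothetical task: the worker container picks this up over Redis.
    return x + y

Calling add.delay(1, 2) from the web process would then queue the job for flask_worker_1 to execute.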

Host Volume Mounts

web:
  volumes:
    - .:/usr/src/app

This says that our host source directory `.` will be mounted into the container at `/usr/src/app`, so all changes on the host are immediately visible inside the container. With this, we can use our normal editor, like TextMate, to make changes to the code and see them reload inside the container:

$ docker-compose up
< Make changes to hello.py >
web_1 | * Detected change in '/usr/src/app/hello.py', reloading
web_1 | * Restarting with stat

Makefile

The Dockerfile and docker-compose.yml are perfect for developing this Python app, but a bit of extra syntactic sugar makes the project even simpler to work with.

`make dev` first sources a .env file, a common pattern for storing secrets that the app always needs but that cannot be checked into source control. `make test` wraps the hard-to-remember command that discovers and runs all our tests. `make migrate` runs a one-off migration script. `make dep` runs a shell script that installs all the dependencies and updates requirements.txt, helpful for bootstrapping the project without a local Python or pip.
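As a rough illustration only, not the repo’s exact contents, such a Makefile might look like this (migrate.py and bin/dep are hypothetical names, and recipe lines must be indented with tabs):

dev:
	env $$(cat .env) docker-compose up   # load .env secrets, then boot everything

test:
	docker-compose run web python -m unittest discover   # discover and run all tests

migrate:
	docker-compose run web python migrate.py   # hypothetical one-off migration script

dep:
	docker-compose run web ./bin/dep   # hypothetical script: install deps, rewrite requirements.txt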

Wrap Up

The advantages of developing an app with Docker are numerous. We now have simple `make dev` and `make test` commands that will work anywhere. Our Python container is very simple — Docker maintains the base image and system dependencies, and we no longer need virtualenv to juggle Python environments and dependencies. Redis and Postgres are managed as attached resources that we connect to via environment variables.

Now that we have an excellent Docker development environment, future posts will deep dive into continuous integration and deployment.

Please direct feedback and/or questions via Twitter to @nzoschke or email to noah@convox.io.