Python + Docker: From development to production: Episode I

For the last two years or so I’ve been using Docker to deliver almost every single project I’ve been involved with. It’s been a long journey where I’ve had the opportunity to learn an interesting set of new technologies and, most importantly, how to get code shipped to production in a reliable, predictable and trustworthy way.

Python is the language I feel most comfortable with, which means that those projects (mostly web applications or web APIs) have been developed with heavy usage of frameworks and libraries available under the language’s ecosystem.

I’ll be sharing my journey thus far in a series of blog posts (hopefully only two). The first one, which I’ve named Episode I, focuses on the development experience. Episode II will be an overview of the different cloud solutions I’ve worked with for deploying Docker applications, and how to run the demo application in a container-friendly environment.

The Application

For the purpose of this demo we’re going to use a very simple Flask application named Easy GeoIP. The application comprises three main components:

  • An endpoint which takes in a domain name or IP address and uses the MaxMind City database to return all of the information associated with it.
  • A web page which presents the information returned by the endpoint mentioned above in a user-friendly way (see screenshot below).
  • A PostgreSQL database which is used to fill in the timezone for the provided domain name or IP address when it’s not available in the MaxMind City database.

Below is a high-level view of the application architecture:

You can then enter any IP address or domain name of your choice, and you should see something like this:

NOTE: The PostgreSQL database was generated from the Shapefiles available here. For more details on how this database was created, take a look at this link
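Since the endpoint accepts either a domain name or an IP address, the first thing it has to do is normalize its input. A minimal sketch of that step might look like this (the function name is illustrative, not the actual Easy GeoIP code):

```python
import ipaddress
import socket


def resolve_host(value):
    """Return the IP address for `value`.

    If `value` is already an IP literal it is returned unchanged;
    otherwise it is treated as a domain name and resolved via DNS.
    """
    try:
        # Valid IPv4/IPv6 literal: nothing to resolve.
        return str(ipaddress.ip_address(value))
    except ValueError:
        # Not an IP literal, so assume it's a domain name.
        return socket.gethostbyname(value)
```

The application would then feed the resolved IP into the MaxMind City reader, falling back to the PostgreSQL timezone table when the city record carries no timezone.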

Running the Python App for the first time

Let’s start with the basics and get the application running. First we need to clone the application repository and check out the Git tag we will be using throughout this article:

git clone
git checkout blog-episode-i

We can now build the Docker image which we will later use to create the container running the application:

docker-compose build app

This might take a few minutes, but once the build is finished you should see something like this:

Removing intermediate container 4e33a37b9b69
Step 16/16 : CMD supervisord -u www-data -n -c /etc/supervisor/supervisord.conf
---> Running in 2f8f80a4405d
---> 368383a4d379
Removing intermediate container 2f8f80a4405d
Successfully built 368383a4d379

With the image built, we can now launch the application:

docker-compose up
app_1 | * Running on (Press CTRL+C to quit)
app_1 | * Restarting with stat
tz_world_db_1 | LOG: database system was shut down at 2017-04-20 10:53:35 UTC
tz_world_db_1 | LOG: MultiXact member wraparound protections are now enabled
tz_world_db_1 | LOG: autovacuum launcher started
tz_world_db_1 | LOG: database system is ready to accept connections

After that you can visit http://localhost:5000, and you should see something like this:

First Dive into The Application

Ok, that was quick. In just a matter of minutes we had a fully working application, with very little effort. Let’s see how this was possible. The very first thing we did was run:

docker-compose build app

This command is telling Docker Compose to build the service app, which is defined as follows in the docker-compose.yml file:
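Roughly, the relevant part of the Compose file looks like this (a sketch — exact keys, image tags and paths may differ from the repository’s docker-compose.yml; the service names match the docker-compose up output above):

```yaml
version: '2'

services:
  app:
    build: .                # build the image from the Dockerfile in this directory
    ports:
      - "5000:5000"         # expose the web app on http://localhost:5000
    volumes:
      - .:/app              # mount the source tree so code changes are picked up live
    depends_on:
      - tz_world_db         # start the database container first

  tz_world_db:
    image: postgres:9.6     # PostgreSQL instance holding the timezone data (tag is illustrative)
```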

Before we take an in-depth look at the Docker Compose file, let’s go through the Dockerfile we’re using to build the application’s image:
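As a rough sketch (the real file in the repository has more steps — 16 in total, per the build output above — and its exact package list and paths may differ):

```dockerfile
FROM python:2.7-slim

# System packages change rarely, so install them first to get cache hits.
RUN apt-get update && apt-get install -y --no-install-recommends \
        nginx supervisor \
    && rm -rf /var/lib/apt/lists/*

# Dependencies change less often than code: copy requirements.txt alone
# so `pip install` is only re-run when the requirements actually change.
COPY requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt

# Now copy the application code itself.
COPY . /app
WORKDIR /app

# Run everything (NGINX + uWSGI) under supervisord as the www-data user.
CMD supervisord -u www-data -n -c /etc/supervisor/supervisord.conf
```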

The Dockerfile is pretty much self-explanatory; however, let me highlight some of the most important elements in it:

  • We’re using python:2.7-slim as our base image since we want to keep our Docker image as light as possible. This is very handy, especially when it comes to deployment, since the image needs to be pulled before being deployed.
  • Instructions which are not supposed to change very often (apt-get update, apt-get install, etc.) are placed very early in the file so that we can benefit from the caching mechanism provided by Docker.
  • The requirements.txt file is added independently from the application code. This, again, speeds up the build process, since the app’s dependencies should not change that often.
  • The container entrypoint is supervisord. The reason we need supervisord is that our container has to run both NGINX and uWSGI. It also brings many other goodies: it can act as a process reaper, make sure processes are always running, etc.
  • Neither NGINX nor uWSGI runs as root (not even supervisord does). They all run as the www-data user.
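For reference, a supervisord.conf wiring NGINX and uWSGI together might look roughly like this (program names, paths and flags are illustrative, not taken from the repository):

```ini
[supervisord]
nodaemon=true                      ; stay in the foreground: PID 1 of the container

[program:nginx]
command=nginx -g "daemon off;"     ; keep nginx in the foreground so supervisord can track it
autorestart=true

[program:uwsgi]
command=uwsgi --ini /app/uwsgi.ini
autorestart=true
```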

With the Dockerfile explained, let’s now break down the Docker Compose file:
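The detail that matters most for day-to-day development is the volume mount on the app service — something along these lines (paths illustrative):

```yaml
    volumes:
      - .:/app   # host source tree shadows the code baked into the image
```

Because the host directory shadows the code copied in at build time, edits made on the host are visible inside the container immediately, with no image rebuild. Combined with the auto-reloader (the “Restarting with stat” line in the output above), this is what keeps the edit-and-refresh loop intact inside Docker.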

Go ahead: test the application, modify it and play with it. You will see the development experience is pretty much the same as if it were running outside Docker :).

What’s Next?

With the development environment set up and fully working, it’s now time to deploy to production. But how do we do that? We certainly want the process to be trustworthy and as predictable as possible. And what solutions, cloud-based or not, are available out there to help with our application’s deployment?

This and more will be the subject of Episode II, so stay tuned :).
