"Docker-composing" a Python 3 Flask App Line-by-Line

Luis Ferrer-Labarca
Published in BitCraft
Aug 26, 2018 · 8 min read

This is a continuation of our article: “Dockerizing a Python 3 Flask App Line-by-Line”. In this article I will assume you have some basic knowledge on Docker, so make sure to read that first if Docker is completely new to you.

Docker-compose helps configure, spin up, and connect containers with each other. Credit to Shutterstock.

Docker: Context

Docker is an amazing tool to run apps in isolated environments, regardless of which host they are being deployed to. It allows you to have apps (mostly) independent of the operating system you or your servers are running, which creates the following benefits to your development and deployment workflows:

  1. You don’t clutter your workstation with software you don’t need.
  2. Using different versions of the same software on different projects becomes a breeze.
  3. When done appropriately, code that is constantly modified by an entire team is almost guaranteed to run on any team member’s workstation with minimal setup time.
  4. Your code will behave the same in any workflow environment it is in (e.g. development, staging, production).
  5. Deploying is as easy as building a new image, deploying it to a Docker repository, and restarting your containers so that they pull the latest image.

Case scenario

We are building a Python 3 Flask app. We want uWSGI to work as the web server and we want the traffic to be routed through Nginx. These two pieces have their own dependencies, purpose, and responsibilities, so we can isolate each in a container. Therefore, we write one Dockerfile per service, and docker-compose will then build the images, spin up the containers, mount volumes, and configure the hostnames so that the two containers can talk to each other.

Note

When working with Docker in production, you probably should not wrap Nginx in a container. Whatever web server you're using (in our case uWSGI) should be made into its own image, and load balancers living elsewhere can then balance traffic between your uWSGI container replicas. For static content, which Nginx is usually great at serving, consider using S3 or a CDN and accessing it directly from the frontend. I am only running a container for Nginx in this article for the purposes of teaching how to orchestrate and link containers running locally or under one host.

Building our Flask App

First, we define the dependencies for our Python app in a requirements.txt file. Read more on requirement files in the pip documentation.

# requirements.txt
Flask==1.0.2
uWSGI==2.0.17.1

Now we can build a very minimal Flask app.py file that defines some logic for our web app. I won’t go deep into how Flask works, but you can learn more in their vast documentation.

# app.py
from flask import Flask

app = Flask(__name__)


@app.route('/')
def hello_world():
    return 'Hello world!'


if __name__ == '__main__':
    app.run(host='0.0.0.0')
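Before containerizing anything, it's worth a quick sanity check with Flask's built-in development server (this assumes Flask is installed locally, e.g. in a virtualenv; the dev server is not meant for production):

$ python app.py
# In another terminal, the root route should answer:
$ curl http://localhost:5000/
Hello world!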

Lastly, we create an app.ini file so that uWSGI knows how to operate.

; app.ini
[uwsgi]
protocol = uwsgi
; This is the name of our Python file
; minus the file extension
module = app
; This is the name of the variable
; in our script that will be called
callable = app
master = true
; Set uWSGI to start up 5 workers
processes = 5
; We use the port 5000 which we will
; then expose on our Dockerfile
socket = 0.0.0.0:5000
vacuum = true
die-on-term = true

Building our Flask Docker Image

Now that our Flask/uWSGI app is set up, we can Dockerize it. As shown in our previous article, we will write a Dockerfile for it. This one, however, will be more minimal, since we won't be installing Nginx in it; Nginx will instead run in its own container managed by docker-compose. The file will be named Dockerfile-flask since we will have two Dockerfiles in this project.

# Dockerfile-flask
# We simply inherit the Python 3 image. This image does
# not particularly care what OS runs underneath
FROM python:3
# Set an environment variable with the directory
# where we'll be running the app
ENV APP /app
# Create the directory and instruct Docker to operate
# from there from now on
RUN mkdir $APP
WORKDIR $APP
# Expose the port uWSGI will listen on
EXPOSE 5000
# Copy the requirements file in order to install
# Python dependencies
COPY requirements.txt .
# Install Python dependencies
RUN pip install -r requirements.txt
# We copy the rest of the codebase into the image
COPY . .
# Finally, we run uWSGI with the ini file we
# created earlier
CMD [ "uwsgi", "--ini", "app.ini" ]

Docker Protip #1

You might have found it odd that I first copy requirements.txt into the image and only later the rest of the codebase. I do this because Docker creates layers (or intermediate images) as it builds your image. Each layer is cached, and when a file that was copied into the image changes, it invalidates the cache for that layer and for all the layers that follow it. Therefore, we copy a file that barely ever changes (i.e. requirements.txt) and install our modules in one go, before introducing the rest of the codebase, which changes on almost every build and would otherwise invalidate the install layer, triggering a re-install of all of our modules/libraries.
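For contrast, here is a sketch of the less cache-friendly ordering (an illustration only, not part of this project): if the whole codebase were copied before installing dependencies, any code change would invalidate the COPY layer and force pip to reinstall everything on the next build.

# Illustration only: don't do this
# Any change to the codebase busts the cache for this layer...
COPY . .
# ...so this expensive step re-runs on every build
RUN pip install -r requirements.txt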

Configuring our Generic Nginx Container

Here we will create our configuration file that will tell Nginx how to route traffic to uWSGI in our other container. Our app.conf will essentially replace the /etc/nginx/conf.d/default.conf that the Nginx container includes implicitly. Read more on Nginx conf files here.

# app.conf
server {
    listen 80;
    root /usr/share/nginx/html;

    location / { try_files $uri @app; }

    location @app {
        include uwsgi_params;
        uwsgi_pass flask:5000;
    }
}

The line uwsgi_pass flask:5000; is using flask as the host to route traffic to. This is because we will configure docker-compose to connect our Flask and Nginx containers through the flask hostname.
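Later, once the stack is up, you can confirm that the hostname resolves from inside the Nginx container. This is just a sanity check and assumes getent is available in the nginx image (it ships with the Debian-based official image):

$ docker-compose exec nginx getent hosts flask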

Building our Nginx Docker Image

Our Dockerfile for Nginx is simply going to inherit the latest Nginx image from the Docker registry, remove the default configuration file, and add the configuration file we just created during build. We won’t even use a CMD instruction, since it will just pick up the one from nginx:latest.

We will name the file Dockerfile-nginx.

# Dockerfile-nginx
FROM nginx:latest
# Nginx will listen on this port
EXPOSE 80
# Remove the default config file that
# /etc/nginx/nginx.conf includes
RUN rm /etc/nginx/conf.d/default.conf
# Copy the config file we created earlier
# into the directory Nginx loads configs from
COPY app.conf /etc/nginx/conf.d

Docker Protip #2

You will notice later in this article that I expose the port for this container both in its Dockerfile and in docker-compose.yml. I do this because people commonly don't use docker-compose in production: I use it in development and then deploy my containers through a separate service (e.g. Kubernetes, ECS, or Heroku). By exposing the port in both places, I make sure those services can tell which port to route connections to just by looking at the Dockerfile.
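As a quick illustration, any person or platform can read the exposed ports straight out of the image metadata with docker image inspect (the webapp-nginx tag below matches the image name we'll give docker-compose later; the exact output shape may vary slightly by Docker version):

$ docker build -t webapp-nginx -f Dockerfile-nginx .
$ docker image inspect webapp-nginx --format '{{json .Config.ExposedPorts}}'
{"80/tcp":{}}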

Orchestrating with Docker Compose

Now that both of our images are defined, we can write the configuration that lets the entire stack run just by typing docker-compose up in our terminal.

All the configuration for docker-compose goes in a YAML file called docker-compose.yml. This file usually lives in the root of your project so that running any compose command will automatically pick it up. I won't dive super deep into the specific syntax for the file, but you can find information on everything you can do with it in the compose file reference.
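If the file ever lives somewhere other than the directory you run your commands from, the -f flag (a standard docker-compose option) lets you point at it explicitly:

$ docker-compose -f path/to/docker-compose.yml up

With that said, here is the full file: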

# docker-compose.yml
version: '3'

services:
  flask:
    image: webapp-flask
    build:
      context: .
      dockerfile: Dockerfile-flask
    volumes:
      - "./:/app"

  nginx:
    image: webapp-nginx
    build:
      context: .
      dockerfile: Dockerfile-nginx
    ports:
      - 5000:80
    depends_on:
      - flask

Isn’t that convenient? Now you can run docker-compose up to run your entire app instead of having to run all these commands:

$ docker build -t my-flask -f Dockerfile-flask .
$ docker build -t my-nginx -f Dockerfile-nginx .
$ docker network create my-network
$ docker run -d --name flask --net my-network -v "$(pwd):/app" my-flask
$ docker run -d --name nginx --net my-network -p "5000:80" my-nginx

Now that your Docker containers are running, feel free to rush to localhost:5000 to see your new web app running live in two separate containers!
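A few other docker-compose commands tend to come in handy during development (these are all standard flags):

$ docker-compose up --build    # rebuild the images when a Dockerfile or requirements.txt changes
$ docker-compose logs -f       # tail the logs of both containers
$ docker-compose down          # tear down the containers and the default network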

Dissection time!

Let’s walk through the important lines on our docker-compose.yml file to fully explain what is going on.

The keys under services: define the names of each one of our services (i.e. Docker containers). Hence, flask and nginx are the names of our two containers.

image: webapp-flask

This line specifies the name our image will have after docker-compose builds it; it can be anything we want. docker-compose will build the image the first time we launch docker-compose up and then reuse it by that name on all future launches (it only rebuilds when asked to, e.g. with docker-compose up --build).

build:
  context: .
  dockerfile: Dockerfile-flask

This piece is doing two things. First (i.e. context), it is telling the Docker engine to only use files in the current directory to build the image. Second (i.e. dockerfile), it’s telling the engine to look for the Dockerfile named Dockerfile-flask to know the instructions to build the appropriate image.

volumes:
  - "./:/app"

Here we're simply instructing docker-compose to mount our current folder onto the directory /app in the container when it is spun up. This way, as we make changes to the app, we won't have to keep rebuilding the image unless we change something the image itself needs, such as adding a new dependency to requirements.txt.
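One caveat: mounting the code does not, by itself, make uWSGI pick up changes, because the workers import the module once at startup. If you want automatic reloading during development, uWSGI ships a py-autoreload option you could add to app.ini (a development-only tweak, not something this setup requires):

; Development only: scan for changed Python files
; every second and reload the workers
py-autoreload = 1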

For the nginx portion of the file, there are a few things to look out for.

ports:
  - 5000:80

This little section is telling docker-compose to map port 5000 on your local machine to port 80 on the Nginx container (the port Nginx listens on by default). This is why going to localhost:5000 is able to hit your container.
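If you ever forget which host port a service ended up on, docker-compose can tell you directly (docker-compose port is a standard command; with the mapping above it reports the host side of the binding):

$ docker-compose port nginx 80
0.0.0.0:5000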

depends_on:
  - flask

This part is critical to understand. As you might have noticed in app.conf, Nginx forwards traffic to uWSGI through the flask hostname. That hostname exists because docker-compose puts both services on a shared network where each service name resolves to its container, so requests hitting Nginx can be routed to the uWSGI app living in a different container. The depends_on directive then makes Compose start the flask container before the nginx one. Note that it only waits for the container to start, not for the app inside it to be ready to accept connections.
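If you do need Compose to wait until the app is actually accepting connections, and not merely started, newer versions of Compose let you pair depends_on with a healthcheck. The snippet below is only a sketch under that assumption (the check uses python because it is already present in the python:3 image), not something this article's version: '3' file requires:

services:
  flask:
    # ...image/build/volumes as above...
    healthcheck:
      # Succeeds once something is listening on port 5000 inside the container
      test: ["CMD", "python", "-c", "import socket; socket.create_connection(('127.0.0.1', 5000), 2)"]
      interval: 5s
      retries: 5
  nginx:
    # ...image/build/ports as above...
    depends_on:
      flask:
        condition: service_healthy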

Closing remarks

Docker compose is a great tool for managing multiple containers that need to communicate with each other, especially in development! It allows you to specify settings that you'd otherwise have to include on every docker run command, including network and volume bindings. Because all the configuration lives in a file, you can easily version control it and share it with your peers so that your app runs as similarly as possible on all their workstations. Finally, it's also great for developing while your containers are running, since you can mount volumes (e.g. your working directory) and keep making changes in your favorite code editor or IDE without having to rebuild the image or even restart the container every single time.

However, I am reluctant to use docker-compose for deploying apps to production, since, in my opinion, there are far more reliable services to help you do so while still creating the necessary network bindings between containers when needed (e.g. AWS ECS, Heroku, Kubernetes, etc.).

In future articles, I will cover different scenarios for when to run docker-compose versus single containers, as well as which cloud platforms to run them on.

I hope you enjoyed the article! If you learned something new, please leave a clap and share on social media! BitCraft is a software development group and we’re always taking on new clients. Reach out to us at hello@bitcraft.io or visit our website at bitcraft.io.
