Programming with Erik

This Will Make Your Software Easier to Build, Run, and Distribute

Say goodbye to those hard-to-build software projects by containerizing them

Photo by Lucas van Oort on Unsplash

More often than not, software projects are difficult to build from source. This can have many causes; here are just a few:

  • The project requires lots of dependencies.
  • The project requires outdated libraries. Once you install those, other projects might break in turn.
  • You’re running Windows or a Mac, but the software is designed to build and run on Linux.

For similar reasons, it can also be hard to run your software in production!

If you’re facing these problems, it’s good to know that there’s an easy fix. It doesn’t require virtualization but instead uses a principle called containerization.

What is a container?

A container is an entity that has everything required to run your software. It packs:

  • Your software
  • All dependencies
  • All system tools and libraries that might be needed

Containers are like virtual machines, but more lightweight. For instance, they start almost instantly. Containers virtualize just the operating system, while a VM virtualizes an entire machine with all its hardware.

Multiple containers running next to each other (source)

The most popular containerization toolkit, Docker, has become the de facto standard. Docker containers can run anywhere: from your development PC to a self-hosted server to cloud services like AWS, Google Cloud, and Azure.

Containers make it easy to package and ship your software and provide a well-defined environment for it to run in.

What is an image?

A Docker container is always based on an image. You first define an image and then start one or more containers based on it. You can define an image in a file (called the Dockerfile) and this file can be checked into a VCS like git, together with your code. This allows you to document and create the exact environment needed to run your code.

You don’t have to build an image from scratch. Many software projects provide images that containerize their software. For practically all computer languages, including Python, there are multiple base images you can choose from.

Just like Python classes, you can extend such images with your own specifics, as I will demonstrate below. By doing so, you add a new layer on top of an existing image. Because of this layering, Docker images can be stored and built very efficiently. For example, many images might all share the same Debian Linux base image and extend it with their own specific software requirements:
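Extending a base image is just a matter of naming it in a FROM line. Here is a hypothetical sketch (the requests dependency is only an illustration, not part of our project):

```dockerfile
# Start from the shared Debian-based Python image...
FROM python:3

# ...and add this project's own layer on top of it
RUN pip install requests
```

If another image on your machine also starts from python:3, Docker stores that base layer only once and reuses it for both.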

Multiple layers make up a container (image by author)

How to containerize your software

Creating a container for your software is super easy. After making my first Docker image, my thoughts were roughly this: “Is this all? I must have skipped a step!”

We’ll create a Python project as an example here, but this is just as easy for other languages. If you want, you can follow along; the article contains everything you need. Alternatively, you can clone the code from GitHub.

So let’s assume we want to create a web service using Flask. It has some dependencies that are defined in a Pipfile, to be used with Pipenv.

I’ll start very simple, with the following Pipfile:

[packages]
Flask = "*"

For now, we’ll create a Python file, just to prove that things are working as expected. I called it app.py:

print("Hello world")

Next, we need to create a Dockerfile that defines our container. Most projects these days offer official versions of their software as a Docker container. After a quick Google search, it turns out Python does so too. They even give us an example to start with.

Their example is based on virtualenv, but we prefer Pipenv, so it needs a few adaptations. This is what I came up with:

FROM python:3
WORKDIR /usr/src/app
COPY Pipfile ./
RUN pip install --no-cache-dir pipenv && pipenv install
COPY *.py .
CMD [ "python", "./app.py" ]

To build your container, you need to run the docker build command. I’m using VSCode, which has a nice Docker extension that lets you right-click the Dockerfile, give the image a name (called a tag), and start building.

I prefer the command-line because it shows us exactly what we’re doing and keeps us in control of all the options. So let’s build our image on the command-line:

C:\dev\python-docker> docker build -t my_webservice .

Let’s break it down:

  • We’re building an image and tagging it with the name my_webservice
  • The single dot simply means Docker needs to look for the Dockerfile in the current directory

When you run this for the first time, there will be lots of activity:

  • Docker starts pulling the Python Docker image first
  • Next, we set the working directory to /usr/src/app
  • We copy the Pipfile into the working directory
  • We run pip install to install pipenv
  • Directly after that, we run pipenv install to install our dependencies
  • Finally, we copy all python files to the working directory
Docker is downloading all layers of the Python container. Image © by author.

Our image is finished, and we can run it with docker run. Let’s try:

PS C:\dev\python-docker> docker run my_webservice
Hello world
PS C:\dev\python-docker>

Improving our container

We have our basics working, so let’s create an actual web service now. Adapt app.py to look like this basic Flask example:

from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello, World!'
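If you have Flask installed locally, you can sanity-check the route before containerizing it. A quick sketch using Flask’s built-in test client (the app is the same as above):

```python
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello, World!'

# Flask's test client exercises the route without starting a real server
client = app.test_client()
response = client.get('/')
print(response.data.decode())  # Hello, World!
```

This is purely a local convenience check; the container itself will run the app with flask run, as shown next.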

Because we’re running a Flask app now, we need to alter the CMD in our Dockerfile too:

CMD [ "pipenv", "run", "python", "-m", "flask", "run", "--host=0.0.0.0" ]

As you can see, we need Flask to listen on all network interfaces. Otherwise, it would only listen on localhost inside the Docker container and be unreachable from the outside world.
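The distinction is simply which address the server socket binds to. A small sketch with Python’s standard socket module (nothing Flask- or Docker-specific) illustrates it:

```python
import socket

# Binding to 127.0.0.1 accepts connections from this machine only;
# binding to 0.0.0.0 accepts connections on every network interface,
# which is what Flask needs inside a container.
loopback = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
loopback.bind(("127.0.0.1", 0))  # port 0 = let the OS pick a free port
loopback_addr = loopback.getsockname()[0]

everywhere = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
everywhere.bind(("0.0.0.0", 0))
everywhere_addr = everywhere.getsockname()[0]

print(loopback_addr)    # 127.0.0.1
print(everywhere_addr)  # 0.0.0.0

loopback.close()
everywhere.close()
```

Flask’s --host=0.0.0.0 flag does exactly the second bind, so traffic forwarded into the container can actually reach the app.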

Rebuild the Docker image with the same command as before:

C:\dev\python-docker> docker build -t my_webservice .

For extra security and to prevent overlapping ports, Docker, by default, won’t expose ports to the outside world. Since we’ll be running a server on port 5000 (Flask’s default), we need to expose this port explicitly. You can map a port from the container to a port on your PC with the -p command-line option:

C:\dev\python-docker> docker run -p 5000:5000 my_webservice 
* Environment: production
* Debug mode: off
* Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)

Now head over to http://127.0.0.1:5000/ to see our service in action. You should get a page saying “Hello, World!”, and you should see a log entry appear on your command-line.

Written by

Software developer by day, writer at night. Author of python3.guide, where you can start learning Python today
