Dockerize a Project
So you have created your app and now it's time to ship it into the wild. Unfortunately, before you'll have your app up and working as intended, you'll need to set up the environment for it first.
The problem is that creating an environment for your project can be a pain: just imagine having to manually install the packages on your servers, not to mention the errors and mess that will probably be created in the midst of setting it all up. Thankfully, like many other things, there's a tool that trivializes this, known as Docker. In this post I'll tell you a little bit about it and how our team used Docker to ship our app.
What is Docker?

Docker is an open platform for developing, shipping, and running applications. Docker enables you to separate your applications from your infrastructure so you can deliver software quickly. With Docker, you can manage your infrastructure in the same ways you manage your applications. By taking advantage of Docker’s methodologies for shipping, testing, and deploying code quickly, you can significantly reduce the delay between writing code and running it in production. — Taken from Docker documentation
Simply put, Docker is a tool that helps with development and deployment by separating the application from your infrastructure, taking advantage of Linux containers to get a separate process tree independent of the host system. It's kind of similar to a VM, just much lighter, since unlike VMs containers share the host system's resources and don't need a full-blown OS of their own. Instead you have a daemon working in the background that manages Docker objects such as images and containers: the former is the definition of the thing to be executed, while the latter is a running instance of a given image.
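To make the image/container distinction concrete, here's what it looks like in practice with the stock `python` image (any public image would do just as well):

```sh
# An image: the definition, downloaded once from a registry.
docker pull python:3.10

# A container: a running instance of that image, with its own
# process tree; --rm removes it again once it exits.
docker run --rm python:3.10 python -c "print('hello from a container')"

# The image is still stored locally even though the container is gone.
docker images python
```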
Quick Introduction
Our team has a Django project consisting of the directories and files shown below.
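Sketched from the details later in this post, the layout looked roughly like this — the individual file names inside `edom_backend` are assumptions for illustration:

```text
.
├── Dockerfile
├── docker-compose.yml
└── edom_backend/
    ├── Pipfile
    ├── Pipfile.lock
    ├── manage.py
    └── edom_backend/
        ├── settings.py
        ├── urls.py
        └── wsgi.py
```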

Due to this structure we decided we needed to copy the `edom_backend` contents first, before installing the necessary packages, because the files located inside `edom_backend` are the code files that make up the application.
Using Docker
Docker images are defined within a text file called a `Dockerfile`. It is in this file that you define all the steps needed to replicate your working environment, such as choosing a base image, copying files, installing packages, and so on. Here's an example from our team's app.
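A minimal sketch matching the steps in the rundown that follows (not the team's file verbatim; note also that `pipenv requirements` assumes a recent pipenv, while older releases used `pipenv lock -r` to the same effect):

```dockerfile
# Use a specific Python version as the base image.
FROM python:3.10

# Send Python output straight to the terminal without buffering.
ENV PYTHONUNBUFFERED=1

# Create the app directory in / (root) and install pipenv.
RUN mkdir /app && pip install pipenv

# Work from /app from here on.
WORKDIR /app

# Copy the application code into the image.
COPY edom_backend/ /app/

# Turn the Pipfile into requirements.txt and install everything in it.
RUN pipenv requirements > requirements.txt \
    && pip install -r requirements.txt
```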
Here’s a quick rundown on what’s happening:
- It uses the `python` base image with the tag `3.10`, which is a specific version of Python.
- It ensures that Python output is sent straight to the terminal (e.g. the container log) without being buffered first, so you can see the output of the application in real time.
- It creates a directory called `app` in `/` (root) and then installs a Python package called `pipenv`.
- It then changes the working directory to `/app`, followed by copying the `edom_backend` directory contents into `/app`.
- It reads the requirements of the app from the Pipfile located in `/app`, prints the result to `requirements.txt`, and then installs all the packages listed there.
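With just this Dockerfile you could already build the image and run the app by hand, along these lines (the `edom-backend` tag and the `runserver` invocation are assumptions for illustration, not the team's actual setup):

```sh
# Build the image from the Dockerfile in the current directory.
docker build -t edom-backend .

# Run a container from it, forwarding Django's default port to the host.
docker run --rm -p 8000:8000 edom-backend \
    python manage.py runserver 0.0.0.0:8000
```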
So are we done, then? Not yet; we still need to set up the necessary dependencies first, such as the database. While it may be possible to run everything as individual Docker containers, that quickly becomes cumbersome to manage, since we'd have to juggle not only the dependencies but also the application itself. Fortunately, Docker has a very good solution for this.
Docker Compose
Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.
— Taken from Docker documentation
As explained above, Docker Compose is a tool that lets you define your full application, including its dependencies, in a single file, `docker-compose.yml`, making the development process substantially easier. By using it you'll be able to:
- Easily start the entire application using `docker-compose up`.
- Mount volumes from the local filesystem into the container with a single line in the YAML definition.
- Only restart the containers that have changed, making boot time much faster.
Below is an example of our team's `docker-compose.yml`:
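What follows is a sketch reconstructed from the walkthrough below — the volume paths, the database credentials, the port, and the web service's `command` are illustrative assumptions rather than the team's actual values:

```yaml
version: "3"

services:
  # The database service, using the latest official postgres image.
  db:
    image: postgres:latest
    volumes:
      # Persist the database files on the host.
      - ./data/db:/var/lib/postgresql/data
    restart: always
    environment:
      # Use the host's user name to avoid a permission issue.
      - POSTGRES_USER=${USER}
      - POSTGRES_PASSWORD=postgres

  # The application itself, built from the Dockerfile in this directory.
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./edom_backend:/app
    ports:
      - "8000:8000"
    environment:
      - POSTGRES_USER=${USER}
      - POSTGRES_PASSWORD=postgres
    depends_on:
      - db
```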
Once more unto the breach:
- It starts by defining the Docker Compose version, followed by defining the services one by one.
- The first service, `db`, is the service for the database. As seen above, it uses the latest `postgres` image. `volumes` are the preferred mechanism for persisting data generated by and used by Docker containers; they are basically directories and files that exist on the host system and are connected to the container.
- The `restart` policy on the service simply makes sure the service is automatically restarted whenever it stops.
- The `environment` line and the ones below it set the environment variables used by the service. In this particular case, `${USER}` tells `docker-compose.yml` to use the environment variable from the host; this is done because of a permission issue that may otherwise occur. For more detail look here.
- The `web` service defines the second container of the application. The `build` step tells Docker Compose to build the current path with `docker build`, implying that there is a `Dockerfile` in the current path that should be used for the image. `volumes` works the same way as in the previous service: it defines the filesystem mapping between the host and the container. So does `environment`, which sets the environment variables for the service. Finally, `depends_on` expresses a dependency between services; in this particular case the `db` service will start before the `web` service and will be stopped after the `web` service is stopped.
With all of this done, all you need to do to get the application up and running is type `docker-compose up`, and you're done!
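A few closely related commands are worth keeping at hand (these are standard Compose commands, nothing specific to this project):

```sh
# Build (where needed) and start every service in the foreground.
docker-compose up

# The same, but detached so everything runs in the background ...
docker-compose up -d

# ... and stop and remove the containers when you're finished.
docker-compose down
```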