Streamlining Python Application Development with Docker: A Guide to Simplify Setup and Ensure Consistent Execution Across Machines

Donald Kish
Published in Nerd For Tech
8 min read · May 15, 2023

Howdy friends! In this article, we’ll be demonstrating how to simplify the development environment setup and ensure consistent execution of Python code across different machines using Docker. If you’re new to Docker, containers, and bind mounts, don’t worry — we’ll provide you with a quick overview of these key terms before we dive into the scenario and objectives. Let’s get started!

Some key terms

Docker: a popular platform that allows developers to package, distribute, and run applications in containers

Docker Hub: a popular public repository where developers can share and distribute Docker images

Dockerfile: a script-like text file containing a series of instructions that Docker uses to build a Docker image

Docker Image: the basis for a container; it includes all the necessary dependencies, configurations, and other resources required to run an application

Container: a lightweight and portable package containing software, libraries, and other dependencies, making it easy to deploy applications on different machines without worrying about the underlying system

Bind mount: a type of volume mount in Docker that allows a directory or file on the host machine to be mounted into a container. The files and directories in the host directory are directly accessible from within the container, which makes it easy to share files between the host and the container: developers can change files on the host machine and see those changes reflected immediately in the container
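For instance, the -v flag creates a bind mount when starting a container (the host path here is hypothetical):

docker run -t -d -v /home/user/project:/app ubuntu
# edits made to /home/user/project on the host are immediately
# visible in /app inside the container, and vice versa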

Some key terms discussed in prior demonstrations

Boto3: a software development kit, or library, that manages access to various AWS services such as DynamoDB

Python: an interpreted, object-oriented, high-level programming language

Repositories: a repository contains your project files and their version history

Prerequisite Section

We will need a Docker Hub account and an AWS account. I will be using the AWS Cloud9 environment. A demonstration for setting up Cloud9 can be found here.

Scenario:

As an employee working on a Python-based application, your task is to simplify the development environment setup and ensure consistent execution of the code across different machines. To achieve this, you need to create a Dockerfile for Boto3/Python, which can be customized to include any necessary libraries or dependencies required by the Python application.

You will also need to download three repositories to your local host and create three Ubuntu containers using the customized Dockerfile. Each container should be configured with a bind mount to one of the repo directories, allowing you to easily access the code and make changes as needed.

Once the containers are created, you should log into each container and verify access to each repo directory, ensuring that the containers have been properly configured and that the team can effectively collaborate on the codebase. By utilizing Docker, you can also ensure that the development environment is portable and can be easily replicated on different machines, streamlining the workflow and enabling efficient collaboration.

Deliverables:

1. Build a Dockerfile for Boto3/Python

2. Use either the Ubuntu or Python official images on Docker Hub

3. Download any three repos to the local host

4. Create three Ubuntu containers

5. Each container should have a bind mount to one of the repo directories

6. Log into each container and verify access to each repo directory

Steps

Step 1: Preparing our Cloud9 environment

Like most of our adventures together, this one starts by getting our AWS/Cloud9 environment ready. Let’s start by signing into the AWS console and opening our Cloud9 environment.

Next, we will need to install Docker onto our Cloud9 environment by running the following commands.

sudo yum update -y
#performs updates
sudo yum install docker -y
#installs Docker
curl -sfL https://get.k3s.io | sh -
# downloads the install script from https://get.k3s.io and pipes it to sh,
# installing K3s, a lightweight Kubernetes distribution

We can confirm that Docker is installed and ready to go.
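A quick check confirms the installation (and, if needed, starts the daemon):

docker --version
# prints the installed Docker client version
sudo service docker start
# starts the Docker daemon if it is not already running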

Step 2: Building our Dockerfile for Boto3

To accomplish our first objective we will need to create a Dockerfile for Boto3/Python. To do this we will need to create a new Python file. From the menu bar go to File > New From Template > Python File.

Once created, go to File > Save As and name it Dockerfile.

We are now going to update our Dockerfile with the instructions Docker will use to build our image.

FROM python:3.9
# specifies the base image the new Docker image is built on

RUN apt-get update && \
    apt-get install -y python3-pip
# updates the package index and installs pip3

RUN pip3 install boto3
# runs pip3 to install boto3 during the image build

ENTRYPOINT ["tail", "-f", "/dev/null"]
# specifies the default command to run when a container
# based on this image is started; tail -f /dev/null
# keeps the container running indefinitely

With the file updated, let’s cross our fingers and hope it runs when the time comes.

Next, we will need to build the image. When we run this command, Docker will look for a Dockerfile in the current directory and use it to build a new Docker image. The resulting image will be tagged with the name week16-image and will be available on our local Docker host.

docker build -t week16-image .
# Docker build is the command for building the Docker image
# -t specifies the name and an optional tag to be applied to the resulting image
# . specifies the build context which is the current directory
# where the file is located

It will take a minute for the command to execute but once it is done we should see a success statement at the end.

Let’s double-check to be on the safe side.

docker images
#shows docker images
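As an optional smoke test, we can override the image's tail entrypoint and confirm boto3 imports cleanly in a throwaway container:

docker run --rm --entrypoint python3 week16-image -c "import boto3; print(boto3.__version__)"
# --rm removes the temporary container when the command exits
# --entrypoint python3 replaces the tail entrypoint so the -c one-liner runs instead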

Excellent job team. Let’s move forward with locating our image on Docker Hub, a centralized platform for storing and managing Docker images that makes it easy to discover and use pre-built images for popular software packages and programming languages.

Step 3: Use either the Ubuntu or Python official images on Docker Hub

Let’s head over to Docker Hub and search for Ubuntu in the top search bar.

After selecting Ubuntu we will copy the pull command located on the top right-hand side.

Next, we will paste it into our command line and execute it.

docker pull ubuntu

By executing this command Docker will connect to the Docker Hub registry, download the latest version of the Ubuntu image, and store it on your local Docker host. Once the image is downloaded, you can use it to create and run containers based on Ubuntu.

Awesome Opossum!

Step 4: Download any three repos to the local host

For this step, I went ahead and created three new repositories in my GitHub account.

This step is fairly easy as we will just need to clone our repositories using the clone command.

git clone <repository link>
# the repository link can be copied from the repo's page on GitHub
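With three repos, that's three clone commands. A sketch, assuming the repos are named week16demo1 through week16demo3 to match the bind mounts we'll create in Step 5 (substitute your own GitHub username and repo names):

git clone https://github.com/<username>/week16demo1.git
git clone https://github.com/<username>/week16demo2.git
git clone https://github.com/<username>/week16demo3.git
# each clone creates a directory of the same name under the current directory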

Step 5: Create three Ubuntu containers

For this step, we will need to log in to Docker Hub. Luckily this is very easy as we will just use the docker login command and enter our username and password. Remember the password will not appear as you type it.
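docker login
# prompts for our Docker Hub username and password; input is hidden as you type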

Next, we need to run this long command for each container we are creating. The rundown of this is:

docker run -d -t --name container1 -v "$(pwd)"/week16demo1:/Week16 ubuntu

docker run: tells Docker to create and run a new container based on an image.

-d: runs the container in the background (detached mode), so that the command prompt is immediately returned to the user.

-t: attaches a pseudo-tty to the container, allowing you to interact with the container's command prompt.

--name container1: sets the name of the container to "container1".

-v "$(pwd)"week16demo1:/<Week16: mounts a volume from the host to the container. $(pwd) returns the current directory, and week16demo1 is the name of the directory that will be created inside the container. /<Week16 specifies the mount point for the volume inside the container, and <Week16 is the name of the directory that will be created at the mount point inside the container.

ubuntu: specifies the image to use for the container. In this case, it's the official Ubuntu image.
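The same command, with the name and host directory changed, creates the other two containers (assuming the remaining repos are named week16demo2 and week16demo3):

docker run -d -t --name container2 -v "$(pwd)"/week16demo2:/Week16 ubuntu
docker run -d -t --name container3 -v "$(pwd)"/week16demo3:/Week16 ubuntu
# same flags as before, with a different name and host directory for each container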

Let’s double-check that all three have been created:
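docker container ps
# lists running containers; container1, container2, and container3 should all appear

It looks like we were successful!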

Step 6: Log into each container and verify access to each repo directory

Our last objective is to log in to each of our containers and verify access to each repo directory. To accomplish this we will use the following command.

docker exec -it container1 bash

When we run this command, Docker will attach an interactive terminal to the container named “container1” and start a Bash shell inside the container. We will use this shell to execute commands and interact with the container’s file system and environment. When we are done we will simply use the exit command to return to our original command line.
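Once inside, a couple of quick commands verify the bind mount (using the /Week16 mount point from our run command):

ls /Week16
# the cloned repo's files should be listed here, confirming the bind mount works
exit
# returns to the host's command line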

We are successful! Excellent work team, by doing this we have ensured that the development environment is portable and that it can be easily replicated on different machines, streamlining the workflow and enabling efficient collaboration. The only thing left to do is tear it down.

docker container rm -f container1 container2 container3
#-f stops and removes the containers, which are still running tail -f /dev/null

We can use the docker container ps command again to check that all containers have been removed.

docker container ps
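# lists running containers; the output should now be empty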

With that being completed we are all done! Excellent work team and I will see y’all next time!
