Building a Docker Image for Boto3: A Practical Guide with Bind Mounting

Sarra Barnett
Published in Nerd For Tech
10 min read · May 3, 2023

This project task focuses on a Python-based application development team that seeks to streamline their workflow by creating a Dockerfile for Boto3/Python and utilizing customized Python containers with bind mounts to access their code repositories. With this approach, the team can easily access and collaborate on their codebase while ensuring portability and consistency across different machines. To help you understand this project better, I’ve provided some key concepts and terms explained below.

What is Containerization?
Containerization is the process of packaging an application and its dependencies into a container, allowing it to run consistently across different computing environments. Containers are self-contained, isolated environments that share the host’s kernel but have their own filesystem and resources. This simplifies deploying and scaling applications, as multiple containers can run on the same host system without interfering with each other, and each container can be customized and managed independently.

What is Docker?
Docker is a popular open-source platform that provides tools and services for containerization. It enables developers to easily create, deploy, and run applications in containers, which are lightweight and portable environments that can run consistently across different computing environments. Docker uses a Dockerfile, which is a script that defines the configuration of a container, to build and package applications along with their dependencies. It also provides a centralized registry called Docker Hub, where developers can store and share their container images. Docker has become a widely adopted technology for containerization due to its ease of use, portability, and scalability.

What is Boto3?
Boto3 is a Python software development kit provided by AWS that allows developers to write software that makes use of services like Amazon S3 and Amazon EC2. It provides an easy-to-use API to interact with AWS services, which makes it an essential tool for anyone working with AWS. With Boto3, developers can write scripts to automate AWS services or build applications that rely on AWS services. It’s a powerful tool that simplifies the process of interacting with AWS resources.

Prerequisites:

  • Docker installed on your local machine or IDE
  • A Docker Hub Account
  • A GitHub account with access to the three code repositories

Objectives:

  • Build a Dockerfile for Boto3
  • Use the official Python image on Docker Hub
  • Download any three repos to the local host
  • Create three containers
  • Each container should have a bind mount to one of the repo directories
  • Log into each container and verify access to each repo directory

  • Build a Dockerfile for Boto3:

To build my Dockerfile, I used the Dockerfile Reference, here. You can also find best practices for creating an efficient and effective Dockerfile, here.

A Dockerfile is a text file that specifies, in sequential order, all the commands required to build a particular Docker image. Docker builds the image automatically by following the instructions outlined in the Dockerfile. A Docker image consists of read-only layers, each representing a single Dockerfile instruction. These layers are stacked upon one another, with each one representing a change from the previous layer. In effect, each Dockerfile instruction creates a new layer.

In this project, we will be using three commands to create a basic Dockerfile. While there are many other commands available, we will focus on these three for simplicity.

  • FROM specifies the base image to use for the build, and is typically the first command in a Dockerfile.
  • RUN executes a command during the build process to install software, update packages, or perform other tasks necessary to prepare the image.
  • CMD specifies the default command to run when the container starts.

To begin building your Docker image, create a new file called Dockerfile in your local environment. By default, Docker will search for a file named “Dockerfile” in the build context directory to build the image.

To initiate the build process, we need to start our Dockerfile with a FROM command. This command sets the base image for subsequent instructions in a new build stage. Any valid image can be used as a base image. In this project, we will use a Python image from Docker Hub. To find it, go to Docker Hub and search for “Python”. The first search result should be the Docker Official Image for Python.

Once you’ve located the Python Docker Official Image, click on it and select a tag. Docker recommends the Alpine image: it’s a full Linux distribution that’s tightly controlled and small in size (currently under 6 MB). That’s the tag I chose for this project.

Here is the format for the first command:

FROM <image>:<tag>

And here’s my example of how to use it to specify the Python 3.10 Alpine image as the base:

FROM python:3.10-alpine

In order to install the required dependencies and packages for Boto3 and Python, you can add the appropriate RUN commands to your Dockerfile. These commands will be executed during the image build process to ensure that the necessary dependencies are installed within the container.

RUN <command>
RUN pip install --upgrade pip && \
    pip install --upgrade awscli && \
    pip install --upgrade boto3

Every RUN command adds a new layer to the Docker image. It’s usually recommended that each layer perform a single logical task, and keeping the number of layers down helps keep the image small.

By using ‘&&’ in the RUN command, multiple commands can be executed in a single layer, which helps to reduce the size of the resulting image. Additionally, if one of the commands fails, the entire build process will fail, which can help identify errors early in the process. This is known as a “fail-fast” approach, which can be useful for ensuring that the resulting image is reliable and consistent.

The backslash \ character can be used to continue a single RUN instruction to the next line in a Dockerfile.

Finally, we will include a CMD instruction in the Dockerfile. The primary function of a CMD is to set default parameters for a running container. CMD doesn’t execute any command during the build process, but instead defines the intended command for the image. As this is a Python image, we will set python as the CMD.

CMD <command>

CMD ["python"]

Below is the complete text document of my Dockerfile:
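Assembled from the three instructions covered above, it reads:

```Dockerfile
# Use the lightweight Alpine variant of the official Python image
FROM python:3.10-alpine

# Upgrade pip and install the AWS CLI and Boto3 in a single layer
RUN pip install --upgrade pip && \
    pip install --upgrade awscli && \
    pip install --upgrade boto3

# Start the Python interpreter by default when the container runs
CMD ["python"]
```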

Once you have finished creating your Dockerfile, save it and then open your terminal. In the terminal, navigate to the directory where you have saved your Dockerfile.

It’s time to create your Docker image! Here is the format for the command:

$ docker image build <OPTIONS> <PATH>

The option “-t” allows you to tag your image with a custom name. The period “.” specifies that the build context is the current directory.
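For example, to build from the current directory and tag the image boto3-image (the name here is just a placeholder — use whatever suits your project):

```shell
$ docker image build -t boto3-image .
```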

As Docker builds your image, it displays the process in real-time, and upon completion, it should output a message indicating a successful build along with the corresponding image ID.

I further confirmed the image was built by running the command:

$ docker image ls

Excellent! With our image successfully built, let’s proceed to the next step.

  • Download any three repos to the local host:

You may skip this step if you already have multiple repositories locally, otherwise, you can follow along to learn how to obtain them from a remote repository.

To get started, go to GitHub and select three repositories of your choice. If you don’t have that many of your own, you can fork and clone any public ones to your environment. Once you’ve chosen your repositories, go to the repository page, click on the green “Code” button, and copy the HTTPS link.

To clone the repository from GitHub to your local environment, open your terminal and type “git clone” followed by the HTTPS link you copied from the GitHub repository. Press Enter and you should have a copy of the repository in your local environment.

I repeated this process for three different repositories and confirmed their successful cloning by using the “ll” command to list all the directories.
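The sequence looks like this (the repository URLs below are placeholders for whichever repos you choose):

```shell
$ git clone https://github.com/<your-username>/<repo-one>.git
$ git clone https://github.com/<your-username>/<repo-two>.git
$ git clone https://github.com/<your-username>/<repo-three>.git
$ ls -l    # or the "ll" alias, if your shell defines it
```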

  • Create three containers, each with a bind mount to one of the repo directories:

You can find the Docker Reference for bind mounts, here.

A bind mount is a method for mounting a directory from the host machine into a container. When you use a bind mount, a file or directory on the host machine is mounted into a container at a specific location, and any changes made to the files or directories in that location will be reflected on both the host and container side.

This provides a way to share data between the host and container or between multiple containers, and can be used to provide persistent storage for containers. Bind mounts can also be used to share configuration files or data between a host and container or to access files on the host machine that are required by the application running in the container.
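As a quick illustration of this two-way behaviour, once a container is running with a bind mount (the container and directory names below are placeholders), a file created on the host side shows up inside the container immediately:

```shell
$ touch <repo-one>/hello.txt
$ docker exec <container_name> ls /<repo-one>    # hello.txt appears in the listing
```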

You can create a container with a bind mount to a local repository by running this command:

docker run -it -d --name <container_name> -v "$(pwd)"/<repo_directory>:/<repo_directory> <image_name>
  • The -v option is short for “volume” and defines a mount point for the container. Its argument takes the form <host_path>:<container_path>, so the command above mounts a repo directory from your current working directory into the container.
  • The -i option in the docker run command stands for "interactive mode". It instructs Docker to keep the standard input (stdin) open even if the container is not attached to a terminal. This allows you to interact with the container's command prompt and send input to its running processes. Without the -i option, a container whose main process reads from stdin — like the Python REPL this image starts — would exit as soon as stdin closes.
  • The -t option stands for "Allocate a pseudo-TTY". It enables a terminal interface with the container's command line interface, allowing the user to interact with the container's processes as if they were running directly on the host.
  • The -d option stands for "detached mode", which means the container runs in the background as a daemon process. It allows you to start a container and have it run in the background, without attaching to it. This is useful for long-running processes that you don't need to interact with directly.
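Putting the options together, here is an example for the first container, assuming the image was tagged boto3-image and the first repo lives in ./repo-one (both names are placeholders):

```shell
$ docker run -it -d --name repo1-container \
    -v "$(pwd)"/repo-one:/repo-one \
    boto3-image
```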

To get detailed information about the container, run:

$ docker inspect <container_name>

This command returns a JSON object containing all the configuration and status information of the specified container, including details about the container’s network settings, volumes, environment variables, and more. It’s a useful command for debugging, troubleshooting, and understanding the configuration of a Docker object.

Here we can view the specifics of the bind mount that has been attached to the container:
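Since the full inspect output is long, you can jump straight to the mount details with Docker's --format flag, which accepts a Go template:

```shell
$ docker inspect --format '{{ json .Mounts }}' <container_name>
```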

To create the other two containers with bind mounts, simply execute the same command used previously.

You can confirm that the containers were successfully created by running the command:

$ docker container ls
  • Verify access to each repo directory:

To verify access to a repository in a container, you can log into the container and navigate to the bind mount directory where the repository is mounted. Once there, you can use standard commands such as ls to list the contents of the repository directory and cd to navigate into subdirectories.

To log into a container, run the command:

$ docker container exec -it <container_name> /bin/sh

I confirmed the success of the bind mount by checking that the repository is present inside the container with the “ls” command.
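A typical verification session looks like this (the container and repo names are placeholders; `/ #` is the shell prompt inside the Alpine-based container):

```shell
$ docker container exec -it repo1-container /bin/sh
/ # ls /repo-one    # the repo's files should be listed here
/ # exit
```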

You can also run these commands to check that all dependencies have been installed successfully and view their current versions:
$ pip --version
$ aws --version
$ pip show boto3

To exit the container, you can run the command exit or use the key combination Ctrl + d. This will exit the container and bring you back to your local host terminal.

Awesome work!
In this project, we gained hands-on experience in building Docker images and containers and using bind mounts to access host directories inside containers. I hope it provided you with a basic understanding of how Docker works and how it can be used to create portable and efficient environments for running applications.

Thank you for following along with me on this beginner project with Docker. Feel free to follow and connect with me on LinkedIn to stay tuned for more Docker projects and Cloud/DevOps content!

☁️👩🏼‍💻☁️
