[Project 11] Building your own Docker container using official Python image and Boto3 library

Do Hyung Kim
8 min read · May 21, 2023


Greetings,

I would like to introduce containerization, an essential technology for modern software delivery and deployment.

Previously, we explored several AWS services and used the Python language to control AWS resources via the AWS Console, the Command Line Interface, or a Software Development Kit such as the Python Boto3 library. All of this work goes into designing and creating software and applications.

Once such software is created, it needs to be delivered or deployed to testing and production teams. These teams may then face a big hassle: manually recreating the exact runtime environment of the developer team in order to run the software on their own host systems. Containerization is a good solution to this problem, and it facilitates the continuous deployment of software. One of the container service providers is Docker.

Terminology

Docker is a set of platform as a service (PaaS) products that use operating system-level virtualization to deliver software in packages called containers. It was first released in 2013 and is developed by Docker, Inc. Docker automates application deployment using lightweight containers where applications can work efficiently in different environments.
(Source: Wikipedia, https://en.wikipedia.org/wiki/Docker_(software))

Containerization is the process of packaging an application together with all of its dependencies and libraries into a container. This makes it easy to run and deliver the entire application regardless of differences in the underlying system environments.

To deliver exactly the same runtime environment to the testing team, a Docker image should be created from the containerized application package.

A Docker image is such a packaged template: it defines how a container is created layer by layer, from the bottom to the top, in chronological order, like a well-decorated multilayered cake (see the picture below). The Docker image thus tells the Docker platform how to run the container, one image layer at a time. Once the Docker image is delivered to the testing and production teams, they can realize a live container with the running application using the docker run command; it executes the command embedded in each image layer, starting from the bottom and working through the entire Docker image to the top. It sounds as if you are eating the layered cake starting from the bottom! :-)

Would you want to eat from the bottom to the top? (Layer cake image: Credit to Annie Spratt, Source: Unsplash.com)
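You can even inspect these layers yourself: the docker image history command lists every layer of an image together with the instruction that created it. A quick sketch (the image name is just an example):

docker image history ubuntu    # one row per layer, newest at the top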

You can also confirm whether a created container is actively running (docker ps or docker container ls) or list every container, including stopped ones (docker container ls -a).
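For example (the output depends on what is running on your machine):

docker ps                  # running containers only
docker container ls        # same as docker ps
docker container ls -a     # all containers, including stopped ones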

Once you are satisfied with your own Docker image, you can upload it (docker push) to share with other users, or download (docker pull) any container image you need, through the official Docker image hosting repository called Docker Hub. You can also create your own Docker image (docker build).
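A quick sketch of these commands (the account and image names are hypothetical):

docker build -t myaccount/myimage:1.0 .    # build your own image from a Dockerfile
docker push myaccount/myimage:1.0          # upload the image to Docker Hub
docker pull ubuntu                         # download an official image from Docker Hub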

To support all of the above activities, Docker Engine controls the process of building, shipping, and running container-based applications. The engine runs a server-side daemon process on the Docker host that manages images, containers, networks, and storage volumes. The engine also offers a client-side command-line interface (CLI) that lets users (the Docker client) interact with the daemon through the Docker Engine API.

Docker Architecture (Source: Fawaz Paraiso, Stéphanie Challita, Yahya Al-Dhuraibi, Philippe Merle. Model-Driven Management of Docker Containers. 9th IEEE International Conference on Cloud Computing (CLOUD), Jun 2016, San Francisco, United States. IEEE CLOUD 2016. <hal-01314827>)
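You can see this client/daemon split on your own machine: docker version prints a Client section (the CLI) and a Server section (the Docker Engine daemon) separately.

docker version    # shows Client (CLI) and Server (daemon) details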

Container storage is ephemeral: data written inside a container is not persisted anywhere else, so it disappears when the container is removed. To make the data persist, you can use the bind mounts strategy: it mounts files and directories from a designated path in the file system of your local physical host machine into the container. Exactly the same files and directories then remain available on the local host system even after your containers are removed. If you make changes to the files and directories on the local host machine, the same changes appear inside the containers as well.

Bind mounts: the files and directories in your container point to a path on your physical local host system, so a synchronized copy of those files and directories is stored on the local host machine.
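In CLI terms, a bind mount is created with the -v flag of docker run; a minimal sketch with hypothetical paths:

docker run -d -t -v /path/on/host:/path/in/container ubuntu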

Prerequisites

  • AWS Account with an IAM User (AdministratorAccess permission)
  • Interactive Development Environment (IDE): AWS Cloud9 environment or Visual Studio Code (VS Code).

In my personal experience, both AWS Cloud9 and VS Code share many similar features. I am using VS Code for this article.

Procedures

1. Build a Dockerfile for Boto3 / Python using either the Ubuntu or the Python official image from Docker Hub.

The content of the Dockerfile
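A minimal sketch of such a Dockerfile, consistent with the layer descriptions below (the script name check_python.sh is an assumption; the line numbers match the references that follow):

# Dockerfile (sketch; the script name check_python.sh is hypothetical)
FROM ubuntu:latest

# Update the package list and install the pip package manager
RUN apt-get update && apt-get install -y python3-pip

# Install the latest Python 3 and the AWS Boto3 library
RUN apt-get install -y python3 && pip3 install boto3

# Set the working directory inside the container
WORKDIR /wk16project

# Copy the test script from the host into the container
COPY check_python.sh .

# Run the script whenever a container starts from this image
CMD ["bash", "check_python.sh"]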

We created a new Docker image layer by layer using the above Dockerfile, as shown below. Let's imagine that you are creating your own layered cake, following each of the recipes and decorations below! ;-)

  • Foundational operating system layer (line 2): pull the official Ubuntu image from the Docker Hub registry as the base image.
  • Second layer (line 5): update the Ubuntu package list and install the python3-pip package manager.
  • Third layer (line 8): install the latest version of Python 3 and the AWS Boto3 library.
  • Fourth layer (line 11): set a wk16project subdirectory as the working directory of our container.
  • CMD (line 17): designate the command to be executed when a container is run.

The CMD is the command that should be executed upon running a container based on the above Docker image.

To test the functionality of the container, we also prepared a shell script that reports the current version of the python3 program (see below). We then copied that script from our host machine into the designated subdirectory inside our container (line 14). Lastly, the shell script is executed (line 17) whenever we run a new Docker container based on the new image.
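A minimal version of such a script (the file name check_python.sh is an assumption):

#!/bin/bash
# Report the version of Python 3 available in the current environment
python3 --version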

Using the docker build command, we named the new image "first.img" with the "-t" option, and the image was built by pulling the official base image from Docker Hub from scratch, without reusing previously cached layers ("--no-cache"). Lastly, the build was run in the present working directory, indicated by a dot (".").
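Put together, the command takes this form:

docker build -t first.img --no-cache .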

The docker build command creates a new Docker image.

The above output showed that each image layer was constructed and piled on top of the previous layers. The new image, "first.img", was also confirmed using the "docker image ls" command, as shown below.

Find your new Docker image, "first.img".

To test our new Docker image, we created a new container running the image. The "--rm" option automatically removes the container as soon as it finishes running, so we did not have to worry about deleting it manually.

Our new container started and successfully executed the embedded shell script, printing the version of Python 3 installed in the image (shown below) and confirming that our new image works. Using this Docker image, anyone can create a live container that reports the same Python 3 version, as long as their system runs Docker.
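The command and its output take this form (the exact version number will vary with the base image):

docker run --rm first.img
Python 3.x.y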

Run a new container from your new Docker image to get the version number of Python 3 inside the container.

2. Download any 3 GitHub Repositories to your local host.

Let’s clone 3 GitHub repositories with the following commands.

git clone https://github.com/dohkim04/my-python-repo.git
git clone https://github.com/dohkim04/gold-member.git
git clone https://github.com/dohkim04/boto3_python_scripts.git

The contents of each GitHub repository were downloaded into their respective clone directories on your local host system, as shown below:

3. Create 3 Ubuntu containers.

Now we are going to mount the cloned repo directories in your local host file system into containers using the bind mounts strategy. Before doing so, let's create 3 individual containers.

docker container run -d -t --name ubuntu1 -v $(pwd)/<cloned repo directory>:/wk16project ubuntu

The above command mounts each cloned repository directory on your local host system to the /wk16project directory in your container.

Below is an explanation of each option used in the command, followed by the full commands for all three containers.

  • $(pwd) : expands to the path of the current working directory.
  • -d (detached) : run the container in the background and print the container ID.
  • -v (volume) : bind mounts $(pwd)/<cloned repo directory> on the host to the /wk16project directory inside the container.
  • -t (allocate a pseudo-TTY) : allows you to interact with a terminal process inside the container.
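Putting this together for all three cloned repositories (the container names ubuntu2 and ubuntu3 are assumptions following the ubuntu1 pattern):

docker container run -d -t --name ubuntu1 -v $(pwd)/my-python-repo:/wk16project ubuntu
docker container run -d -t --name ubuntu2 -v $(pwd)/gold-member:/wk16project ubuntu
docker container run -d -t --name ubuntu3 -v $(pwd)/boto3_python_scripts:/wk16project ubuntu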

Since we have created 3 containers, let's check their status. All 3 containers are present and up and running (docker ps or docker container ls).

4. Each container should have a bind mount to one of the repository directories.

Next, let's inspect the bind mount status of each container. Under the "Mounts" section, each source (a cloned repo directory on your local host system) is mounted to its destination (a designated directory path in your container). Each container's state also indicates that it is running (see below).

docker container inspect <docker_container_name>
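If you only want the mount information, you can filter the output with a Go template:

docker container inspect --format '{{json .Mounts}}' <docker_container_name>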

Everything looks fine so far! :-)

5. Log into each container and verify access to each repository directory

Lastly, let's start an interactive bash session inside each container with the following command.

docker container exec -it <container-name> bash

You can successfully log into each container via an interactive bash shell session. You can also see the bind-mounted wk16project folder, which contains the contents of the cloned repo directory present on your physical local host system (confirmed, but not shown in the screenshot below).
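For example, for the first container:

docker container exec -it ubuntu1 bash
ls /wk16project    # run inside the container; lists the bind-mounted repo files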

Summary

Using the bind mounts strategy with Docker containers, you can conveniently access any updated files and directories on your local physical host system simply by checking the wk16project directory inside your container. This is very useful for software developers who share a big project folder on a local host system. Instead of the hassle of manually locating the path of their own project directory every time, developers can simply enter their own Docker containers on their own computers to pull up the same copy of their work stored in the big project folder. Once they update and save their work inside their containers, all the changes are reflected back to the originating project folder on the local host machine. That way, you will not interfere with or mistakenly alter other developers' project folders or files, either.

Thank you for reading my article. Feel free to contact me at doh.kim04@gmail.com if you have any questions.

You are welcome to connect with me on LinkedIn via my Medium profile, located on the right side of my Medium article. Thank you for sharing your time with me and have a wonderful day! :)

Cheers,

Do Hyung Kim
