Creating Docker Containers

Katie Sheridan
7 min read · Apr 29, 2023


What comes to mind when you hear the word “containers?” When I first heard about containers, I thought about shipping containers. You know, those giant metal boxes stacked high in a multitude of different colors, tagged by different brands, to be hauled off to who knows where. I live near a port, so I see a ton of them along the river on my commute to work. Imagine a shipping container that contains within it: the blueprints, the land you intend to build on, the materials to build the home, and the HOA that manages the upkeep of that home. It’s kind of like how IKEA sends you all the materials in the box for your project and ships it all to you. Once you’ve put the pieces together, it becomes your finished project. It’s no longer all the separate pieces; it’s a single entity.

When it comes to Docker, the term container is just like that.

A container is a bundle of components, packaged together, so that everything needed to build, run, and maintain an application are layered together as one unit. Docker is just one platform that virtualizes this kind of software for your cloud environment.

Benefits of using containers:

  1. Allows a team to easily create a custom Dockerfile, which can be customized to include any libraries or dependencies required by the team’s Python application.
  2. Allows the team to easily access and configure parts of the code as needed.
  3. Allows team members to log into each container and verify access to each repo directory, ensuring that the containers have been properly configured and that the team can effectively collaborate on their codebase.
  4. Allows a portable environment that can be easily replicated on different machines, streamlining their workflow and enabling efficient collaboration.

Scenario: A software development team is working on a Python-based application. The team needs to simplify their development environment setup and ensure consistent execution of their code across different machines. They have decided to use Docker containers to accomplish this. The team wants to download three separate repositories to their local hosts, create three containers using customized Dockerfiles, and give each container a bind mount to one of the repositories to allow collaboration.

REQUIREMENTS:

  • Basic Knowledge of Docker, Linux CLI
  • Cloud-based IDE such as AWS Cloud9
  • Installation of Docker, and any of their updates in the IDE
  • Free account on Docker Hub

FOUNDATIONAL GOALS:

  • Build a Dockerfile for Boto3/Python
  • Use either the official Ubuntu or Python image on Docker Hub
  • Download any three repos to local host
  • Create three Ubuntu containers
  • Each container should have a bind mount to one of the repo directories
  • Log into each container and verify access to each repo directory

Let’s build out some containers!

Building the Containers and Files 🐳

Step 1: Prepare our Environment.

Our very first step is to set up the components we need to make the container. If you haven’t already downloaded Docker, their FAQ page is pretty helpful for this process. Navigate to your IDE and start your instance. For me, this means going into Cloud9. I ran the following commands to install Docker and bring everything up to date.

#How to install and start Docker in your IDE
mkdir <directory-name>
cd <directory-name>
sudo yum update -y
sudo yum install -y docker
sudo usermod -a -G docker ec2-user
sudo systemctl start docker.service
sudo systemctl enable docker.service
sudo systemctl status docker.service

#The following commands give you information about the versions you have.
sudo systemctl is-active docker
docker version
docker info

With the latest updates installed, we can begin the next step.

Step 2: Build the Dockerfile

In the IDE, go to File > New From Template > Python File > Save As, and name the file Dockerfile, without the .py at the end. When you save it, make sure it is in a directory set aside for your Docker files.

Make sure the file is named exactly Dockerfile, or docker build won’t be able to find it.

Edit the new file using vim or vi. To complete the script for our file, we will need to go to Docker Hub and find an image. The steps require an Ubuntu image, so that’s what we’ll pick from the Docker Hub menu.

And I picked the “jammy” image. All of these images are official and supported, so there isn’t one specific image you must choose. Grab the tag and insert it into the FROM line of the Dockerfile. (For best practices writing Dockerfiles, check out the official documentation.)
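To make this concrete, here is a minimal sketch of what the Dockerfile might contain, assuming the ubuntu:22.04 (“jammy”) tag and the Boto3/Python goal from earlier. The package list and the /Week16DockerFile working directory are my assumptions for a typical setup, so adjust them to your project:

```dockerfile
# Base image: the "jammy" tag of the official Ubuntu image on Docker Hub
FROM ubuntu:22.04

# Install Python and pip, then Boto3 for AWS scripting
# (package choices are an assumption for a typical Boto3/Python setup)
RUN apt-get update && \
    apt-get install -y python3 python3-pip && \
    rm -rf /var/lib/apt/lists/*
RUN pip3 install boto3

# Default working directory; the bind mount in a later step can target this path
WORKDIR /Week16DockerFile

CMD ["bash"]
```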

Save the file, exit, and now we’re ready to build the Docker Container.

Step 3: Build the Docker Image

Back in the CLI, run the following command. Note the trailing dot, which tells Docker to use the current directory as the build context:

docker build -t <yourname>/ubuntu:22.04 .

The CLI will send over 100 lines of information and run for a minute or two before finally giving the success message at the end.

Run the following command to confirm your image has been created; it should be listed at the top.

docker image ls

I still had the Ubuntu page on Docker Hub open in another tab, so I went back to the image we referenced. I essentially want to pull from that specific image and make sure it is up to date.

Run the command:

docker pull ubuntu

And get the following:

This tells us we are working with the default tag “latest.”

Step 4: Download 3 Repositories to the Local Host

I already have 3 repositories over in GitHub, so I navigated over there to clone them. You’ll need to clone each separate repo as follows, substituting your own repository URLs.

git clone https://github.com/Katie-Sheridan/gold-member
git clone https://github.com/Katie-Sheridan/gold-repo-bank
git clone https://github.com/Katie-Sheridan/boto3_py_scripts

Run ls -l to see if the files exist there.

All 3 repositories are listed, shown in blue, and we’re ready to move on.
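If you prefer a scripted check, a small loop can confirm each clone landed. The repo names below are the ones from this demo; the sketch simulates the clones with empty stand-in directories in a temp dir so it runs anywhere, but in practice you would run the loop in the directory where you cloned:

```shell
# Check that each cloned repo directory exists on the local host.
# (mkdir creates stand-ins for the clones so this sketch is runnable anywhere.)
cd "$(mktemp -d)"
mkdir gold-member gold-repo-bank boto3_py_scripts
for repo in gold-member gold-repo-bank boto3_py_scripts; do
  [ -d "$repo" ] && echo "$repo: ok" || echo "$repo: MISSING"
done
```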

Step 5: Create Three Containers

What I want to do is essentially bind the directory to the container. I can do this using a bind mount. Simply put, this will allow the container to access the repository’s files.

Log in to Docker via the CLI with docker login, entering your username and password.

As I create the containers, I want to run a few extra arguments in the CLI such as -d (to detach the container and run it in the background), -t (to assign a TTY and allow a terminal in the container), and -v (to bind mount a volume).

I ran the following command for each container, using each repository for a separate container.

docker run -d -t --name <containerName> -v "$(pwd)"/<repositoryName>:/<directory_name> <image_name>
docker run -d -t --name MemberContainer -v "$(pwd)"/gold-member:/Week16DockerFile ubuntu
docker run -d -t --name GoldContainer -v "$(pwd)"/gold-repo-bank:/Week16DockerFile ubuntu
docker run -d -t --name Boto3Container -v "$(pwd)"/boto3_py_scripts:/Week16DockerFile ubuntu
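A common pitfall with the -v argument: Docker needs an absolute host path, and if you join "$(pwd)" and the repo name without a / separator, the shell concatenates them into a path that doesn’t exist. A quick sketch (using /tmp and gold-member as stand-ins) shows the difference:

```shell
# Demonstrating host-path construction for a bind mount.
# /tmp and "gold-member" are stand-ins for your working directory and repo.
cd /tmp
echo "$(pwd)"gold-member     # /tmpgold-member  -- no separator, invalid path
echo "$(pwd)"/gold-member    # /tmp/gold-member -- valid absolute path
```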

Each of those long strings of random characters is a new container’s ID. To make it “pretty” I ran docker container ps:

Additionally, if you run docker inspect <containername> you can verify the mount. It will output a lot of information; the portion we care about here appears in its own block under “Mounts”.

I verified this for each container which means we’re onto the last step.

Step 6: Log into Each Container and Verify Access to Each Repo Directory

To verify access, we just need to change the command to exec, as in execute.

docker exec -it <container_name_or_id> bash

docker exec -it MemberContainer bash
docker exec -it GoldContainer bash
docker exec -it Boto3Container bash

And run the same for the other two containers. Don’t forget to run exit to leave the container’s root shell.

We have successfully accessed all three of the containers. As good practice, tear down any resources you no longer need.

To delete the containers you could run one of the following:

docker container rm <container_name>
#force remove the container, even if it is running
docker container rm -f <container_name>

I opted for option 2. I also ran docker container ps to see if I had any containers still listed, and got the all-clear!

This concludes my basic demo of building Dockerfiles and Containers. I hope you found this demo informative. If you have any tips, tricks or questions, please let me know!

Feel free to connect with me on LinkedIn too!


Katie Sheridan

DevOps/Cloud Engineer. I will be using Medium to blog about my projects and any tips or tricks I come across.