Networking With Docker Containers

Shomarri Romell Diaz
Published in DevOps Dudes · Mar 7, 2023 · 8 min read

Creating custom dockerfiles to build Docker containers that clone GitHub repositories, then adjusting the networking between the containers.

Project Overview:

In this tutorial we are going to create three dockerfiles that each clone a GitHub repository. We will build a custom image from each dockerfile and then build a container from each image. You will also learn networking concepts in Docker and how to connect various containers to each other.

Project Prerequisites:

  • Basic Docker knowledge
  • Basic understanding of Linux commands
  • Docker Desktop installed
  • Docker Hub account
  • GitHub repositories
  • Access to a CLI
  • Code editor, e.g. VS Code

Project Objectives:

  • Create three dockerfiles that automatically connect to GitHub
  • Each dockerfile should connect to a different GitHub repo
  • Place one container on a network called Development
  • Place the other two on a network called Production
  • Verify the container on the Development network cannot communicate with the other containers
  • Verify the containers on the Production network can communicate with each other
  • Clean up the containers

Step One: Create three dockerfiles that automatically connect to Github

Create new directory

Create a new directory to work out of for this tutorial via the command line interface (CLI).
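A minimal example, using a hypothetical directory name:

mkdir docker-networking
cd docker-networking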

Create new file in text editor

Open up the text editor of your choice to create your dockerfiles. In this tutorial I will be using Visual Studio Code (VS Code).

We will be creating three dockerfiles, one for each of the three containers we will build: Development, Production1 and Production2.

Proceed to open three tabs in VS Code to create your dockerfiles. I have named mine “dockerfile.dev”, “dockerfile.prod1” and “dockerfile.prod2”.
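If you prefer the CLI, you can also create the three empty files in one go:

touch dockerfile.dev dockerfile.prod1 dockerfile.prod2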

Create Development dockerfile

A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. See the Dockerfile reference guide on docs.docker.com for the full details.

Our dockerfile will be as follows:

FROM ubuntu:latest
RUN apt-get -y update
RUN apt-get -y install git
RUN git clone <repo_URL> /<new_directory_name>

Commands explained

We will be using the standard Ubuntu image as the base for our dockerfile, which can be found on Docker Hub.

The “FROM” instruction defines the base image you want to build from. You can search for base images on hub.docker.com.

The “RUN” instruction will execute any commands in a new layer on top of the current image and commit the results. The resulting committed image will be used for the next step in the Dockerfile.

We use “RUN” to install git and to clone the chosen repository into our image, and subsequently our container once we build it. The “/<new_directory_name>” part tells Docker to create a new directory inside the image and place the GitHub repository clone in that directory. Choose a name for your new directory; I will name mine “myrepo”.

The final dockerfile should resemble the following:
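A filled-in sketch of the development dockerfile, using a placeholder repository URL and the “myrepo” directory name chosen above:

FROM ubuntu:latest
RUN apt-get -y update
RUN apt-get -y install git
# Placeholder URL, substitute one of your own repositories
RUN git clone https://github.com/your-username/your-repo.git /myrepo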

Repeat the process for your “prod1” and “prod2” dockerfiles, cloning two additional, different GitHub repositories.

Step Two: Build Docker Image

Navigate to your CLI and make sure that you are in the directory you created earlier for this project. Run the “ls” command to verify that your dockerfiles have been successfully created/saved.

Run the following command three times to build an image from each dockerfile.

docker build -t <image_name> -f dockerfile.dev .
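For example, the three builds might look like this; the image names here are hypothetical, so pick your own:

docker build -t devgitimage -f dockerfile.dev .
docker build -t prod1gitimage -f dockerfile.prod1 .
docker build -t prod2gitimage -f dockerfile.prod2 .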

The “-t” flag stands for tag and is used to name your image.

The “-f” flag stands for file. It is used when you have several dockerfiles in a directory; it tells Docker which dockerfile to build the image from.

The “.” simply means build from the current working directory, which becomes the build context.

Run the following command to verify the images were created:

docker image ls

Step Three: Build three containers from each image

Use the following command to build a container from each of the “dev”, “prod1” and “prod2” images:

docker run -dt --name <container_name> <image_name>
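For example, using the hypothetical image names from the build step; the container names below are the ones referenced throughout the rest of this tutorial:

docker run -dt --name devgitcontainer devgitimage
docker run -dt --name prod1gitcontainer prod1gitimage
docker run -dt --name prod2gitcontainer prod2gitimage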

Docker run command explained

“docker run” creates and starts the container

“-d” is for detach, to run the container in the background

“-t” stands for TTY; it allocates a pseudo-terminal, which keeps the container running

Once you have created your containers, run the “docker container ls” command to verify the containers have been created.

Confirm ability to access the containers

To get inside your containers and access the bash terminal run the following command on each container:

docker container exec -it <container_name> bash

The “-it” flags tell Docker to hold open an interactive terminal.

Run the “ls” command once inside your container to confirm that you can see the new “myrepo” directory we specified in our dockerfile earlier.

Change into the “myrepo” directory with:

cd myrepo

Once inside run the “ls” command and verify you can access the repo and view the files within that repo.

Once confirmed, run the “exit” command to come out of the container.

Repeat the above steps to verify access to the cloned git repositories on your other two containers.

Step 4: Create Docker Networks

Networking overview

One of the reasons Docker containers and services are so powerful is that you can connect them together, or connect them to non-Docker workloads. In plain English, you are allowing containers to “talk to each other”. Docker containers and services do not even need to be aware that they are deployed on Docker, or whether their peers are also Docker workloads or not. Whether your Docker hosts run Linux, Windows, or a mix of the two, you can use Docker to manage them in a platform-agnostic way.

Default Docker Network

By default, containers are assigned upon creation to a network using the “bridge” driver, unless otherwise specified. All containers on the same virtual network (e.g. “bridge”) can talk to each other by default.

To disable this, you must configure containers to be on separate networks.

Run the “docker network inspect bridge” command and you will see that our three containers are all assigned to this network by default:
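docker network inspect bridge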

Create a custom network for the Development container

We will name our first network “development” for our “dev” container. Run the following command to create a new network:

docker network create <network_name>

To connect the “dev” container to the “development” network, run the following command. You can specify either the container name or the container ID; either is fine:

docker network connect <network_name> <container_name>

Verify the connection by using the following network inspect command:

docker network inspect <network_name>
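Putting those three steps together for the development network, assuming the “devgitcontainer” container name used in this tutorial:

docker network create development
docker network connect development devgitcontainer
docker network inspect development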

Running the inspect command, I can verify that my “devgitcontainer” is now attached to this network.

Create a custom network for the Production containers

Repeat the above steps to create a “Production” network and attach both the “prod1” and “prod2” containers to the network.

docker network create production
docker network connect production prod1gitcontainer
docker network connect production prod2gitcontainer
docker network inspect production

The inspect output should show that both containers, “prod1” and “prod2”, have been successfully attached to the “production” network.

Step Five: Verify containers on the Development network cannot communicate with containers on the Production network

Given that our development container and our two production containers are now on separate networks, they should not be able to connect to each other.

To test this we will enter the bash terminal for our dev container and attempt to “ping” one of our production containers.

Run the following command to enter the “devgitcontainer”:

docker exec -it devgitcontainer bash

Run the following command to install ping:

apt-get install -y iputils-ping

Once installed, attempt to ping one of the production containers:

ping <container_name>
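For example, assuming the production container names used earlier:

ping prod1gitcontainer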

The ping should fail: Docker’s built-in DNS only resolves container names between containers on the same user-defined network, and our containers are now on separate networks.

We have successfully verified that containers on the development network cannot communicate with the containers on the production network.

Step Six: Verify containers on the Production network can communicate with each other.

Exit the development container.

Log into one of your production containers:

docker exec -it prod1gitcontainer bash

Install ping:

apt-get install -y iputils-ping

Attempt to ping your second production container:

ping prod2gitcontainer

You should see successful ping replies, confirming the connection.

Use “Ctrl+C” to stop the ping.
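The final objective is to clean up the containers. Exit the container, then run the following; a minimal sketch assuming the container and network names used throughout this tutorial:

docker container stop devgitcontainer prod1gitcontainer prod2gitcontainer
docker container rm devgitcontainer prod1gitcontainer prod2gitcontainer
docker network rm development production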

THE END!

We have successfully created containers from custom dockerfiles with access to git. We have created custom networks for both our development and production containers, then verified that our containers are on separate networks, whilst confirming that all our containers that are on the same network can communicate with each other.

If you enjoy cloud engineering content like this and would like to see more, follow me on LinkedIn.
