What is Docker, how to build Docker containers and link the containers for networking

Shak
8 min read · Feb 7, 2024


Docker is a way of virtualizing applications. When you package an app with Docker, you are essentially packaging it into a container that has everything the app needs to run: the application code, dependencies, libraries, runtime and environment configuration. This standardizes the process of running any service in any environment. You don’t have to install a different set of dependencies on your local machine every time you want to run a particular app; everything is included in the container. You can run multiple containers with different dependencies side by side on the same machine. Because containers run in isolated environments, once we are done we can safely remove them without affecting anything else. In this project we will build Docker containers housing GitHub repositories and link the containers for networking.

Prerequisites:

  • Docker Hub account
  • Docker installed on local machine or IDE
  • GitHub account with repositories
  • Basic Docker and Docker file knowledge
  • Basic Linux commands
  • IDE like VS Code

Objectives:

  • Create three Docker files that automatically connect to Github
  • Each docker file should connect to a different GitHub repo
  • Create three custom docker images from the three docker files
  • Create three containers, one from each custom docker image
  • Confirm you can access the GitHub repo in each container
  • Place one container on a network called Development
  • Place the other two containers on a network called Production
  • Verify container on the Development network cannot communicate with other networks
  • Verify containers on the Production network can communicate with each other

Step 1. Create three Docker files that automatically connect with Github

Create a directory where you will house the docker files and GitHub repository clones. I will call my directory “dockerproject”. See below for reference, then cd into the directory you have created.
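The step above can be sketched as follows (the directory name “dockerproject” is simply the one chosen here; use any name you like):

```shell
# Create a working directory for the docker files and repository clones
mkdir -p dockerproject

# Move into the new directory
cd dockerproject
```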

We will create three docker files, each with a different GitHub repo in it. After we create the docker files, docker images and docker containers, we are going to link one container to development and two containers to production. Let’s call the files dockerfile.dev (for development) and dockerfile.prod1 and dockerfile.prod2 (for production). I will create the files using the “touch” command in the CLI and then list them using the “ls” command. See below for reference
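Creating and listing the three files from the CLI might look like this:

```shell
# Create the three empty docker files
touch dockerfile.dev dockerfile.prod1 dockerfile.prod2

# List the files to confirm they exist
ls
```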

You can write and edit your docker file directly in your CLI using Vim or Nano, but I will use VS Code to write these files. You can use any IDE you are comfortable with. Open the .dev file and we will start with the latest version of the official Ubuntu image. Next we will add a GitHub repository that can be accessed from within the container. It is also good practice to update packages using the RUN apt-get update instruction. Our .dev file should look like this below

Now we will clone the GitHub repository inside the container. Go to your GitHub account and copy the URL of the repository you want to use. I have created a repository called developer for this project, along with production1 and production2 repositories for the other docker files. For the development file we will copy the URL of the developer repository and run the git clone command. See below for reference

The /devrepo below tells Docker to create a new directory when creating the Docker image, and place the GitHub repository clone inside that directory. I have chosen to name my directory “devrepo”. Your docker file should look like this below
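Putting the pieces together, a dockerfile.dev along these lines would work. The repository URL is a placeholder for your own; also note that the base Ubuntu image does not ship with git, so it has to be installed before the clone step can succeed:

```dockerfile
# Start from the latest official Ubuntu image
FROM ubuntu:latest

# Update package lists and install git, which the clone step needs
RUN apt-get update && apt-get install -y git

# Clone the GitHub repo into a new /devrepo directory inside the image
RUN git clone https://github.com/<your-username>/developer.git /devrepo
```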

Now that we have created our dev file, we will repeat the same process for our other two docker files, prod1 and prod2. The code will be the same except for the GitHub repository URLs. See below for reference.

Step 2 — Create Docker images based on the Docker files created

Navigate to your CLI and to the folder where the docker files were created and run the following command below

docker build -t <image_name> -f <filename> .

The -t is short for tag and gives your image a name. The -f is short for file and tells Docker which docker file to use for the build. Don’t forget the . character at the end: it sets the build context to the current directory.

Docker image built for dockerfile.dev

Repeat the process for prod1 and prod2 docker files. Make sure to change the tags and names of the file after -f. See below for reference
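As a concrete sketch of all three builds (the image names dev-image, prod1-image and prod2-image are my own choices, not required ones):

```shell
# Build one image per docker file; -t names the image, -f selects the file
docker build -t dev-image -f dockerfile.dev .
docker build -t prod1-image -f dockerfile.prod1 .
docker build -t prod2-image -f dockerfile.prod2 .
```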

Docker image for prod1
Docker image for prod2

You can check the images you have created by using the “docker images” command

Our docker images were successfully created!

Step 3. Create three containers, one from each image

In this section we will create three containers, one from each of the three images we built earlier. Use the docker run command to create the containers. Here is the command below.

docker run -dt --name <container_name> <image_name>

The -d stands for detach and tells Docker to run the container in the background and print the container ID. The -t stands for TTY and allocates a pseudo-TTY, which in this case keeps the container running. The container name is optional; Docker will automatically assign one if you don’t provide any. The image name is the image Docker will use to create the container. See below for reference

Repeat the process for prod1 and prod2. Make sure you give them a unique name and the right images.
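Assuming the image names from the build step, the three runs might look like this (the container names are again my own choices):

```shell
# One container per image, running detached with a pseudo-TTY
docker run -dt --name dev-container dev-image
docker run -dt --name prod1-container prod1-image
docker run -dt --name prod2-container prod2-image

# List the running containers to confirm
docker container ls
```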

You can check the containers by using the docker container ls command. See below for reference.

We have successfully created the containers!

Step 4. Confirm Github repo access

We need to confirm we can access the Github repository inside each container. We will use the docker exec command with the -it flags to open an interactive terminal. The bash at the end of the command opens a bash shell so we can work in the container.

docker container exec -it <container_name> bash

This command will put you inside the root directory of the container

Now run the ls command to see the file system inside the container. You should see the devrepo directory we have created earlier. Go to the directory using the cd command and run the ls command to see the files inside your devrepo (which is the name we gave earlier to our github repository) directory.

I only have one file in my Github repository called README.md and it should show up. See below for reference.
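The session inside the dev container might look like the transcript below (dev-container and devrepo are the names assumed in the earlier steps; this is an interactive session, not a single script):

```shell
# Open an interactive bash shell inside the dev container
docker container exec -it dev-container bash

# Inside the container:
ls            # the root directory listing should include devrepo
cd devrepo
ls            # should show the repo contents, e.g. README.md
exit          # leave the container when done
```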

This confirms that we can get into the dev container and that we have successfully cloned our GitHub repository into the container. Exit out of this container using the exit command and repeat the process for the other two containers.

prod 1 github repo confirmation
prod2 github repo confirmation

Step 5 — Networking

Our next step is to build the networking between these containers so the right ones can communicate with each other. We will make sure our development environment remains isolated while the two production containers can network with each other. By default, when a container is created it is attached to a private network called bridge, which uses the bridge driver, and all containers on this default network can communicate with each other. You can use the docker network inspect bridge command to see the containers attached to it. See below for reference. If we want to isolate the development container, we need to place it on a different network.

We will create a new network for our Development container and place the container in this new network by itself. We can use the command docker network create to create a new network.

Now we need to include our “development” container inside this new network. You can use the docker network connect command for this. Then you can check the connection by using the docker network inspect command. See below for reference.
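Assuming the network is called development and the container dev-container (my names from the earlier steps), the commands might look like this. One caveat: docker network connect adds a network without detaching the container from the default bridge, so for full isolation you can also disconnect it from bridge:

```shell
# Create a user-defined network for development
docker network create development

# Attach the development container to the new network
docker network connect development dev-container

# Optional: detach it from the default bridge so it is fully isolated
docker network disconnect bridge dev-container

# Verify the container now appears in the network's details
docker network inspect development
```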

Create production network

Now we will repeat the same networking steps as above and this time make a new network called production and include two of our production containers inside the network. See below for reference.
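The production side follows the same pattern, this time attaching both production containers (names assumed from earlier):

```shell
# Create the production network
docker network create production

# Attach both production containers to it
docker network connect production prod1-container
docker network connect production prod2-container

# Verify both containers appear in the network's details
docker network inspect production
```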

Our networks have been created!

Now we will verify the development container cannot communicate with the production containers. We will get into our development container using the docker container exec -it <container_name> bash command and try to ping the production containers. Once you are in the container, install ping using the apt-get install -y iputils-ping command and then try to ping the production containers. You should see a message like the one below confirming it is unable to connect.

Exit out of the development container and repeat the process of getting into a production container and installing ping. Try to ping the other production container and you should see it succeed. See below for reference, and use Ctrl+C to stop pinging.
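Both checks can be sketched as the transcript below (container names are the ones assumed earlier; on a user-defined network, Docker’s embedded DNS lets containers resolve each other by name, which is what makes pinging by container name possible):

```shell
# From the development container: the ping should fail,
# since it shares no network with the production containers
docker container exec -it dev-container bash
apt-get update && apt-get install -y iputils-ping
ping prod1-container        # expected to fail to resolve/connect
exit

# From a production container: pinging its peer on the
# production network should succeed
docker container exec -it prod1-container bash
apt-get update && apt-get install -y iputils-ping
ping -c 3 prod2-container   # replies should arrive
exit
```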

We have successfully created three docker files, each building a container that connects to a different GitHub repository. We then created Docker images from these files and three containers from the images. We placed the two production containers on one network so they can communicate with each other, and we isolated the development container on its own network.

Thank you for following along with me. Feel free to connect with me on LinkedIn and follow along for more DevOps projects.



Shak

Financial Analyst turned Cloud Engineer | Tech Enthusiast | DevOps ♾ | Cloud Engineer ☁️ | Linux 🐧 | AWS 🖥️ | Python 🐍 | Docker 🐳 | Terraform 🏗