An Overview of Docker Cross-Container Communication

Roman Ceresnak, PhD · Published in CodeX · 5 min read · Feb 15, 2023

In this article, I will discuss three types of communication using Docker technology.

Docker Overview

Docker is an open platform for developing, shipping, and running applications. Docker lets you separate your applications from your infrastructure so you can deliver software quickly. With Docker, you can manage your infrastructure in the same way you manage your applications. By taking advantage of Docker's methods for shipping, testing, and deploying code quickly, you can significantly shorten the time between writing code and running it in production.

Docker Objects

When you use Docker, you create and work with objects such as images, containers, networks, volumes, and plugins. A brief description of images and containers follows below.

Docker Images

An image is a read-only template with instructions for creating a Docker container. Often, an image is based on another image, with some additional customization. For example, you could build an image based on the Ubuntu image that also installs your application, the Apache web server, and the configuration details needed to run your application.

You can create your own images, or you can use only images that others have created and published to a registry. To build your own image, you write a Dockerfile, which uses a straightforward syntax to define the steps needed to create and run the image. Each instruction in a Dockerfile creates a layer in the image. When you change the Dockerfile and rebuild the image, only the layers that have changed are rebuilt. This is one reason images are so lightweight, small, and fast compared to other virtualization technologies.
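As a sketch of what such a Dockerfile might look like for the kind of Node application used later in this article (the base image, file names, and start command here are my assumptions, not prescribed by the article):

FROM node:18-alpine
# Each instruction below produces a layer in the resulting image.
WORKDIR /app
# Copy the dependency manifest first, so this layer is only rebuilt
# when package.json changes, not on every source-code edit.
COPY package*.json ./
RUN npm install
# Copy the rest of the application source.
COPY . .
# Document the port the application listens on (published later with -p).
EXPOSE 3000
CMD ["node", "app.js"]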

Docker Containers

A container is a runnable instance of an image. Using the Docker API or CLI, you can start, stop, move, or delete a container. A container can be connected to one or more networks, have storage attached to it, or even be used to create a new image from its current state.

By default, a container is relatively well isolated from other containers and from its host machine. You can control how isolated a container's network, storage, or other underlying subsystems are from other containers or from the host machine.

A container is defined by its image as well as any configuration options you provide when you create or start it. When a container is removed, any changes to its state that are not saved in persistent storage are lost.
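As a quick illustration of that lifecycle (the public nginx image here is just an example), the Docker CLI covers it with a handful of commands:

docker run -d --name web nginx   # create and start a container from an image
docker stop web                  # stop the running container
docker rm web                    # remove it; any unsaved state inside it is gone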

Networking: Cross-Container Communication

Communication to and from containers can be tricky, but I will introduce you to three options that should cover the most widely used cases.

  1. Request from container to WWW.
  2. Request from container to host machine.
  3. Request from container to container.

So let's jump to the first option: a request from a container to the WWW.

Request from container to WWW

Imagine the following situation: you have an application written in Java, PHP, Ruby, or Node. The choice of language is up to you; for our purposes, it is completely irrelevant.

  • Your Docker container can connect to the outside world out of the box, but the outside world cannot connect to the container. To make ports accessible externally, or to containers that are not on the same network, you have to use the -P (publish all exposed ports) or -p (publish specific ports) flag.
  • All that is really needed is to create a Dockerfile and run the following commands (a quick check follows this list):
  • docker build -t application-image .
  • docker run --name application -d --rm -p 3000:3000 application-image
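For that quick check that outbound traffic needs no extra configuration, you can start a throwaway container and fetch a page from the public internet (the alpine image and the URL are purely illustrative):

docker run --rm alpine wget -qO- http://example.com

If the page's HTML is printed, the container can reach the WWW; the -p and -P flags only matter for traffic going the other way, into the container.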

So let's jump to the second option: a request from a container to the host machine.

Request from container to host machine (localhost)

The second option covers communication between the application running in a container and a non-relational MongoDB database installed directly on the host machine.
To store data in the local MongoDB database, I created a connection in Node that points to localhost:

mongoose.connect("mongodb://localhost:27017/collectionName", {
  useNewUrlParser: true,
  useUnifiedTopology: true
});

The most important part for us is the first line, with the connection string mongodb://localhost:27017/collectionName.

Even with MongoDB installed and running on port 27017 on your machine, the application would give a connection error when the container starts, because inside a container localhost refers to the container itself, not to the host. How can we solve this problem? There is an option that works: host.docker.internal.

To put it simply, host.docker.internal is a special DNS name that resolves to your host machine from inside a container. Every service that runs on your host and binds to the network interface set as the Docker daemon's host-gateway can be accessed from inside a container through this name.

So instead of the following code

mongoose.connect("mongodb://localhost:27017/collectionName", {
  useNewUrlParser: true,
  useUnifiedTopology: true
});

We make changes, and the code should look like this:

mongoose.connect("mongodb://host.docker.internal:27017/collectionName", {
  useNewUrlParser: true,
  useUnifiedTopology: true
});

After that, do not forget to save your changes and run the following commands in your terminal:

  • docker build -t application-image .
  • docker run --name application -d --rm -p 3000:3000 application-image
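One caveat: host.docker.internal works out of the box with Docker Desktop on Windows and macOS. On a plain Linux host you usually have to map the name yourself when starting the container, using the --add-host flag with the special host-gateway value (available in recent Docker versions):

docker run --name application -d --rm -p 3000:3000 \
  --add-host host.docker.internal:host-gateway application-image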

Request from container to container

The most elegant solution is probably option 3.

Network creation

Within a user-defined Docker network, all containers can communicate with each other, and container names are resolved to IP addresses automatically, without your intervention.

You just need to create a network, update the code, and rebuild the image and container.

  • Create a network: docker network create first-net
  • Create a MongoDB container attached to the new network: docker run -d --name mongodb --network first-net mongo
  • After that, we have to rewrite the application code so it points to the correct database:
mongoose.connect("mongodb://mongodb:27017/collectionName", {
  useNewUrlParser: true,
  useUnifiedTopology: true
});
  • Save all the changes.
  • Rebuild the image: docker build -t application-image .
  • Run the container: docker run --name application --network first-net -d --rm -p 3000:3000 application-image

The application is now able to communicate with the MongoDB instance running in a different container. You do not have to install MongoDB locally.
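If you want to verify that both containers really share the network, you can inspect it; the output lists each attached container together with its IP address:

docker network inspect first-net

Because Docker resolves the container name mongodb to that address inside first-net, the connection string mongodb://mongodb:27017/collectionName works without any hard-coded IPs.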

Roman Ceresnak, PhD, writer for CodeX. AWS Cloud Architect. I write about education, fitness, and programming. My website is pickupcloud.io