Deep dive into container networking: Part 2

Setting up inter-container communication on a node

Arpit Khurana
4 min read · Jul 25, 2019

In the last part of this article, we learned how a VETH pair and a couple of routes, combined with a Linux network namespace, can help us build a container with its own virtual IP.

We were able to set up communication between the host and a container. Now we want to run two or more containers and enable communication between them.

There are various ways to achieve this; let's take a peek at some of them.

  1. VETH pair between containers: This is one of the simplest options that comes to mind for enabling inter-container communication. It looks like this:
VETH pair between pair of containers

In this approach, we create a VETH pair whose ends are attached to the two containers between which we want to establish communication (VETHN12 <-> VETHN21). Apart from this, each container already has a VETH pair to communicate with the host.
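As a sketch, this direct-pair approach can be reproduced with network namespaces standing in for containers. The pair names mirror the diagram (VETHN12 <-> VETHN21); the con1/con2 namespace names and the 172.16.1.0/24 addresses are illustrative choices of mine, not part of the original setup.

```shell
# Two namespaces standing in for containers (names assumed for illustration)
ip netns add con1
ip netns add con2

# One VETH pair with an end inside each namespace
ip link add vethn12 type veth peer name vethn21
ip link set vethn12 netns con1
ip link set vethn21 netns con2

# Addresses on the same illustrative subnet, then bring both ends up
ip netns exec con1 ip addr add 172.16.1.1/24 dev vethn12
ip netns exec con2 ip addr add 172.16.1.2/24 dev vethn21
ip netns exec con1 ip link set vethn12 up
ip netns exec con2 ip link set vethn21 up

# Traffic now flows directly over the pair, without touching the host
ip netns exec con1 ping -c 1 172.16.1.2
```

Deleting the namespaces (`ip netns del con1; ip netns del con2`) removes the pair as well, since a VETH end disappears with its namespace and takes its peer down with it.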

Problems with this approach:

  1. If we have many interconnected containers on the node, we will need to create a VETH pair for every pair of containers that want to communicate, which quickly becomes difficult to maintain.
  2. For each such container pair, we will also have to add specific routes in each container pointing to the other container.

2. Communication via host: We can also forward packets through the host. In this case, packets flow from one container to the host via its VETH pair, and are then routed to the other container via that container's VETH pair.

This will require IP forwarding to be enabled on the host, and possibly ARP proxying in our setup (since we only assign an IP address to the interface inside the container).

Veth pair with ip forwarding
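A rough sketch of this host-forwarding approach, again with namespaces as stand-in containers. All names here (con1/con2, veth*/hveth*) and the 172.16.0.0/24 addresses are my own choices; the point to notice is the per-container "/32" route on the host and the proxy-ARP knob.

```shell
for i in 1 2; do
  ip netns add con$i

  # VETH pair: vethN inside the container, hvethN on the host
  ip link add veth$i type veth peer name hveth$i
  ip link set veth$i netns con$i
  ip link set hveth$i up

  # /32 address inside the container, default route out its VETH end
  ip netns exec con$i ip addr add 172.16.0.$i/32 dev veth$i
  ip netns exec con$i ip link set veth$i up
  ip netns exec con$i ip route add default dev veth$i

  # Host side: a /32 route per container, plus proxy ARP so the host
  # answers ARP queries for the other container's address
  ip route add 172.16.0.$i/32 dev hveth$i
  echo 1 > /proc/sys/net/ipv4/conf/hveth$i/proxy_arp
done

# Let the host forward packets between the two VETH pairs
sysctl -w net.ipv4.ip_forward=1

# con1 -> host -> con2
ip netns exec con1 ping -c 1 172.16.0.2
```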

This method is better than the previous one, but we are still adding a “/32” route on the host for each container. We have a better option.

3. Linux bridge: A Linux network bridge is a Link Layer device which forwards traffic between networks based on MAC addresses. It makes forwarding decisions based on tables of MAC addresses which it builds by learning what hosts are connected to each network. A bridge is generally used to unite two or more network segments. A bridge behaves like a virtual network switch, working transparently (the other machines do not need to know or care about its existence). Any real devices (e.g. eth0) and virtual devices (e.g. veth0) can be connected to it.

We will attach the host-side ends of our VETH pairs, which live in the default namespace, to this bridge device.

VETH pair attached to bridge

Now let’s see how to implement this.

Things to notice

“/24” in the interface address: This was added for a reason. As we know, the bridge is a link-layer device and acts as a virtual switch. All the containers, having IPs in the 10.0.0.0/24 range, are on the same network, and the switch/bridge just forwards packets based on MAC addresses.

We can confirm this by pinging one container from the other and then checking the ARP entries.

As you can see, pinging 10.0.0.2 (con1) from con2 creates an ARP cache entry in con2 with the MAC address of con1's interface.

So by adding “/24” we are telling the container that all the containers in 10.0.0.0/24 are directly reachable on the link layer. When we add such an address, the corresponding route is added automatically, as visible in the picture.
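To make this concrete, here is a condensed repeat of the bridge setup (so it runs on its own; the names and the 10.0.0.3 address for con2 are assumptions), followed by a look at the kernel-added route and the ARP cache:

```shell
# Condensed bridge + two-container setup (names/addresses assumed)
ip link add br0 type bridge && ip link set br0 up
for i in 1 2; do
  ip netns add con$i
  ip link add veth$i type veth peer name veth${i}br
  ip link set veth$i netns con$i
  ip link set veth${i}br master br0 up
  ip netns exec con$i ip addr add 10.0.0.$((i + 1))/24 dev veth$i
  ip netns exec con$i ip link set veth$i up
done

# The /24 address implies a kernel-added link-scope route,
# e.g. "10.0.0.0/24 dev veth1 proto kernel scope link src 10.0.0.2"
ip netns exec con1 ip route

# Ping con1 from con2, then inspect con2's ARP cache
ip netns exec con2 ping -c 1 10.0.0.2
ip netns exec con2 ip neigh
```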

Bridge address: In the second-to-last line of the script, we assign “10.0.0.1/24” to the bridge interface (br0). Now we can add up to 253 containers on this node, with IPs ranging from 10.0.0.2 to 10.0.0.254 (10.0.0.255 is the broadcast address), without adding any extra route on the host. The bridge also acts as a gateway for the containers to reach the host or the outside world.

You can try pinging the containers from the host and from one container to another; both will work. You can also run a binary inside a container, as shown in part 1.

So let's recap what we learned in the two parts of this article. We used a network namespace combined with a VETH pair to build a single container's network. In this part, we discussed various approaches to setting up inter-container communication and which one is best suited, and finally how a bridge works as an L2 device to connect containers to each other and to the host.

Any questions and reviews are welcome. That's all for part 2. :)

In the next part, we will learn how to have a cluster of 2 or more nodes running multiple containers on each node.


Arpit Khurana

Software developer @ Golang | Kubernetes | Android . Cloud and networking enthusiast . arpitkhurana.in