Deep dive into container networking: Part-3

Inter-container communication with more than one node

Arpit Khurana
5 min read · Aug 4, 2019

In the previous part of this article, we ran two containers on a host using a bridge. In this final part, we will run more than one node, with multiple containers on each of them.

In the previous part, our final network looked like this:

Now we need 2 such nodes with different subnets. I am using a pair of VirtualBox VMs running Ubuntu.

But first things first: we want our VMs to communicate with each other, so that the containers running on them can also communicate over the network.

To achieve that, we can use various network configurations in VirtualBox, but I personally prefer host-only networking, since we are not dealing with external networks and internet connectivity right now.

You can create a host-only network by going to File > Host Network Manager > Create.

My Host Network configuration

As you can see, I have selected the 192.168.58.1/24 subnet for my VMs. You can choose some other network as well; just make sure it does not conflict with the container subnets.
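If you prefer the command line, the same network can be created with VBoxManage. On a fresh install, the interface it creates is typically named vboxnet0, which the sketch below assumes:

VBoxManage hostonlyif create
VBoxManage hostonlyif ipconfig vboxnet0 --ip 192.168.58.1 --netmask 255.255.255.0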

Now we have to set the adapter configuration of our VMs. We need to enable a host-only adapter and assign it to the network that we just created. The same process should be followed for the other VM.

VM adapter config
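If the VM is powered off, the same adapter configuration can also be applied from the command line ("vm1" is a placeholder for your VM's name; adapter 1 shows up as enp0s3 inside the VM, matching the interface used below):

VBoxManage modifyvm "vm1" --nic1 hostonly --hostonlyadapter1 vboxnet0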

So our final network configuration will look like this:

The addresses of the enp0s3 interfaces on both VMs have to be set manually from inside each VM, since DHCP is not enabled on this network. You can enable DHCP if you want, but it can cause problems when a VM restarts: DHCP may allocate a different IP, and you would then need to change the routes accordingly on every restart.

The IP address can be set using a simple “ip addr add” command, as we have done earlier for virtual devices. You can check whether your network works as expected by pinging one VM from the other.
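For example, assuming the host-only adapter shows up as enp0s3 inside both VMs (the interface name may differ on your setup):

# On VM 1
root> ip addr add 192.168.58.2/24 dev enp0s3
root> ip link set enp0s3 up

# On VM 2
root> ip addr add 192.168.58.3/24 dev enp0s3
root> ip link set enp0s3 up

# Verify connectivity from VM 1
root> ping 192.168.58.3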

Now it is easy to set up the containers on the individual nodes, as explained in the last two parts of this article. We just need to make sure that the container subnets do not conflict with the host network or with each other. In the last part, we created a host with containers in the 10.0.0.0/24 subnet; similarly, we can create the other host with containers in the 10.0.1.0/24 subnet.
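As a condensed recap of part 2, the per-node setup looks roughly like this on node 1 (br0, con1 and veth1 are illustrative names; on node 2, use 10.0.1.x addresses instead):

# Bridge that acts as switch and gateway for the container subnet
root> ip link add br0 type bridge
root> ip addr add 10.0.0.1/24 dev br0
root> ip link set br0 up

# Container 1: a network namespace wired to the bridge via a veth pair
root> ip netns add con1
root> ip link add veth1 type veth peer name veth1-br
root> ip link set veth1 netns con1
root> ip link set veth1-br master br0
root> ip link set veth1-br up
root> ip netns exec con1 ip addr add 10.0.0.2/24 dev veth1
root> ip netns exec con1 ip link set veth1 up
root> ip netns exec con1 ip link set lo up
root> ip netns exec con1 ip route add default via 10.0.0.1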

At this point, both hosts have containers running, but the containers on one node do not know about the existence of the containers on the other node. We will fix that by adding two things.

  1. IP forwarding: When enabled, “IP forwarding” allows a Linux machine to receive incoming packets and forward them. A Linux machine acting as an ordinary host does not need IP forwarding enabled, because it only generates and receives IP traffic for its own purposes (i.e., the purposes of its user). For our use case, we need it, since packets coming from containers need to be forwarded to other nodes. You can enable it using the command below.
root> sysctl -w net.ipv4.ip_forward=1
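Note that this setting does not survive a reboot. To make it persistent, you can add the following line to /etc/sysctl.conf:

net.ipv4.ip_forward = 1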

  2. Node routes: The other thing we need to do is tell each node about the containers present on the other node. We can understand this with an example. Let us say that container 1 on node 1 wants to send something to container 3 (10.0.1.2) on node 2. When container 1 sends a packet with destination IP 10.0.1.2, node 1 needs to know that this packet has to be routed to node 2.

We can do this by adding a route on node 1:

root> ip route add 10.0.1.0/24 via 192.168.58.3 dev enp0s3

Here 192.168.58.3 is node 2's IP address.

Similarly, we need to do the same the other way around. If containers on node 2 want to communicate with node 1's containers, node 2 needs to know the route:

root> ip route add 10.0.0.0/24 via 192.168.58.2 dev enp0s3

That is it. Now you can try a ping from container 1 to container 3; it will work.
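Assuming the namespace names from the sketch above, the test from node 1 looks like this:

# Ping container 3 (10.0.1.2) from inside container 1's namespace
root> ip netns exec con1 ping -c 3 10.0.1.2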

So, to recap all three parts of this article:

  • A network namespace and a veth pair are what it takes to run an isolated container on a node.
  • If you want more than one container, a Linux bridge is used as a switch to connect all of them.
  • The bridge also acts as a gateway for the container subnet.
  • In a multi-node setup, each node needs to enable IP forwarding, and each node needs to know about the container subnets that the other nodes are running. So if there are 3 nodes, node 1 needs to know the container subnet of node 2 as well as the container subnet of node 3 (see the sketch below).
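For example, with a hypothetical third node at 192.168.58.4 hosting the 10.0.2.0/24 container subnet, node 1 would need both of these routes:

root> ip route add 10.0.1.0/24 via 192.168.58.3 dev enp0s3
root> ip route add 10.0.2.0/24 via 192.168.58.4 dev enp0s3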

Things to note

Security Considerations:

  • In our setup, the default iptables policy everywhere is ACCEPT, so all the ports of the containers are accessible; hence ping and other applications work directly. In real clusters that is not the case: for security reasons, only the specific incoming ports we expect traffic on are allowed. This can be achieved by setting the chain's default policy to DROP and then adding ACCEPT rules for the specific ports (a sketch follows this list).
  • In the above example of a ping from container 1 to container 3, if you run tcpdump on node 2, you will see the source IP of the packets as container 1's IP (10.0.0.2). If you want to hide the IP of the source container, that can be done using an iptables MASQUERADE rule.
root> iptables -t nat -A POSTROUTING -j MASQUERADE
  • This command explicitly tells the node to hide the source IP of all packets going through it, replacing the source IP with its own. So after running this command on node 1, the source IP visible on node 2 will be node 1's rather than container 1's. MASQUERADE will also hide the container IP from any external internet hosts the container might be communicating with.
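As a sketch of the first point, assuming a service listening on port 8080 (the port number is just an example):

# Drop all incoming traffic by default
root> iptables -P INPUT DROP

# Keep replies to connections this node initiated itself
root> iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# Allow the one port we expect traffic on
root> iptables -A INPUT -p tcp --dport 8080 -j ACCEPT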

Internet Connectivity:

  • This setup does not deal with internet connectivity for the containers, but it can be achieved easily, either by adding a separate NAT adapter to the VMs or by enabling IP forwarding and MASQUERADE on the host as well (a sketch follows this list).
  • You might also need to manually add a “nameserver 8.8.8.8” line to /etc/resolv.conf to enable DNS, since we did not enable DHCP and the VM therefore will not get a DNS server automatically.
  • Again, if you want to run a public server in a container, you should only allow incoming connections on the specific port that it is listening on.
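A minimal sketch of the second approach, assuming the host reaches the internet through an interface named eth0 (the interface name is an assumption):

# On the host: forward the VMs' packets and NAT them out to the internet
root> sysctl -w net.ipv4.ip_forward=1
root> iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE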

That’s all for part 3 :). End of series
