
Docker Overlay Networks: That was Easy

TL;DR Treat Docker containers like hosts on overlay networks. Build overlay networks with a single Docker command. Still disable ICC. This is not “Service” discovery.

Jeff Nickoloff
Nov 6, 2015 · 6 min read

I want to write about first class network management in Docker and the overlay driver. Multi-host networking was announced at DockerCon SF this year when it was released in the experimental branch. This last week it landed in master for Docker 1.9, Swarm 1.0, and Compose 1.5.

Multi-host networking is the feature that seemingly everybody has been begging for since Swarm was announced. I feel that some were more dismissive than they should have been when it wasn’t included at the beginning. The value that every Docker project brings is clean interfaces and simplified integrations. I would have been more disappointed if they had rushed something that looked like a hack job than if they had never shipped the feature. But we have it now. I’d like to help show you how simple it is to use and explore what has actually been built. But first, I don’t like third-party ads, so here is a shameless plug for my book, Docker in Action.

Deal of the Day November 6: Half off my book Docker in Action. Use code dotd110615au at

The first problem: service registration and discovery is an infrastructure concern, not an application concern.

The second problem: implementing service registration and discovery when infrastructure and application implementation are mutually agnostic is tough.

Docker networking solves these problems by backing an interface (DNS) with pluggable infrastructure components that adhere to a common KV interface. The result is a system where individual containers have unique IP addresses and names on an overlay network. With few exceptions, this decouples your system from host IP address and port conflict problems.

The Setup

Let’s build a quick network using the canonical example. Make sure you have Docker 1.9, Machine 0.5, and Compose 1.5 installed.

Get started by creating a host that will provide the KV store. This is a one-time exercise: one KV store (or cluster) can serve N overlay networks in the same Swarm cluster. The following commands will create a single instance of Consul running in a container.
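The original commands didn’t survive extraction, but a minimal sketch might look like the following. The virtualbox driver, the machine name “kv,” and the progrium/consul image are my assumptions based on what was typical at the time:

```shell
# Create a machine to host the key-value store
docker-machine create -d virtualbox kv

# Run a single-node Consul server on it, bound directly to the host's network
docker $(docker-machine config kv) run -d --net=host progrium/consul \
  -server -bootstrap
```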

Now you’re ready to provision a Swarm cluster. These three commands will create three Swarm nodes (one master) that use the kv machine for Swarm discovery and overlay networking.
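A sketch of those three commands, again assuming the virtualbox driver, a KV machine named “kv,” and node names of my own choosing. Note that eth1 is the VirtualBox host-only interface; a different driver would advertise on a different interface:

```shell
# Both Swarm discovery and the engine's cluster store point at Consul on "kv"
KV_IP=$(docker-machine ip kv)

# One master...
docker-machine create -d virtualbox --swarm --swarm-master \
  --swarm-discovery="consul://${KV_IP}:8500" \
  --engine-opt="cluster-store=consul://${KV_IP}:8500" \
  --engine-opt="cluster-advertise=eth1:2376" \
  c0-master

# ...and two agents
for node in c0-n1 c0-n2; do
  docker-machine create -d virtualbox --swarm \
    --swarm-discovery="consul://${KV_IP}:8500" \
    --engine-opt="cluster-store=consul://${KV_IP}:8500" \
    --engine-opt="cluster-advertise=eth1:2376" \
    ${node}
done
```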

The difference from how you might have been provisioning Swarm clusters up to this point is the addition of two engine options, “cluster-store” and “cluster-advertise.” These tell Docker where to find the key-value store. All of the steps so far are similar to those you would have followed if you were running your own Swarm discovery prior to Swarm 1.0 and Docker 1.9. At this point you could use Docker and the Swarm master like you had previously, and the experience would be the same. Remember to configure your Docker environment to interact with your Swarm cluster:
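Assuming a Swarm master machine named “c0-master” as sketched above, that configuration is one eval:

```shell
# Point the local Docker client at the Swarm master
eval $(docker-machine env --swarm c0-master)

# "docker info" should now report all three nodes in the pool
docker info
```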

At this point you’re ready to see where the magic happens.

Building an Overlay Network

The following command will create a new overlay network named “myStack1”:
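The command itself is a one-liner; all that changes from a normal network create is the driver:

```shell
docker network create -d overlay myStack1
```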

You can create as many overlay networks as you’d like. Create a few more just for fun:
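For example (the names here are arbitrary):

```shell
docker network create -d overlay myStack2
docker network create -d overlay myStack3
```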

Check out your handiwork with the network ls subcommand.
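The listing shows each network’s ID, name, and driver; your new networks should appear with the overlay driver alongside the built-in bridge, none, and host networks:

```shell
docker network ls
```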

These networks are materialized as metadata stored on the KV server. For that reason, a network created on any node of your cluster will be immediately visible to all nodes of your cluster. These networks don’t create any active components outside of an attached container. So, you’ll need to create a few containers to really appreciate what is going on. You can tell Docker which network a new container should be on using the “net” flag.
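A sketch of two such containers — the names and the busybox image are my choices for illustration:

```shell
# Two containers attached to the same overlay network
docker run -dit --name c1 --net myStack1 busybox
docker run -dit --name c2 --net myStack1 busybox
```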

Both of these containers will be attached to the same network, and be discoverable by container name (regardless of start order). Further, when a container is restarted it will remain discoverable without cascading restarts. That is a huge win.

Jump in and hit one from the other.

Ctrl+P, Ctrl+Q out and go the other direction.
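Assuming two busybox containers named c1 and c2 on the same overlay network, the round trip might look like:

```shell
# Attach to c2 and ping c1 by name from inside it:
docker attach c2
#   / # ping -c 2 c1
# ...then detach with Ctrl+P, Ctrl+Q and go the other direction:
docker exec c1 ping -c 2 c2
```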

Try to break things. Restart containers, create new containers on the same network and verify that the /etc/hosts files on existing containers are re-written. In my experience, the whole integration is seamless.

Overlay Networks do* Provide Isolation

Correction thanks to Nicola Kabar who caught a mistake in my tests.

Containers that are on the same host but connected to different overlay networks can’t talk to each other over the local bridge. Containers launched with `--net myoverlay` are not added to the default `bridge` network; instead, they’re added to `docker_gwbridge`, which blocks inter-container traffic (unless, of course, you’re using host-port mapping).
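A quick way to see this for yourself — container and network names here are mine — is to put two containers on different overlay networks and watch name resolution fail between them:

```shell
docker network create -d overlay myStack2   # if it doesn't already exist
docker run -dit --name a1 --net myStack1 busybox
docker run -dit --name b1 --net myStack2 busybox

# b1 is not resolvable from myStack1, so this fails:
docker exec a1 ping -c 2 b1
```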

Registration and Discovery for the Infrastructure vs Application

There are a few different “clusters” here. Swarm registers and discovers nodes participating in a Swarm resource pool. This is a pure infrastructure concern and should be abstracted from the application. Containers attached to a known overlay network advertise host names (not services) which resolve to private IP addresses. This is application level registration and discovery. Discovery at this level should be performed through common interfaces (DNS) but registration should remain an infrastructure concern. What we have today with Docker solves both of these problems and maintains clear lines of abstraction.

The issue that has not been addressed with this offering is application level service discovery. To illustrate the difference, consider an environment provisioned with Compose.
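The original Compose file isn’t reproduced here, but a minimal version-1 file for this scenario (service names match the discussion below; the upstream image is a stand-in) could be:

```yaml
# docker-compose.yml (Compose 1.5, version-1 format)
web:
  image: nginx
  ports:
    - "80:80"
upstream:
  image: training/webapp   # stand-in for the real upstream service
```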

Bringing up this environment in a Swarm cluster with an overlay network would yield two containers: “<proj>_web_1” and “<proj>_upstream_1.” Both of these names would be available on the network, and the NGINX config might explicitly reference “<proj>_upstream_1” as an upstream service. However, that container name only refers to a single instance of the upstream service. If we scale up, those names (while predictable) will present a proper “service discovery” problem.

In this case, the NGINX configuration would need to anticipate these host names OR use some other resolution mechanism.
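For example, an NGINX upstream block would have to hard-code each anticipated replica name. The project name “myproj” and the port are assumptions for illustration:

```nginx
upstream app {
    # Every scaled container name must be listed ahead of time
    server myproj_upstream_1:5000;
    server myproj_upstream_2:5000;
}
```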

If your goal is to resolve a service name to a collection of contributing host names then Docker’s overlay networking will fall short. But again, I’d argue this is not really in scope for the project as defined today. This is an obvious gap that many people will experience and a great target for a label-like abstraction. This is a consistent and fast moving project. I’m confident that this is a problem that will be addressed in a near generation.

In the meantime, I’m just thankful that all this cool stuff has landed in master.

If you learned something from this article and would like to support the development of more content like this, please consider picking up my book.

On Docker

Tangential thoughts and conversational notes about Docker from my experience and research for Docker in Action.

Jeff Nickoloff

I'm a cofounder of Topple, a technology consulting, training, and mentorship company. I'm also a Docker Captain and a software engineer.

