
Docker Overlay Networks: That was Easy

TL;DR Treat Docker containers like hosts on overlay networks. Build overlay networks with a single Docker command. Still disable ICC. This is not “Service” discovery.

Jeff Nickoloff · Published in On Docker · 6 min read · Nov 6, 2015


I want to write about first-class network management in Docker and the overlay driver. Multi-host networking was announced at DockerCon SF this year when it was released in the experimental branch. This last week it landed in master for Docker 1.9, Swarm 1.0, and Compose 1.5.

Multi-host networking is the feature that seemingly everybody has been begging for since Swarm was announced. I feel that some were more dismissive than they should have been when it wasn’t included at the beginning. The value that every Docker project brings is clean interfaces and simplified integrations. I would have been more disappointed if they had rushed something that looked like a hack job than if they had never shipped the feature. But we have it now. I’d like to show you how simple it is to use and explore what has actually been built. But first, I don’t like third-party ads, so here is a shameless plug for my book, Docker in Action.

Deal of the Day November 6: Half off my book Docker in Action. Use code dotd110615au at https://www.manning.com/books/docker-in-action

The first problem: service registration and discovery is an infrastructure concern, not an application concern.

The second problem: implementing service registration and discovery when infrastructure and application implementation are mutually agnostic is tough.

Docker networking solves these problems by backing an interface (DNS) with pluggable infrastructure components that adhere to a common KV interface. The result is a system where individual containers have unique IP addresses and names on an overlay network. With few exceptions this decouples your system from host IP address and port conflict problems.

The Setup

Let’s build a quick network using the canonical example. Make sure you have Docker 1.9, Machine 0.5, and Compose 1.5 installed.

Get started by creating a host that will provide the KV store. This is a one-time exercise: one KV store (or cluster) can serve N overlay networks in the same Swarm cluster. The following commands will create a single instance of Consul running in a container.

docker-machine create \
-d virtualbox \
kv
docker $(docker-machine config kv) run -d \
-p 8500:8500 -h consul \
progrium/consul \
-server -bootstrap
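
If you want to confirm the store is healthy before moving on, Consul’s standard HTTP API makes for a quick sanity check; a non-empty response from the status/leader endpoint means the server has bootstrapped:

curl "http://$(docker-machine ip kv):8500/v1/status/leader"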

Now you’re ready to provision a Swarm cluster. These three commands will create three Swarm nodes (one master) that use the kv machine for Swarm discovery and overlay networking.

docker-machine create \
-d virtualbox \
--swarm \
--swarm-master \
--swarm-discovery="consul://$(docker-machine ip kv):8500" \
--engine-opt="cluster-store=consul://$(docker-machine ip kv):8500" \
--engine-opt="cluster-advertise=eth1:2376" \
c0-master
docker-machine create \
-d virtualbox \
--swarm \
--swarm-discovery="consul://$(docker-machine ip kv):8500" \
--engine-opt="cluster-store=consul://$(docker-machine ip kv):8500" \
--engine-opt="cluster-advertise=eth1:2376" \
c0-n1
docker-machine create \
-d virtualbox \
--swarm \
--swarm-discovery="consul://$(docker-machine ip kv):8500" \
--engine-opt="cluster-store=consul://$(docker-machine ip kv):8500" \
--engine-opt="cluster-advertise=eth1:2376" \
c0-n2

The difference from how you might have been provisioning Swarm clusters up to this point is the addition of two engine options, “cluster-store” and “cluster-advertise.” The first tells each Docker Engine where to find the key-value store, and the second tells it which interface and port to advertise to the rest of the cluster. All of the steps so far are similar to those you would have followed if you were running your own Swarm discovery prior to Swarm 1.0 and Docker 1.9. At this point you could use Docker and the Swarm master like you had previously and the experience would be the same. Remember to configure your Docker environment to interact with your Swarm cluster:

eval "$(docker-machine env --swarm c0-master)"

At this point you’re ready to see where the magic happens.

Building an Overlay Network

The following command will create a new overlay network named “myStack1”:

docker network create -d overlay myStack1

You can create as many overlay networks as you’d like. Create a few more just for fun:

docker network create -d overlay myStack2
docker network create -d overlay myStack3
docker network create -d overlay myStack4
docker network create -d overlay myStack5
docker network create -d overlay myStack6

Check out your handiwork with the network ls subcommand.

docker network ls
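
The ls output only shows names and drivers; inspect is the way to see what was actually allocated (the subnet you see will depend on Docker’s default address allocation):

docker network inspect myStack1    # shows the driver, the allocated subnet, and any attached containers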

These networks are materialized as metadata stored on the KV server. For that reason, a network created on any node of your cluster will be immediately visible to all nodes of your cluster. These networks don’t create any active components until a container attaches. So, you’ll need to create a few containers to really appreciate what is going on. You can tell Docker which network a new container should be on using the “--net” flag.

docker run -d --name web --net myStack1 nginx
docker run -itd --name shell1 --net myStack1 alpine /bin/sh

Both of these containers will be attached to the same network, and be discoverable by container name (regardless of start order). Further, when a container is restarted it will remain discoverable without cascading restarts. That is a huge win.
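
That claim is easy to test; a minimal sketch (restart the web container, then confirm shell1 still resolves it by name):

docker restart web
docker exec shell1 ping -c 1 web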

Jump in and hit one from the other.

docker attach shell1
ping web
apk update && apk add curl
curl http://web/

Ctrl+P,Q out and go the other direction.

docker exec -it web /bin/sh
ping shell1

Try to break things. Restart containers, create new containers on the same network and verify that the /etc/hosts files on existing containers are re-written. In my experience, the whole integration is seamless.
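
For example, one way to watch the rewrite happen (shell2 here is just a throwaway name for illustration):

docker run -itd --name shell2 --net myStack1 alpine /bin/sh
docker exec shell1 cat /etc/hosts    # an entry for shell2 should now appear alongside web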

Overlay Networks do* Provide Isolation

Correction thanks to Nicola Kabar who caught a mistake in my tests.

Containers that are on the same host but connected to different overlay networks can’t talk to each other over the local bridge. Containers launched with `--net myoverlay` are not added to the default `bridge` network; instead, they’re attached to `docker_gwbridge`, which blocks inter-container traffic (unless, of course, you’re using host-port mapping).
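
A quick way to convince yourself, assuming the networks and the web container from earlier are still around (the ping is expected to fail, since myStack2 cannot see names or addresses on myStack1):

docker run -it --rm --net myStack2 alpine ping -c 1 web    # should fail: "web" is not resolvable here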

Registration and Discovery for the Infrastructure vs Application

There are a few different “clusters” here. Swarm registers and discovers nodes participating in a Swarm resource pool. This is a pure infrastructure concern and should be abstracted from the application. Containers attached to a known overlay network advertise host names (not services) which resolve to private IP addresses. This is application level registration and discovery. Discovery at this level should be performed through common interfaces (DNS) but registration should remain an infrastructure concern. What we have today with Docker solves both of these problems and maintains clear lines of abstraction.

The issue that has not been addressed with this offering is application level service discovery. To illustrate the difference, consider an environment provisioned with Compose.

web:
  image: nginx
  volumes:
    - ./app.conf:/etc/nginx/conf.d/app.conf
upstream:
  image: myapp

Bringing up this environment in a Swarm cluster with an overlay network would yield two containers: “<proj>_web_1” and “<proj>_upstream_1.” Both of these names would be available on the network, and the NGINX config might explicitly reference “<proj>_upstream_1” as an upstream service. However, that container name only refers to a single instance of the upstream service. If we scale up, those names (while predictable) will present a proper “service discovery” problem.

docker-compose scale upstream=4
# Creates:
# <proj>_upstream_2
# <proj>_upstream_3
# <proj>_upstream_4

In this case, the NGINX configuration would need to anticipate these host names OR use some other resolution mechanism.
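
To make the gap concrete, here is a hypothetical app.conf along those lines; the <proj> prefix and the hardcoded replica count are exactly the brittleness being described (all names here are illustrative):

upstream app {
    server <proj>_upstream_1:80;
    server <proj>_upstream_2:80;
    server <proj>_upstream_3:80;
    server <proj>_upstream_4:80;
}
server {
    listen 80;
    location / {
        proxy_pass http://app;
    }
}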

If your goal is to resolve a service name to a collection of contributing host names, then Docker’s overlay networking will fall short. But again, I’d argue this is not really in scope for the project as defined today. This is an obvious gap that many people will experience and a great target for a label-like abstraction. This is a consistent and fast-moving project. I’m confident that this is a problem that will be addressed in a near generation.

In the meantime, I’m just thankful that all this cool stuff has landed in master.

If you learned something from this article and would like to support the development of more content like this, please consider picking up my book.


I'm a cofounder of Topple, a technology consulting, training, and mentorship company. I'm also a Docker Captain and a software engineer. https://gotopple.com