Docker Networks: Discovering Services on an Overlay

TL;DR Overlay networks make every container a first-class host on the network, with its own IP address. Current service discovery tools are not built for registering or discovering individual containers. If we want to elastically scale components within an overlay, we need new (simpler) service registration and discovery tools.

--

Last week I wrote an article introducing overlay networks in Docker 1.9. In this article I want to talk about what this means for your inter-container communication architecture, how it might impact deployments, the things it simplifies, and the new challenges that it introduces. You can find the introduction here:

In brief, as of Docker 1.9 you can attach containers to an overlay network instead of a local Docker bridge or the host’s interfaces directly. Each container on the overlay network has its own IP address discoverable by container name via /etc/hosts. This is an evolution of container linking that provides known name dependencies in Swarm clusters without a host colocation requirement. If you know the name of the container you can discover that container on the network. Now that you’re caught up…
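
To make this concrete, here is a minimal sketch. It assumes your Docker engines are already configured for multi-host networking, and the image and container names are hypothetical:

    # Create an overlay network and attach two containers to it by name.
    docker network create -d overlay my-stack
    docker run -d --name api --net my-stack my-api-image
    docker run -d --name proxy --net my-stack my-proxy-image

    # From inside "proxy", the name "api" resolves to its overlay IP via /etc/hosts.
    docker exec proxy ping -c 1 api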

Same Problems, New Context, New Tooling Required

The problems are service registration and discovery: how do I advertise that I contribute a service, and how do I find all of the nodes that contribute a particular service? Deployments were slow and infrequent in the time before cluster computing, and we solved the problem then with DNS and tools like Bind. When cluster computing started heating up, it was solved with DNS, custom protocols, or shared access to common databases and tools like Chubby. These generalized into open source offerings like Zookeeper, Consul, and Etcd. In the middle of the container and Docker revolution, Jeff Lindsay gave us Registrator, which pushed generalized service registration into the infrastructure layer.

Now we find ourselves wanting to build full stacks on overlay networks for isolation. These stacks look like one or two reverse proxy containers that route and balance load across several containers, each contributing an individual but redundant service instance. Overlay networking uses KV stores under the covers to model the network topology and enable cross-host, container-to-container communication. It does not provide SRV record resolution. The gut reaction might be, “Let’s use the same tools (Consul, Registrator, etc.) inside of the overlay!” And while I appreciate the enthusiasm, that is just not how the tools work. That class of tool is designed to solve all of the hard problems with clustering, not specifically service discovery and registration.
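
That key-value dependency shows up when you start the engine itself. A sketch of the Docker 1.9 daemon flags, assuming a Consul store reachable at consul-host:8500 (the store address and interface name are placeholders):

    # Wire the daemon to a key-value store so overlay networks can be
    # modeled across hosts (addresses here are placeholders).
    docker daemon \
      --cluster-store=consul://consul-host:8500 \
      --cluster-advertise=eth0:2376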

Existing tools implement clustering so you don’t have to.

These tools all work by registering processes that run on a single host/node/IP address with a local agent or cluster node, which in turn advertises to its cluster peers that the node offers service X on port Y. Since the cluster peers know the IP address of the advertising node, there is no need to redundantly communicate the network location. The rub is that in an overlay network every container has its own IP address. So the only way you could make this work is by running a Consul agent inside of every container on the network that contributes a service. That is certainly not transparent to the developer, or compatible with off-the-shelf images. Registrator makes no sense in an overlay network either: it is loosely coupled with the Docker daemon event stream, and the Docker daemon is host-specific, not container-specific (unless you’re doing some Docker-in-Docker thing).
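
To see why the model breaks, consider what a typical registration looks like. The sketch below uses a Consul-style service definition; notice that it carries no address at all, because the local agent advertises its own host IP for the service:

    # A Consul-style service definition (sketch). There is no address field;
    # the agent's own host IP is assumed, which is exactly the assumption
    # that fails when every container has its own overlay IP.
    echo '{ "service": { "name": "web", "port": 8080 } }' > /etc/consul.d/web.json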

What is Available?

I’ve been digging for a solution over the past few days. My criteria for an ideal solution include:

  • relatively lightweight (I don’t need/want clustering, heart-beating, or health-checking)
  • reverse-proxy integrations exist
  • provides service discovery via DNS
  • provides service registration via nsupdate or a REST interface (sketched below)
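
To illustrate that last criterion, registration against such a DNS server might be nothing more than a dynamic update adding an SRV record. The server name, zone, key file, and record names below are hypothetical:

    # Hypothetical registration via nsupdate: add an SRV record announcing
    # that this container serves "api" on port 8080.
    printf '%s\n' \
      'server dns.my-stack' \
      'update add _api._tcp.my-stack.local. 30 SRV 0 5 8080 api-1.my-stack.local.' \
      'send' | nsupdate -k /etc/dns/update.key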

So far, my favorite partial solution is Kong by Mashape. It is a wrapper around NGINX that provides an API for service registration (among several other cool features). I say it is a partial solution because you cannot yet register multiple upstreams for a single route.

If you used Kong, your registration would look like a single cURL command run as part of your entrypoint, or as a sidekick script polling the service and registering when healthy. Either way, you’re looking at one or two lines of shell.
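
Something along these lines would do it. The admin address (kong:8001), the /health endpoint, the route path, and the field names are assumptions and vary across Kong versions, so treat this as a sketch rather than a recipe:

    # Sidekick-style registration against Kong's admin API (sketch):
    # wait until the local service answers, then register this container's
    # overlay IP as the upstream for the "orders" route.
    until curl -fs http://localhost:8080/health > /dev/null; do sleep 1; done
    curl -s -X POST http://kong:8001/apis/ \
      -d "name=orders" \
      -d "request_path=/orders" \
      -d "upstream_url=http://$(hostname -i):8080"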

Running a DNS server would work. There are a few of those out there. The features you’d need are pretty sparse relative to those provided by tools like Bind.

My search has yielded disappointing results. In all honesty, I’d love to pick up miekg/dns and build something to serve SRV records. Ping me if you’d be interested in connecting and putting some OSS together on GitHub. On the other hand, if you know of an open source project that you think is simple to use and would fit these criteria, respond, highlight, comment, or whatever. I’m sure others would be interested in the insight.

These are Two Sides of an Inflection Point

The tools for service registration and discovery are still critical. The entrypoints to your large-scale server software still need to bind to the internet at some point. That point is the host-to-container port mapping. If you’re using an on-overlay reverse proxy, that proxy’s service port will need to be bound to its host’s network interface to be reachable. That is the inflection point where the existing tools take over, and perhaps integrate with internet-accessible load balancers or DNS systems.
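
In Docker terms, that binding is just an ordinary published port on the proxy container. A sketch, reusing the hypothetical names from earlier:

    # Publish the on-overlay proxy's service port on its host's interface so
    # host-level tooling (load balancers, DNS, existing discovery) can reach it.
    docker run -d --net my-stack -p 80:80 my-proxy-image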

You might ask, “If we need this anyway, why use overlay networks at all?”

Well, you can certainly get away without using them. But I think overlay networks are a big deal because:

Overlay networks allow us to both logically group related containers and isolate unrelated container groups, in a way that is agnostic to both the application and the infrastructure. Without overlay networks we are stuck exposing service points for all of our containers on the host interface, and reverse DNS lookups are useless.

Does overlay networking make sense for my use-case?

Maybe. You know your use-case better than anyone, and overlay networking is one more neat piece in your big Lego bin of tools.

Overlay networking makes the most sense when your use-case requires container-to-container communication and you want to distribute your containers over a shared resource pool (like a Swarm cluster).

Remember, you can combine overlay networking with shared network namespaces and host exposed ports. In fact you still need host port mapping to expose any service in an overlay network to the outside world.

Go try things, break stuff, and iterate. Remember, if you’d like to develop a more complete understanding of how to use Docker and support the development of more content like this, please consider picking up my book, Docker in Action.

--


Jeff Nickoloff
On Docker

I'm a cofounder of Topple, a technology consulting, training, and mentorship company. I'm also a Docker Captain and a software engineer. https://gotopple.com