Docker Universal Control Plane — Service Discovery

Alex Rhea
Published in Alex’s Blog
Dec 22, 2016

Service discovery remains one of the open problems inside Docker’s Universal Control Plane. Service discovery is the process by which randomly generated container ports are mapped to domain names at a proxy or load-balancer layer. Implementations often fall back to a combination of Consul/etcd/ZooKeeper and Registrator: new containers are registered in a key-value store, and a specially crafted container image generates a proxy config from it. While these solutions achieve the same goal, the extra moving pieces add additional complexity, maintenance, and operational knowledge.

Interlock is an open source project from the UCP team that listens to the Docker Swarm event stream, registers and removes containers in an Nginx/HAProxy configuration, and then reloads that configuration into a side-loaded instance of Nginx/HAProxy.

Building

The first step is to roll our own Interlock image. We do this so that we have our own copy of the Interlock image and can bake the configuration directly into it, reducing the configuration needed at deployment time. Below is the launch.sh script that we will reference in a later Dockerfile. This launch script uses Mozilla’s SSL config generator to provide input to the Interlock configuration.
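The original script is not reproduced here, but a minimal sketch along those lines might look like the following. The Interlock config keys (`DockerURL`, `[[Extensions]]`, `SSLCiphers`), the `SWARM_HOST` and `CONFIG_PATH` variable names, and the hard-coded cipher list (as produced by Mozilla’s SSL config generator) are assumptions, not the exact original:

```shell
#!/bin/sh
# Sketch of a launch.sh as described above -- the config layout,
# variable names, and cipher list are assumptions, not the original.

# Cipher suite as produced by Mozilla's SSL config generator
# ("intermediate" profile); baked into the script so no network
# call is needed at container start.
SSL_CIPHERS="ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384"

# UCP controller endpoint; overridable at deploy time.
SWARM_HOST="${SWARM_HOST:-tcp://ucp-controller:2376}"

# Where the generated Interlock config lands. The real image would use
# a fixed path such as /etc/interlock/config.toml; a relative default
# keeps this sketch runnable anywhere.
CONFIG_PATH="${CONFIG_PATH:-./config.toml}"

# Render the Interlock configuration, pointing it at the Swarm and at
# the UCP node certs mounted under /certs.
cat > "$CONFIG_PATH" <<EOF
ListenAddr = ":8080"
DockerURL = "${SWARM_HOST}"
TLSCACert = "/certs/ca.pem"
TLSCert = "/certs/cert.pem"
TLSKey = "/certs/key.pem"

[[Extensions]]
Name = "nginx"
TemplatePath = "/etc/interlock/nginx.conf.template"
SSLCiphers = "${SSL_CIPHERS}"
EOF

echo "wrote Interlock config to ${CONFIG_PATH}"

# Finally, hand off to the Interlock binary (commented out here so the
# sketch can be run standalone):
# exec interlock -D run -c "$CONFIG_PATH"
```

Baking the cipher list in at build time, rather than calling out to the generator on every start, keeps container startup deterministic and offline-friendly.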

Next we specify the customized Dockerfile we will use to build our Interlock image. We start with Interlock as the base image, install some base utilities, copy over a custom Nginx template, copy our launch script, and make it the startup command.
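A sketch of such a Dockerfile is below. The base image tag, package manager, and file paths are assumptions (the upstream `ehazlett/interlock` image is Alpine-based), not the exact original:

```dockerfile
# Sketch of the customized Dockerfile described above; the image tag
# and paths are assumptions.
FROM ehazlett/interlock:1.3.0

# Install a few base utilities
RUN apk add --no-cache bash curl

# Custom Nginx template that Interlock renders on each Swarm event
COPY nginx.conf.template /etc/interlock/nginx.conf.template

# Launch script that generates the Interlock config, then starts Interlock
COPY launch.sh /usr/local/bin/launch.sh
RUN chmod +x /usr/local/bin/launch.sh

# Make the launch script the container's startup command
ENTRYPOINT ["/usr/local/bin/launch.sh"]
```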

Deploying

Now that we have our base image built and pushed to a registry, we will deploy it using Docker Compose. We need two containers deployed to our proxy nodes: one Interlock instance and one Nginx instance. We mount the ucp-node-certs volume, which is present on all UCP cluster nodes, at the /certs mount point so that Interlock can communicate with the Swarm. We also specify our Swarm host: by default this will be your UCP controller hostname on port 2376. On the Nginx container we specify a custom label that tells Interlock to load the generated configuration into this instance.
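A compose file along those lines might look like the following sketch. The registry hostname, controller hostname, and the exact `interlock.ext.name` label key are assumptions for illustration:

```yaml
# Sketch of the Compose deployment described above; image names,
# hostnames, and the label key are assumptions.
version: "2"

services:
  interlock:
    # The custom Interlock image built earlier
    image: myregistry.example.com/interlock:latest
    environment:
      # UCP controller hostname; 2376 is the default Swarm port
      SWARM_HOST: tcp://ucp-controller.example.com:2376
    volumes:
      # ucp-node-certs is present on every UCP cluster node and lets
      # Interlock authenticate to the Swarm
      - ucp-node-certs:/certs:ro
    restart: always

  nginx:
    image: nginx:stable-alpine
    ports:
      - "80:80"
      - "443:443"
    labels:
      # Custom label telling Interlock to load the generated
      # configuration into this instance
      - "interlock.ext.name=nginx"
    restart: always

volumes:
  # Pre-existing volume managed by UCP on each node
  ucp-node-certs:
    external: true
```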

Running in Production

Interlock and Nginx provide a great, simple solution for service discovery, but there are a few gotchas to watch out for. Occasionally Interlock becomes disconnected from the Swarm event stream; a simple restart generally resolves the issue. I also recommend placing a TCP load balancer in front of your Nginx instances for high availability and to reduce the direct attack surface on the cluster.

Have Further Questions?

If you have any questions or comments, feel free to reach me via Twitter!
