Using Dockerized Consul as dnsmasq backend

My internal network for development and staging has a lot of moving pieces, most of which are managed with Saltstack and deployed on VMs and in a local DC/OS cluster. The lingering challenge was how to update internal DNS dynamically, so I don’t have to edit mappings in dnsmasq every time I add, remove, reboot, or rebuild VMs and containerized services.

In production this wasn’t required, since I rely on commercial DNS and provisioning tooling from my various cloud providers (and, for hardware I manage directly, automation tooling like Fabric). That gave me some latitude to slap something together, but also an opportunity to improve things as well.

The approach I landed on, as my environment grew and my needs got a little more involved, was:

  1. On my DNS server (itself a containerized service), I run a service that identifies, for each node, the address to advertise through Consul: the address on the network I’d actually be accessing. (This isn’t always simple when hosts have multiple interfaces that don’t require exposure, and this is a neater, more canonical way of selecting the address actually assigned and used on the network that host is bound to.)
  2. When this address and the system hostname are pushed to Consul, Consul updates the backend used by dnsmasq, adding the mapping to the zone for my internal domain (for example, $HOST.boulder.gourmet.yoga).

Since all of my network VMs also run Docker (managed through Salt; many requirements in most state files are served as Docker images rather than as packages from an apt repo), Consul will be no exception.

On my DNS host, I run two containers: the primary Consul instance and the IP address ID service (a very simple Ruby application):

jmarhee@tonyhawkproskater2 ~/return_ip $ docker ps
CONTAINER ID   IMAGE                                     COMMAND                  CREATED         STATUS         PORTS                                                                                                                                        NAMES
b5b4dd6d86dc   progrium/consul                           "/bin/start -server -"   3 seconds ago   Up 2 seconds   10.0.1.13:8300-8302->8300-8302/tcp, 10.0.1.13:8400->8400/tcp, 53/tcp, 172.17.0.1:53->53/udp, 10.0.1.13:8500->8500/tcp, 10.0.1.13:8301-8302->8301-8302/udp   consul0
return_ip      registry.boulder.gourmet.yoga/return_ip   "/bin/sh -c 'ruby /ro"   2 minutes ago   Up 2 minutes   0.0.0.0:80->4567/tcp                                                                                                                         return_ip
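The IP address ID service itself isn't reproduced here, but the core idea can be sketched with nothing but the Ruby standard library. This is a hypothetical, minimal stand-in for the real app (presumably Sinatra, given the 4567 port); the name serve_return_ip is mine, not from the original:

```ruby
require "socket"

# Minimal sketch of an "IP echo" service: answer any HTTP request with
# the client's source address, which is exactly the address the node
# should advertise to Consul -- its address as seen on the shared network.
def serve_return_ip(server, requests: Float::INFINITY)
  served = 0
  while served < requests
    client = server.accept
    addr = client.peeraddr[3]        # remote IP, numeric, as seen by this host
    client.readpartial(1024)         # consume (and ignore) the request
    client.write("HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\n\r\n#{addr}")
    client.close
    served += 1
  end
end

# serve_return_ip(TCPServer.new("0.0.0.0", 4567))  # run forever on port 4567
```

Any node can then learn its own routable address with a plain `curl` against this service, with no interface-guessing logic on the node itself.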

I deploy Consul on the DNS node using a command like the following:

docker run --restart=unless-stopped -d -h consul0 --name consul0 \
  -v /mnt/consul:/data \
  -p 10.0.1.13:8300:8300 \
  -p 10.0.1.13:8301:8301 -p 10.0.1.13:8301:8301/udp \
  -p 10.0.1.13:8302:8302 -p 10.0.1.13:8302:8302/udp \
  -p 10.0.1.13:8400:8400 \
  -p 10.0.1.13:8500:8500 \
  -p 172.17.0.1:53:53/udp \
  progrium/consul -domain gourmet.yoga -dc=boulder -server \
  -advertise 10.0.1.13 -bootstrap-expect 12

and something like this on the client nodes:

ADDR=$(curl -s 10.0.1.13)
docker run --restart=unless-stopped -d -h consul-$(hostname) --name consul-$(hostname) \
  -v /mnt:/data \
  -p $ADDR:8300:8300 \
  -p $ADDR:8301:8301 -p $ADDR:8301:8301/udp \
  -p $ADDR:8302:8302 -p $ADDR:8302:8302/udp \
  -p $ADDR:8400:8400 \
  -p $ADDR:8500:8500 \
  -p 172.17.0.1:53:53/udp \
  progrium/consul -domain gourmet.yoga -dc=boulder -server \
  -advertise $ADDR -join 10.0.1.13

where `curl -s 10.0.1.13` calls out to the IP address ID service, which both tests connectivity and returns the address this node should advertise (its address as seen on the shared network). I specified the datacenter and domain parameters to override the default dc1.consul naming and match my scheme, boulder.gourmet.yoga.

To make use of the DNS feature, a service must be defined. In this case, since I’m basically looking to namespace a development environment, I’ll just declare a service called dev (without a port definition) and register it:
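A minimal sketch of such a registration (the JSON service-definition format is Consul’s; the file placement is an assumption):

```json
{
  "service": {
    "name": "dev"
  }
}
```

Dropped into the agent’s config directory, or registered against the running agent with `curl -X PUT -d '{"Name": "dev"}' http://10.0.1.13:8500/v1/agent/service/register`, this makes nodes in the service resolvable through Consul’s DNS interface.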

For example, I connected my workstation (`iampizza`) to this network, and when I check the members output from Consul, I see the following:

jmarhee@tonyhawkproskater2 ~/return_ip $ docker exec -it consul0 consul members
Node             Address         Status  Type    Build  Protocol  DC
consul-iampizza  10.0.1.10:8301  alive   server  0.5.2  2         boulder
consul0          10.0.1.13:8301  alive   server  0.5.2  2         boulder

So, as a result of adding this member, with the dc and domain specified above, the node should resolve as consul-iampizza.node.boulder.gourmet.yoga (and the dev service as dev.service.boulder.gourmet.yoga) when I go to query for the address through my DNS server.
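This can be checked against Consul’s DNS interface directly (here assuming the bridge address its port 53 is published on in the run command above):

```
dig @172.17.0.1 consul-iampizza.node.boulder.gourmet.yoga
dig @172.17.0.1 dev.service.boulder.gourmet.yoga
```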

Hooking this up to dnsmasq as a useful backend is a little more straightforward; I rebuilt my dnsmasq container with the addition of a link to the consul0 container.
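With the containers linked, the dnsmasq side can be as small as one line: forward queries for the internal domain to Consul’s published DNS port (treat the exact address as a sketch matching the run command above):

```
server=/gourmet.yoga/172.17.0.1#53
```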

From this point, the configuration on the DNS server side (if, for example, you use BIND rather than dnsmasq) follows the documentation just as it would if Consul were running locally rather than in a container.
