Some Thoughts on Running a Local Docker Registry

I’ve written about managing my local development environment before, but I want to go into a little more detail about my local Docker registry, where I store images while I’m testing and staging, before I deploy an app into whatever environment it ultimately needs to run in.

The reason I’d like to highlight this piece of my environment in particular is that any application relying on a large number of containerized services on each deploy (not to mention when those services are ingested through a testing and integration pipeline) would likely benefit from a private, local-to-the-environment registry to provide those images. The registry itself also integrates easily into a testing pipeline (a topic for another post, but it plugs into common test and CI suites like Jenkins without much trouble).

When I deploy a registry to a private network to support an application (or network of applications) on a public cloud, I usually rely on tools like Terraform and cloud-init to do the heavy lifting.

My local environment is a different story, though: some tools (Terraform, in particular) don’t have super convenient support for things like libvirt without some additional legwork (it does have an OpenStack provider, however!), and applying the same configuration requires other tools that may, or may not, be common to both environments. So, where I usually prefer to keep provisioning and configuration management separate, things I might otherwise off-load onto provisioning tools get re-worked into manageable pieces of state for configuration management (in this case, SaltStack) to handle.

My strategy here is by no means ideal, but because I am, for the most part, an individual contributor on many of my projects, my tolerance for less-than-best practices is pretty high. Still, the logic underlying the implementation may make some sense for you.

My registry just uses the Docker-provided registry image:

https://hub.docker.com/_/registry/
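
Once it’s running, using it from any Docker host on my network just means tagging and pushing against the registry’s hostname (which you’ll see later in this post); for example, with my-app standing in as a placeholder image name:

docker tag my-app:latest registry.boulder.gourmet.yoga/my-app:latest
docker push registry.boulder.gourmet.yoga/my-app:latest
docker pull registry.boulder.gourmet.yoga/my-app:latest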

This container runs on a VM specifically designated as a registry on my network. As I’ve written about before, this VM is managed in SaltStack, and a few additional states extend the base state that gets applied to all of my network VMs.

It extends a base state file that does things like configure my wheel user (i.e. jmarhee on nodes with roles like registry or docker, anywhere I, myself, am the primary user, vs. a local git server where the relevant user might be git), add my public key, and configure my /etc/sudoers.d/users file.
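
A minimal sketch of what that base state might contain (my actual state file differs; the key source and sudoers file source below are placeholders):

jmarhee:
  user.present:
    - shell: /bin/bash
    - groups:
      - wheel

jmarhee_key:
  ssh_auth.present:
    - user: jmarhee
    - source: salt://files/jmarhee.pub

/etc/sudoers.d/users:
  file.managed:
    - source: salt://files/sudoers_users
    - user: root
    - group: root
    - mode: 440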

I use a grain called role to match states to VMs in my Salt top file, so in this case:

jmarhee@tonyhawkproskater2 /srv/salt $ sudo salt -C "G@role:docker_registry" grains.item role
registry.boulder.gourmet.yoga:
    role:
        - docker
        - docker_registry
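
For reference, the matching section of the top file looks something like this (a rough sketch, assuming state files named docker and docker_registry to match the roles):

base:
  'role:docker_registry':
    - match: grain
    - docker
    - docker_registry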

You’ll see two roles (and, therefore, two states) applied, but only the latter is relevant to the registry (the docker state just installs Docker, and pre-authenticates against my registry on nodes with this role):

sync registry_cron:
  file.managed:
    - name: /home/jmarhee/docker_cron
    - source: salt://files/docker_cron
    - user: jmarhee
    - group: jmarhee
    - mode: 600

check_and_add_cron:
  cmd.script:
    - require:
      - file: /home/jmarhee/docker_cron
    - source: salt://files/load_cron.sh
    - user: jmarhee
    - group: jmarhee
    - shell: /bin/bash

This basically just does two things: it syncs a text file containing all of the cron jobs I’d like run on the VM, and then runs a script to actually load them into the VM’s crontab. In this case, those jobs include a script that pulls down the registry container and runs it, and which also contains logic to back the container itself up to the registry periodically.
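
load_cron.sh itself isn’t shown here, but the idea is simple enough; a minimal sketch, assuming docker_cron contains one cron entry per line, might be:

#!/bin/bash
# Merge the synced cron entries into the current user's crontab,
# de-duplicating so repeated runs stay idempotent.
CRON_FILE=/home/jmarhee/docker_cron

( crontab -l 2>/dev/null; cat "$CRON_FILE" ) | sort -u | crontab -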

I mention this seemingly recursive strategy because I also have a volume mounted into the container so the data persists across reboots, and that volume itself gets backed up nightly. It contains the registry data, certificates, etc.
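
For example, a nightly backup entry in docker_cron might look something like this (the data and backup paths below are placeholders, not my actual layout):

# 2am nightly: archive the registry volume data and certs
0 2 * * * tar czf /home/jmarhee/backups/registry-data-$(date +\%F).tar.gz /home/jmarhee/registry-data /home/jmarhee/certs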

In order to keep everything up to date in DNS, so the registry remains reachable, I run two containers: the registry, and a Consul container that connects to my DNS server’s backend on my network to keep the address in sync with the hostname. If I were to forgo managing these as services in Docker Compose and run them directly (which is what I’ll show here for the sake of simplicity), the commands would look something like:

docker run -d -p 443:5000 --restart=always --name registry \
  -v /home/jmarhee/certs:/certs \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/registry.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/registry.key \
  registry:2

docker run -d --restart=unless-stopped -h consul-$(hostname) --name $(hostname) \
  -v /mnt:/data \
  -p $(curl 10.0.1.13):8300:8300 \
  -p $(curl 10.0.1.13):8301:8301 \
  -p $(curl 10.0.1.13):8301:8301/udp \
  -p $(curl 10.0.1.13):8302:8302 \
  -p $(curl 10.0.1.13):8302:8302/udp \
  -p $(curl 10.0.1.13):8400:8400 \
  -p $(curl 10.0.1.13):8500:8500 \
  -p 172.17.0.1:53:53/udp \
  progrium/consul -server -advertise $(curl 10.0.1.13) -join 10.0.1.13
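
For reference, the registry half of the equivalent Docker Compose definition might look roughly like this (a sketch, not my actual Compose file; the Consul service is omitted because Compose can’t do the $(curl ...) command substitution shown above, so its advertise address would have to come from an environment variable instead):

version: "2"
services:
  registry:
    image: registry:2
    restart: always
    ports:
      - "443:5000"
    volumes:
      - /home/jmarhee/certs:/certs
    environment:
      REGISTRY_HTTP_TLS_CERTIFICATE: /certs/registry.crt
      REGISTRY_HTTP_TLS_KEY: /certs/registry.key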

A little bit more about how (and why) this Consul-based DNS sync is accomplished is detailed here:

The benefit of a) dockerizing these services in the first place, b) managing their deployment in a build and release pipeline, and c) managing hosts in this manner is that it makes a somewhat inflexible setup much more disposable.

By keeping redundant copies of the registry volume (and other such volume data), I can replace the registry at will and retain my data, and it makes both the VMs and the containerized services replaceable and re-deployable with relatively little involvement beyond the automation. So, for example, in the event of a failure, I might have the container rebuild itself from whatever source (Dockerfile, Compose file, etc.), or if a VM fails, I might have libvirt kill the VM and repave it from configuration management (which reattaches the passed-through data volumes from the hypervisor).
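
A rough sketch of that repave flow, with placeholder VM and minion names (my actual tooling around libvirt is a bit more involved):

# Tear down the failed VM...
virsh destroy registry-vm
virsh undefine registry-vm
# ...re-create it from a base image/template, then re-apply its states:
salt 'registry*' state.highstate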

The above are just a couple of the ways I’ve made my local environment a little easier to work with, and its provisioning and management a little more automated and flexible, so I don’t have to deal with things like runaway pay-as-you-go fees for VMs I might’ve forgotten about, while sacrificing some (but definitely not all) of the ease of that experience.