Deploying a self-hosted, free, publicly-certified Docker registry … the Docker way

“ If all you have is a hammer, everything looks like a nail.” — Abraham Maslow


Yes. I’m guilty of falling in love with the way things are achieved in Docker.

Needless to say, Docker is a great tool, and the thing I like most about it is the way it forces you to think about your deliverables while handling runtime dependencies.

Before it entered the scene, running production code was a little bit like carving things in stone: once something was working, taking it down and freeing up its resources without affecting the other services deployed on the same server was a pain at best and a nightmare at worst. Of course, you could resort to virtualizing your environment by other means (e.g. VMware or VirtualBox — virtual machines in general), but, let’s be honest, the resource cost of such a decision was downright prohibitive for most use-cases.

Having discovered it and seen its advantages, it’s not that hard to treat everything, as the old saying goes, as if it were a nail, and dockerize-the-shit-out of everything you produce as a software developer. I know the feeling, I’m there, and it does seem to work.

And so we have our nail …

For those of you who want to shortcut the journey that lies before us, you can clone my GitHub repo to see a working setup built for Docker Compose. Fire it up via the go.sh script, fill in the required fields, play with it, and if you ever feel the need for clarifications, feel free to return here. You might enjoy it. Oh, and don’t forget to set your DNS records straight before launching it, preferably. ;-)
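Speaking of DNS: a quick way to sanity-check that your records are in place (using the registry.acme.com subdomain we’ll assume throughout this article) would be:

dig +short registry.acme.com

which should print the public IP of the server you’re about to deploy on.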

The ‘nail’: Deploy your own domain/subdomain-routed, secured-for-free Docker registry using off-the-shelf Docker images

One of the first things you come across while learning Docker is its hub. The Docker Hub (or registry) is a place where you can publicly share your images for others to deploy or extend, but there may come a time when this isn’t enough. You may someday work on a project that has to be kept totally in-house, and that’s when you’ll want to host your own stuff.

To deploy your own Docker registry onto which you can host your own images, you simply have to follow the short and clear instructions on the official Docker Docs page.

The problem arises when you want to have your registry available via a registered domain for easy reference.

Let’s say that you want, for example, to have your images accessible at registry.acme.com. What then? You could deploy your registry locally, but how would you make it available at that address? Surely, you would need to redirect all traffic coming in on that subdomain to your registry container, inside which the registry listens, by default, on port 5000.

A good solution to this would be to use a reverse-proxy container. Basically, this container would listen on port 80 (http) and route all communications targeting registry.acme.com to the registry container.

Simple enough, there is an image that does just that: nginx-proxy. Once fired up, nginx-proxy monitors the local Docker daemon for container activity, and when it senses a container being created, it uses some of that container’s environment variables to generate a proxy configuration that gets wired into its internal nginx web server. Removing the container has the opposite effect: the configuration block handling that container’s assigned domain/subdomain is removed from the nginx setup.

So let’s see how we might go about doing just that.

First we need to fire up the nginx-proxy container:

docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy

The key point to mention here is the host’s docker.sock path, which is shared as a volume with the proxy container. This is how the container is able to monitor the host’s Docker daemon activity.
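If you’re curious, you can watch the proxy react to containers coming and going (a hypothetical one-liner, assuming this is the only container running off that image):

docker logs -f $(docker ps -q --filter ancestor=jwilder/nginx-proxy)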

With nginx-proxy running, we then go on and start the registry:

docker run -d -e VIRTUAL_HOST=registry.acme.com -e VIRTUAL_PORT=5000 registry:2
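Before going any further, we can sanity-check the routing straight from the host, without even relying on DNS (a minimal check, assuming the proxy is published on the host’s port 80 as above):

curl -si -H "Host: registry.acme.com" http://localhost/v2/

The Host header makes nginx-proxy route the request to the registry container, which should answer with an HTTP 200 on its /v2/ API root.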

And that’s it! We have a working, domain-referenced Docker registry, right?

Not quite.

Upon trying to push an image to registry.acme.com you would get a message similar to the following:

Get https://registry.acme.com/v1/_ping: x509 […]

The request reached something, because we got a response, so nginx-proxy is working. But what happened? One thing becomes immediately apparent: even though we reverse-proxied a plain HTTP service, the Docker client went looking for HTTPS (secured HTTP).

As it turns out from the docs, domain-routed registries are only possible if they are secured (hence the HTTPS request)!

So let’s secure it and try again …


As you might know, securing a web service has never been painless. It usually requires a paid subscription to a certificate authority, along with a rather tedious list of technical steps — and that’s only if you know the subject really well. Or at least, it used to …

Today, we have Let’s Encrypt! For those of you who don’t know, Let’s Encrypt is a free SSL/TLS certificate-issuing authority, and by free, I truly mean free, as in no-money-required-and-forever-like-this.

Using Let’s Encrypt, you can secure your web service for free. Better still, there is a companion Docker image for nginx-proxy that watches the nginx configs and generates the certificates whenever an HTTPS service is being defined. The container’s name is: letsencrypt-nginx-proxy-companion (really, this is its name!).

In order to use the companion service, we need to start the nginx-proxy container with some shared volumes so that the two containers can share the domain config files along with the certificates path. Also, we need to export the nginx/html path from within the container so that the companion app can access it during the standard Let’s Encrypt ownership check (basically, it needs to put some files there to prove that you are the owner of the domain/subdomain).

As part of the domain/subdomain owner-verification and certificate-generation procedure, the companion service requires all containers that want an HTTPS link routed to them to have both the LETSENCRYPT_HOST and LETSENCRYPT_EMAIL environment variables defined when fired up via docker run. We will see how to do that in just a moment …

For clarity, let’s say that we keep the domain configs (or virtual hosts) in /domains/vhost.d, while the certificates live in /domains/certs. Then, we would need to start the domain router (nginx-proxy) as follows:

docker run -d -p 80:80 -p 443:443 --restart=always \
-v /domains/certs:/etc/nginx/certs:ro \
-v /domains/vhost.d:/etc/nginx/vhost.d \
-v /usr/share/nginx/html \
-v /var/run/docker.sock:/tmp/docker.sock:ro \
--name=domain-router jwilder/nginx-proxy

As you can see, we also mapped port 443 to the host’s equivalent. That’s expected, since HTTPS runs on 443. We also named the container, since we need to reference it from the Let’s Encrypt companion when mounting/sharing its volumes.

Next, we start the letsencrypt companion container:

docker run -d --restart=always \
-v /domains/certs:/etc/nginx/certs:rw \
--volumes-from domain-router \
-v /var/run/docker.sock:/var/run/docker.sock:ro \
--name=domain-router-companion jrcs/letsencrypt-nginx-proxy-companion

Now we can go on and start the registry in a secure way:

docker run -d --restart=always \
-v /domains/certs:/certs \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/registry.acme.com.crt \
-e REGISTRY_HTTP_TLS_KEY=/certs/registry.acme.com.key \
-e VIRTUAL_HOST=registry.acme.com \
-e VIRTUAL_PORT=5000 \
-e VIRTUAL_PROTO=https \
-e LETSENCRYPT_HOST=registry.acme.com \
-e LETSENCRYPT_EMAIL=victor.adascalitei@acme.com \
--name=acme-registry registry:2

Note: the “always” restart policy is important here. Without it, the registry container will immediately die, since the companion service takes some time to generate the certificates, while the registry exits if it doesn’t find them at startup. This restart policy ensures that Docker keeps trying to start the registry until, eventually, the Let’s Encrypt companion finishes its job and the required certificates become available.
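While Docker keeps retrying, you can follow the companion’s progress (using the container name we chose above):

docker logs -f domain-router-companion

Once it reports the certificates as created, the next registry restart attempt should stick.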

Testing the registry is straightforward. A normal docker push should do it:

$ docker push registry.acme.com/my-first-image
The push refers to a repository [registry.acme.com/my-first-image]
e0878b9ea4f4: Pushing [=========================>] 29.31 MB/29.31 MB
872ce62aabbd: Pushing [=========================>] 44.73 MB/44.73 MB
f1f1b72da69a: Pushed
8e331471d477: Pushed
a05ad5eac50b: Pushing [=========================>] 46.82 MB/46.82 MB
...

Eureka! We have it working! Well … not quite. I wasn’t completely honest with you when I printed that last console output. In truth, while running the command, the push exited with the following message:

...
60a0858edcd5: Pushing [> ] 468 kB/44.31 MB
b6ca02dfe5e6: Waiting
error parsing HTTP 413 response body: invalid character '<' looking for beginning of value: "<html>\r\n<head><title>413 Request Entity Too Large</title></head>\r\n<body bgcolor=\"white\">\r\n<center><h1>413 Request Entity Too Large</h1></center>\r\n<hr><center>nginx/1.11.8</center>\r\n</body>\r\n</html>\r\n"

The reason we get this error message is the size of the image chunk we’re pushing to the registry. It’s too big. But the rejection did not come from the registry container, as you might expect. Instead, it was generated by the proxy container, due to the way it writes its config files.

Indeed, checking the proxy’s container configuration with a

docker exec -ti domain-router cat /etc/nginx/conf.d/default.conf

yields something similar to

# registry.acme.com
upstream b1a161f54e3a916db6dcddb6cdca7953830a7579 {
    # acme-registry
    server 172.19.0.6:5000;
}

server {
    server_name registry.acme.com;
    listen 80;
    access_log /var/log/nginx/access.log vhost;
    return 301 https://$host$request_uri;
}

server {
    server_name registry.acme.com;
    listen 443 ssl http2;
    access_log /var/log/nginx/access.log vhost;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';
    ssl_prefer_server_ciphers on;
    ssl_session_timeout 5m;
    ssl_session_cache shared:SSL:50m;
    ssl_session_tickets off;
    ssl_certificate /etc/nginx/certs/registry.acme.com.crt;
    ssl_certificate_key /etc/nginx/certs/registry.acme.com.key;
    ssl_dhparam /etc/nginx/certs/registry.acme.com.dhparam.pem;
    add_header Strict-Transport-Security "max-age=31536000";

    location / {
        proxy_pass https://b1a161f54e3a916db6dcddb6cdca7953830a7579;
    }
}

To fix this, we have to remove the cap on the maximum upload body size that a client can submit to this server address. Later on, I found that you could also get a 411 HTTP response, as described in issue #1486. We fix both using the following nginx directives:

client_max_body_size 0;
chunked_transfer_encoding on;

To inject them into the proxy server block, where they belong, we have to follow the nginx-proxy convention and create a file named registry.acme.com in /domains/vhost.d.
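For instance, a minimal way of doing just that, assuming the /domains/vhost.d path we picked earlier:

cat > /domains/vhost.d/registry.acme.com <<'EOF'
client_max_body_size 0;
chunked_transfer_encoding on;
EOF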

Restarting the nginx-proxy container via a docker restart command should apply them.
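With the container named domain-router, as above, that’s simply:

docker restart domain-router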

If we now re-run the previous docker push command to test the registry, it will work — this time — as expected: the image will (hopefully) get pushed to the registry.

Before we wrap up, let’s tackle a final issue, just for the sake of completeness. I’m talking here about access control.


Now that we have our registry accessible via a web address, we should also protect it with a username and password.

To generate the credentials, we first have to make an authentication directory, which we will share with the registry container (we will name it auth — with its absolute path set to /domains/auth — for simplicity), and then run

docker run --entrypoint htpasswd registry:2 -Bbn username password > /domains/auth/htpasswd

providing our own username and password. This will generate an htpasswd file in the auth folder, containing the hashed credentials with which to access the registry.

With the credentials in place, we then go on and start the registry — after previously stopping it, of course — as follows (the new, auth-related lines come first, so they’re easier to spot):

docker run -d --restart=always \
-v /domains/certs:/certs \
-v /domains/auth:/auth \
-e REGISTRY_AUTH=htpasswd \
-e "REGISTRY_AUTH_HTPASSWD_REALM=ACME Docker Registry Realm" \
-e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/registry.acme.com.crt \
-e REGISTRY_HTTP_TLS_KEY=/certs/registry.acme.com.key \
-e VIRTUAL_HOST=registry.acme.com \
-e VIRTUAL_PORT=5000 \
-e VIRTUAL_PROTO=https \
-e LETSENCRYPT_HOST=registry.acme.com \
-e LETSENCRYPT_EMAIL=victor.adascalitei@acme.com \
--name=acme-registry registry:2

If you have followed the official docs, you will have noticed that my run script pretty much matches the one present there.

And indeed, if you were to go on and log in like so

docker login -u username -p password registry.acme.com 

you would see a Login Succeeded message. Don’t be fooled, though! If you then continued with, for instance, a docker push, you would get an authorization required message.

What gives?

The problem here lies in the way the proxy container passes the authorized request along to the registry container. Although it does forward it, it doesn’t preserve the reference to the authorized communication channel, and since image transactions occur in chunks, it tries to reinitialize the link after each processed fragment, logging the user out in the process.

To fix this final issue, we have to pass the request fully on to the registry container by updating the location directive in the proxy container.

To do this, again following nginx-proxy’s way of doing things, we would have to write

proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_read_timeout 900;

inside a file named registry.acme.com_location and place it in /domains/vhost.d.
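As before, a quick sketch for dropping that file in place (the quoted 'EOF' keeps the $variables literal, which is what nginx expects):

cat > /domains/vhost.d/registry.acme.com_location <<'EOF'
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_read_timeout 900;
EOF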

After this, restart the proxy container and now … finally … everything will work as expected: docker login will log you in, and docker push (or any other sub-command) will do what it should.

And that’s it — for real — !

This concludes our journey into building a domain-referenced, secured registry using only off-the-shelf Docker material & a little bit of glue to hold things in place. This wasn’t such a hard nail to hit now, was it? :-)