Running Docker Enterprise 2.1 on Digital Ocean — Part 3

André Fernandes
Nov 22, 2018 · 9 min read

In the first part of this series we learned how to set up a new Docker Enterprise 2.1 cluster on DigitalOcean with Terraform and Ansible.

In the second part we configured Docker Enterprise to automatically provision block volumes and load balancers on DigitalOcean.

In this article we will discuss a few ways to deal with HTTPS endpoints and certificates.

What You Need

Make sure you have completed the first and second parts of this series and that your current shell is configured to connect to the remote cluster (by running the client bundle's "env.sh" script).

We will also need the DigitalOcean CLI (Command Line Interface), known as doctl.
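If doctl is not configured yet, you can set it up and check the connection like this (assuming you already have a DigitalOcean API token at hand):

```shell
# Store the API token in doctl's local config (you will be prompted for it)
doctl auth init

# Verify that the token works by fetching the account details
doctl account get
```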

About DigitalOcean's Let's Encrypt Integration

Both Steps 1 and 2 below are a lot simpler (almost reduced to a few mouse clicks) if the top-level domain involved (mycompany.com) is managed by DigitalOcean itself. These steps describe how to create a wildcard certificate with Let's Encrypt and how to configure the DigitalOcean load balancer for HTTPS termination; both can be done automatically by DigitalOcean as described in this article.

In my case the top-level domain is hosted on AWS, where a subdomain was defined and delegated to DigitalOcean. DigitalOcean's Let's Encrypt integration does not work in this scenario, so here we go.

Step 1: Wildcard Certificate

In Part 1 we created an HTTPS certificate for the Docker UCP hostname before installing UCP. That was very straightforward: we ran Certbot from the very host that answered for "ucp.devops.mycompany.com", which is the natural workflow for Certbot's ACME protocol.

We cannot do that again because the ports this requires are now taken by UCP itself. But do not despair: there is a workaround. When you control the DNS (and we do own our domain/subdomain in DigitalOcean) you can negotiate the certificates with the Certbot servers in a different way.

Important: open a new shell for this. We are running this command against your local Docker engine, so we need a shell that DID NOT run the UCP bundle "env.sh" script. The certificates will be written to a "letsencrypt" folder in the current directory.

The command below tells the Certbot servers to use the DNS challenge (remember to use the domain and e-mail from before):

docker run --rm -ti \
  -v $(pwd)/letsencrypt:/etc/letsencrypt \
  certbot/certbot certonly --agree-tos \
  -d "*.apps.devops.mycompany.com" \
  --preferred-challenges=dns --manual \
  --email=admin@example.com

Answer "Yes" to the first question and wait for a message like the one below:

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Please deploy a DNS TXT record under the name
_acme-challenge.apps.devops.mycompany.com with the following value:
NjeNlv9Gxcz...............JxTOgHjqzM

Before continuing, verify the record is deployed.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Press Enter to Continue

Do not press <Enter> yet! First you must edit your domain in DigitalOcean ("Networking/Domains/<your domain>"), select "TXT" and create a new record named "_acme-challenge.apps" with the value Certbot printed.
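If you prefer to stay on the command line, the same TXT record can be created with doctl. This is only a sketch: the domain is the one delegated to DigitalOcean and the record value is whatever Certbot printed (shown here as a placeholder):

```shell
# Create the ACME challenge TXT record on the delegated subdomain
doctl compute domain records create devops.mycompany.com \
  --record-type TXT \
  --record-name _acme-challenge.apps \
  --record-data "<value-printed-by-certbot>" \
  --record-ttl 60
```

A short TTL keeps a stale challenge value from lingering in DNS caches.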

After this TXT record is created, the Certbot servers can verify that you own the domain, so you may finally press <Enter> on the previous screen. The output will look like this:

Waiting for verification...
Cleaning up challenges
IMPORTANT NOTES:
- Congratulations! Your certificate and chain have been saved at:
/etc/letsencrypt/live/apps.devops.mycompany.com/fullchain.pem
Your key file has been saved at:
/etc/letsencrypt/live/apps.devops.mycompany.com/privkey.pem
...

Check the contents of the "letsencrypt/live" folder: there is a new "apps.devops.mycompany.com" folder containing the certificates for the wildcard domain.
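If you have openssl installed you can also inspect the leaf certificate directly and confirm the wildcard subject and the expiry date:

```shell
# Print the certificate subject and its validity window
openssl x509 \
  -in letsencrypt/live/apps.devops.mycompany.com/fullchain.pem \
  -noout -subject -dates
```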

You can use Certbot again to verify the certificates received:

docker run --rm -ti \
-v $(pwd)/letsencrypt:/etc/letsencrypt \
certbot/certbot certificates

Output should contain this:

...
Found the following certs:
Certificate Name: apps.devops.mycompany.com
Domains: *.apps.devops.mycompany.com
Expiry Date: 2019-02-19 06:31:15+00:00 (VALID: 89 days)
Certificate Path: /etc/letsencrypt/live/apps.devops.mycompany.com/fullchain.pem
Private Key Path: /etc/letsencrypt/live/apps.devops.mycompany.com/privkey.pem
...

How cool is this?

Remember we had a working load balancer for this very same wildcard domain? We now have two options:

  • Use the load balancer itself for HTTPS termination (cluster gets only HTTP)
  • Load balancer passes-through HTTPS traffic to the cluster (cluster does HTTPS termination)

Step 2: Load Balancer Does HTTPS

The first thing in this step is to manually change the DigitalOcean load balancer we just created so that it handles HTTPS termination by itself (i.e. no HTTPS traffic hits the cluster at all).

Let's make the certificate created in the previous step available to DigitalOcean. The tool for the job is doctl:

doctl compute certificate create \
  --private-key-path letsencrypt/live/apps.devops.mycompany.com/privkey.pem \
  --certificate-chain-path letsencrypt/live/apps.devops.mycompany.com/fullchain.pem \
  --leaf-certificate-path letsencrypt/live/apps.devops.mycompany.com/fullchain.pem \
  --name apps-devops

If doctl isn't available you can upload the certificate manually instead. On DigitalOcean look for "Account/Security/Certificates", click "Add certificate" and then click "Custom":

New custom certificate

It is a bit clunky but you'll have to copy and paste the contents of the certificate files on "letsencrypt/live/apps.devops.mycompany.com":

  • Name: apps-devops
  • Certificate: paste in the content of "fullchain.pem" file
  • Private key: paste in the content of “privkey.pem” file
  • Certificate chain: paste in the content of “fullchain.pem” file
New certificate (file contents pasted)

Save this new certificate.

Now let us use it on the load balancer we already have.

Important: you should never have to fiddle with the load balancer manually the way we are about to do. Remember: this load balancer was set up automatically by the CCM; if it needs reconfiguring, the CCM should do it. We will get to that later, but first we will mess around recklessly since nobody is looking.

Under "Manage/Networking/Load balancers/<your load balancer>/Settings", change the forwarding rule for port 443 so that it performs HTTPS termination with the certificate you just supplied. The forwarding (target) port for 443 must also be the same as the one for port 80 (32919 in the picture below, surely something else in your own case).

HTTPS termination in the load balancer
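For the record, the same change can be scripted with doctl instead of the web UI. This is only a sketch with placeholder values (id, name, region, node port and certificate id), and note that the update replaces the load balancer's whole forwarding-rule set:

```shell
# Find the load balancer id and current name
doctl compute load-balancer list --format ID,Name,IP

# Rewrite the forwarding rules: HTTPS terminates at the load balancer and
# both ports forward plain HTTP to the same NodePort on the cluster
doctl compute load-balancer update <load-balancer-id> \
  --name <load-balancer-name> \
  --region <region> \
  --forwarding-rules "entry_protocol:http,entry_port:80,target_protocol:http,target_port:32919 entry_protocol:https,entry_port:443,target_protocol:http,target_port:32919,certificate_id:<certificate-id>"
```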

Save the changes and test them with two curl commands, which should produce similar results:

curl http://cafe.apps.devops.mycompany.com
Server address: 192.168.175.76:80
Server name: coffee-7dbb5795f6-k7ffn
Date: 21/Nov/2018:11:01:37 +0000
URI: /
Request ID: 7a57d37fa0a90858961bc16d91fbb641
curl https://cafe.apps.devops.mycompany.com
Server address: 192.168.175.75:80
Server name: coffee-7dbb5795f6-zsngr
Date: 21/Nov/2018:11:02:24 +0000
URI: /
Request ID: ed77771e9f05531ad3357e4ef7da57ec

Notice the same behavior with the other application:

curl http://tea.apps.devops.mycompany.com
Server address: 192.168.175.77:80
Server name: tea-7d57856c44-qdrm2
Date: 21/Nov/2018:11:04:00 +0000
URI: /
Request ID: fb867646ed2ea3b7c769826ac70c9fa4
curl https://tea.apps.devops.mycompany.com
Server address: 192.168.175.79:80
Server name: tea-7d57856c44-dshh4
Date: 21/Nov/2018:11:04:06 +0000
URI: /
Request ID: 82caf8dfa72002f8d98e24ac0f96d4b5

OK, now that we understand how the load balancer should be configured, let us do it the proper way (via the CCM). We will deploy an ingress controller that automatically configures a DigitalOcean load balancer to handle HTTPS termination by itself, just like above.

First let us remove the ingress controller we deployed in Part 2 using helm (use a shell configured to reach the cluster):

helm delete --purge my-nginx

This deletes both the ingress controller and the load balancer managed by DigitalOcean.
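You can confirm the load balancer is really gone with doctl:

```shell
# The ingress load balancer should no longer appear in this list
doctl compute load-balancer list --format ID,Name,IP,Status
```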

We are going to recreate the ingress controller with annotations that the CCM recognizes as settings for the cloud provider's external load balancer. You will need the id of the certificate we created earlier; you can always retrieve it with doctl:

doctl compute certificate list [-t "YOUR-DO-TOKEN-HERE"]
ID Name (...)
9e844bb7-..............-8312c3ff4edc apps-devops (...)

The command below installs the same ingress controller, but this time with the proper annotations for the CCM (you must replace the certificate id):

helm install stable/nginx-ingress \
  --name my-nginx \
  --set rbac.create=true \
  --namespace nginx-ingress \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/do-loadbalancer-protocol"="http" \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/do-loadbalancer-algorithm"="round_robin" \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/do-loadbalancer-tls-ports"="443" \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/do-loadbalancer-certificate-id"="9e844bb7-.........-8312c3ff4edc" \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/do-loadbalancer-healthcheck-path"="/healthz" \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/do-loadbalancer-redirect-http-to-https"="true" \
  --set controller.service.targetPorts.https="http"

In short, the annotations set above take care of these load balancer configurations:

  • HTTPS termination (certificate and ports)
  • Healthcheck path
  • HTTP to HTTPS redirection for incoming requests
  • Plain HTTP between load balancer and cluster (i.e. no HTTPS endpoint in the cluster itself)

Wait until the load balancer is created on the DigitalOcean site (it takes a little while).
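You can also watch from the shell configured for the cluster: the CCM fills in the load balancer's external IP on the ingress controller's Service (the exact Service name derives from the helm release and may differ in your cluster):

```shell
# Watch until EXTERNAL-IP changes from <pending> to the new address
kubectl get service --namespace nginx-ingress -w
```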

Very important: if the load balancer IP address changed from its previous value (there is a good chance it did) you must update the "*.apps" DNS entry under your domain to use the new address.

curl https://cafe.apps.devops.mycompany.com
Server address: 192.168.175.96:80
Server name: coffee-7dbb5795f6-62k9d
Date: 22/Nov/2018:15:45:03 +0000
URI: /
Request ID: a0d1e0a56a540630fbc3de08217e1e4b

Much better! You can also test the HTTP to HTTPS redirection:

curl -L http://cafe.apps.devops.mycompany.com
Server address: 192.168.175.95:80
Server name: coffee-7dbb5795f6-v9s7g
Date: 22/Nov/2018:15:48:42 +0000
URI: /
Request ID: 3377f89127b46d44605127471abad958

Warning: the updated DNS entry for the wildcard domain may take a while to propagate to your DNS resolver. In that case both commands above will fail and you will be stuck.

Fortunately curl offers workarounds that do not rely on name resolution. For plain HTTP you can override the Host header; for HTTPS, curl's --resolve option pins the hostname to the load balancer IP so that certificate validation still works:

curl -LH "Host: cafe.apps.devops.mycompany.com" http://<load-balancer-ip>

curl -L --resolve cafe.apps.devops.mycompany.com:443:<load-balancer-ip> \
  https://cafe.apps.devops.mycompany.com

Step 3: Load Balancer Does HTTPS Pass-through

Some might argue that forwarding plain HTTP from the load balancer to the cluster is a bit unsafe. Let us try a different approach: let the cluster itself terminate HTTPS and let the load balancer just forward the packets blindly.

First we have to deploy a Secret containing the certificates into the cluster. You can edit the "cafe/cafe-secret.yaml" file to enter the certificate and its key:

apiVersion: v1
kind: Secret
metadata:
  name: cafe-secret
type: Opaque
data:
  tls.crt: LS0tLS1CRUd...........FLS0tLS0K
  tls.key: LS0tLS1CRUd...........ZLS0tLS0K

The "tls.crt" value is the contents of "fullchain.pem" in Base64 encoding:

base64 -i letsencrypt/live/apps.devops.mycompany.com/fullchain.pem

The "tls.key" value is the contents of "privkey.pem" in Base64 encoding:

base64 -i letsencrypt/live/apps.devops.mycompany.com/privkey.pem

(The encoded value must be a single line; GNU base64 wraps long output, so on Linux add "-w 0" to the commands above.)
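Alternatively (an option not used in the flow above) kubectl can build the Secret and do the Base64 encoding for you. Note it creates a Secret of type kubernetes.io/tls instead of Opaque, but with the same "tls.crt" and "tls.key" keys:

```shell
# Build the TLS Secret straight from the PEM files
kubectl create secret tls cafe-secret \
  --cert=letsencrypt/live/apps.devops.mycompany.com/fullchain.pem \
  --key=letsencrypt/live/apps.devops.mycompany.com/privkey.pem
```

If you go this route you can skip editing and applying "cafe/cafe-secret.yaml".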

Deploy the Secret into the cluster:

kubectl apply -f cafe/cafe-secret.yaml
secret "cafe-secret" created

We shall now remove both the ingress controller and the ingress resource deployed in the previous steps/articles:

helm delete --purge my-nginx
release "my-nginx" deleted
kubectl delete -f cafe/cafe-ingress-http.yaml
ingress.extensions "cafe-ingress" deleted

Now we shall deploy a new ingress resource that defines TLS (HTTPS) endpoints for the web applications already deployed:

kubectl apply -f cafe/cafe-ingress.yaml
ingress.extensions "cafe-ingress" created

Tip: compare the "cafe-ingress-http.yaml" and "cafe-ingress.yaml" files to understand the differences (especially the TLS endpoint definition).

Finally, let us deploy the ingress controller (and therefore a new load balancer) configured for HTTPS pass-through:

helm install stable/nginx-ingress \
  --name my-nginx \
  --set rbac.create=true \
  --namespace nginx-ingress \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/do-loadbalancer-protocol"="http" \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/do-loadbalancer-algorithm"="round_robin" \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/do-loadbalancer-tls-ports"="443" \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/do-loadbalancer-healthcheck-path"="/healthz" \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/do-loadbalancer-redirect-http-to-https"="true" \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/do-loadbalancer-tls-passthrough"="true"

After a little while the DigitalOcean load balancer will be online and properly configured with HTTPS pass-through:

Load Balancer with HTTPS Passthrough

Once again, the load balancer's external IP address may have changed; in that case you'll have to update the "*.apps" DNS entry in your domain.

You can test the web application the same way you did before:

curl https://cafe.apps.devops.mycompany.com
Server address: 192.168.175.95:80
Server name: coffee-7dbb5795f6-v9s7g
Date: 22/Nov/2018:19:55:27 +0000
URI: /
Request ID: 572cb18edc38b0176c9d79f779f8d8a0

If DNS name resolution still points to the old IP address, curl's --resolve option can pin the hostname to the new load balancer IP (which, unlike a plain Host header, keeps HTTPS certificate validation working):

curl -L --resolve cafe.apps.devops.mycompany.com:443:<load-balancer-ip> \
  https://cafe.apps.devops.mycompany.com

Conclusion

Once deployed with the proper settings and annotations, the ingress controller and its resulting load balancer should be long-lived. All the other resources (pods, services, deployments) can come and go.

This article series demonstrates how nicely (and transparently) Docker Enterprise fits into the DigitalOcean cloud. Kubernetes does not have to be a nightmare, nor should you give up Swarm mode just because all the other lemmings are jumping off a cliff.

Have fun with Docker Enterprise on DigitalOcean!


André Fernandes

@vertigobr Founder & CPO, we build cloud native businesses.