Running Docker Enterprise 2.1 on Digital Ocean — Part 2

André Fernandes
Nov 20, 2018

In the first part of this series we learned how to set up a new Docker Enterprise 2.1 cluster on DigitalOcean with Terraform and Ansible.

In this article we will discuss how to configure Docker Enterprise so that:

  • Persistent volume claims provision block storage volumes on DigitalOcean
  • Ingress controllers create load balancers on DigitalOcean

This way the Docker Enterprise cluster will behave much like a DigitalOcean "native" Kubernetes cluster.

What you need

Make sure you have completed the first part of this series and check that your current shell is configured to connect to the remote cluster (by running the client bundle's "env.sh" script).

Make sure you have kubectl and helm installed. The "kubectl" tool (a command-line client for Kubernetes) is bundled with Docker Desktop, but you can always install it manually.

On macOS these tools can be installed with Homebrew:

brew install kubernetes-cli
brew install kubernetes-helm

Notice that Docker Enterprise runs Kubernetes 1.11, so it is advisable to use a kubectl version as close to 1.11 as possible (1.12 is fine).
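
You can do a quick sanity check on both versions from your shell (the "--short" flag just trims the output to the bare version numbers; the server version only shows up once the client bundle is loaded):

kubectl version --short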

It is assumed that you went through the first article and that your shell is inside the already cloned project repository (https://github.com/vertigobr/dockeree-digitalocean). You can verify connectivity by running the command below:

kubectl get nodes
NAME         STATUS    ROLES     AGE       VERSION
do-manager   Ready     master    43m       v1.11.2-docker-2
do-worker1   Ready     <none>    30m       v1.11.2-docker-2
do-worker2   Ready     <none>    30m       v1.11.2-docker-2

Step 1: Configure Secret

A valid API token must be used internally to create resources in your DigitalOcean account, so you will have to store it as a Secret in the cluster.

Just copy the "digitalocean-secret.yml.template" provided in the project into a new "digitalocean-secret.yml" file (please *do not* rename the original file):

cp digitalocean-secret.yml.template digitalocean-secret.yml

Now edit "digitalocean-secret.yml" and replace the value of the "access-token" entry with your own DigitalOcean API token.
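
For reference, the resulting file should look roughly like the sketch below (based on the Secret name shown further down and the key the DigitalOcean components expect; the template shipped in the repository is the authoritative version):

apiVersion: v1
kind: Secret
metadata:
  name: digitalocean
  namespace: kube-system
stringData:
  access-token: "<your DigitalOcean API token>"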

Create the Secret resource by running the command below:

kubectl apply -f digitalocean-secret.yml
secret "digitalocean" created

Step 2: Install Storage CSI

We will install DigitalOcean's CSI (Container Storage Interface) driver from https://github.com/digitalocean/csi-digitalocean. The CSI plugin lets you use DigitalOcean Block Storage with the Docker Enterprise Kubernetes orchestrator transparently, turning it into the default storage for persistent volumes.

A single command does the trick:

kubectl apply -f https://raw.githubusercontent.com/digitalocean/csi-digitalocean/master/deploy/kubernetes/releases/csi-digitalocean-v0.2.0.yaml

Several resources will be created (you can find them under the "kube-system" namespace). You can check that the new storage class is the cluster's default one:

kubectl get sc
NAME                         PROVISIONER                 AGE
do-block-storage (default)   com.digitalocean.csi.dobs   27m

Important: we must stay on CSI version 0.2.0 as long as Docker Enterprise Kubernetes remains on version 1.11.

To test the CSI driver you just have to create a persistent volume claim. Luckily there is a ready-made YAML for that:

kubectl apply -f https://raw.githubusercontent.com/digitalocean/csi-digitalocean/master/examples/kubernetes/deployment-single-volume/pvc.yaml
persistentvolumeclaim "csi-deployment-pvc" created
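
For reference, that example boils down to a plain claim against the default storage class, along these lines (sizes and field values are illustrative; the upstream file is authoritative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-deployment-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: do-block-storage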

You can check the DigitalOcean console to see the new block storage volume that was created:

Block Storage created by CSI plugin
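
You can also confirm from the cluster side that the claim was bound to a volume (capacity and volume name will vary):

kubectl get pvc csi-deployment-pvc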

Step 3: Install CCM

The DigitalOcean CCM (Cloud Controller Manager) contains an important service controller component, responsible for creating load balancers whenever a Service of type LoadBalancer is created in Kubernetes.

The CCM can be installed with the command below:

kubectl apply -f https://raw.githubusercontent.com/digitalocean/digitalocean-cloud-controller-manager/master/releases/v0.1.8.yml
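
To picture what the CCM reacts to, this is the kind of Service that triggers a DigitalOcean load balancer. It is a hypothetical example, not a file from the project; the nginx-ingress chart we install in Step 5 creates an equivalent Service for us:

apiVersion: v1
kind: Service
metadata:
  name: example-lb
spec:
  type: LoadBalancer
  selector:
    app: example
  ports:
    - port: 80
      targetPort: 8080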

Step 4: Install Helm (Tiller)

Helm (a Kubernetes package manager) operates with a server-side component (Tiller). You must install it and grant the proper permissions:

helm init
kubectl create rolebinding default-view \
--clusterrole=view \
--serviceaccount=kube-system:default \
--namespace=kube-system
kubectl create clusterrolebinding add-on-cluster-admin \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:default

Output will be like this:

...
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
...
Happy Helming!
rolebinding.rbac.authorization.k8s.io "default-view" created
clusterrolebinding.rbac.authorization.k8s.io "add-on-cluster-admin" created
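
Before moving on it is worth checking that Tiller is actually up and that helm can reach it (the "name=tiller" label is the one the default "helm init" deployment applies):

kubectl get pods -n kube-system -l name=tiller
helm version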

Step 5: Setup Ingress Controller

The CCM GitHub repository has several load balancer and ingress examples for different situations. We will come back to those later. For now we will install a general-purpose NGINX ingress controller with Helm:

helm install stable/nginx-ingress --name my-nginx --set rbac.create=true --namespace nginx-ingress

This ingress controller should have triggered a new load balancer on DigitalOcean itself. Check back on DigitalOcean's "Networking/Load Balancers" page to see something like this:

Automatic Load Balancer
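
You can also grab the load balancer's public IP without leaving the terminal: the chart exposes the controller through a Service of type LoadBalancer, and its EXTERNAL-IP column gets filled in once DigitalOcean finishes provisioning (the service name depends on the release name, here something like "my-nginx-nginx-ingress-controller"):

kubectl get svc -n nginx-ingress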

Take note of its IP address: you can test the load balancer right now by curling its default backend and its "healthz" endpoint:

curl <ip_address>
default backend - 404
curl -v <ip_address>/healthz
...
< HTTP/1.1 200 OK
...

Awesome! Let's move on.

Step 6: Some Domain Magic

This would be an excellent moment to create a wildcard subdomain under the domain you manage on DigitalOcean.

Now create a new "A" record under your domain in the DigitalOcean console with a wildcard hostname ("*.apps") and point it to the load balancer's IP address:

Wildcard subdomain (under your own domain)

With this easy trick any ingress resource that declares a hostname matching "*.apps.devops.mycompany.com" will be served properly through this load balancer.

On the other hand, any URL under this domain that isn't mapped to an ingress resource will be automatically served by the default backend:

curl dummy.apps.devops.mycompany.com
default backend - 404

If the new subdomain takes too long to propagate to your DNS resolver you can always curl the load balancer's IP directly and fake the hostname:

curl -H "Host: dummy.apps.devops.mycompany.com" <ip_address>
default backend - 404
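
A quick way to tell whether the wildcard record has propagated is to resolve any hostname under it (assuming dig is installed; once propagation completes it should print the load balancer's IP):

dig +short dummy.apps.devops.mycompany.com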

Step 7: A Complete Example (HTTP)

Based on the original examples from the "nginx-ingress" controller on GitHub, a few files reside in the project's "cafe" folder so we can test the environment with a couple of real applications on the backend.

  • cafe.yaml: two small sample web applications
  • cafe-ingress-http.yaml: an HTTP ingress configuration for both applications

The applications can be deployed with the command below:

kubectl apply -f cafe/cafe.yaml
deployment.extensions "coffee" created
service "coffee-svc" created
deployment.extensions "tea" created
service "tea-svc" created

These services are now running in the cluster, scaled to multiple pods as specified in the deployment file.
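
You can confirm everything is up with a quick listing (this simply lists the resources created above, with no assumptions about labels):

kubectl get deployments,services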

An ingress resource is what we need now to have them served at proper URLs:

kubectl apply -f cafe/cafe-ingress-http.yaml
ingress.extensions "cafe-ingress" created
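
For reference, the ingress in "cafe-ingress-http.yaml" boils down to host-based routing along these lines (hostnames obviously depend on your own domain; this is a sketch, not the exact file contents):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
spec:
  rules:
    - host: cafe.apps.devops.mycompany.com
      http:
        paths:
          - backend:
              serviceName: coffee-svc
              servicePort: 80
    - host: tea.apps.devops.mycompany.com
      http:
        paths:
          - backend:
              serviceName: tea-svc
              servicePort: 80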

Both applications can now be reached, each one at its own URL:

curl cafe.apps.devops.mycompany.com
Server address: 192.168.175.76:80
Server name: coffee-7dbb5795f6-k7ffn
Date: 20/Nov/2018:19:41:50 +0000
URI: /
Request ID: 337252aea94dae202c9da959ecdc333e
curl tea.apps.devops.mycompany.com
Server address: 192.168.175.79:80
Server name: tea-7d57856c44-dshh4
Date: 20/Nov/2018:19:41:57 +0000
URI: /
Request ID: 4f10a61e67e3e8f5a538101ae999f319

Notice that the wildcard DNS record points to the DigitalOcean load balancer, which the CCM auto-configured to forward requests to the ingress controller on the worker nodes, which in turn knows where the services live.

In the next article in this series we will learn how to use HTTPS certificates both on the load balancer (i.e. outside the cluster) and on the ingress resources themselves (inside the cluster). We will also learn another way to obtain HTTPS certificates.

See ya!

