Deploying multiple Traefik Ingresses with LetsEncrypt HTTPS certificates on Kubernetes

Carlos Eduardo
May 2, 2018 · 7 min read

As detailed in my first article, I’ve set up an architecture for Kubernetes that is as close to “production” as possible, even though it runs on small ARM boards.

Here I will detail the network, where I use Weave Net as the overlay, and focus on the LoadBalancer and Ingress controllers.

Network Topology


IP Plan


As shown in the architecture above, I’ve deployed two Traefik instances to the cluster: one to serve local requests on the internal wildcard domain managed by my router, and another to serve external requests coming from the internet through a wildcard domain configured in my external DNS.

These instances have separate service IP addresses, and each instance has its own ingress rules.

To allow external access, I’ve configured my external DNS, managed by my domain registrar, to resolve all calls to the external domain * using a wildcard entry. The “A” record is dynamically updated by the DynDNS configuration in the router, and a CNAME points the wildcard to that A record.

Another option, in case you use GoDaddy as registrar/DNS, is to generate their API key and use this project to dynamically update the subdomain used here. I’ve created a Deployment and ConfigMap to run an updater pod.

Configuring Dynamic DNS in the router

I’ve also set up port forwarding on my router, sending HTTP and HTTPS traffic to the IP address I requested in the Service manifest for the external Traefik instance. The IP pool is managed by MetalLB.



MetalLB is a load-balancer implementation for bare-metal Kubernetes clusters using standard routing protocols. It’s very simple to deploy and can be configured in ARP mode or BGP mode (the latter requires a BGP-capable router). I deployed it in ARP mode.

What is great about MetalLB is that it provides load-balancer functionality in exactly the same way a cloud LoadBalancer or a big LB appliance would, so you benefit from having the same configuration in the lab as in a production or cloud environment.
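As a sketch of how the ARP mode mentioned above is configured (the address range here is hypothetical, a slice of the home LAN), MetalLB takes a ConfigMap along these lines:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config
  namespace: metallb-system
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2                  # ARP mode
      addresses:
      - 192.168.1.240-192.168.1.250     # hypothetical pool on the home LAN
```

Services of type LoadBalancer then get their external IPs from this pool.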

Internal Traefik ingress controller

The internal Traefik controller was deployed to the cluster with type: LoadBalancer in the Service manifest. The IP was requested manually from the pool so it could be configured in the DNS.
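A minimal sketch of such a Service, assuming hypothetical names and a fixed IP taken from the MetalLB pool:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: traefik-internal          # hypothetical name
  namespace: kube-system
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.1.40    # hypothetical: a fixed IP requested from the pool
  selector:
    k8s-app: traefik-internal     # must match the Traefik Deployment's pod labels
  ports:
  - name: http
    port: 80
    targetPort: 80
```

Requesting a specific loadBalancerIP is what makes the DNS wildcard entry stable.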

All applications are made accessible internally through the internal Traefik controller. To allow this, my internal router has a wildcard DNS entry resolving all * names to the IP allocated via the Service manifest.

The configuration added to the DNSMasq options on the router: address=/
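For illustration only (the domain and IP below are hypothetical, not the ones from the article), a complete dnsmasq wildcard entry has this form:

```
address=/.home.example.com/192.168.1.40
```

Any name under the domain then resolves to the internal Traefik IP.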

This is the Deployment and ConfigMap used for the internal Traefik ingress controller. It requires neither redirection to HTTPS (I’m using HTTP internally) nor certificate generation.

Traefik Deployment
Traefik ConfigMap

I also expose Prometheus statistics to be collected by my monitoring stack, as detailed in another post.

To deploy the application ingresses, it’s just a matter of using a plain manifest with a PathPrefix rule pointing to the application’s Service (the Kubernetes dashboard in this case):
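A hedged sketch of such a manifest for a Traefik 1.x setup, with hypothetical hostname, namespace, and service name:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  annotations:
    traefik.frontend.rule.type: PathPrefix   # match on path prefix
spec:
  rules:
  - host: dashboard.home.example.com         # hypothetical internal wildcard name
    http:
      paths:
      - path: /
        backend:
          serviceName: kubernetes-dashboard
          servicePort: 80
```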

External Traefik ingress controller

The external Traefik controller manages the access to applications from the internet.

This is a more delicate subject, because I didn’t want all my applications available externally, and I also required some level of protection for them, such as HTTPS and authentication for the ones that didn’t provide it.

Since this controller generates certificates dynamically, the replicas need to run in cluster (HA) mode and share their state and certificates through a Consul or etcd KV store.

I deployed a three-node Consul cluster using Helm. The chart doesn’t provide ARM images, but I’ve built them and keep the deployment script/manifest in my repository. After deploying Helm and installing the client, it’s just a matter of:

# Install Helm and execute the scripts (only needed the first time, if you don't already have it deployed)

Another option is an etcd cluster managed by the CoreOS etcd-operator. The manifests are in the etcd directory, and there is a batch file to deploy it.
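Once the operator is running, requesting a cluster is a short custom resource. A minimal sketch (the cluster name and version here are assumptions, not taken from the article):

```yaml
apiVersion: "etcd.database.coreos.com/v1beta2"
kind: EtcdCluster
metadata:
  name: traefik-etcd   # hypothetical name
spec:
  size: 3              # three members, matching the HA setup described above
  version: "3.2.13"
```

The operator watches for this resource and creates/heals the member pods itself.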

The deployment is straightforward using the provided manifests from here:

# First deploy etcd operator or Consul according to the instructions above.

# Deploy external Traefik config
kubectl apply -f external-traefik-configmap.yaml

After deployment, the KV store will hold all the keys for the Traefik configuration, and the replicas will work as a cluster where only one replica fetches the certificates and stores them centrally. The values are loaded into the KV store by an initContainer in the StatefulSet.


This is the configuration used for the external controller:

The important parts of the configuration are the HTTP-to-HTTPS redirection in the [entryPoints] section and the label selector in [kubernetes], specifying that only Ingresses with the label traffic-type=external are picked up by the external controller.

Also, to provide HTTPS certificates, the [acme] section takes care of the requests to Let’s Encrypt to dynamically generate and renew valid certificates. The challenge method used is HTTP-01, where the Let’s Encrypt servers make a call to my local HTTP port (that’s why port 80 is open and forwarded on my router) to validate that I own the domain.
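The sections described above can be sketched in Traefik 1.x TOML roughly as follows (email, domain, and KV key are hypothetical placeholders, not the article's values):

```toml
defaultEntryPoints = ["http", "https"]

[entryPoints]
  [entryPoints.http]
  address = ":80"
    [entryPoints.http.redirect]
    entryPoint = "https"          # force HTTP -> HTTPS
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]

[kubernetes]
labelselector = "traffic-type=external"   # only pick up external ingresses

[acme]
email = "user@example.com"        # hypothetical
storage = "traefik/acme/account"  # key in the shared KV store
entryPoint = "https"
  [acme.httpChallenge]
  entryPoint = "http"             # HTTP-01 validation over port 80
```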

If a configuration change is needed, update the ConfigMap, apply it to the cluster, and delete the Traefik pods so the StatefulSet recreates them. The KV store will be updated by an initContainer in the pod itself.

Kubernetes Dashboard with valid certificate

To create the external ingress, the format is pretty similar, just adding the label traffic-type=external and the annotation that forces HTTP-to-HTTPS redirection, as in the example below:
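A hedged sketch of such an external Ingress, again with hypothetical hostname and service name:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dashboard-external
  namespace: kube-system
  labels:
    traffic-type: external                       # picked up only by the external controller
  annotations:
    ingress.kubernetes.io/ssl-redirect: "true"   # force HTTP -> HTTPS
spec:
  rules:
  - host: dashboard.example.com                  # hypothetical external wildcard name
    http:
      paths:
      - path: /
        backend:
          serviceName: kubernetes-dashboard
          servicePort: 80
```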

Since the K8s dashboard doesn’t provide authentication, I’ve also set up basic auth in Traefik to protect it with a user/password. The credentials are stored in a Kubernetes Secret, generated from a file created by a simple script that uses the OpenSSL utility to build an HTTP basic-auth file. The file can be edited to contain multiple users, one per line.

#!/bin/bash
# Create/refresh the basic-auth Secret used by the ingress.
if [[ $# -eq 0 ]]; then
  echo "Run the script with the required auth user and namespace for the secret: ${0} [user] [namespace]"
  exit 1
fi
# Append the user:hash pair (openssl prompts for the password interactively)
printf "%s:%s\n" "${1}" "$(openssl passwd -apr1)" >> ingress_auth.tmp
kubectl delete secret -n "${2}" ingress-auth
kubectl create secret generic ingress-auth --from-file=ingress_auth.tmp -n "${2}"
rm ingress_auth.tmp

The secret name and namespace must match the ones defined on the ingress manifest.
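Wiring the Secret into the ingress is done with Traefik's basic-auth annotations; a sketch of the relevant metadata fragment:

```yaml
metadata:
  annotations:
    ingress.kubernetes.io/auth-type: basic
    ingress.kubernetes.io/auth-secret: ingress-auth   # Secret created by the script above
```

The Secret must live in the same namespace as the Ingress for Traefik to find it.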


Considerations and quirks

Lack of client IP on internal applications

Due to a limitation of MetalLB’s ARP mode, it’s currently not possible to see the external client IP in the internal application logs. This limitation will be removed in the future and can be tracked in this issue. The logs will instead show the internal kube-proxy IP, usually something in the cluster-internal range or similar.

After this gets solved, it will be just a matter of adding externalTrafficPolicy: Local to the spec section of the Traefik Services.
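That future change is a one-line addition to the Service spec shown earlier; a sketch:

```yaml
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # preserve the client source IP; routes only to local endpoints
```

The trade-off is that traffic is no longer balanced across nodes without a local endpoint.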


As you can see, deploying a full stack of network elements to support your applications is not a complex task, and it can even provide HTTPS and load balancing across all nodes.

As usual, the files are available in my repository; please send me feedback on Twitter or in the comments.

