How to set up SSL certificates for free on Azure Kubernetes Service with Let’s Encrypt

Geoffrey De Vylder
9 min read · Oct 27, 2019


Most guides (including the one from Microsoft) include instructions on setting up SSL. However, they don’t provide much detail and rely on external tooling like Helm (https://helm.sh/), which was more overhead than we needed when setting this up for the first time.

This guide outlines, in detail, all the steps you need to set up SSL on the incoming traffic of your Kubernetes cluster using only the basic kubectl tool.

Setting up the required tooling

  • You need an existing Kubernetes cluster running on Azure (AKS).
  • You need an existing application running in the cluster, with a “Service” making it available to other pods.
  • Install the command-line tool “kubectl” on your machine to execute commands on a Kubernetes cluster. Configure kubectl to talk to your cluster.
  • I also assume you already have a DNS name, or know how to set up a temporary one provided by Azure (test.westeurope.cloudapp.azure.com for example).
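If kubectl isn’t wired up to your cluster yet, the Azure CLI can do it in one command. This is a sketch; the resource group and cluster names are placeholders for your own:

```shell
# Merge credentials for the AKS cluster into your local kubeconfig.
# "my-resource-group" and "my-aks-cluster" are placeholder names.
az aks get-credentials --resource-group my-resource-group --name my-aks-cluster

# Verify kubectl now points at the right cluster.
kubectl get nodes
```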

What we are trying to achieve

First, a short description of how we are going to enable SSL on incoming traffic. One way would be to terminate SSL on each pod, but that requires you to configure every pod individually.

Kubernetes has a concept called “Ingress”. An Ingress is a set of routing rules, written in a YAML file that lives inside your Kubernetes cluster, that defines how incoming HTTP requests should be routed to internal services.

Adding an Ingress to your cluster gives you a single point for handling incoming traffic (single in the sense that you only define it once; it can still be scaled horizontally).

By having this central point you can also do things here like managing SSL on the level of the Ingress, which is what we are going to be doing in this guide.

An Ingress and some routing rules

Install an “Ingress Controller”

As I said earlier, adding an “Ingress” gives you a more advanced way to configure HTTP request routing to your underlying services.

There’s one catch though, you need to tell Kubernetes what technology it has to use to handle the actual routing. This concept is called an “Ingress Controller”.

We’ll pick a popular Ingress Controller implementation called “NGINX Ingress”, which implements the Ingress rules by routing all traffic through a pod running an NGINX server (https://www.nginx.com/). The good thing is you don’t need to know much detail about the underlying technology: you define the rules, and another service translates them into configuration for the NGINX server.

We will install the NGINX Ingress controller in a namespace “ingress-nginx” so it is separated from your application.

I have copied most of the instructions from the official website, so check that if you need more details.

First we will create all necessary Kubernetes resources that will be used by the Ingress service.

Download the deployment descriptors from https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml to a local file on your system.

Open it in a text editor, search for “replicas” and change the value depending on what you want. One is the default and is OK for testing environments.

For more critical environments, it’s wiser to set this to a higher value so another pod can take over handling of your HTTP traffic when one is hanging or killed for whatever reason. I’ve set mine to 2.
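If you prefer not to edit the file by hand, a one-liner can bump the count (assuming the manifest contains a literal “replicas: 1” line, which it does at the time of writing):

```shell
# Replace the default replica count with 2 in the downloaded manifest.
sed -i 's/replicas: 1/replicas: 2/' ~/mandatory.yaml

# Double-check the change before applying it to the cluster.
grep 'replicas:' ~/mandatory.yaml
```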

Then apply the local file to your cluster:

kubectl apply -f ~/mandatory.yaml

Next we will create the service the Ingress controller uses as an entry point into your cluster by applying another file:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml

The type is set to “LoadBalancer”, which means Kubernetes will ask Azure to create an IP address and assign an Azure load balancer to it.

(Optional) If you already created a public IP in Azure, you can download the file and make a small change to reuse that specific IP.

In that case, add the IP address under the type: LoadBalancer line:

type: LoadBalancer
loadBalancerIP: xx.xx.xx.xx # Your IP here
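If you want a static IP but haven’t created one yet, the Azure CLI can create it. Note that for the default setup it must live in the cluster’s node resource group (the MC_* one); all names below are placeholders:

```shell
# Create a static public IP in the cluster's node resource group and
# print the address. All resource names here are placeholders.
az network public-ip create \
  --resource-group MC_my-resource-group_my-aks-cluster_westeurope \
  --name my-ingress-ip \
  --allocation-method static \
  --query publicIp.ipAddress --output tsv
```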

Your Ingress controller has been created. You can verify if everything looks OK with the command:

kubectl get all --namespace ingress-nginx

For the next part you need some kind of publicly available DNS name that points to your IP address. If you already have a domain name you’re good to go.

If you don’t have your own domain name, you can use a basic DNS service from Azure. You can assign a “DNS label” to a public IP and Azure will create a publicly available URL “your-name.yourzone.cloudapp.azure.com”.

Navigate to your resource group in the Azure portal and search for a “public ip” resource that you either already had, or that was automatically created by Kubernetes when the type of the NGINX service was set to “LoadBalancer”.

Go to “Configuration” and assign a DNS name label to the IP and save the complete name somewhere for reference in the next steps.
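The same DNS label can also be assigned from the command line. This is a sketch; the resource group and IP names are placeholders:

```shell
# Attach a DNS label to the existing public IP.
az network public-ip update \
  --resource-group MC_my-resource-group_my-aks-cluster_westeurope \
  --name my-ingress-ip \
  --dns-name my-unique-label

# Print the resulting FQDN, e.g. my-unique-label.westeurope.cloudapp.azure.com
az network public-ip show \
  --resource-group MC_my-resource-group_my-aks-cluster_westeurope \
  --name my-ingress-ip \
  --query dnsSettings.fqdn --output tsv
```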

Create an Ingress in your application’s namespace

Now we will create the actual routing rules.

To do so we’ll define an Ingress by creating another local yaml file. You can call it something like my-app-ingress.yaml.

Copy the configuration below but be sure to modify it for your application:

  • Be sure to modify the namespace to match the one your application is actually running in.
  • Be sure to change the “host” field to the DNS name your application is available on.
  • Be sure to change the serviceName and servicePort to match the details of your application’s service.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: routing-for-my-app
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: <name>.<yourlocation>.cloudapp.azure.com
    http:
      paths:
      - path: /
        backend:
          serviceName: name-of-your-backend-service
          servicePort: 8080

By annotating the Ingress with kubernetes.io/ingress.class: nginx you indicate that you want to use the NGINX Ingress controller we installed earlier. Apply the file to your cluster:

kubectl apply -f ~/my-app-ingress.yaml

Test the routing

Navigate to the URL you configured and see if you correctly get to your backend.
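A browser works, but curl gives you the raw response. If DNS hasn’t propagated yet, you can also target the load balancer IP directly and set the Host header the Ingress rule matches on:

```shell
# Normal check once DNS resolves.
curl -i http://<name>.<yourlocation>.cloudapp.azure.com/

# Before DNS propagates: hit the load balancer IP and fake the host header.
curl -i -H 'Host: <name>.<yourlocation>.cloudapp.azure.com' http://<load-balancer-ip>/
```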

Install cert-manager to be able to fetch SSL certificates

Now that we have configured our central point for accessing our services, we’ll set up automatic generation of valid SSL certificates for it.

We will use the service Let’s Encrypt ( https://letsencrypt.org/) to generate trusted secure certificates.

Again, you don’t need to know too many details, but it’s important to know that Let’s Encrypt creates certificates that are only valid for 90 days. You will need some kind of automated process that requests new certificates when the old ones expire.

Just like with Ingress / NGINX, we don’t want to learn all the details, so we will use a tool built to simplify certificate management and hide some of those details for us: “cert-manager”.

The architecture of cert-manager, diagram downloaded from https://github.com/jetstack/cert-manager

Create a dedicated namespace to deploy the cert-manager components in.

kubectl create namespace cert-manager

Kubernetes has a bunch of predefined object types you know, like Pods, Services and Deployments. Cert-manager builds on that concept and adds custom types like “Certificate”, “Issuer” and “CertificateRequest”.

Install cert-manager by applying this file to your cluster:

kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v0.11.0/cert-manager.yaml --validate=false

You can then check all the api resources created by cert-manager:

kubectl api-resources | grep cert-manager

Verify the installation and wait until everything is running:

kubectl get pods --namespace cert-manager
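Instead of polling by hand, kubectl can block until the pods are ready:

```shell
# Wait up to two minutes for every cert-manager pod to become Ready.
kubectl wait --namespace cert-manager \
  --for=condition=Ready pods --all --timeout=120s
```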

Install Let’s Encrypt as a Cluster Issuer

Cert-manager is a generic interface for getting certificates; you also need to provide a specific implementation. We’ll get our SSL certificates through Let’s Encrypt. In “cert-manager language”, Let’s Encrypt is a “Cluster Issuer”, as it issues certificates for our cluster. Until everything works we will use the “staging” environment of Let’s Encrypt, as the production one has rate limits that could block us after a couple of incorrect configurations.

Create this local file. Be sure to set up the e-mail address correctly as it will be used by Let’s Encrypt to send you an e-mail when your certificates are about to expire:

apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: your@email.com
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
    - http01:
        ingress:
          class: nginx

Create another local yaml file that contains a “Certificate” object, indicating that you would like to request a certificate for your domain from a certain issuer (letsencrypt-staging in our case, a reference to the cluster issuer).

apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: tls-secret-stg
  namespace: default
spec:
  secretName: tls-secret-stg
  dnsNames:
  - <name>.<yourlocation>.cloudapp.azure.com
  acme:
    config:
    - http01:
        ingressClass: nginx
      domains:
      - <name>.<yourlocation>.cloudapp.azure.com
  issuerRef:
    name: letsencrypt-staging
    kind: ClusterIssuer

What you are basically asking cert-manager to do is request a certificate from your issuer and store it in a Kubernetes secret, accessible to pods in that namespace.
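After applying the Certificate you can follow the issuing process; the names below match the staging configuration above:

```shell
# The READY column should flip to True once the certificate is issued.
kubectl get certificate tls-secret-stg --namespace default

# The issued certificate and key end up as tls.crt / tls.key in this secret.
kubectl describe secret tls-secret-stg --namespace default
```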

Next we will update our ingress to tell it to look for the certificate we just asked cert-manager to store.

Open your previously configured Ingress and add an annotation indicating the cluster issuer you just created, plus a tls section telling it which hosts to serve over TLS and which secret holds the certificate:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: routing-for-my-app
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-staging
spec:
  tls:
  - hosts:
    - <name>.<yourlocation>.cloudapp.azure.com
    secretName: tls-secret-stg
  rules:
  - host: <name>.<yourlocation>.cloudapp.azure.com
    http:
      paths:
      - path: /
        backend:
          serviceName: name-of-your-backend-service
          servicePort: 8080

Testing the SSL certificate provisioning

Those were a lot of steps, but if everything went well you are finished now!

Try surfing to https://yourdomainname.com

You should get a certificate warning. Don’t worry, this is normal because we are still using the staging environment of Let’s Encrypt.

Check the details of the SSL certificate using your browser; it should show a certificate issued by the Let’s Encrypt staging CA.
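You can also inspect the served certificate from the command line; against the staging environment the issuer should be Let’s Encrypt’s fake staging CA:

```shell
# Print the issuer and validity window of the certificate the server presents.
echo | openssl s_client \
  -connect <name>.<yourlocation>.cloudapp.azure.com:443 \
  -servername <name>.<yourlocation>.cloudapp.azure.com 2>/dev/null \
  | openssl x509 -noout -issuer -dates
```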

If it works, that’s great; you can skip the remaining steps in this section!

If this is not working however, you can use these commands to verify a couple of things.

kubectl api-resources | grep cert-manager.io
kubectl get certs
kubectl get cr
kubectl describe cert <name-of-your-certificate>

Some issues I had:

  • An earlier version of cert-manager was already installed. After reinstalling it, both versions existed next to each other, which gave weird results.
  • I made some typos in the name of the certificate / cluster issuer so the objects that should be working together could not find each other.

The event log in the describe commands will give you an idea about some issues you might be having.

Switching over to the production Let’s Encrypt issuer

Now that everything is configured, we’ll set up a second Cluster Issuer next to the other one. We’ll configure this one for Let’s Encrypt as well but will use their production environment.

Basically you copy all the previous config and change some names and the Let’s Encrypt URL.

It’s time to create some more local yaml files for the final step.

Another Cluster Issuer for production:

apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-production
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: your@email.com
    privateKeySecretRef:
      name: letsencrypt-production
    solvers:
    - http01:
        ingress:
          class: nginx

Another certificate configuration for production:

apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: tls-secret-prd
  namespace: default
spec:
  secretName: tls-secret-prd
  dnsNames:
  - <name>.<yourlocation>.cloudapp.azure.com
  acme:
    config:
    - http01:
        ingressClass: nginx
      domains:
      - <name>.<yourlocation>.cloudapp.azure.com
  issuerRef:
    name: letsencrypt-production
    kind: ClusterIssuer

And finally, point your ingress route to the production details:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: routing-for-my-app
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-production
spec:
  tls:
  - hosts:
    - <name>.<yourlocation>.cloudapp.azure.com
    secretName: tls-secret-prd
  rules:
  - host: <name>.<yourlocation>.cloudapp.azure.com
    http:
      paths:
      - path: /
        backend:
          serviceName: name-of-your-backend-service
          servicePort: 8080

And apply the files.

Navigate to https://yourdomainname.com again.
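Since the certificate chain should now be trusted, curl should succeed without any insecure flags:

```shell
# A clean exit (no certificate error) confirms the production certificate
# is trusted by the system's CA store.
curl -sSfI https://<name>.<yourlocation>.cloudapp.azure.com/
```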

You should see that the certificate is valid this time, marking the end of this guide!
