Use Let’s Encrypt, Cert-Manager and External-DNS to publish your Kubernetes apps to your website

Lexa · Published in ASL19 Developers · 11 min read · May 16, 2019

In this tutorial, we aim to bring a few tools together to automate the process of publishing your Kubernetes applications to your website. This includes creating records on your DNS provider’s end and creating TLS certificates to secure your website.

The tools we will be using in this tutorial are:

  • Helm (and Tiller)
  • NGINX Ingress Controller
  • cert-manager
  • external-dns
  • Let’s Encrypt (as the certificate authority)

In this example, we will be using Route53 as our DNS provider. This tutorial could apply to most Kubernetes engines.

In summary, we will be executing the following steps:

  1. Install Helm and the add-on applications through Helm
  2. Expose application with a ClusterIP service
  3. Create letsencrypt staging-level cluster issuer
  4. Create ingress with tls-acme annotation and tls spec
  5. Create a certificate template
  6. Move onto prod, and create real TLS certificates

1. Install Helm and the add-on applications through Helm

Before we install our add-on tools, we need to take care of Helm and Tiller. Let’s look at how the Helm docs describe them:

Helm is a tool that streamlines installing and managing Kubernetes applications. Think of it like apt/yum/homebrew for Kubernetes.

Helm has two parts: a client (helm) and a server (tiller)

Tiller runs inside of your Kubernetes cluster, and manages releases (installations) of your charts. (source)

Now you will need to make sure that you are on the correct Kubernetes context. To check which context you are on, run:

$ kubectl config current-context
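
If you are on the wrong context, you can list the available contexts and switch to the correct one; for example (the context name is a placeholder):

$ kubectl config get-contexts
$ kubectl config use-context [YOUR_CONTEXT_NAME]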

Now you can install Helm on your computer by running the following:

$ brew install kubernetes-helm

If you are not using MacOS, you can find other ways of installing Helm here.

Now that you have installed Helm (client) you will need to install Tiller (server) on your Kubernetes cluster.

In rbac-config.yaml, insert the following:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

Now run the following to install Tiller on your cluster along with its role:

$ kubectl apply -f rbac-config.yaml
$ helm init --service-account tiller
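
As a quick sanity check, you can verify that Tiller came up and that the client and server versions match (the name=tiller label below is the one helm init applies by default):

$ kubectl get pods --namespace kube-system -l name=tiller
$ helm version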

You are now ready to install the add-ons (nginx-ingress, cert-manager and external-dns) using Helm. Note that each of these installations needs a few variables to be set.

nginx-ingress

With the NGINX Ingress Controller for Kubernetes, you get basic load balancing, SSL/TLS termination, support for URI rewrites, and upstream SSL/TLS encryption (source). It enables enterprise level application delivery on K8S. Using an ingress controller and ingress rules, a single IP address can be used to route traffic to multiple services in a Kubernetes cluster (source).

This also makes it clear as to why we do not expose our app as a LoadBalancer service. Later we will see how to use a static IP for our ingress controller.


You can install the Nginx Ingress Controller through the following command. I have chosen the default namespace for this run; however, you can install it in the kube-system namespace, or any other.

$ helm install \
  --name nginx-ingress \
  --namespace default \
  --set controller.service.loadBalancerIP=[YOUR_STATIC_IP] \
  --set controller.publishService.enabled=true \
  stable/nginx-ingress

In case you do not already own a static IP, you can remove the variable from this command, and you will automatically receive a public IP. This IP will be the external IP address of your ingress controller, which essentially is the IP address to which the traffic will be directed before it hits your services.
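
Once the load balancer is provisioned, you can watch for the external IP to appear on the controller’s service (with the release name nginx-ingress used above, the stable chart names this service nginx-ingress-controller):

$ kubectl get svc nginx-ingress-controller --watch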

cert-manager

Installing cert-manager is, in my experience, a bit trickier than the rest of the add-ons because this tool gets updated pretty frequently, but you can always be sure that you are installing the latest version by following this link. For the sake of this tutorial, here is the installation method that currently works.

$ kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.7/deploy/manifests/00-crds.yaml
$ kubectl label namespace default certmanager.k8s.io/disable-validation=true
$ helm repo add jetstack https://charts.jetstack.io
$ helm repo update
$ helm install \
  --name cert-manager \
  --namespace default \
  --version v0.7.2 \
  --set ingressShim.defaultACMEChallengeType=dns01 \
  --set ingressShim.defaultACMEDNS01ChallengeProvider=route53 \
  --set ingressShim.defaultIssuerName=letsencrypt-prod \
  --set ingressShim.defaultIssuerKind=ClusterIssuer \
  jetstack/cert-manager

Note: the variables addressed with the --set flag pertain to the fact that we are using Route53 and the DNS01 challenge in this example. You can remove these variables in case you do not need them.

external-dns

The following command installs external-dns and authorizes the add-on to make changes on my DNS provider’s end. We will be using this tool to automatically generate sub-domain records on Route53.

You can set your own variables in case you are not using Route53. Check this link to find suitable auth variables.

Note that we are setting the policy to upsert-only, which allows external-dns to create and update records but never delete them. Also note that I am setting the domainFilters variable; this allows me to limit my AWS role permissions to only include my given domain zone and not my other domains.

$ helm install \
  --name external-dns \
  --set aws.accessKey=XX \
  --set aws.secretKey=XX \
  --set aws.region=us-east-1 \
  --set policy=upsert-only \
  --set domainFilters={example.com} \
  stable/external-dns

The following shows the limited permissions needed for the installed tool to function properly:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "route53:GetChange",
      "Resource": "arn:aws:route53:::change/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "route53:ChangeResourceRecordSets",
        "route53:GetHostedZone",
        "route53:ListResourceRecordSets"
      ],
      "Resource": [
        "arn:aws:route53:::hostedzone/[YOUR_HOSTED_ZONE]"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "route53:ListHostedZones"
      ],
      "Resource": "*"
    }
  ]
}
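
If you manage IAM through the AWS CLI, attaching this policy to a dedicated user could look like the following sketch (the policy and user names here are hypothetical, and the JSON above is assumed to be saved as policy.json):

$ aws iam create-policy --policy-name external-dns-route53 --policy-document file://policy.json
$ aws iam attach-user-policy --user-name external-dns --policy-arn arn:aws:iam::[YOUR_ACCOUNT_ID]:policy/external-dns-route53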

You should now be able to see the result of the above installations in your deployments:

$ kubectl get deploy
NAME                 DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
cert-manager         1         1         1            1           3m
external-dns         1         1         1            1           2m
nginx-ingress-c...   1         1         1            1           4m
nginx-ingress-d...   1         1         1            1           4m

Keep in mind that for our future debugging steps, it is of great help to check the logs for the cert-manager and Nginx Ingress Controller pods. To find the name of the pods created, run kubectl get pods, and get the logs for each pod through kubectl logs [POD_NAME].
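
For example, assuming the default labels these charts apply to their pods, you can tail the relevant logs like this:

$ kubectl get pods
$ kubectl logs -l app=cert-manager --tail=20
$ kubectl logs -l app=nginx-ingress --tail=20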

2. Expose application with a ClusterIP service

There are 3 types of services available in Kubernetes. The following information has been quoted from this source to give you a better idea about each service type.

ClusterIP Service (source)

A ClusterIP service is the default Kubernetes service. It gives you a service inside your cluster that other apps inside your cluster can access. There is no external access. To access this service, you will need to use a “proxy”.

NodePort Service (source)

A NodePort service is the most primitive way to get external traffic directly to your service. NodePort, as the name implies, opens a specific port on all the Nodes (the VMs), and any traffic that is sent to this port is forwarded to the service. Compared to ClusterIP, there is an additional port called the nodePort that specifies which port to open on the nodes. If you don’t specify this port, it will pick a random port.

LoadBalancer Service (source)

A LoadBalancer service is the standard way to expose a service to the internet. On GKE, this will spin up a Network Load Balancer that will give you a single IP address that will forward all traffic to your service.

The above information and diagrams have been taken from this source; feel free to click on the link to read more about each of these services.

We will be using a service of type ClusterIP in order to route the traffic through our ingress controller and ensure security with TLS certificates. The service will then be accessed through an ingress that we will define later. Use the following service.yaml file to expose your app.

apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-app
  name: my-app-service
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 80
      targetPort: 80
    # ... <your service/pod ports>
  selector:
    app: my-app

Add your own ports to the yaml file. Keep in mind that the targetPort refers to ports you have defined in your pod as containerPort, and the port value is the service port that points to your containerPort. Create your ClusterIP service by running:

$ kubectl create -f service.yaml
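
For reference, here is a minimal sketch of the pod side of that mapping; the containerPort below is what the service’s targetPort points to (the container name and image are placeholders):

# Excerpt from a Deployment's pod template (illustrative):
spec:
  containers:
    - name: my-app
      image: [YOUR_IMAGE]
      ports:
        - containerPort: 80 # matched by the service's targetPort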

3. Create letsencrypt staging-level cluster issuer

In order to automate the creation of your TLS certificates, we need to define a ClusterIssuer and a Certificate. The two together define what kind of certificate we will be issuing and which domain and DNS provider we will be using.

Creating real certificates is a costly process for the provider (Let’s Encrypt), as such they have introduced a quota of 50 certificates issued per week, per domain. In order to prevent overshooting this limit, they have provided a way to create staging level (fake) certificates for testing and debugging purposes. Here we are using the staging level certificates; we will later see how to move onto production certificates (real certificates).

Depending on your DNS provider, your cluster issuer’s yaml file may vary. Here, you can find some guidelines on how to edit your yaml files in case you use a different provider than what we have here (Route53). Use the following issuer.yaml file to create your ClusterIssuer (staging certificates).

apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
  namespace: default
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: [YOUR_EMAIL]
    privateKeySecretRef:
      name: letsencrypt-staging
    dns01:
      providers:
        - name: route53
          route53:
            region: us-east-1
            accessKeyID: [YOUR_ACCESS_KEY_ID]
            secretAccessKeySecretRef:
              name: acme-route53
              key: secret-access-key

Notice that the server URL used here is the staging URL (https://acme-staging-v02.api.letsencrypt.org/directory). Later we will use https://acme-v02.api.letsencrypt.org/directory to generate real certificates and move to prod.

You will notice here that in order to gain access to Route53, you have provided secret-access-key under the secretAccessKeySecretRef field instead of the actual secret’s value. Do not enter your secret access key directly here, for security purposes (unless you want to wake up to a 20k AWS bill tomorrow morning!). Instead, you will need to create a secret resource in the same namespace as your ClusterIssuer, named acme-route53, with secret-access-key as the key and your AWS secret access key as the value. If you have your secret access key stored locally, you can create this secret like this:

$ kubectl create secret generic acme-route53 --from-file=secret-access-key=[YOUR_SECRET_ACCESS_KEY_FILE]
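
If the key is not stored in a file, you can pass it as a literal instead (keep in mind that the value may then land in your shell history):

$ kubectl create secret generic acme-route53 --from-literal=secret-access-key=[YOUR_SECRET_ACCESS_KEY]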

Now you will need to grant the correct permissions on your DNS provider’s side. If you are using Route53, the policy addressed above should give you the permissions you need.

4. Create ingress with tls-acme annotation and tls spec

We will go through this step before we create the Certificate resource. This is because once an ingress with the correct tls annotation is created, cert-manager will automatically create a Certificate resource. If we created our own Certificate resource before this step, the auto-generated resource would override it. We do not want that!

In this step, you need to create your everyday ingress while adding the tls annotation and the tls field in the ingress spec category. You can use the following ingress.yaml file as an example:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
    - hosts:
        - "*.example.com"
      secretName: example-wildcard
  rules:
    - host: subdomain.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: my-app-service
              servicePort: 80

Run the following command to create your ingress resource.

$ kubectl create -f ingress.yaml

By creating this resource, you are publishing your application (service) onto https://subdomain.example.com. Note that this means external-dns has now automatically created the required records on Route53 for your subdomain. To verify the record changes have gone through, look at your external-dns pod logs.
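
As an additional check, you can query the new record directly once it has propagated; it should resolve to your ingress controller’s external IP:

$ dig +short subdomain.example.com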

This ingress will now create a Certificate resource which we will override next. Remember, in order for the process to be complete, we need a new Certificate resource and to change the issuer’s URL to prod in order to create real certificates.

5. Create a certificate template

Your Certificate can be created using the following cert.yaml file and running $ kubectl create -f cert.yaml:

apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: example-wildcard # name of certificate
spec:
  secretName: example-wildcard # name of tls secret
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  commonName: "*.example.com"
  dnsNames:
    - example.com
  acme:
    config:
      - dns01:
          provider: route53
        domains:
          - "*.example.com"
          - example.com

Depending on which subdomains (or wildcard) you plan to use the certificate for, this file will vary. See this link.

You can now monitor your certificate being created and synced with your domain through the following commands.

  • You can run $ kubectl describe certificate example-wildcard. The output should show you the events unfolding one after another as your fake certificate gets created (a quick status check is also sketched after this list).
  • Run $ kubectl describe clusterissuer letsencrypt-prod in case you want to debug a problem with the issuer.
  • You can run the same describe command on your ingress: $ kubectl describe ingress my-ingress
  • Most importantly you can monitor every step of the process by checking your cert-manager and nginx-ingress-controller pod logs
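
For instance, a quick status overview could look like this (the certificate and secret names are the ones defined above):

$ kubectl get certificates
$ kubectl get secret example-wildcard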

You should now be able to browse to your domain through https (with fake certificate). You should, however, get a Not Secure warning with the following message:

Your connection is not private

NET::ERR_CERT_AUTHORITY_INVALID
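
You can also confirm from the command line which authority signed the certificate being served; with the staging issuer, the output should mention a fake Let’s Encrypt intermediate (something like “Fake LE Intermediate X1”):

$ echo | openssl s_client -connect subdomain.example.com:443 -servername subdomain.example.com 2>/dev/null | openssl x509 -noout -issuer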

6. Move onto prod, and create real TLS certificates

Once your app is debugged and you have the desired logs and event messages, you can go ahead and simply change your issuer’s server URL to the prod URL (note that the issuer is also renamed to letsencrypt-prod, matching the references in the earlier yaml files). Your issuer yaml file will look like this:

apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
  namespace: default
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: [YOUR_EMAIL]
    privateKeySecretRef:
      name: letsencrypt-prod
    dns01:
      providers:
        - name: route53
          route53:
            region: us-east-1
            accessKeyID: [YOUR_ACCESS_KEY_ID]
            secretAccessKeySecretRef:
              name: acme-route53
              key: secret-access-key

Once applied, make sure to delete the old TLS secret:

$ kubectl delete secret example-wildcard
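
After the secret is deleted, cert-manager should re-issue the certificate against the prod URL. You can watch the events and re-run the issuer check from before; the issuer line should now name a real Let’s Encrypt authority instead of the fake staging one:

$ kubectl describe certificate example-wildcard
$ echo | openssl s_client -connect subdomain.example.com:443 -servername subdomain.example.com 2>/dev/null | openssl x509 -noout -issuer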

In case the propagation does not happen automatically, delete your ingress, old issuer, certificate, and certificate secret, and re-apply all of the yaml files with the new prod URL. You should now be able to see a successful cert generation process in your cert-manager logs, and easily access your domain over an https connection.

You are done!

Some notes:

Let’s go over a few challenges that you might face as you follow the steps above.

  • In case you need to delete cert-manager and reinstall it, you will need to delete the custom resource definition objects. Run the following to delete the objects and reinstall cert-manager:
$ kubectl delete customresourcedefinition certificates.certmanager.k8s.io
$ kubectl delete customresourcedefinition clusterissuers.certmanager.k8s.io
$ kubectl delete customresourcedefinition issuers.certmanager.k8s.io
$ helm del --purge cert-manager
$ helm install --name cert-manager ... #take the command from above
  • In order to restart your Nginx Ingress Controller, simply find the corresponding pod, and delete it. The pod will be recreated. However, if you would like to delete your Nginx Ingress Controller, you can run the command: helm del --purge nginx-ingress.
  • Here is detailed documentation on how Let’s Encrypt works with your dns provider and domain names.
  • When making your static IP, make sure to define a region. Do not use the --global flag. We need a regional IP for our Ingress Controller (see the sketch after this list).
  • If you are getting an error about the “wrong challenge type not allowed”, you are either not providing the correct Let’s Encrypt account or you are using the wrong challenge type (http01 instead of dns01). Make sure your cert name and secret name are referred to correctly (especially in your ingress.yaml). As well, make sure that you do not change the name of your issuer constantly. Another note is to create your Certificate after your ingress to override the default Certificate.
  • I used a combination of this tutorial, this tutorial, Kubernetes docs, GKE docs, Let’s Encrypt docs, cert-manager docs and GitHub issues dashboard for debugging.
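
Regarding the static IP note above: assuming you are on GKE and using the gcloud CLI, reserving a regional address could look like this sketch (the address name and region are placeholders):

$ gcloud compute addresses create nginx-ingress-ip --region us-east1
$ gcloud compute addresses describe nginx-ingress-ip --region us-east1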

If you have deleted your fake certificate and you are still getting the INVALID AUTH page as you try to access your domain, you could be making one of the following mistakes:

  • Your browser cache is not cleared.
  • You are deleting the wrong certificate.
  • Your new issuer noticed the old certificate and did not create a new one (delete the old one).
  • Keep in mind to delete the certificate AND the secret. The secret contains your tls certificate. The Certificate object merely tells the issuer what kind of certificate you need.

In case you overshoot your Let’s Encrypt prod certificate quota, you can send in a request to increase your quota to a higher limit through the rate-limiting request form.

Enjoy!
