Cloud Security: DNS Configuration and TLS Encryption in Google Kubernetes Engine

Dr Stephen Odaibo
The Blog of RETINA-AI Health, Inc.
12 min read · Apr 21, 2020
Kubernetes, Letsencrypt, and Google Cloud Logos (Public Domain)

Security and identifiability remain of paramount importance as we mold and embrace the exciting cloud-centric paradigm shift occurring in enterprise software development. As the landscape shifts from monoliths toward microservices (and back), from on-premise computing to the cloud, and from ad hoc provisioning and configuration to infrastructure-as-code, the encryption and domain name configuration infrastructure is changing as well. Therefore, in this tutorial we will specifically cover the following two topics:

  • Configuring Cloud Domain Name Services (DNS) in Google Kubernetes Engine (GKE) with custom domain names
  • TLS encryption of network traffic between our cluster and the internet

A prerequisite for this tutorial is the content covered in my prior tutorial on the GKE ingress mechanism. Further, we assume the following infrastructure is available and set up:

  • A GKE cluster with an application (ideally a web application) deployed. See the prior tutorial
  • Ownership of a domain name obtained from one of the domain registrars

Domain Name Service (DNS)

We will configure Cloud DNS in Google Cloud Platform (GCP), binding our custom domain name to our Kubernetes cluster. Let us begin by creating a global static IP address in Google Cloud as follows:

$ gcloud compute addresses create myhealthcare-ml-static-ip --global

This creates a static IP address that will serve as the IP address of our DNS domain. The general form of the command is: $ gcloud compute addresses create <name you choose for your IP address> --global

To see the IP address that was created, run: $ gcloud compute addresses describe <name you choose for your IP address> --global

In this case, the IP address is 35.190.43.218.

Adding the A Record:

In your cloud console, navigate to Network Services >> Cloud DNS, and click “Create Zone.”

You’ll be taken to the screen below. Set the Zone type to “Public” and fill in a Zone name. Then enter your domain name. My domain name is “clinicaldata.ai”; yours of course is different.

In the above, we simply chose and entered a name for our DNS zone and DNS name. Then click the create button.

In the above, we entered the IP address which we created earlier in Google Cloud. Then click create, to yield the following page:
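For readers who prefer the command line, the same zone and A record can also be created with gcloud. Below is a minimal sketch; the zone name clinicaldata-zone is illustrative, and the IP address is the static address we created above:

# Create a public managed zone for the domain (zone name is illustrative)
$ gcloud dns managed-zones create clinicaldata-zone \
--dns-name="clinicaldata.ai." \
--description="Zone for clinicaldata.ai"

# Add an A record pointing the apex domain at our static IP
$ gcloud dns record-sets transaction start --zone=clinicaldata-zone
$ gcloud dns record-sets transaction add "35.190.43.218" \
--name="clinicaldata.ai." --ttl=300 --type=A --zone=clinicaldata-zone
$ gcloud dns record-sets transaction execute --zone=clinicaldata-zone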

Adding the CNAME Record:

Click the “Add record set” button in the above console. Then, under “Resource Record Type,” scroll down to CNAME and select it. Include the prefix www in the DNS Name field as shown below. In the “Canonical name” field, enter the domain name which you obtained from a registrar, and end it with a period as shown, for instance “clinicaldata.ai.” Note the trailing period in the canonical name. The result is as below:

Then click create. The result is as follows:
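The equivalent CNAME record can also be added from the command line, using the same transaction pattern as the sketch above (zone name again illustrative):

$ gcloud dns record-sets transaction start --zone=clinicaldata-zone
$ gcloud dns record-sets transaction add "clinicaldata.ai." \
--name="www.clinicaldata.ai." --ttl=300 --type=CNAME --zone=clinicaldata-zone
$ gcloud dns record-sets transaction execute --zone=clinicaldata-zone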

The next step is to bind the nameservers displayed above to the domain name’s DNS registration with the registrar. In particular, the following nameservers should be copied and entered into the domain name registration:

ns-cloud-c1.googledomains.com.
ns-cloud-c2.googledomains.com.
ns-cloud-c3.googledomains.com.
ns-cloud-c4.googledomains.com.

Examples of domain name registrars include Google Domains, Namecheap, GoDaddy, Porkbun, Squarespace, HostGator, Cloudflare, etc. They each have various pros and cons, including pricing, reliability, customer service, and extension offerings (e.g., .ai, .tv). I obtained clinicaldata.ai from Porkbun, so I log in to my account there, navigate to manage domains, and see this:

Expanding the details yields:

In the above, to the right of AUTHORITATIVE NAMESERVERS, we see four entries, which are the default nameservers. We must change these to the Cloud DNS nameservers listed above. We click edit and make the change, yielding:
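Once the registrar change is saved, it can take anywhere from minutes to 48 hours to propagate. One way to confirm that the world now sees the Cloud DNS nameservers is a quick dig query (the output shown assumes propagation is complete):

$ dig +short NS clinicaldata.ai
ns-cloud-c1.googledomains.com.
ns-cloud-c2.googledomains.com.
ns-cloud-c3.googledomains.com.
ns-cloud-c4.googledomains.com.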

Binding External Domain Name to Ingress

In the above, we showed how to configure Cloud DNS and attach an external domain name to it. What remains is to bind this gcloud-DNS-configured external domain to our ingress mechanism, so that when clients on the internet visit that domain name, they land at our ingress and are routed into our Kubernetes cluster as instructed in the ingress object. In a previous tutorial, I demonstrated how to route external internet traffic into our cluster using the ingress mechanism. But in that tutorial, we delegated the responsibility of selecting the IP address to Google Kubernetes Engine, via its automatic creation of a GCLB (Google Cloud Load Balancer for HTTP(S)). In that case, we did not control which IP address was used, and we did not have a custom domain. Let us now see how to use our custom domain, clinicaldata.ai, which we configured above. Picking up from the prior tutorial, we simply need to update the ingress object with our domain name as shown below:

Our custom domain name shown in yellow circle
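Since the screenshot is not reproduced here, below is a minimal sketch of what the updated (pre-TLS) ingress might look like, assuming the english-service and yoruba-service backends from the prior tutorial; the ingress name basicingress is illustrative:

# Sketch of the pre-TLS ingress, binding our static IP and custom domain
$ cat > basic-ingress.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: basicingress
  annotations:
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/ingress.global-static-ip-name: "myhealthcare-ml-static-ip"
spec:
  rules:
  - host: clinicaldata.ai
    http:
      paths:
      - path: /english
        backend:
          serviceName: english-service
          servicePort: 8080
      - path: /yoruba
        backend:
          serviceName: yoruba-service
          servicePort: 8080
EOF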

We update our infrastructure with kubectl apply -f <your ingress YAML> and then look in the console to see this:

We’re pleased and enter our new URL in the browser to see if it indeed works:

And while we are delighted that it does work, we are not too pleased about the “Not Secure” sign we see. This is a warning from our browser that the client’s connection to the server is not encrypted and therefore not secure. To fix this, we need to implement TLS encryption.

TLS Encryption

The remainder of this tutorial draws primarily from the official cert-manager documentation, which is excellent.

Installing cert-manager

Installing cert-manager involves a number of steps which need to be done in the correct order to avoid errors.

First switch the namespace to kube-system as follows:

$ kubectl config set-context --current --namespace=kube-system

Let’s proceed by installing cert-manager, the certificate manager for Kubernetes. It is an open-source project by Jetstack. We will be using the Kubernetes package manager, Helm. To begin, run the following command: $ helm init

Prior to installing cert-manager using its Helm chart, one must first install the following custom resource definition (CRD) extensions directly from Jetstack’s Git repo: certificates, issuers, cluster issuers, orders, and challenges:

  • certificates.certmanager.k8s.io
  • issuers.certmanager.k8s.io
  • clusterissuers.certmanager.k8s.io
  • orders.certmanager.k8s.io
  • challenges.certmanager.k8s.io

We use the following command to install the above CRDs:

kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.15.0-alpha.0/cert-manager-legacy.crds.yaml

We can confirm that we have the needed resources by running kubectl get crd | grep cert-manager :

Next, we must execute a sequence of three commands to create a service account for Tiller (the Helm server), assign that service account a cluster-admin role, and deploy a patch that will enable us to install cert-manager. That three-command sequence is as follows:

$ kubectl create serviceaccount --namespace kube-system tiller

$ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller

$ kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'

Next, add the Jetstack Helm repo:

$ helm repo add jetstack https://charts.jetstack.io

At this point we are ready to install cert-manager as follows:

$ helm install --name cert-manager --namespace kube-system jetstack/cert-manager

Checking with helm status cert-manager
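In addition to helm status, one can confirm that the cert-manager pods came up in the namespace where we installed them:

# Verify that the cert-manager pods are running
$ kubectl get pods --namespace kube-system | grep cert-manager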

Create GCP Service Account and Obtain Key

In the code below, we create a GCP IAM service account, which we will call dns-admin; then we create a key for that account, which we store in gcp-dns-admin.json; and finally we assign the service account the role of dns.admin. See below:

GCP_PROJECT=<your GCP Project ID here>

gcloud iam service-accounts create dns-admin \
--display-name=dns-admin \
--project=${GCP_PROJECT}

gcloud iam service-accounts keys create ./gcp-dns-admin.json \
--iam-account=dns-admin@${GCP_PROJECT}.iam.gserviceaccount.com \
--project=${GCP_PROJECT}

gcloud projects add-iam-policy-binding ${GCP_PROJECT} \
--member=serviceAccount:dns-admin@${GCP_PROJECT}.iam.gserviceaccount.com \
--role=roles/dns.admin

We can verify that the key was indeed downloaded locally by checking for the file gcp-dns-admin.json.
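One step worth making explicit before writing the ClusterIssuer: the key file must be stored as a Kubernetes secret in the namespace where cert-manager is installed, because the ClusterIssuer references it by secret name and key. A sketch, with the secret name (cert-manager) and key (credentials.json) chosen to match the serviceAccountSecretRef fields in the ClusterIssuer YAML below:

# Store the dns-admin key in a secret that the ClusterIssuer will reference
$ kubectl create secret generic cert-manager \
--namespace=kube-system \
--from-file=credentials.json=./gcp-dns-admin.json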

Setting up the Cluster Issuer

One has the option of an Issuer or a ClusterIssuer. Both agents represent certifying authorities and can issue signed certificates, but the Issuer is scoped to a particular namespace while the ClusterIssuer is scoped to the entire cluster. We will use a ClusterIssuer. Here is its YAML:

In the above YAML, pay attention to the fields highlighted in yellow. The kind is ClusterIssuer. Note that we set the namespace to kube-system. What matters is that the namespace in which we helm-installed cert-manager is the same namespace to which we apply the ClusterIssuer and Certificate YAMLs. We can pick any name for the privateKeySecretRef:name field; and we must recall the ClusterIssuer’s metadata:name (here the same as privateKeySecretRef:name), as we will need to enter it exactly in the issuerRef:name subfield of the Certificate YAML below. The serviceAccountSecretRef:key subfield is the filename under which the GCP dns-admin service account key is stored in the secret we created above, and the serviceAccountSecretRef:name is the name of that secret; both are read by cert-manager when it performs the DNS challenge, but neither needs to be entered again in the remainder of this implementation. The ClusterIssuer code is below:

apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
  namespace: kube-system
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: <enter your email here>
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    # ACME DNS01 Provider Verification Config
    - dns01:
        # GCP DNS
        clouddns:
          # gcloud service account secret key
          serviceAccountSecretRef:
            name: cert-manager
            key: credentials.json
          # Your GCP Project where DNS is configured
          project: <your GCP Project ID here>

We apply the cluster issuer YAML using

$ kubectl apply -f <ClusterIssuer YAML filename here>

After which we can check the cluster issuer:

The Cluster Issuer has been created and is in the “Ready” state.
The secret associated with the Cluster Issuer has also been created. It holds our ACME account’s private key, which will be used to authenticate our communications with the certifying authority.
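The checks behind these screenshots can be reproduced from the terminal (the names follow the ClusterIssuer YAML above):

$ kubectl get clusterissuer letsencrypt-prod
$ kubectl get secret letsencrypt-prod --namespace kube-system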

$ kubectl describe clusterissuer letsencrypt-prod yields:

This implies that we have successfully registered an ACME account with the ACME server. This account has been bound to our Cloud DNS and cluster infrastructure via the GCP dns-admin service account we created earlier, whose key we specified in the ClusterIssuer YAML. With this in place, when we ask the certifying authority, Let’s Encrypt, to assign a certificate to our cluster, it has all the wiring to initiate an order-and-challenge transaction with which it verifies that we indeed own and control the Cloud DNS zone and cluster for which we are requesting the certificate. The secret that was created in association with the Cluster Issuer holds the ACME account’s private key, which is used to sign and authenticate the communication between our GKE system and Let’s Encrypt.

Next we request a certificate from the certifying authority by using the following YAML:

In the above certificate YAML, note that we have assigned it to the kube-system namespace because, for things to work, both the cluster issuer and the certificate must reside in the same namespace into which we earlier installed cert-manager. The secretName field above is to be recalled, as it must exactly match the tls:secretName field in the ingress YAML. The issuerRef:name field must exactly match the metadata:name of the cluster issuer YAML from earlier. The commonName and dnsNames fields should each contain the name(s) of our domain(s), which we purchased from a registrar and configured in our Cloud DNS as outlined earlier in this tutorial. Here is the Certificate YAML code:

apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: certls
  namespace: kube-system
spec:
  secretName: certls
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  commonName: "clinicaldata.ai"
  dnsNames:
  - "clinicaldata.ai"

We apply the above certificate YAML with kubectl apply -f <Certificate YAML filename here>:

Initial check shows that the certificate is not yet ready:

After 3–5 mins (sometimes longer), the certificate is successfully issued:
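The status checks shown in the screenshots can be reproduced with kubectl, watching the READY column of the certificate and, if it stays False, describing it to see the order and challenge progress:

$ kubectl get certificate certls --namespace kube-system
$ kubectl describe certificate certls --namespace kube-system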

At this point we can implement our ingress file following the pattern from earlier in this tutorial. The only modification is to instruct that the newly created TLS certificate is to be used. The modified ingress YAML is as shown:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tlsingress
  annotations:
    kubernetes.io/ingress.class: "gce"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    kubernetes.io/ingress.global-static-ip-name: "myhealthcare-ml-static-ip"
spec:
  tls:
  - hosts:
    - clinicaldata.ai
    secretName: certls
  rules:
  - host: clinicaldata.ai
    http:
      paths:
      - path: /english
        backend:
          serviceName: english-service
          servicePort: 8080
      - path: /yoruba
        backend:
          serviceName: yoruba-service
          servicePort: 8080

Upon applying the above ingress YAML and waiting 3–5 mins we obtain the following:

Navigating to clinicaldata.ai, clinicaldata.ai/english, and clinicaldata.ai/yoruba in our browser yields the following:

The Padlock Icon (Circled in Green) and the “https” prefix show our TLS encryption.

Clicking on the padlock icon, we see more details confirming that the connection is secure and the certificate is valid:

Clicking on the certificate in the above expands it out in more detail as shown below:
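The browser checks above can also be reproduced from a terminal. A quick sketch using openssl to print the served certificate’s subject, issuer, and validity dates:

# Inspect the certificate served at our domain
$ echo | openssl s_client -connect clinicaldata.ai:443 -servername clinicaldata.ai 2>/dev/null \
| openssl x509 -noout -subject -issuer -dates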

Conclusion

In this tutorial, we have demonstrated how to obtain a custom domain name and configure it in Google Cloud DNS such that traffic to it terminates at our Kubernetes cluster. We also showed how to use Let’s Encrypt certificates and cert-manager to implement TLS termination in our GKE cluster.

BIO: Dr. Stephen G. Odaibo is CEO & Founder of RETINA-AI Health, Inc., and is on the Faculty of the MD Anderson Cancer Center, the #1 Cancer Center in the world. He is a Physician, Retina Specialist, Mathematician, Computer Scientist, and Full Stack AI Engineer. In 2017 he received the UAB College of Arts & Sciences’ highest honor, the Distinguished Alumni Achievement Award. And in 2005 he won the Barrie Hurwitz Award for Excellence in Neurology at Duke University School of Medicine, where he topped the class in Neurology and in Pediatrics. He is the author of the books “Quantum Mechanics & The MRI Machine” and “The Form of Finite Groups: A Course on Finite Group Theory.” Dr. Odaibo chaired the “Artificial Intelligence & Tech in Medicine Symposium” at the 2019 National Medical Association Meeting. Through RETINA-AI, he and his team are building AI solutions to address the world’s most pressing healthcare problems. He resides in Houston, Texas, with his family.

REFERENCES
S.G. Odaibo, “Ingress Control in Google Kubernetes Engine”
cert-manager documentation, https://cert-manager.io/docs/
Tommy Elmesewdy, “Kubernetes: cert-manager on GKE using Let’s Encrypt”
