Kubernetes 10-domain Ingress
The “Because One Can” series
A customer is interested in exposing its multi-tenant Google Kubernetes Engine (GKE) service through per-customer TLS endpoints. The GCP L7 load balancer currently supports only 10 certs (there are plans to expand this) but I decided to give 10 a whirl and try some other things along the way, including Jsonnet and Google Cloud DNS transactions.
Setup
I’ve covered much of this ground before and you likely already know how to create Kubernetes Engine clusters. I’m going to create a regional cluster and mess with the RBAC so, if this is of interest, read on:
export ROOT=$(whoami)-$(date +%y%m%d)
export PROJECT=${ROOT}-multi-domain
export CLUSTER=${ROOT}-cluster-01
export BILLING=[[YOUR-BILLING-ID]]
export REGION=[[YOUR-PREFERRED-REGION]] # us-west1

gcloud alpha projects create $PROJECT

gcloud beta billing projects link $PROJECT \
--billing-account=$BILLING

gcloud services enable container.googleapis.com \
--project=$PROJECT
You may specify master and node versions but you need to determine what’s available in your region. You can eyeball the versions in the output of the following and pick your favorite:
gcloud beta container get-server-config \
--project=$PROJECT \
--region=${REGION}
and then create the cluster:
gcloud beta container clusters create $CLUSTER \
--username="" \
--cluster-version=1.8.4-gke.1 \
--machine-type=custom-1-4096 \
--image-type=COS \
--preemptible \
--num-nodes=1 \
--enable-autorepair \
--enable-autoscaling \
--enable-autoupgrade \
--enable-cloud-logging \
--enable-cloud-monitoring \
--min-nodes=1 \
--max-nodes=2 \
--labels=medium=c6c2bbf381bb \
--region=$REGION \
--project=$PROJECT
For broad (!) RBAC permissions:
ACCOUNT=$(gcloud config get-value account)

kubectl create clusterrolebinding $(whoami)-cluster-admin-binding \
--clusterrole=cluster-admin \
--user=$ACCOUNT

kubectl create clusterrolebinding kube-system-cluster-admin \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:default

kubectl create clusterrolebinding default-cluster-admin \
--clusterrole=cluster-admin \
--serviceaccount=default:default
And, finally:
gcloud beta container clusters get-credentials $CLUSTER \
--project=$PROJECT \
--region=$REGION

kubectl proxy --port=0 &
The proxy command should report
Starting to serve on [[SOME-URL]]
All being well, you can then open Kube UI from that:
https://[[SOME-URL]]/ui
and/or via the Cloud Console too:
http://console.cloud.google.com/kubernetes/list?project=${PROJECT}
Secrets
Let’s create 10 TLS certs for our planned Ingress. There’s a good quick way to generate TLS certs and manifest these as Kubernetes Secrets:
PREFIX=[[YOUR-PREFIX]]
DOMAIN=[[YOUR-DOMAIN]]

for NUM in {0..9}
do
NAME=${PREFIX}${NUM}.${DOMAIN}
openssl req \
-x509 \
-nodes \
-days 365 \
-newkey rsa:2048 \
-keyout ${NAME}.key \
-out ${NAME}.crt \
-subj "/CN=${NAME}"
done
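If you’d like a sanity check that the certs carry the expected CNs, here’s a minimal, self-contained sketch of the idea; `NAME` is a hypothetical stand-in for `${PREFIX}${NUM}.${DOMAIN}`:

```shell
# Sanity check (assumes openssl, as used above): generate one sample
# self-signed cert and confirm its subject CN matches its name.
# NAME is an example stand-in for ${PREFIX}${NUM}.${DOMAIN}.
TMP=$(mktemp -d)
NAME="domain0.example.com"
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout "${TMP}/${NAME}.key" -out "${TMP}/${NAME}.crt" \
  -subj "/CN=${NAME}" 2>/dev/null
# Print the subject; it should contain CN=domain0.example.com
SUBJECT=$(openssl x509 -noout -subject -in "${TMP}/${NAME}.crt")
echo "${SUBJECT}"
```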
Then (or combined if you’d prefer)
for NUM in {0..9}
do
NAME=${PREFIX}${NUM}.${DOMAIN}
echo "
apiVersion: v1
kind: Secret
metadata:
  name: ${NAME}
data:
  tls.crt: `base64 --wrap 0 ./${NAME}.crt`
  tls.key: `base64 --wrap 0 ./${NAME}.key`
" | kubectl apply --filename -
done
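One detail worth calling out: each Secret data value must be a single unbroken base64 string, and GNU base64 wraps its output at 76 characters by default, which would splinter the value across YAML lines. That’s why the loop above passes `--wrap 0`. A quick demonstration:

```shell
# GNU base64 wraps at 76 characters by default; --wrap 0 disables wrapping.
# A Secret's data value must be one unbroken line, hence --wrap 0 above.
PAYLOAD=$(head -c 200 /dev/zero | tr '\0' 'A')        # 200-byte stand-in for a cert
WRAPPED=$(printf '%s' "${PAYLOAD}" | base64)          # default: wraps into multiple lines
FLAT=$(printf '%s' "${PAYLOAD}" | base64 --wrap 0)    # flat: one single line
echo "wrapped spans $(printf '%s\n' "${WRAPPED}" | wc -l) lines"
echo "flat spans    $(printf '%s\n' "${FLAT}" | wc -l) line"
```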
All being well, the following should include the 10 secrets ${PREFIX}${NUM}.${DOMAIN} and a default token:
kubectl get secrets --output=name
Deployment|Service
Using my go-to testing image to create a foundational service for the Ingress:
kubectl run whoami \
--image=emilevauge/whoami \
--replicas=2 \
--port=80

kubectl expose deployment/whoami \
--port=9999 \
--target-port=80 \
--type=NodePort
I used Jsonnet (see below) to rework the Deployment and Service too and will include those below, but will focus the explanation on the Ingress.
Jsonnet
The Kube UI is an excellent tool for observing Kubernetes and the Cloud Console UI tools are getting better. As an aside, it was pointed out to me (and I concur) that switching between Kubernetes (pods, services, ingresses) and GCP worlds can be jarring. I’m not entirely convinced that the Cloud Console UI addresses this for me.
Regardless, for changing Kubernetes, the command-line remains the best tool and, even though kubectl provides many (well-thought-out) commands for creating deployments, services, etc. etc., at some early point, you just need to grab config files by their horns and become comfortable crafting them, understanding them and applying them.
There are various efforts underway to insulate developers from writing config files. I’m compelled to try Ksonnet but, today, I’m going to stick with an earlier, simpler tool: Jsonnet.
You can grab Jsonnet from its GitHub repo and then ‘make’ it:
git clone https://github.com/google/jsonnet.git
cd jsonnet
make
jsonnet --help

Jsonnet commandline interpreter v0.9.5

General commandline:
jsonnet [<cmd>] {<option>} { <filename> }
Note: <cmd> defaults to "eval"

The eval command:
jsonnet eval {<option>} <filename>
Note: Only one filename is supported
I’m following a pattern (thanks to a Bitnami kube-manifests GitHub repo for guidance and best practices) of having a file for the instance (whoami-ingress.jsonnet) and a file for the ‘class’ (ingress.jsonnet):
Please don’t take these by any means as a masterclass in Jsonnet. I got them to work and that’s all I offer ;-) The whoami-ingress imports the ‘class’ and provides values for the required arguments. If these aren’t provided, the “error” is generated.
The Ingress file generates an Ingress spec (note the “kind” and “apiVersion”) and you should recognize the pattern. The motivation for Jsonnet is that this YAML has repeating sections (10 of them) for “rules” and for “tls”, and two of the many things that templating does well are variable substitution (covered) and iteration. Jsonnet’s loops resemble Python’s comprehensions: [ X(N) for N in [….] ]. Both loops iterate 0..9 and each results in 10 copies of the JSON generated in the array. Both loops leverage a closure called “name” that combines its single parameter, the loop iterator value (num), with the values of “prefix” and “domain” that are global to the script.
NB closure vs. function can be a little academic but, in this case, “name” is a function that takes one parameter (num) but closes over the global values of “prefix” and “domain” in the script. So “name” is a function closure ;-)
NB The “$” prefixing here identifies “prefix” and “domain” as being global to the script.
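To make the pattern concrete, here’s a minimal sketch of what such a ‘class’ file could look like. This is not the author’s actual file, just an illustration of the technique described above; the backend names (“whoami”, port 9999) follow the Service created earlier, and everything else is hypothetical:

```jsonnet
// ingress.jsonnet (sketch): the 'class'. Callers must override the hidden
// prefix and domain fields or the error values fire.
{
  prefix:: error "prefix is required",
  domain:: error "domain is required",

  // "name" is the function closure described above: one parameter (num),
  // closing over the script-global prefix and domain via "$".
  local name(num) = "%s%d.%s" % [$.prefix, num, $.domain],

  apiVersion: "extensions/v1beta1",
  kind: "Ingress",
  metadata: {
    name: "multi-domain",
    annotations: { "kubernetes.io/ingress.class": "gce" },
  },
  spec: {
    // Both comprehensions iterate 0..9, yielding 10 copies each.
    tls: [{ secretName: name(num) } for num in std.range(0, 9)],
    rules: [
      {
        host: name(num),
        http: { paths: [{
          backend: { serviceName: "whoami", servicePort: 9999 },
        }] },
      }
      for num in std.range(0, 9)
    ],
  },
}

// whoami-ingress.jsonnet (sketch) would then supply the required values:
// (import "ingress.jsonnet") + { prefix:: "domain", domain:: "example.com" }
```

The hidden (`::`) fields keep “prefix” and “domain” out of the rendered JSON while remaining overridable by the instance file.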
The result of Jsonnet processing whoami-ingress.jsonnet is just an Ingress resource. You can see this generated:
jsonnet whoami-ingress.jsonnet
{
  "apiVersion": "extensions/v1beta1",
  "kind": "Ingress",
  "metadata": {
    "annotations": {
      "kubernetes.io/ingress.class": "gce"
    },
    "name": "multi-domain"
  },
  ...
}
You can apply this directly against your cluster:
jsonnet whoami-ingress.jsonnet \
| kubectl apply --filename -

ingress "multi-domain" unchanged
The nice folks at Heptio provide a Visual Studio Code plugin for Jsonnet which will preview the generated output. Kube UI doesn’t do much with Ingress resources, but the Cloud Console view is much more interesting: it shows the Ingress and the L7 load balancer that was configured.
I challenge anyone to find an easier way to program GCP L7s than with Kubernetes Ingress resources.
All that remains is to add these domains to our DNS records.
Cloud DNS
If you’re using Cloud DNS, it’s relatively trivial to script these additions using the Cloud SDK (aka “gcloud”). Before you proceed, you may wish to grab a snapshot of your current DNS Zone configuration. Just in case:
DNS_ZONE=[[YOUR-CLOUD-DNS-ZONE]]

gcloud dns record-sets export ${DNS_ZONE}.yaml \
--zone ${DNS_ZONE} \
--project=${PROJECT}
We’ll use transactions; note that a transaction is effected through the creation of a transaction.yaml file:
DNS_ZONE=[[YOUR-CLOUD-DNS-ZONE]]

gcloud beta dns record-sets transaction start \
--zone=${DNS_ZONE} \
--project=${PROJECT}

Transaction started [transaction.yaml].
Here’s the transaction.yaml that results when I run this command:
---
additions:
- kind: dns#resourceRecordSet
  name: domain.com.
  rrdatas:
  - ns-cloud-d1.googledomains.com. cloud-dns-hostmaster.google.com. 5 21600 3600 259200 300
  ttl: 21600
  type: SOA
deletions:
- kind: dns#resourceRecordSet
  name: domain.com.
  rrdatas:
  - ns-cloud-d1.googledomains.com. cloud-dns-hostmaster.google.com. 1 21600 3600 259200 300
  ttl: 21600
  type: SOA
The curious addition and deletion of seemingly the same entry is because the entries do differ subtly: the zone’s SOA serial number is being bumped. In the transaction above, the record with serial “1” is deleted and its replacement with serial “5” is created.
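An SOA rrdata packs seven fields: primary name server, hostmaster, serial, refresh, retry, expire, and minimum TTL. The serial (the third field) is the part that changes between the two records. A small, self-contained illustration using the strings from the transaction.yaml above:

```shell
# An SOA rrdata has 7 fields: primary NS, hostmaster, serial, refresh,
# retry, expire, minimum TTL. The serial (field 3) is what the transaction
# bumps; these strings are copied from the transaction.yaml above.
OLD='ns-cloud-d1.googledomains.com. cloud-dns-hostmaster.google.com. 1 21600 3600 259200 300'
NEW='ns-cloud-d1.googledomains.com. cloud-dns-hostmaster.google.com. 5 21600 3600 259200 300'
OLD_SERIAL=$(printf '%s' "${OLD}" | awk '{print $3}')
NEW_SERIAL=$(printf '%s' "${NEW}" | awk '{print $3}')
echo "SOA serial: ${OLD_SERIAL} -> ${NEW_SERIAL}"
```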
Now we simply iterate over our 10 host-domains and add them as A (address) records pointing to the IP of the load balancer that was created by the Ingress:
PREFIX=[[YOUR-PREFIX]]
DOMAIN=[[YOUR-DOMAIN]]LB=$(kubectl get ingress/multi-domain \
--output=jsonpath='{ .status.loadBalancer.ingress[0].ip }')for NUM in {0..9}
do
gcloud beta dns record-sets transaction add "${LB}" \
--name=${PREFIX}${NUM}.${DOMAIN} \
--ttl=300 \
--type=A \
--zone=${DNS_ZONE} \
--project=${PROJECT}
done
and finally commit the changes:
gcloud beta dns record-sets transaction execute \
--zone=${DNS_ZONE} \
--project=${PROJECT}
and, you should be able to confirm that the changes have been effected:
gcloud beta dns record-sets list \
--zone=${DNS_ZONE} \
--project=${PROJECT} \
| grep "${PREFIX}[0-9].${DOMAIN}"
Test
You can confirm that the changes are effective for your client’s DNS lookup with:
for NUM in {0..9}
do
nslookup ${PREFIX}${NUM}.${DOMAIN} 8.8.8.8
done
and, you should be able to curl the endpoints:
for NUM in {0..9}
do
curl --silent --insecure https://${PREFIX}${NUM}.${DOMAIN}/ \
| grep "Host: ${PREFIX}[0-9].${DOMAIN}"
done
Conclusion
Those Ingress resources can be gnarly but here was one that supported 10 domains (and [today] you can’t get more than that). Its creation was facilitated by a wander through the wonders that are Jsonnet.
The SRE folks at Google try to avoid “toil” and oftentimes you can avoid toil by automating. Sometimes to automate, you have to learn new tools. Jsonnet is powerful and I’m confident that, if I continue to use it, I’ll flex my knowledge of it and it’ll be a useful addition to my toolset.
Cloud DNS is great. It hasn’t always been the easiest beast to program. But, as you saw, it’s getting better.
Tidy-up
You should revert your DNS records:
gcloud beta dns record-sets transaction start \
--zone=${DNS_ZONE} \
--project=${PROJECT}

for NUM in {0..9}
do
gcloud beta dns record-sets transaction remove "${LB}" \
--name=${PREFIX}${NUM}.${DOMAIN} \
--ttl=300 \
--type=A \
--zone=${DNS_ZONE} \
--project=${PROJECT}
done

gcloud beta dns record-sets transaction execute \
--zone=${DNS_ZONE} \
--project=${PROJECT}
Running our prior test should return no results:
gcloud beta dns record-sets list \
--zone=${DNS_ZONE} \
--project=${PROJECT} \
| grep "${PREFIX}[0-9].${DOMAIN}"
If anything untoward arises, you followed my advice and took a backup, so you may restore that.
I create clusters per task and delete them just as often:
gcloud beta container clusters delete $CLUSTER \
--project=$PROJECT \
--region=$REGION \
--quiet
But, if you want to tidy-up, you can just delete all the wonderful things we created:
kubectl delete ingress/multi-domain

for NUM in {0..9}
do
kubectl delete secret/${PREFIX}${NUM}.${DOMAIN}
done

kubectl delete service/multi-domain
kubectl delete deployment/multi-domain
If you’d like to delete the GCP project (this is irrevocable), you may:
gcloud projects delete ${PROJECT} --quiet
That’s all folks!