Kubernetes 10-domain Ingress

The “Because One Can” series

Daz Wilkin
Google Cloud - Community
Dec 22, 2017


A customer is interested in exposing its multi-tenant Google Kubernetes Engine (GKE) service through per-customer TLS endpoints. The GCP L7 load-balancer (L7LB) only supports 10 certs (there are plans to expand this) but I decided to give 10 a whirl and try some other things along the way, including Jsonnet and Google Cloud DNS transactions.

Setup

I’ve covered much of this ground before and you likely already know how to create Kubernetes Engine clusters. I’m going to create a regional cluster and mess with the RBAC so, if this is of interest, read on:

You may specify master and node versions but you need to determine what’s available in your region. You can eyeball the versions here and pick your favorite:
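From memory, that looks something like the following, with hypothetical values for the project and region (older gcloud releases may only accept --zone):

```
# Hypothetical values; substitute your own
PROJECT=my-project
REGION=us-west1

# List the master and node versions available in this region
gcloud container get-server-config \
  --project=${PROJECT} \
  --region=${REGION}
```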

and then create the cluster:
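A sketch of the creation step, reusing the placeholder values above and a hypothetical cluster name; at the time of writing, regional clusters were behind the beta release track, so the exact command depends on your gcloud version:

```
CLUSTER=multi-domain      # hypothetical cluster name
VERSION=1.8.4-gke.1       # hypothetical; pick one reported by get-server-config

gcloud beta container clusters create ${CLUSTER} \
  --project=${PROJECT} \
  --region=${REGION} \
  --cluster-version=${VERSION} \
  --num-nodes=1
```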

For broad (!) RBAC permissions:
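One way to grant yourself suitably broad permissions is to bind your Google account to the built-in cluster-admin role; a sketch, not a recommendation for production:

```
# Bind the active gcloud account to cluster-admin (deliberately broad)
kubectl create clusterrolebinding $(whoami)-cluster-admin \
  --clusterrole=cluster-admin \
  --user=$(gcloud config get-value account)
```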

And, finally:
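Presumably the final steps are fetching credentials for kubectl and starting a local proxy to the master; a sketch using the placeholder names above:

```
gcloud beta container clusters get-credentials ${CLUSTER} \
  --project=${PROJECT} \
  --region=${REGION}

# Proxy the Kubernetes API (and the Kube UI) to localhost
kubectl proxy &
```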

The proxy command should report
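With kubectl proxy's default flags, that's typically:

```
Starting to serve on 127.0.0.1:8001
```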

All being well, you can then open the Kube UI through that proxy:
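At the time of writing, the Dashboard was reachable through the proxy at a URL like the following (the exact path has moved around between Dashboard versions):

```
http://localhost:8001/ui
```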

Kube UI

and/or via the Cloud Console too:

Cloud Console: Kubernetes

Secrets

Let’s create 10 TLS certs for our planned Ingress. There’s a good quick way to generate TLS certs and manifest these as Kubernetes Secrets:
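A sketch using self-signed certificates, assuming a hypothetical domain (example.com) that you control and host-domains domain0 through domain9:

```
DOMAIN=example.com    # hypothetical; use a domain you control

# Generate a self-signed cert|key pair per host-domain
for NUM in $(seq 0 9); do
  openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -subj "/CN=domain${NUM}.${DOMAIN}" \
    -keyout domain${NUM}.key \
    -out domain${NUM}.crt
done
```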

Then (or combined if you’d prefer)
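…each cert|key pair can be manifested as a Kubernetes TLS Secret, named for its host-domain:

```
for NUM in $(seq 0 9); do
  kubectl create secret tls domain${NUM}.${DOMAIN} \
    --cert=domain${NUM}.crt \
    --key=domain${NUM}.key
done
```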

All being well, the following should include the 10 secrets domain${NUM}.${DOMAIN} and a default token:
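That is, something like:

```
kubectl get secrets
```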

Deployment|Service

Using my go-to testing image to create a foundational service for the Ingress:
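A sketch, substituting a generic echo image (gcr.io/google-samples/hello-app, which serves on port 8080) as a stand-in; the GCE Ingress controller needs the Service to be of type NodePort:

```
# Stand-in image; substitute your preferred test image
# (kubectl of that era creates a Deployment from 'run';
#  newer kubectl may require 'kubectl create deployment')
kubectl run whoami \
  --image=gcr.io/google-samples/hello-app:1.0 \
  --port=8080

# Expose the Deployment as a NodePort Service for the Ingress to target
kubectl expose deployment whoami \
  --port=8080 \
  --target-port=8080 \
  --type=NodePort
```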

I used Jsonnet (see below) to rework the Deployment and Service too; I’ll include those as well but will focus the explanation on the Ingress.

Jsonnet

The Kube UI is an excellent tool for observing Kubernetes and the Cloud Console UI tools are getting better. As an aside, it was pointed out to me (and I concur) that switching between Kubernetes (pods, services, ingresses) and GCP worlds can be jarring. I’m not entirely convinced that the Cloud Console UI addresses this for me.

Regardless, for changing Kubernetes, the command-line remains the best tool and, even though kubectl provides many (well-thought-out) commands for creating deployments, services, etc. etc., at some early point, you just need to grab config files by their horns and become comfortable crafting them, understanding them and applying them.

There are various efforts underway to insulate developers from writing config files. I’m compelled to try Ksonnet but, today, I’m going to stick with an earlier, simpler tool, Jsonnet:

You can grab Jsonnet from its GitHub repo and then ‘make’ it:
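Roughly:

```
git clone https://github.com/google/jsonnet.git
cd jsonnet
make
# Optionally, put the binary somewhere on your PATH
sudo cp jsonnet /usr/local/bin/
```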

I’m following a pattern (thanks to a Bitnami kube-manifests GitHub repo for guidance and best practices) of having a file for the instance (whoami-ingress.jsonnet) and a file for the ‘class’ (ingress.jsonnet):

whoami-ingress.jsonnet
ingress.jsonnet

Please don’t take these by any means as a masterclass in Jsonnet. I got them to work and that’s all I offer ;-) The whoami-ingress imports the ‘class’ and provides values for the required arguments. If these aren’t provided, the “error” is generated.

The Ingress file is intended to generate an Ingress spec (see the “kind” and “apiVersion”) and you should recognize the pattern. The motivation for Jsonnet is that this YAML has repeating sections (10 of them) for “rules” and for “tls”, and two of the many things that are great about templating are variable substitution (covered) and iteration. Jsonnet’s loops are similar to Python’s comprehensions; I use the form [ {X(N)} for N in […] ]. Both loops iterate 0..9 and each results in 10 copies of the generated JSON in its array. Both loops leverage a closure called “name” that combines its single parameter, the loop iterator value (num), with the values of “prefix” and “domain” that are global to the script.

NB closure vs. function can be a little academic but, in this case, “name” is a function that takes one parameter (num) but closes over the global values of “prefix” and “domain” in the script. So “name” is a function closure ;-)

NB The “$” prefixing here identifies “prefix” and “domain” as being global to the script.
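For illustration only, here is a stripped-down sketch of that pattern (not the author’s files): a ‘class’ that raises the “error” unless prefix, domain and a backend Service are supplied, and an ‘instance’ that supplies them. The names (example.com, whoami, port 8080) are placeholders matching the stand-in Deployment|Service above; multi-domain is the Ingress name used throughout this post.

```
// ingress.jsonnet (sketch): the 'class'
{
  // Required arguments: evaluating these defaults raises the "error"
  prefix:: error "prefix is required",
  domain:: error "domain is required",
  serviceName:: error "serviceName is required",
  servicePort:: 80,

  // "name" closes over $.prefix and $.domain, which are global to the script
  local name(num) = "%s%d.%s" % [$.prefix, num, $.domain],

  apiVersion: "extensions/v1beta1",
  kind: "Ingress",
  metadata: { name: "multi-domain" },
  spec: {
    // 10 copies of the tls section, one per host-domain
    tls: [
      { hosts: [name(num)], secretName: name(num) }
      for num in std.range(0, 9)
    ],
    // ...and 10 copies of the rules section
    rules: [
      {
        host: name(num),
        http: {
          paths: [{
            backend: {
              serviceName: $.serviceName,
              servicePort: $.servicePort,
            },
          }],
        },
      }
      for num in std.range(0, 9)
    ],
  },
}
```

```
// whoami-ingress.jsonnet (sketch): the 'instance'
(import "ingress.jsonnet") + {
  prefix: "domain",
  domain: "example.com",
  serviceName: "whoami",
  servicePort: 8080,
}
```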

The result of Jsonnet processing whoami-ingress.jsonnet is just an Ingress resource. You can see this generated:
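For example:

```
jsonnet whoami-ingress.jsonnet
```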

You can apply this directly against your cluster:
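For instance, by piping the generated JSON into kubectl:

```
jsonnet whoami-ingress.jsonnet | kubectl apply --filename=-
```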

The nice folks at Heptio provide a Visual Studio Code plugin for Jsonnet:

This plugin will preview the generated output:

Heptio Jsonnet: Jsonnet → JSON (YAML)

Kube UI doesn’t do much with Ingress resources but:

Kube UI: Ingress “multi-domain”

And, here’s the Cloud Console view which is much more interesting:

Cloud Console: Workloads “multi-domain”

and the L7 configured:

Cloud Console: Discovery & Load-balancing

I challenge anyone to find an easier way to program GCP L7s than with Kubernetes Ingress resources.

All that remains is to add these domains to our DNS records.

Cloud DNS

If you’re using Cloud DNS, it’s relatively trivial to script these additions using the Cloud SDK (aka “gcloud”). Before you proceed, you may wish to grab a snapshot of your current DNS Zone configuration. Just in case:
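One simple way, assuming a hypothetical managed zone called my-zone:

```
ZONE=my-zone    # hypothetical Cloud DNS managed-zone name

gcloud dns record-sets list \
  --zone=${ZONE} \
  --format=json > ${ZONE}.backup.json
```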

We’ll use transactions; these are effected through the creation of a transaction.yaml file:
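Starting a transaction creates the file in the current directory:

```
gcloud dns record-sets transaction start --zone=${ZONE}
```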

Here’s the transaction.yaml that results when I run this command:
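Yours will differ (zone, name servers, serial), but it has roughly this shape; the zone shown here is the hypothetical example.com:

```
---
additions:
- kind: dns#resourceRecordSet
  name: example.com.
  rrdatas:
  - ns-cloud-e1.googledomains.com. cloud-dns-hostmaster.google.com. 5 21600 3600 259200 300
  ttl: 21600
  type: SOA
deletions:
- kind: dns#resourceRecordSet
  name: example.com.
  rrdatas:
  - ns-cloud-e1.googledomains.com. cloud-dns-hostmaster.google.com. 4 21600 3600 259200 300
  ttl: 21600
  type: SOA
```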

I believe the curious addition and deletion of seemingly the same entry is the zone’s SOA record: the two entries differ only in the serial number. In my case serial “4” is deleted and “5” is created; the transaction bumps the serial to record that the zone has changed.

Now we simply iterate over our 10 host-domains and add them as A (address) records pointing to the IP of the load-balancer that was created by the Ingress:
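A sketch, assuming the Ingress is named multi-domain (as in the screenshots) and reusing the DOMAIN and ZONE values from earlier:

```
# The public IP of the L7 load-balancer programmed by the Ingress
IP=$(kubectl get ingress multi-domain \
  --output=jsonpath="{.status.loadBalancer.ingress[0].ip}")

# Add one A record per host-domain to the open transaction
for NUM in $(seq 0 9); do
  gcloud dns record-sets transaction add ${IP} \
    --zone=${ZONE} \
    --name=domain${NUM}.${DOMAIN}. \
    --ttl=300 \
    --type=A
done
```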

and finally commit the changes:
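That is:

```
gcloud dns record-sets transaction execute --zone=${ZONE}
```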

and, you should be able to confirm that the changes have been effected:
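For example, by listing the zone’s A records:

```
gcloud dns record-sets list --zone=${ZONE} --filter="type=A"
```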

Test

You can confirm that the changes are effective for your client’s DNS lookup with:
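For example (dig or nslookup, as you prefer; DOMAIN as before, example.com in this sketch):

```
for NUM in $(seq 0 9); do
  dig +short domain${NUM}.${DOMAIN}
done
```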

and, you should be able to curl the endpoints:
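Because the certs in this sketch are self-signed, curl needs --insecure (-k):

```
for NUM in $(seq 0 9); do
  curl --insecure https://domain${NUM}.${DOMAIN}/
done
```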

Conclusion

Those Ingress resources can be gnarly but here was one that supported 10 domains (and you can’t get more than that today). Its creation was facilitated by a wander through the wonders of Jsonnet.

The SRE folks at Google try to avoid “toil” and oftentimes you can avoid toil by automating. Sometimes to automate, you have to learn new tools. Jsonnet is powerful and I’m confident that, if I continue to use it, I’ll flex my knowledge of it and it’ll be a useful addition to my toolset.

Cloud DNS is great. It hasn’t always been the easiest beast to program. But, as you saw, it’s getting better.

Tidy-up

You should revert your DNS records:
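A sketch, mirroring the earlier additions with removals in a new transaction:

```
gcloud dns record-sets transaction start --zone=${ZONE}

for NUM in $(seq 0 9); do
  gcloud dns record-sets transaction remove ${IP} \
    --zone=${ZONE} \
    --name=domain${NUM}.${DOMAIN}. \
    --ttl=300 \
    --type=A
done

gcloud dns record-sets transaction execute --zone=${ZONE}
```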

Running our prior test should now return zero results:

If anything untoward arises, you followed my advice and took a backup, so you may restore that.

I create clusters per task and delete them just as often:
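For the regional cluster created above:

```
gcloud beta container clusters delete ${CLUSTER} \
  --project=${PROJECT} \
  --region=${REGION} \
  --quiet
```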

But, if you want to tidy-up, you can just delete all the wonderful things we created:
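Using the placeholder names from this walkthrough:

```
kubectl delete ingress multi-domain
kubectl delete service whoami
kubectl delete deployment whoami

for NUM in $(seq 0 9); do
  kubectl delete secret domain${NUM}.${DOMAIN}
done
```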

If you’d like to delete the GCP project (this is irrevocable), you may:
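That is:

```
gcloud projects delete ${PROJECT} --quiet
```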

That’s all folks!
