Host-less TLS Ingress resources in BKPR

First of all, let me state that what is described in this article is a proof of concept (PoC) intended for Bitnami Kubernetes Production Runtime (BKPR). As such, I am describing ideas and a plausible implementation for them while, at the same time, seeking feedback to understand whether this could be a useful feature.


The Bitnami Kubernetes Production Runtime (BKPR) is a collection of services that make it easy to run production workloads in Kubernetes. The services are ready-to-run and pre-integrated with each other, so they work out of the box.

One of the goals of BKPR is reducing the complexity required to deploy and run not just Kubernetes itself, but the applications and services running on top of it.

One thing I have always missed in a plain Kubernetes cluster is simpler Ingress resources. Take for instance the simplest descriptor for a TLS-protected Ingress in Kubernetes, which may look like this:

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
  labels:
    name: cafe
  name: cafe
spec:
  rules:
  - host: cafe.example.com
    http:
      paths:
      - backend:
          serviceName: tea-svc
          servicePort: 80
        path: /tea
      - backend:
          serviceName: coffee-svc
          servicePort: 80
        path: /coffee
  tls:
  - hosts:
    - cafe.example.com
    secretName: cafe-tls

The rules section lists the paths that this Ingress front-ends (and protects with TLS): in this manifest, /tea and /coffee.

The tls section contains two relevant fields:

  • secretName: references the Kubernetes Secret which holds the X.509 certificate and private key that will be used to protect this resource with TLS.
  • hosts: a list of fully-qualified DNS domain names that the Ingress Controller will accept in order to front-end the resources described in the rules section.

The combination of the tls and rules sections produces a routing map that allows you to reach the Kubernetes Service named tea-svc under https://cafe.example.com/tea and the Kubernetes Service named coffee-svc under https://cafe.example.com/coffee.

Now imagine that your company manages two production Kubernetes clusters: example.com and test.com. Next, imagine that you want to deploy this simple web application on both of them. The manifest shown before can be deployed on the example.com cluster, but can’t be deployed as-is on the test.com cluster. For more complex applications, this problem becomes even more apparent.

It would be great if BKPR could add the correct hostnames for you, automatically, as BKPR already has configuration related to your domain. That would allow us to (re-)write a simplified version of the previous manifest as:

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
  labels:
    name: cafe
  name: cafe
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: tea-svc
          servicePort: 80
        path: /tea
      - backend:
          serviceName: coffee-svc
          servicePort: 80
        path: /coffee
  tls:
  - secretName: cafe-tls

Note that the hostnames are gone now, removing cluster-specific information from the manifest. In theory, you can deploy this manifest unmodified onto the example.com and test.com clusters. In reality, if you ever tried to deploy this manifest on a plain Kubernetes cluster, you would get an error from the Kubernetes API:

The Ingress "cafe" is invalid: spec.tls[0].hosts: Invalid value: "": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')

I had a look at mutating admission webhooks and thought they might be a good fit for this use case.

A mutating admission webhook is part of the cluster control-plane and is essentially an HTTP callback that receives admission requests and may change them to enforce custom defaults. A mutating admission webhook requires that the Kubernetes cluster:

  • Runs v1.9 or newer
  • Has MutatingAdmissionWebhook and ValidatingAdmissionWebhook admission controllers enabled
  • Has the admissionregistration.k8s.io/v1beta1 API enabled
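You can verify these requirements against a running cluster. The kube-apiserver flag below is illustrative of how the admission controllers are enabled; the exact flag value depends on your cluster's existing configuration:

```shell
# Check that the admissionregistration API group is served:
kubectl api-versions | grep admissionregistration.k8s.io/v1beta1

# The API server must list both webhook admission controllers, e.g.:
#   kube-apiserver --enable-admission-plugins=...,MutatingAdmissionWebhook,ValidatingAdmissionWebhook
```

On managed offerings (GKE, AKS, EKS) these controllers are typically already enabled on v1.9+ clusters.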

So I wrote a mutating admission webhook that is called back for every API request to create or update a Kubernetes Ingress resource. For such requests, it ensures that valid, fully-qualified DNS hostnames are automatically inferred and the request modified accordingly.

When my mutating admission webhook is deployed on my BKPR cluster, configured to use example.com as its domain name, the host-less manifest deploys just fine:

ingress.extensions/cafe configured

And if you describe the Ingress resource:

$ kubectl describe ingress cafe
Name:             cafe
Namespace:        default
Address:          ...
Default backend:  default-http-backend:80 (<none>)
TLS:
  cafe-tls terminates cafe.example.com
Rules:
  Host  Path     Backends
  ----  ----     --------
  *
        /tea     tea-svc:80 (<none>)
        /coffee  coffee-svc:80 (<none>)
...

You can see that the Ingress resource was automatically patched to terminate requests for cafe.example.com, a hostname inferred from the Ingress name and BKPR’s DNS suffix.

Beautiful, isn’t it?


Now, about the inner workings.

When I mentioned that the Ingress resource was automatically patched, what happens under the surface is that its create request is dispatched to a mutating admission webhook. This webhook is deployed automatically when BKPR is installed and is configured to intercept all API requests that update or create an Ingress resource. For every such request, this webhook inspects the Ingress object and applies the following logic:

  • When there are no hosts specified in the tls section, the webhook patches in a hosts list consisting of a single entry, derived from the Ingress name and BKPR’s DNS suffix.
  • When there are hosts specified, each item is inspected and any unqualified name is qualified by appending BKPR’s DNS suffix.

This webhook is associated with a MutatingWebhookConfiguration object that describes what API operations will be intercepted and how to deliver those to a Kubernetes Service that will handle them:

---
apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
metadata:
  name: ingress-webhook-cfg
  namespace: kubeprod
  labels:
    app: ingress-webhook
webhooks:
- name: ingress-webhook.kubeprod.io
  clientConfig:
    service:
      name: ingress-webhook-svc
      namespace: kubeprod
      path: "/mutate"
    caBundle: ...
  rules:
  - operations:
    - "CREATE"
    - "UPDATE"
    apiGroups:
    - "extensions"
    apiVersions:
    - "v1beta1"
    resources:
    - "ingresses"

The Kubernetes Service:

---
apiVersion: v1
kind: Service
metadata:
  name: ingress-webhook-svc
  labels:
    app: ingress-webhook
spec:
  ports:
  - port: 443
    targetPort: 443
  selector:
    app: ingress-webhook

And finally, the Deployment:

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ingress-webhook-deployment
  labels:
    app: ingress-webhook
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: ingress-webhook
    spec:
      containers:
      - name: ingress-webhook
        image: falfaro/ingress-webhook:latest
        imagePullPolicy: Always
        args:
        - -dnsSuffix=example.com
        - -tlsCertFile=/etc/webhook/certs/cert.pem
        - -tlsKeyFile=/etc/webhook/certs/key.pem
        - -alsologtostderr
        - -v=4
        volumeMounts:
        - name: webhook-certs
          mountPath: /etc/webhook/certs
          readOnly: true
      volumes:
      - name: webhook-certs
        secret:
          secretName: ingress-webhook-certs

The tlsCertFile and tlsKeyFile command-line arguments are required because the webhook is implemented as an HTTP server, and the API server will only deliver the admission requests described by the MutatingWebhookConfiguration over HTTPS.
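For completeness, a serving certificate like the one mounted from the ingress-webhook-certs Secret could be generated as follows. This is a self-signed sketch for testing only (BKPR may provision its certificates differently); the CN/SAN must match the Service DNS name the API server connects to, and the resulting CA certificate is what goes into the caBundle field:

```shell
# Self-signed cert/key for the webhook Service (requires OpenSSL >= 1.1.1
# for -addext). The Service DNS name below assumes the kubeprod namespace.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout key.pem -out cert.pem \
  -subj "/CN=ingress-webhook-svc.kubeprod.svc" \
  -addext "subjectAltName=DNS:ingress-webhook-svc.kubeprod.svc"
```

The cert.pem and key.pem files would then be stored in the ingress-webhook-certs Secret referenced by the Deployment.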

The code that handles the process of mutating a create or update request for an Ingress resource lives in:

func (whsvr *webhookServer) mutate(ar *v1beta1.AdmissionReview) *v1beta1.AdmissionResponse {
	...
	var patch []patchOperation
	...
	return &v1beta1.AdmissionResponse{
		Allowed: true,
		Patch:   patchBytes,
	}
}

The return value states that the API request is allowed (after all, a mutating admission webhook is a superset of a validating admission webhook) and carries the serialized patch that has to be applied to the original API request.


As you can see, mutating admission webhooks are an extremely powerful way of extending and customizing Kubernetes. In the specific case of BKPR, as illustrated above, they remove complexity by inferring certain attributes from BKPR’s configuration.

The entire source code for this is available here: https://github.com/falfaro/ingress-webhook.