Getting Started to Write Your First Kubernetes Admission Webhook Part 1✨

developer-guy · Published in Trendyol Tech
11 min read · Apr 7, 2021

The Kubernetes Admission Controller concept is very popular these days, especially the dynamic ones:
MutatingAdmissionWebhook and ValidatingAdmissionWebhook. 🌟

Before jumping into the details of how to write one of these, let's explain a little bit about what they are and what we can do with them.

An admission controller is a piece of code that intercepts requests to the Kubernetes API server prior to the persistence of the object, but after the request is authenticated and authorized. Lots of admission plugins ship with Kubernetes; you can check the list to get more detail about them. It is worth noting that they are compiled into the “kube-apiserver” binary, may only be configured by the cluster administrator, and each comes with its own fixed logic. For example, the “NamespaceLifecycle” admission controller enforces that a Namespace undergoing termination cannot have new objects created in it, and ensures that requests in a non-existent Namespace are rejected. So, we need a way to extend or enhance the capabilities of Kubernetes Admission Controllers, and at this point the “Dynamic Admission Controller” comes into the picture.

We all know Kubernetes is highly configurable and extendable by default; this extensibility is another power of Kubernetes. There are lots of extension points and components in Kubernetes, and if you want more detail about extending Kubernetes, you can follow this link.

So, the “Dynamic Admission Controller” is one of them. These controllers can be developed as extensions and run as webhooks configured at runtime. Two special kinds of Kubernetes Admission Controllers are responsible for them: the “ValidatingAdmissionWebhook” and “MutatingAdmissionWebhook” Admission Controllers. They are special, and their capabilities are nearly limitless, because they do not implement any policy decision logic themselves like the other admission plugins compiled into the “kube-apiserver” binary. Instead, the respective action is obtained from a REST endpoint (a webhook) of a service running inside the cluster.

I think that is enough background; the purpose of this post is not to explain them in depth. There is a lot of good documentation available about them, and I will drop links to most of it in the references section of this post.

Getting started writing an admission webhook of your own can be tedious and hard, because the webhook has to talk to the API server over TLS, so we need to handle some kind of TLS management for it. We also have to register our webhook with the cluster, and to do that we must manage one more Kubernetes resource: a MutatingWebhookConfiguration or ValidatingWebhookConfiguration.

In a nutshell, our webhook is just a plain HTTP server that we can develop with Go: it implements its own logic, exposes that logic behind an endpoint, and registers itself as a mutating or validating webhook.
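To make that concrete, here is a minimal sketch of such a server in Go. The struct definitions are simplified stand-ins for the real AdmissionReview types in k8s.io/api/admission/v1, and the endpoint path and handler names are illustrative; the Operator SDK will generate all of this plumbing for us later.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// Simplified stand-ins for the real AdmissionReview types in
// k8s.io/api/admission/v1.
type AdmissionRequest struct {
	UID    string          `json:"uid"`
	Object json.RawMessage `json:"object"`
}

type AdmissionResponse struct {
	UID       string `json:"uid"`
	Allowed   bool   `json:"allowed"`
	PatchType string `json:"patchType,omitempty"`
	Patch     []byte `json:"patch,omitempty"` // base64-encoded JSON patch in the real API
}

type AdmissionReview struct {
	Request  *AdmissionRequest  `json:"request,omitempty"`
	Response *AdmissionResponse `json:"response,omitempty"`
}

// mutate is where our policy logic would live; this sketch simply
// allows the request and echoes back the request UID, as required.
func mutate(review AdmissionReview) AdmissionReview {
	return AdmissionReview{
		Response: &AdmissionResponse{
			UID:     review.Request.UID,
			Allowed: true,
		},
	}
}

// mutateHandler decodes the AdmissionReview the API server POSTs to us,
// runs the mutation logic, and writes the review back with a response.
func mutateHandler(w http.ResponseWriter, r *http.Request) {
	var review AdmissionReview
	if err := json.NewDecoder(r.Body).Decode(&review); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(mutate(review))
}

func main() {
	http.HandleFunc("/mutate", mutateHandler)
	// The API server only talks to webhooks over TLS, e.g.:
	//   http.ListenAndServeTLS(":8443", "tls.crt", "tls.key", nil)
	// Managing those certificates is the part cert-manager handles.
	fmt.Println("handler registered at /mutate")
}
```

The TLS serving and webhook registration around this handler are exactly the boilerplate that makes hand-rolling webhooks tedious.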

The simplest way to get started writing an admission webhook of your own is to use a project scaffolding tool such as “Kubebuilder” or the “Operator SDK”. We may be familiar with these tools for writing Kubernetes Operators, but they are good choices for writing admission webhooks as well. We are going to use the Operator SDK for this demo, so let's give a brief introduction to this tool.

The Operator SDK is a framework for building Kubernetes applications. It provides high-level APIs, useful abstractions, and project scaffolding, and it makes it easier to build Kubernetes-native applications, a process that can otherwise require deep, application-specific operational knowledge. It does all of this by using controller-runtime under the hood.

Now, we all know what admission webhooks are and what the Operator SDK is. So, let's jump into the demo section to make our hands dirty. 👨‍💻

Demo

In this demo, we are going to develop a “MutatingAdmissionWebhook” that checks the size of our Memcached Custom Resource; if the size is equal to zero, we will set it to 3 by default. But before creating our webhook, we need to create our Memcached Operator first. So, in this demo, we will create an operator and then create a webhook for the custom resource managed by that operator.

The Operator SDK uses the cert-manager CA injector to handle TLS management for our webhook, so we are going to install cert-manager into our cluster too.

cert-manager is a native Kubernetes certificate management controller. It can help with issuing certificates from a variety of sources, such as Let’s Encrypt, HashiCorp Vault, Venafi, a simple signing key pair, or self-signed.

Let’s go! 🏎️🏎️

Prerequisites

  • Minikube v1.18.1
  • Operator SDK v1.5.0
  • kubectl v1.20.5
  • Go v1.16.3

I’m going to do this demo in a macOS environment, so you can use “brew”, a package manager for macOS, to install all the tools above.

Memcached Operator

A Kubernetes Operator is an application-specific controller that extends the functionality of the Kubernetes API to create, configure, and manage instances of complex applications on behalf of a Kubernetes user.

A Kubernetes Operator implements its own control loop against the Custom Resource it manages. A custom resource is the API extension mechanism in Kubernetes. A custom resource definition (CRD) defines a CR and lists out all of the configurations available to users of the operator.
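The control-loop idea can be sketched in plain Go. Everything here is illustrative: the generated controller-runtime Reconcile method works against the live cluster API, not an in-memory struct, but the observe-compare-act pattern is the same.

```go
package main

import "fmt"

// MemcachedSpec is the desired state declared in the Custom Resource.
type MemcachedSpec struct{ Size int }

// Cluster stands in for the actual state in the cluster.
type Cluster struct{ Replicas int }

// reconcile drives the actual state toward the desired state and
// reports whether it had to act, mirroring one pass of the control loop.
func reconcile(desired MemcachedSpec, c *Cluster) bool {
	if c.Replicas == desired.Size {
		return false // already converged, nothing to do
	}
	c.Replicas = desired.Size // e.g. scale a Deployment up or down
	return true
}

func main() {
	c := &Cluster{Replicas: 1}
	changed := reconcile(MemcachedSpec{Size: 3}, c)
	fmt.Println(changed, c.Replicas) // first pass acts; a second pass would be a no-op
}
```

The operator re-runs this loop whenever the CR or the owned resources change, so the cluster keeps converging on the declared state.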

We are going to follow the quickstart guides from the official Red Hat pages about creating Go-based operators.

Let’s start by creating a folder that will hold our project codebase.

$ mkdir -p memcached-operator
$ cd memcached-operator

Then, initialize the project through Operator SDK init command.

$ operator-sdk init --repo=github.com/developer-guy/memcached-operator
Writing scaffold for you to edit…
Get controller runtime:
$ go get sigs.k8s.io/controller-runtime@v0.7.2
Update dependencies:
$ go mod tidy
Next: define a resource with:
$ operator-sdk create api
# Let's look at the folder structure after the project initialized, you should see similar output like the following:
$ tree -L 2 .
.
├── Dockerfile
├── Makefile
├── PROJECT
├── config
│   ├── certmanager
│   ├── default
│   ├── manager
│   ├── prometheus
│   ├── rbac
│   └── scorecard
├── go.mod
├── go.sum
├── hack
│   └── boilerplate.go.txt
└── main.go

After the project is initialized, we should create our own API to be able to manage a Custom Resource, the Memcached API in this case.

# Let's watch the filesystem changes by using fswatch command utility before running "operator-sdk create api" command.
$ fswatch . | xargs -n 1 -I {} echo {}
...
...
...
# Open a second terminal and run this command below, and watch changes on the filesystem through first terminal
$ operator-sdk create api --group cache --version v1 --kind Memcached --resource=true --controller=true
Writing scaffold for you to edit...
api/v1/memcached_types.go
controllers/memcached_controller.go
Update dependencies:
$ go mod tidy
Running make:
$ make generate
go: creating new go.mod: module tmp
Downloading sigs.k8s.io/controller-tools/cmd/controller-gen@v0.4.1
go get: added sigs.k8s.io/controller-tools v0.4.1
/Users/batuhan.apaydin/workspace/projects/personal/poc/operator-sdk-examples/memcached-operator/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."

The Operator SDK heavily uses Kustomize and a Makefile to manage operations such as building and pushing images, deploying operators, and generating manifests. So, if you need to do something with the project, you should always refer to the Makefile targets. For example, if you want to build and push your project image, you can run the “docker-build docker-push” targets in the Makefile; or, if you make changes to your Memcached CR, such as adding or updating fields by editing the code in “api/v1/memcached_types.go”, you should run “make” to update the Custom Resource Definition spec too.

Let’s build and push our project’s image.

# don't forget to add IMG variable, it should specify your DockerHub user id, mine is devopps.
$ make docker-build docker-push IMG=devopps/memcached-operator:v1
...
...
...
The push refers to repository [docker.io/devopps/memcached-operator]
7d8ae67ed609: Pushed
1a5ede0c966b: Layer already exists
v1: digest: sha256:9f017578347d1f861bce531e852d2d11293ee961bc072ee00f4c1f019a4bd858 size: 739
# let's check if image successfully pushed
$ crane ls devopps/memcached-operator
v1
$ crane digest devopps/memcached-operator:v1
sha256:9f017578347d1f861bce531e852d2d11293ee961bc072ee00f4c1f019a4bd858 <-- it should be the same digest as in the docker push output.

🔔 crane is a tool for managing container images. It is developed by Google.

Now, we are ready to deploy our operator, we should start our local Kubernetes cluster through Minikube before deploying the operator.

$ minikube start
😄 minikube v1.18.1 on Darwin 10.15.7
✨ Using the virtualbox driver based on user configuration
👍 Starting control plane node minikube in cluster minikube
🔥 Creating virtualbox VM (CPUs=3, Memory=8192MB, Disk=20000MB) ...
🐳 Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
▪ Generating certificates and keys ...
▪ Booting up control plane ...
▪ Configuring RBAC rules ...
🔎 Verifying Kubernetes components...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v4
🌟 Enabled addons: storage-provisioner, default-storageclass
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
$ make install
...
customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.my.domain created
# let's check if our custom resource definition was created
$ kubectl get customresourcedefinitions.apiextensions.k8s.io
NAME CREATED AT
memcacheds.cache.my.domain 2021-04-05T07:23:44Z
$ make deploy IMG=devopps/memcached-operator:v1
...
namespace/memcached-operator-system created
customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.my.domain configured
serviceaccount/memcached-operator-controller-manager created
role.rbac.authorization.k8s.io/memcached-operator-leader-election-role created
clusterrole.rbac.authorization.k8s.io/memcached-operator-manager-role created
clusterrole.rbac.authorization.k8s.io/memcached-operator-metrics-reader created
clusterrole.rbac.authorization.k8s.io/memcached-operator-proxy-role created
rolebinding.rbac.authorization.k8s.io/memcached-operator-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/memcached-operator-manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/memcached-operator-proxy-rolebinding created
configmap/memcached-operator-manager-config created
service/memcached-operator-controller-manager-metrics-service created
deployment.apps/memcached-operator-controller-manager created
# check if the controller is working in the memcached-operator-system namespace
$ kubectl get pods --namespace memcached-operator-system
NAME READY STATUS RESTARTS AGE
memcached-operator-controller-manager-6c9655978-7smfl 2/2 Running 0 72s

Let’s test the operator by applying our Custom Resource manifest.

# this is our custom resource
$ cat config/samples/cache_v1_memcached.yaml
apiVersion: cache.my.domain/v1
kind: Memcached
metadata:
  name: memcached-sample
spec:
  # Add fields here
  foo: bar
  size: 0
$ kubectl apply -f config/samples/cache_v1_memcached.yaml
memcached.cache.my.domain/memcached-sample created
# now, we can get our CR like any other Kubernetes resource such as Deployment, Pod etc.
$ kubectl get memcacheds.cache.my.domain
NAME AGE
memcached-sample 63s
$ kubectl get memcacheds.cache.my.domain memcached-sample -oyaml
apiVersion: cache.my.domain/v1
kind: Memcached
metadata:
  name: memcached-sample
spec:
  # Add fields here
  foo: bar
  size: 0

Okay, now we have completed the Kubernetes Operator part; let's move on to creating a Kubernetes Admission Webhook for our CR.

MutatingAdmissionWebhook for Memcached Custom Resource

In this section, we are going to generate a code template for our webhook and add a new field called “size” to our custom resource. Then, in the webhook logic, we will check the value of the size field, and if it is equal to zero, we will update the size to three.

Let’s generate the code template for our webhook.

# --defaulting: this flag will scaffold the resources required for a mutating webhook
# --programmatic-validation: this flag will scaffold the resources required for a validating webhook
$ operator-sdk create webhook --group cache --version v1 --kind Memcached --defaulting --programmatic-validation
Writing scaffold for you to edit...
api/v1/memcached_webhook.go

If you take a look at the “api/v1/memcached_webhook.go” file, you should notice that some boilerplate code was generated for us to implement Mutating and Validating webhook logic.

// Default implements webhook.Defaulter so a webhook will be registered for the type
func (r *Memcached) Default() {
	memcachedlog.Info("default", "name", r.Name)

	// TODO(user): fill in your defaulting logic.
}

// ValidateCreate implements webhook.Validator so a webhook will be registered for the type
func (r *Memcached) ValidateCreate() error {
	memcachedlog.Info("validate create", "name", r.Name)

	// TODO(user): fill in your validation logic upon object creation.
	return nil
}

// ValidateUpdate implements webhook.Validator so a webhook will be registered for the type
func (r *Memcached) ValidateUpdate(old runtime.Object) error {
	memcachedlog.Info("validate update", "name", r.Name)

	// TODO(user): fill in your validation logic upon object update.
	return nil
}

// ValidateDelete implements webhook.Validator so a webhook will be registered for the type
func (r *Memcached) ValidateDelete() error {
	memcachedlog.Info("validate delete", "name", r.Name)

	// TODO(user): fill in your validation logic upon object deletion.
	return nil
}

These are the functions we can use to implement our webhook logic, but here we are only going to edit the Default function, because we are not interested in the validation part. So, let's edit the file by adding the following code:

if r.Spec.Size == 0 {
	r.Spec.Size = 3
}
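Putting the scaffolded method and our check together, the Default function ends up like the sketch below. It is shown self-contained so it can run on its own; in the real api/v1/memcached_webhook.go, the method also logs through memcachedlog, and the types come from the generated API package.

```go
package main

import "fmt"

// Trimmed stand-ins for the generated API types.
type MemcachedSpec struct {
	Size int32 `json:"size"`
}

type Memcached struct {
	Name string
	Spec MemcachedSpec
}

// Default implements webhook.Defaulter: if the user asked for zero
// replicas, fall back to a default size of three before the object
// is persisted.
func (r *Memcached) Default() {
	if r.Spec.Size == 0 {
		r.Spec.Size = 3
	}
}

func main() {
	m := &Memcached{Name: "memcached-sample"} // size omitted, so it is 0
	m.Default()
	fmt.Println(m.Spec.Size) // mutated to the default before persistence
}
```

Note that mutation only touches objects on their way into etcd; a non-zero size set by the user passes through untouched.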

Once our webhook is implemented, all that’s left is to create the WebhookConfiguration manifests required to register your webhooks with Kubernetes:

$ make manifests

The last thing we should do here is enable and deploy cert-manager. To do that, we edit the “config/default/kustomization.yaml” file by uncommenting the sections marked by the [WEBHOOK] and [CERTMANAGER] comments.

Let’s add an install-cert-manager target to the Makefile like the following to deploy cert-manager:

🔔 Don’t forget to add the jetstack Helm repository to your repositories:
$ helm repo add jetstack https://charts.jetstack.io

.PHONY: install-cert-manager ## Deploy cert-manager to the cluster
install-cert-manager:
	helm install \
		cert-manager jetstack/cert-manager \
		--namespace cert-manager \
		--version v1.2.0 \
		--create-namespace \
		--set installCRDs=true

Let’s install it, then.

$ make install-cert-manager
$ kubectl get pods --namespace cert-manager
NAME READY STATUS RESTARTS AGE
cert-manager-85f9bbcd97-fq8wn 1/1 Running 0 26s
cert-manager-cainjector-74459fcc56-z2h8b 1/1 Running 0 26s
cert-manager-webhook-57d97ccc67-466mq 1/1 Running 0 26s

Because we changed the code, we should build and push our image again, this time using the v2 tag.

$ make docker-build docker-push IMG=devopps/memcached-operator:v2
...
The push refers to repository [docker.io/devopps/memcached-operator]
b09d21af0c50: Pushed
1a5ede0c966b: Layer already exists
v2: digest: sha256:69fbcaaf3bc3bb00c542b47e5eca1e96586c5e22123d2edc0067c4904a73ba8d size: 739

Now everything is in place; the final step is deploying the operator.

$ make deploy IMG=devopps/memcached-operator:v2
...
namespace/memcached-operator-system unchanged
customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.my.domain configured
serviceaccount/memcached-operator-controller-manager unchanged
role.rbac.authorization.k8s.io/memcached-operator-leader-election-role unchanged
clusterrole.rbac.authorization.k8s.io/memcached-operator-manager-role configured
clusterrole.rbac.authorization.k8s.io/memcached-operator-metrics-reader unchanged
clusterrole.rbac.authorization.k8s.io/memcached-operator-proxy-role unchanged
rolebinding.rbac.authorization.k8s.io/memcached-operator-leader-election-rolebinding unchanged
clusterrolebinding.rbac.authorization.k8s.io/memcached-operator-manager-rolebinding unchanged
clusterrolebinding.rbac.authorization.k8s.io/memcached-operator-proxy-rolebinding unchanged
configmap/memcached-operator-manager-config unchanged
service/memcached-operator-controller-manager-metrics-service unchanged
service/memcached-operator-webhook-service created
deployment.apps/memcached-operator-controller-manager configured
certificate.cert-manager.io/memcached-operator-serving-cert created
issuer.cert-manager.io/memcached-operator-selfsigned-issuer created
mutatingwebhookconfiguration.admissionregistration.k8s.io/memcached-operator-mutating-webhook-configuration created
validatingwebhookconfiguration.admissionregistration.k8s.io/memcached-operator-validating-webhook-configuration created

To check whether the webhook is working, we apply our CR manifest with the value zero for the size field; then, we should see the value of the size field as three on the created object.

$ cat << EOF | kubectl apply -f -
apiVersion: cache.my.domain/v1
kind: Memcached
metadata:
  name: memcached-sample
spec:
  # Add fields here
  foo: bar
  size: 0
EOF
memcached.cache.my.domain/memcached-sample created
$ kubectl get memcacheds.cache.my.domain memcached-sample -oyaml | kubectl neat
apiVersion: cache.my.domain/v1
kind: Memcached
metadata:
  name: memcached-sample
  namespace: default
spec:
  foo: bar
  size: 3 # <-- you should see the size as 3

Yeah, we demonstrated that it is working as we expected. 🎉🎉🎉

Conclusion

Without using these scaffolding tools, our life could get much harder. By using them, we can create Kubernetes Admission Webhooks easily and quickly.

🔔 A known limitation of the webhook creation part of the Operator SDK is that we can only create webhooks for our own CRs; we cannot create a webhook for existing Kubernetes core types such as Deployment, Pod, etc. by default. But we can do this by following this guide. So, in the second part of this blog post, I will show you how to implement this kind of webhook against core types such as Deployment and Pod, using Kubebuilder this time.

Hope you will enjoy it, thank you. 🙏

References

developer-guy
Trendyol Tech