Mutating Kubernetes resources with Gatekeeper

Lachlan Evenson
6 min read · Aug 21, 2021


Gatekeeper is a Kubernetes policy controller that allows you to define policies to enforce which fields and values are permitted in Kubernetes resources. It operates as a Kubernetes admission controller and uses Open Policy Agent as its policy engine. Until recently, Gatekeeper could only validate Kubernetes resources, which meant it was on the user to update resources to be compliant with policy before they could be successfully created.

Gatekeeper has recently introduced the ability to mutate resources. Mutation means that policy can change Kubernetes resources based on different criteria. A common example of a mutation policy is changing privileged Pods to be unprivileged; another is setting the imagePullPolicy to Always for all Pods. Being able to mutate resources server-side on behalf of the user is an easy way to enforce best practices, apply standard labelling, or simply apply a baseline security policy to all resources.

In this blog we are going to cover how to use mutation in Gatekeeper by sharing several examples. DISCLAIMER: The mutation feature in Gatekeeper is still in ALPHA and shouldn’t be used in production until it becomes more mature.

Installing Gatekeeper

First, we are going to install Gatekeeper with the mutation feature enabled. Please don’t do this on a production Kubernetes cluster, as it may cause disruption to your workloads. I typically use Kind to create local Kubernetes clusters to test on. Create a file called kind-config.yaml with the following content:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
featureGates:
nodes:
- role: control-plane
- role: worker

Create a Kubernetes cluster using Kind with the following command:

$ kind create cluster --image=kindest/node:v1.22.0@sha256:b8bda84bb3a190e6e028b1760d277454a72267a5454b57db34437c34a588d047 --config ~/Downloads/kind-config.yaml
Creating cluster "kind" ...
✓ Ensuring node image (kindest/node:v1.22.0) 🖼
✓ Preparing nodes 📦 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
✓ Joining worker nodes 🚜
Set kubectl context to "kind-kind"
You can now use your cluster with:
kubectl cluster-info --context kind-kind
Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂

Next, install Gatekeeper with the following command:

NOTE: It’s not best practice to blindly install Kubernetes resources from a URL without taking the time to understand what’s being installed. I’m providing the URL here for ease of installation so we can get straight to demonstrating mutation.

$ kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/v3.5.1/deploy/experimental/gatekeeper-mutation.yaml
namespace/gatekeeper-system created
customresourcedefinition.apiextensions.k8s.io/assign.mutations.gatekeeper.sh created
customresourcedefinition.apiextensions.k8s.io/assignmetadata.mutations.gatekeeper.sh created
resourcequota/gatekeeper-critical-pods created
customresourcedefinition.apiextensions.k8s.io/configs.config.gatekeeper.sh created
customresourcedefinition.apiextensions.k8s.io/constraintpodstatuses.status.gatekeeper.sh created
customresourcedefinition.apiextensions.k8s.io/constrainttemplatepodstatuses.status.gatekeeper.sh created
customresourcedefinition.apiextensions.k8s.io/constrainttemplates.templates.gatekeeper.sh created
mutatingwebhookconfiguration.admissionregistration.k8s.io/gatekeeper-mutating-webhook-configuration created
customresourcedefinition.apiextensions.k8s.io/mutatorpodstatuses.status.gatekeeper.sh created
serviceaccount/gatekeeper-admin created
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/gatekeeper-admin created
role.rbac.authorization.k8s.io/gatekeeper-manager-role created
clusterrole.rbac.authorization.k8s.io/gatekeeper-manager-role created
rolebinding.rbac.authorization.k8s.io/gatekeeper-manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/gatekeeper-manager-rolebinding created
secret/gatekeeper-webhook-server-cert created
service/gatekeeper-webhook-service created
Warning: spec.template.metadata.annotations[container.seccomp.security.alpha.kubernetes.io/manager]: deprecated since v1.19; use the "seccompProfile" field instead
deployment.apps/gatekeeper-audit created
deployment.apps/gatekeeper-controller-manager created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/gatekeeper-controller-manager created
validatingwebhookconfiguration.admissionregistration.k8s.io/gatekeeper-validating-webhook-configuration created

You can confirm that Gatekeeper is up and running with the following command:

$ kubectl get pods -n gatekeeper-system
NAME                                             READY   STATUS    RESTARTS   AGE
gatekeeper-audit-54c9759898-pdqqh                1/1     Running   0          40m
gatekeeper-controller-manager-6fdb4b697c-5ll8g   1/1     Running   0          40m
gatekeeper-controller-manager-6fdb4b697c-6gqc6   1/1     Running   0          40m
gatekeeper-controller-manager-6fdb4b697c-qzvfb   1/1     Running   0          40m

Types of mutation policies

There are two types of mutation policies:

  • AssignMetadata — changes to resource metadata
  • Assign — changes to all fields except resource metadata

The AssignMetadata mutation policy is much more restrictive than the Assign policy. The metadata section of a resource is very sensitive, and allowing changes to all metadata fields could have unintended consequences. Given that sensitivity, mutation of metadata is restricted to adding labels and annotations.
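For example, an AssignMetadata policy that stamps an owner annotation on namespaced resources might look like the following sketch (the policy name, annotation key, and value are illustrative; the structure mirrors the label example later in this post):

```yaml
apiVersion: mutations.gatekeeper.sh/v1alpha1
kind: AssignMetadata
metadata:
  name: annotate-owner
spec:
  match:
    scope: Namespaced
  # location must point under metadata.annotations or metadata.labels
  location: "metadata.annotations.owner"
  parameters:
    assign:
      value: "platform-team"
```

Attempting to target any other metadata field in the location would be rejected, which is what keeps this policy type safe.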

Anatomy of a mutation policy

Each mutation policy is made up of three parts:

  • mutation scope — which resources are to be mutated
  • intent — the field and value to be mutated
  • conditions — the conditions under which the mutation is applied

Mutation scope defines which resources the mutation applies to and allows filtering as follows:

applyTo:
- groups: [""]
  kinds: ["Pod"]
  versions: ["v1"]
match:
  scope: Namespaced | Cluster
  kinds:
  - apiGroups: []
    kinds: []
  labelSelector: []
  namespaces: []
  namespaceSelector: []
  excludedNamespaces: []

Intent defines the field to be mutated. In the following example, a container with the name test will have the securityContext.privileged field set to false.

location: "spec.containers[name:test].securityContext.privileged"
parameters:
  assign:
    value: false

There are two types of conditions for mutation policies:

  • path test — mutate the resource only if the given path exists/doesn’t exist
  • value test — mutate the resource only if the current value is/is not in a list of values

parameters:
  pathTests:
  - subPath: "spec.containers[name:foo]"
    condition: MustExist
  - subPath: spec.containers[name:foo].securityContext.capabilities
    condition: MustNotExist
  assignIf:
    in: [<value 1>, <value 2>, <value 3>, ...]
    notIn: [<value 1>, <value 2>, <value 3>, ...]
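Putting these pieces together, here’s a sketch of a complete Assign policy that uses a path test to set imagePullPolicy to Always only when the field isn’t already set — the imagePullPolicy example mentioned earlier. The policy name is illustrative, and the schema follows the alpha examples above, so double-check it against the Gatekeeper docs for your version:

```yaml
apiVersion: mutations.gatekeeper.sh/v1alpha1
kind: Assign
metadata:
  name: set-image-pull-policy
spec:
  applyTo:
  - groups: [""]
    kinds: ["Pod"]
    versions: ["v1"]
  match:
    scope: Namespaced
  # applies to every container in the Pod
  location: "spec.containers[name:*].imagePullPolicy"
  parameters:
    pathTests:
    # only mutate if the field isn't already set
    - subPath: "spec.containers[name:*].imagePullPolicy"
      condition: MustNotExist
    assign:
      value: "Always"
```

With this in place, Pods created without an explicit imagePullPolicy would be mutated to Always, while Pods that set it deliberately would be left untouched.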

Defining mutation policies

Now that we understand the basics of mutation policies, let’s work through some examples.

In this first example, we are going to create an AssignMetadata policy that adds a label location with a value of europe to all resources.

$ cat <<EOF | kubectl create -f -
apiVersion: mutations.gatekeeper.sh/v1alpha1
kind: AssignMetadata
metadata:
  name: label-location
spec:
  match:
    scope: Namespaced
  location: "metadata.labels.location"
  parameters:
    assign:
      value: "europe"
EOF
assignmetadata.mutations.gatekeeper.sh/label-location created

Now that we’ve created the policy, let’s create a Pod. Note that the Pod has no labels.

$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    ports:
    - containerPort: 80
EOF
pod/nginx created

Confirm that the label was successfully applied using the following command:

$ kubectl get pods nginx -o=jsonpath='{.metadata.labels}'
{"location":"europe"}

In the next example, we will create an Assign mutation policy that sets the securityContext.privileged field to false in all namespaces except the kube-system namespace.

$ cat <<EOF | kubectl create -f -
apiVersion: mutations.gatekeeper.sh/v1alpha1
kind: Assign
metadata:
  name: set-privileged-false
spec:
  applyTo:
  - groups: [""]
    kinds: ["Pod"]
    versions: ["v1"]
  match:
    scope: Namespaced
    kinds:
    - apiGroups: ["*"]
      kinds: ["Pod"]
    excludedNamespaces: ["kube-system"]
  location: "spec.containers[name:*].securityContext.privileged"
  parameters:
    assign:
      value: false
EOF

Let’s create a Pod that has the securityContext.privileged field set to true.

$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    ports:
    - containerPort: 80
    securityContext:
      privileged: true
EOF

Confirm that the Assign mutation policy has been successfully applied.

$ kubectl get pods nginx -o=jsonpath='{.spec.containers[*].securityContext}'
{"privileged":false}

Using Gatekeeper mutation with Pod Security Admission

In a recent blog I took a hands-on look at Pod Security Admission. One of the biggest changes in Pod Security Admission from its predecessor, Pod Security Policy, is that it is validation only (mutation is not supported). Because mutation is not supported, users must themselves modify resources that aren’t compliant. I wanted to explore auto-remediation using Gatekeeper mutation so that resources do not violate Pod Security Admission policy. Let’s again use the Pod from the previous example, which has the securityContext.privileged field set to true.

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    ports:
    - containerPort: 80
    securityContext:
      privileged: true

We are going to label the default namespace so that Pod Security Admission enforces the baseline security policy. The above Pod will violate that policy.

$ kubectl label --overwrite ns default \
pod-security.kubernetes.io/enforce=baseline \
pod-security.kubernetes.io/enforce-version=v1.22
namespace/default not labeled

Let’s try creating the Pod:

$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    ports:
    - containerPort: 80
    securityContext:
      privileged: true
EOF
Error from server (Failure): error when creating "STDIN": privileged (container "nginx" must not set securityContext.privileged=true)

We get an error because the Pod violated the security policy. Now let’s create an Assign policy that mutates securityContext.privileged to false.

$ cat <<EOF | kubectl create -f -
apiVersion: mutations.gatekeeper.sh/v1alpha1
kind: Assign
metadata:
  name: set-privileged-false
spec:
  applyTo:
  - groups: [""]
    kinds: ["Pod"]
    versions: ["v1"]
  match:
    scope: Namespaced
    kinds:
    - apiGroups: ["*"]
      kinds: ["Pod"]
    excludedNamespaces: ["kube-system"]
  location: "spec.containers[name:*].securityContext.privileged"
  parameters:
    assign:
      value: false
EOF

Now that this Assign mutation policy is created, we’ll try to create the Pod again:

$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    ports:
    - containerPort: 80
    securityContext:
      privileged: true
EOF
pod/nginx created

Finally, let’s confirm that securityContext.privileged is indeed set to false:

$ kubectl get pods nginx -o=jsonpath='{.spec.containers[*].securityContext}'
{"privileged":false}

Conclusion

Having the ability to mutate resources is a powerful way to enforce policy, best practices, and security standards on behalf of the end user. Having a way to predictably apply mutations via Gatekeeper looks extremely promising, and I’m excited to see this feature mature. If you have any feedback on this feature, I suggest creating an issue on the Gatekeeper GitHub repository.
