Admission Control On Kubernetes

Bouwe Ceunen
Published in Axons
Aug 21, 2019 · 5 min read

Admission control on Kubernetes can be very helpful for maintaining control over your Kubernetes cluster. It ensures that no deployments, jobs, pods, etc. are scheduled without being compliant with your constraints and rules. Gatekeeper is specifically designed to do this on your Kubernetes cluster and is part of a broader admission/authorization control system called Open Policy Agent.

Open Policy Agent covers several other use cases as well, such as Kafka authorization and API authorization; we will focus on Kubernetes admission control with Gatekeeper. A few things need to be explained before we can dive deeper into how admission control works on Kubernetes. Gatekeeper uses a language called Rego to define its constraints.

All examples can also be found on my GitHub page. The policies defined there enforce explicitly defining a namespace, enforce resource requests/limits, and enforce the usage of an ‘app’ label.

Understanding Rego

First, we need to understand the language used to define policies. As the Rego documentation states, Rego queries are assertions on data stored in OPA. Rego is an extension of Datalog, which is in turn a subset of Prolog; it is thus a declarative language. You define rules in Rego which, if all their conditions hold, trigger a constraint violation that blocks the ongoing action.

violation[{"msg": msg}] {
  container := input.review.object.spec.containers[_]
  mem_orig := container.resources.limits.memory
  mem := canonify_mem(mem_orig)
  max_mem_orig := input.parameters.limits_memory
  max_mem := canonify_mem(max_mem_orig)
  mem > max_mem
  msg := sprintf("container <%v> memory limit <%v> is higher than the maximum allowed of <%v>", [container.name, mem_orig, max_mem_orig])
}

Looking at this Rego, you can see that the policy enforces that a container’s memory limit stays below a predefined maximum. The variables are retrieved either from the input parameters or from the object under review. How this works with custom parameters is explained later on. The ‘canonify_mem’ helper makes sure that quantities such as ‘512Mi’ are converted to the same base units, regardless of which unit you use. If every line of this violation definition evaluates to ‘true’, the message is produced and the ongoing operation is blocked. The ‘[_]’ notation iterates over all containers defined in the template.
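To make the unit canonicalization concrete, here is a minimal Python sketch of what a ‘canonify_mem’-style helper does (my own illustration, not Gatekeeper’s actual implementation; it only handles the common suffixes and ignores lowercase/exponent forms Kubernetes also accepts):

```python
# Sketch: convert a Kubernetes memory quantity such as "512Mi" or "1G"
# into plain bytes so that two limits can be compared numerically.
SUFFIXES = {
    "Ki": 1024, "Mi": 1024**2, "Gi": 1024**3, "Ti": 1024**4,  # binary suffixes
    "K": 1000, "M": 1000**2, "G": 1000**3, "T": 1000**4,      # decimal suffixes
}

def canonify_mem(quantity: str) -> int:
    # Binary suffixes are checked first so "Mi" is not mistaken for "M".
    for suffix, factor in SUFFIXES.items():
        if quantity.endswith(suffix):
            return int(quantity[: -len(suffix)]) * factor
    return int(quantity)  # no suffix: already plain bytes
```

With both sides in bytes, `canonify_mem("512Mi") < canonify_mem("1Gi")` holds regardless of which units the manifest and the constraint parameters use.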

You need to define what needs to be true in order for the violation to be valid.

Kubernetes webhooks

In order for each Kubernetes API request to pass through Gatekeeper for validation, a webhook has to be in place. This webhook is called by the Kubernetes API server, and Gatekeeper validates each request against the policies you defined. The webhook is created when you deploy Gatekeeper onto your cluster; don’t forget to remove it when you no longer want to use Gatekeeper. You can inspect the webhook by executing the following kubectl command. It is a ValidatingWebhookConfiguration resource, which handles the inner workings for you. There is also a MutatingWebhookConfiguration, but that is out of scope for this post.

kubectl get ValidatingWebhookConfiguration validation.gatekeeper.sh -o yaml
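The returned object looks roughly like the following (abridged sketch; service names, paths, and API versions can differ between Gatekeeper releases):

```yaml
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: validation.gatekeeper.sh
webhooks:
  - name: validation.gatekeeper.sh
    clientConfig:
      service:
        name: gatekeeper-webhook-service   # the Gatekeeper Service receiving admission reviews
        namespace: gatekeeper-system
        path: /v1/admit
    rules:
      - apiGroups: ["*"]
        apiVersions: ["*"]
        operations: ["CREATE", "UPDATE"]
        resources: ["*"]
    failurePolicy: Ignore   # requests are admitted if the webhook is unreachable
```

The rules section is why every CREATE and UPDATE request is routed through Gatekeeper before it is persisted.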

Defining constraints

Constraints can be defined by creating a CRD (CustomResourceDefinition) with the template of the constraint you want. Let’s look at a template which enforces that nothing is deployed in the ‘default’ namespace, thus forcing you to explicitly define a namespace, because the namespace you get when you don’t specify one is, you can guess, ‘default’.

apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8srequirednamespace
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredNamespace
        listKind: K8sRequiredNamespaceList
        plural: k8srequirednamespace
        singular: k8srequirednamespace
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequirednamespace

        violation[{"msg": msg}] {
          namespace := input.review.object.metadata.namespace
          namespace == "default"
          msg := "you must provide a namespace other than default"
        }

It gets the namespace and checks whether it is ‘default’. If so, the violation block continues and the violation is triggered with the corresponding message. Another policy template enforces the usage of certain labels on your Kubernetes entities.

apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
        listKind: K8sRequiredLabelsList
        plural: k8srequiredlabels
        singular: k8srequiredlabels
      validation:
        # Schema for the `parameters` field
        openAPIV3Schema:
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels

        violation[{"msg": msg, "details": {"missing_labels": missing}}] {
          input.review.object
          provided := {label | input.review.object.metadata.labels[label]}
          required := {label | label := input.parameters.labels[_]}
          missing := required - provided
          count(missing) > 0
          msg := sprintf("you must provide labels of root object metadata: %v", [missing])
        }

If you look closely at the Rego definition, you will notice that a first check ensures input.review.object exists. The provided and required labels are then gathered, and a condition checks whether any labels are missing from the object trying to enter Kubernetes. If so, the violation is triggered with the corresponding message.
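The set arithmetic in that rule can be sketched in a few lines of Python (my own illustration of the logic, not Gatekeeper code):

```python
# Sketch of the violation rule's set logic: the labels required by the
# constraint parameters minus the labels present on the object's metadata.
def missing_labels(object_labels: dict, required: list) -> set:
    provided = set(object_labels)   # {label | input.review.object.metadata.labels[label]}
    needed = set(required)          # {label | label := input.parameters.labels[_]}
    return needed - provided        # missing := required - provided
```

Only when the resulting set is non-empty (`count(missing) > 0` in Rego) does the violation fire.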

Go to my GitHub page for more complex versions of the constraints.

Enforcing constraints

The constraints you defined now need to be enforced. This is done by defining a constraint which uses the template specified earlier.

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredNamespace
metadata:
  name: namespace-policy
spec:
  match:
    kinds:
      - apiGroups: ["batch", "extensions", "apps", ""]
        kinds: ["Deployment", "Pod", "CronJob", "Job", "StatefulSet", "DaemonSet", "ConfigMap", "Service"]

This policy uses the CRD “K8sRequiredNamespace” we defined above. Matching is done by specifying the API groups and kinds on which it has to be enforced. All Kubernetes kinds and API groups can be listed by executing the following kubectl command.

kubectl api-resources -o wide
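With the constraint in place, a manifest like the following (a hypothetical test pod with no explicit namespace, so it would land in ‘default’) should be rejected at admission time:

```yaml
# A pod without an explicit namespace falls into 'default' and should be
# denied by the K8sRequiredNamespace constraint above.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-test
spec:
  containers:
    - name: nginx
      image: nginx:1.17
```

Applying it should fail with the message defined in the template: “you must provide a namespace other than default”.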

Now let’s look at the constraint which enforces the usage of certain labels on your Kubernetes entities. The labels are passed down as parameters to the corresponding template, in this case “K8sRequiredLabels”, which we defined above. This constraint enforces the usage of the ‘app’ label; it is possible to add more labels to the list.

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: labels-policy
spec:
  match:
    kinds:
      - apiGroups: ["batch", "extensions", "apps", ""]
        kinds: ["Deployment", "Pod", "CronJob", "Job", "StatefulSet", "DaemonSet"]
  parameters:
    labels: ["app"]

TL;DR

Kubernetes admission control ensures that you control everything that enters the cluster. Gatekeeper is specifically designed to do this on your Kubernetes cluster and is part of a broader admission/authorization control system called Open Policy Agent. Policies are defined on my GitHub page which will enforce explicitly defining the namespace, enforce resource requests/limits and enforce the usage of the ‘app’ label.
