Image signing validation on K8s

Brian Davis
Jul 5, 2022


In the previous post, we talked about using AWS KMS with Cosign to sign images before pushing them to an untrusted container registry, and the importance of signed images to the security of the software supply chain.

Sigstore created policy-controller (previously named cosigned), a Kubernetes admission controller used to validate image signatures before allowing a container to run on the cluster. One of the nice features of policy-controller is that it does not need to pull the image to verify its signature. Since the signature attests to the image digest, and the digest is immutable, we only need to download the signature, which is about 1KB, to verify that the image is the correct one.

For this demo, we will generate two new test keys for use with the test images. One will be used to sign an image that we trust, and the other to sign an image that we do not trust (simulating a container signed by someone else). For a production setup, you want to make sure the production key is created using a Hardware Security Module, as referenced in the previous post.

Generating test keys and signing images

We need to generate some test keys and sign some test images so that we can validate them against our local cluster. Note: you can skip this step if you want to use the repository below and test with the images that I have already signed, since the public keys are included in the repo.

$ cosign generate-key-pair
Enter password for private key:
Enter password for private key again:
Private key written to cosign.key
Public key written to cosign.pub
$ mv cosign.pub cosign-trusted.pub
$ mv cosign.key cosign-trusted.key
$ cosign generate-key-pair
Enter password for private key:
Enter password for private key again:
Private key written to cosign.key
Public key written to cosign.pub
$ mv cosign.pub cosign-untrusted.pub
$ mv cosign.key cosign-untrusted.key

Now that we have some signing keys, we can pull some open source test images, retag them, then sign and push them to our repository so that we can verify them later in a Kubernetes cluster.

$ docker pull gcr.io/kuar-demo/kuard-amd64:blue
$ docker pull gcr.io/kuar-demo/kuard-amd64:green
$ docker pull gcr.io/kuar-demo/kuard-amd64:purple
$ docker tag gcr.io/kuar-demo/kuard-amd64:blue slimm609/cosign-demo:kuard-signed1
$ docker tag gcr.io/kuar-demo/kuard-amd64:green slimm609/cosign-demo:kuard-signed2
$ docker tag gcr.io/kuar-demo/kuard-amd64:purple slimm609/cosign-demo:kuard-unsigned
$ docker push slimm609/cosign-demo:kuard-signed1
$ docker push slimm609/cosign-demo:kuard-signed2
$ docker push slimm609/cosign-demo:kuard-unsigned
$ cosign sign --key cosign-trusted.key slimm609/cosign-demo:kuard-signed1
$ cosign sign --key cosign-untrusted.key slimm609/cosign-demo:kuard-signed2

Demo Repository

To assist with testing, a cosign demo repo was created with files and manifests to make this easier without requiring you to host the images in your own registry. Note: you will need kubectl and helm installed to complete this demo.

$ git clone https://github.com/slimm609/cosign-demo.git
$ cd cosign-demo

Kind Cluster

Kind (Kubernetes IN Docker) is a tool for running Kubernetes clusters locally on any system that supports Docker. It provides the ability to quickly and easily stand up a Kubernetes cluster on a local system without requiring a large amount of resources. If you do not have kind installed, follow the installation instructions to set it up on your machine. Once installed, verify that the binary works and is in your path.

$ kind version
kind v0.13.0 go1.18 darwin/amd64

Now that we know the kind binary is working and have cloned the repo, let’s start a kind cluster and apply the base manifests to create namespaces.

$ kind create cluster --name cosign-demo --config manifests/kind-config.yaml
Creating cluster "cosign-demo" ...
✓ Ensuring node image (kindest/node:v1.24.0) 🖼
✓ Preparing nodes 📦 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
✓ Joining worker nodes 🚜
Set kubectl context to "kind-cosign-demo"
You can now use your cluster with:
kubectl cluster-info --context kind-cosign-demo

Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂

$ kubectl config use-context kind-cosign-demo
$ kubectl apply -f manifests/base
namespace/cosign-system created
namespace/ingress-nginx created
namespace/kuard-ns1 created
namespace/kuard-ns2 created
namespace/kuard-ns3 created
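The repo's manifests/kind-config.yaml is not reproduced here. Based on the output above (one control-plane node plus a worker, with ingress reachable at http://localhost), a similar kind config might look like the following sketch; the exact file in the repo may differ:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    # Expose ports 80/443 on the host so the ingress
    # controller is reachable at http://localhost
    extraPortMappings:
      - containerPort: 80
        hostPort: 80
        protocol: TCP
      - containerPort: 443
        hostPort: 443
        protocol: TCP
  - role: worker
```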

Deploy Ingress Controller

Since we are going to deploy an application with a web interface, we also want to deploy an ingress controller so that we can access the demo application from a browser. There are several ways to test this without an ingress controller, but this makes it a bit easier, so why not?

$ kubectl apply -f manifests/ingress-controller
namespace/ingress-nginx configured
serviceaccount/ingress-nginx created
serviceaccount/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
configmap/ingress-nginx-controller created
service/ingress-nginx-controller created
service/ingress-nginx-controller-admission created
deployment.apps/ingress-nginx-controller created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created

It will take a minute or two for the ingress controller to start up but we can easily check if it's up and running with curl.

$ curl http://localhost  

<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>

The 404 is expected because we have not configured any ingress records yet, but it verifies that nginx is running. Now that we have a working ingress controller for our application, we can deploy the policy controller to the cluster.

Deploy Policy Controller

Now that we have an ingress controller deployed, we can deploy the sigstore policy-controller with its admission webhook to validate that images are signed with a trusted key.

$ kubectl create secret generic cosignpub -n cosign-system --from-file=cosign.pub=./test-keys/cosign-trusted.pub
secret/cosignpub created
$ helm repo add sigstore https://sigstore.github.io/helm-charts
"sigstore" has been added to your repositories
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "sigstore" chart repository
Update Complete. ⎈Happy Helming!
$ helm install policy-controller -n cosign-system sigstore/policy-controller --set cosign.secretKeyRef.name=cosignpub
NAME: policy-controller
LAST DEPLOYED: Sat Jul 2 09:10:38 2022
NAMESPACE: cosign-system
STATUS: deployed
REVISION: 1
TEST SUITE: None

We have to wait for policy-controller to become healthy before applying the image policy. This can be checked with "kubectl get pod -n cosign-system" to ensure that both pods show 1/1 for the READY state. Once both are running, we can apply the ClusterImagePolicy. Note: I included the ClusterImagePolicy CRD since that is the new way of configuring policies with Custom Resources instead of Helm values, but due to a bug, the cosign secret key is still required in the chart at the time of writing. Once the bug is resolved, this should work with just the CRD, allowing you to omit "--set cosign.secretKeyRef.name=cosignpub" from the helm install.

$ kubectl apply -f manifests/imagePolicy.yaml
clusterimagepolicy.policy.sigstore.dev/cosign-demo created
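The contents of manifests/imagePolicy.yaml are not shown above. A minimal key-based ClusterImagePolicy (using the v1alpha1 CRD current at the time of writing) might look like the following sketch; the glob and key are illustrative placeholders, not the repo's actual values:

```yaml
apiVersion: policy.sigstore.dev/v1alpha1
kind: ClusterImagePolicy
metadata:
  name: cosign-demo
spec:
  # Which image references this policy applies to
  images:
    - glob: "index.docker.io/slimm609/cosign-demo*"
  # Matching images must verify against at least one authority
  authorities:
    - key:
        data: |
          -----BEGIN PUBLIC KEY-----
          <contents of cosign-trusted.pub>
          -----END PUBLIC KEY-----
```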

Now that policy-controller has been enabled, we can check that it is running and see what was installed.

$ kubectl get pod -n cosign-system

NAME                                                READY   STATUS    RESTARTS   AGE
policy-controller-policy-webhook-66f8497895-2mdnp   1/1     Running   0          50s
policy-controller-webhook-6565b8d75d-sxh46          1/1     Running   0          50s
$ kubectl get validatingwebhookconfiguration
NAME                                         WEBHOOKS   AGE
ingress-nginx-admission                      1          52m
policy.sigstore.dev                          1          79s
validating.clusterimagepolicy.sigstore.dev   1          79s
$ kubectl get mutatingwebhookconfiguration
NAME                                         WEBHOOKS   AGE
defaulting.clusterimagepolicy.sigstore.dev   1          83s
policy.sigstore.dev                          1          83s

We can see that there is both a validating webhook and a mutating webhook. The mutating webhook will resolve the image tag and update it to the image digest. The validating webhook will verify the signature of the image digest.

Deploy Demo Application

Let’s deploy the Kuard demo application to a few different namespaces with different values for each one. This will allow us to test a few different scenarios and demonstrate some common use-cases.

  1. Deploy a signed application that we trust
  2. Deploy a signed application that we do not trust
  3. Deploy a signed application where we do not validate the trust
  4. Deploy an unsigned application where we do not validate the trust

First, we will deploy a signed application whose public key we have included (#1) in the ClusterImagePolicy so that the image can be verified against that key.

We can make sure that the namespace is configured to verify signatures by checking that the label "policy.sigstore.dev/include": "true" is present on the namespace.

$ kubectl get ns kuard-ns1 -o jsonpath='{.metadata.labels}'
{"kubernetes.io/metadata.name":"kuard-ns1","policy.sigstore.dev/include":"true"}
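For reference, a namespace manifest that opts in to verification might look like this sketch (the actual manifests live in manifests/base in the repo and may differ):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: kuard-ns1
  labels:
    # Opt this namespace in to policy-controller signature verification
    policy.sigstore.dev/include: "true"
```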

We can see that image signing is turned on so we can now deploy the application.

$ helm install kuard-ns1 --atomic -n kuard-ns1 ./manifests/kuard-demo -f manifests/kuard-signed1.yaml
Release "kuard-ns1" has been upgraded. Happy Helming!
NAME: kuard-ns1
LAST DEPLOYED: Mon Jul 4 10:53:44 2022
NAMESPACE: kuard-ns1
STATUS: deployed
REVISION: 4
TEST SUITE: None
$ kubectl get ingress -n kuard-ns1
NAME           CLASS   HOSTS                       ADDRESS     PORTS   AGE
kuard-demo-1   nginx   kuard-demo-1.localtest.me   localhost   80      34m
$ curl http://kuard-demo-1.localtest.me
<!doctype html>
...

</body>
</html>

We can see that the signature was properly verified and the application is running; if we visit http://kuard-demo-1.localtest.me, we can see the web page is working.

Next, we can try to deploy a signed application whose public key we have NOT included (#2) in the ClusterImagePolicy. This is a signed image, but we do not trust the signer. We can check the namespace to ensure signature verification is turned on.

$ kubectl get ns kuard-ns2 -o jsonpath='{.metadata.labels}'
{"kubernetes.io/metadata.name":"kuard-ns2","policy.sigstore.dev/include":"true"}

Let’s now deploy the same application, but with a different image, in a different namespace.

$ helm install kuard-ns2 --atomic -n kuard-ns2 ./manifests/kuard-demo -f manifests/kuard-signed2.yaml
Error: INSTALLATION FAILED: release kuard-ns2 failed, and has been uninstalled due to atomic being set: admission webhook "policy.sigstore.dev" denied the request: validation failed: failed policy: cosign-demo: spec.template.spec.containers[0].image
index.docker.io/slimm609/cosign-demo@sha256:b45e6382fef12c72da3abbf226cb339438810aae18928bd2a811134b50398141 failed to validate public keys with authority authority-0 for index.docker.io/slimm609/cosign-demo@sha256:b45e6382fef12c72da3abbf226cb339438810aae18928bd2a811134b50398141: no matching signatures:
failed to verify signature
$ kubectl get pod -n kuard-ns2
No resources found in kuard-ns2 namespace.

So, as we can see in the error returned from the signature validation webhook, the signature could not be validated because we do not have a trusted entry for the key used to sign the image.

Now that we have deployed an image signed with a key we explicitly trust, and failed to deploy an image signed with a key we do not trust, let's see if we can deploy a trusted image in a namespace where signature validation is not enforced (#3).

$ kubectl get ns kuard-ns3 -o jsonpath='{.metadata.labels}'
{"kubernetes.io/metadata.name":"kuard-ns3"}

As we can see in the output, image signature verification is not turned on for the kuard-ns3 namespace.

$ helm install kuard-ns3 --atomic -n kuard-ns3 ./manifests/kuard-demo -f manifests/kuard-signed-noverify.yaml
NAME: kuard-ns3
LAST DEPLOYED: Mon Jul 4 13:40:01 2022
NAMESPACE: kuard-ns3
STATUS: deployed
REVISION: 1
TEST SUITE: None
$ kubectl get ingress -n kuard-ns3
NAME           CLASS   HOSTS                       ADDRESS   PORTS   AGE
kuard-demo-3   nginx   kuard-demo-3.localtest.me             80      23s

Let’s also install an unsigned image (#4) in the same namespace.

$ helm install kuard-ns4 --atomic -n kuard-ns3 ./manifests/kuard-demo -f manifests/kuard-unsigned.yaml
NAME: kuard-ns4
LAST DEPLOYED: Mon Jul 4 13:42:31 2022
NAMESPACE: kuard-ns3
STATUS: deployed
REVISION: 1
TEST SUITE: None
$ kubectl get ingress -n kuard-ns3
NAME           CLASS   HOSTS                       ADDRESS     PORTS   AGE
kuard-demo-3   nginx   kuard-demo-3.localtest.me   localhost   80      4m54s
kuard-demo-4   nginx   kuard-demo-4.localtest.me   localhost   80      2m24s

We were still able to deploy both signed and unsigned images because validation is turned off for this particular namespace.

If we compare the running pods in the verified and unverified namespaces, we can see that the admission webhook changed the image reference from the tag to the sha256 digest.

$ kubectl get pod -n kuard-ns1 kuard-demo-1-64b9847b9f-lzszp -o jsonpath='{.spec.containers[].image}'
index.docker.io/slimm609/cosign-demo@sha256:1ecc9fb2c871302fdb57a25e0c076311b7b352b0a9246d442940ca8fb4efe229
$ kubectl get pod -n kuard-ns3 kuard-demo-3-6d7f4c5d76-4cblj -o jsonpath='{.spec.containers[].image}'
slimm609/cosign-demo:kuard-signed1

Both charts referenced the image slimm609/cosign-demo:kuard-signed1, but because image tags are not immutable, the policy-controller admission webhook rewrites the image to the digest of the referenced image in the verified namespace so that the reference is immutable; the excluded namespace kept the mutable tag.

Summary

In the previous article (linked at the top), we covered why and how to sign images, but signing provides no benefit unless you also take the steps to ensure that the signatures are verified. Policy-controller from sigstore makes it easy and seamless to verify signed images. Between cosign and policy-controller, you can start signing and verifying images on Kubernetes in a matter of minutes.

Unlike Notary, which requires running additional web services and maintaining a database for storing and verifying signatures, policy-controller utilizes your existing image registry for storing, pulling, and verifying signatures.

Shoutout to the sigstore team for making signing and verifying images so quick and easy!


Brian Davis

Distinguished Engineer, VMware by Broadcom | Security & Compliance | System Design | views are mine https://www.linkedin.com/in/bdavis001/