Secure your Kubernetes production cluster

Almost a year ago, Red Hat introduced something called Container Image Signing (or Simple Signing). You can read more about it in this article.
Today I’m going to show how you can leverage Container Image Signing with CRI-O to secure your Kubernetes cluster.

DISCLAIMER: this post is meant to show the overall concepts and practices of image signing; it’s not meant as a production-ready tutorial.

Image signing in a nutshell

Simply speaking, container image signing means digitally signing a container image with a GPG key to generate a detached signature, storing that signature where it can later be retrieved and verified, and finally validating it when someone requests the image on a host.

The story behind all this is pretty simple: if the signature for a given image is valid, the node is allowed to pull the image and run your container with it. Otherwise, the node rejects the image and fails to run your container.

Let’s go through a practical example:

  • Company A has an image it wants to ship to production
  • Company A signed the image above and stored its signature (on a registry, on a lookaside server, or on disk on each node)
  • Company A has a host which runs containers with Kubernetes through CRI-O
  • Company A sets up CRI-O on that host to only run images signed by Company A’s GPG key
  • Company A runs a pod with that container image: the pull is successful and CRI-O runs the container with said image
  • Company A now tries to run a container from an image that isn’t signed with its key, but the pull fails because that image isn’t allowed by the policy configured in CRI-O on Company A’s node

These points are a high-level view of what’s happening. I believe the best way to understand what’s going on is to try to set up container image signing on a cluster with CRI-O and Kubernetes.


For this example we need:

  • a host configured with CRI-O 1.0.x or later and Kubernetes
  • a container registry (our production registry)
  • a build and sign host with skopeo and docker (our build and signing server)

For simplicity, we’ll run all the above in just one host.

There are many ways to get up and running with CRI-O and Kubernetes, but I suggest going with kubeadm as it’s easy and straightforward. A tutorial on how to set up CRI-O and Kubernetes is out of scope for this post.

skopeo can be installed directly on your host:
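The exact commands depend on your distribution; as a sketch (assuming skopeo is available in your distribution’s package repositories):

```shell
# Fedora / CentOS / RHEL
sudo dnf -y install skopeo

# Ubuntu / Debian (package availability varies by release)
sudo apt-get update && sudo apt-get -y install skopeo
```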

Signing your images

First things first, we need to sign our images. We’re going to use skopeo for this. One advantage of using skopeo is that you can easily plug it into your own image build pipeline (e.g. Jenkins) and have images signed right after the build.

Now, we need to configure a few files in order to play with image signing. Container image signing with GPG, or Simple Signing as we call it, relies on two main files that must be configured before you can start signing images and enforcing policies on them.

The first one is /etc/containers/registries.d/default.yaml. This file mainly specifies where signatures should be stored after they’re generated (sigstore-staging) and where to retrieve them from for validation (sigstore). The keys in the YAML file are:

  • sigstore
  • sigstore-staging

Since we’re using the docker:// transport, as you can see below in the post, we’re configuring sigstore and sigstore-staging under the default-docker section:

default-docker:
  # sigstore: file:///var/lib/atomic/sigstore
  sigstore-staging: file:///var/lib/atomic/sigstore

On the “build and signing host”, we enable the sigstore-staging YAML key, as that’s where skopeo will store signatures when generating them.
The sigstore YAML key stays commented out here; we’ll uncomment it further down in the post when we configure CRI-O to look for signatures at the path configured there.

To read more about the registry configuration file you can follow this link.

You may be asking why we need to store signatures on the host, or on a separate webserver (lookaside). The reason is that the Docker registry doesn’t support detached signatures stored on the registry itself; it has no API for that.
The OpenShift integrated registry, on the other hand, has native support for storing and retrieving signatures. When you push an image with skopeo, it directly stores the generated signature on the OpenShift registry. When you pull an image with skopeo or CRI-O, you also get the signature from the registry so that skopeo or CRI-O can verify it against the image.

Let’s now generate signatures with skopeo and inspect those signatures in the sigstore-staging path.

First, we need a GPG key to sign our images with:

$ gpg2 --gen-key
$ gpg --list-keys
pub rsa2048 2017-11-26 [SC]
uid [ultimate] Production key <>
sub rsa2048 2017-11-26 [E]
$ gpg --armor --export > myproductionkey.gpg
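If you’re scripting this on a build server, key generation can also be done non-interactively with GPG’s batch mode. A sketch (the email address is a placeholder, not a value from this post):

```shell
# Batch parameters for unattended key generation; %no-protection skips the
# passphrase prompt (fine for a throwaway test key, think twice in production)
cat > keyparams <<'EOF'
%no-protection
Key-Type: RSA
Key-Length: 2048
Subkey-Type: RSA
Subkey-Length: 2048
Name-Real: Production key
Name-Email: prod@example.com
Expire-Date: 0
%commit
EOF
gpg2 --batch --gen-key keyparams

# Export only the public key for distribution to the nodes
gpg2 --armor --export prod@example.com > myproductionkey.gpg
```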

Now that we have our GPG key, we export its public key to be placed on the kube nodes for image verification. We’re now ready to sign our images with skopeo and push them to our production registry. We have an nginx image in our docker storage; we’ll take it, sign it and push it to our production registry:

# docker images
nginx alpine bf85f2b6bf52 3 days ago 15.5MB
# skopeo copy --dest-creds USER:PASS --sign-by docker://
Getting image source signatures
Copying blob sha256:16174e87921f30e394dbcde33e3b9939b81c461bbb7b501dacec9869880f4297
4.04 MB / 4.04 MB [========================================================] 0s
Copying blob sha256:9a993208f0b099bcbcbf05e259889a7b49709b55741595adcd4f5894c019b319
11.10 MB / 11.10 MB [======================================================] 1s
Copying blob sha256:723c6421bcfc62af4478871d31ecb777f0ab1e31ce6de6b749d14e109d116d19
3.50 KB / 3.50 KB [========================================================] 0s
Copying blob sha256:6f403372b09b01cfb6a82c45731c59b987fcf6815698ee34c31509eb3fd2912d
4.50 KB / 4.50 KB [========================================================] 0s
Copying config sha256:bf85f2b6bf524b45639798dd525580bdf7ca7d673ac64e6c9b8faaced3cfbae5
0 B / 8.16 KB [------------------------------------------------------------] 0s
Writing manifest to image destination
Signing manifest
Storing signatures

With just one skopeo command we were able to:

  • sign the nginx image
  • store the signature on our build host
  • push the image to our production registry with the same name/tag
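For reference, a fully spelled-out version of that command looks roughly like this. The registry host, image reference and signing identity below are hypothetical placeholders, not the ones used in this post:

```shell
# Copy the nginx image out of the local docker daemon, sign its manifest
# with the given GPG identity, and push it to the production registry
skopeo copy \
  --dest-creds USER:PASS \
  --sign-by prod@example.com \
  docker-daemon:nginx:alpine \
  docker://registry.example.com/nginx:alpine
```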

Let’s now check that our signatures are in place on the host:

# ls -la /var/lib/atomic/sigstore/nginx@sha256\=72c35f5bb1e00e48d74466bedffe303911d460efcc93aa70c141dd338cfac98d/
total 12
drwxr-xr-x 2 root root 4096 Nov 26 15:49 .
drwxr-xr-x 3 root root 4096 Nov 26 15:49 ..
-rw-r--r-- 1 root root 596 Nov 26 15:49 signature-1
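The signature-1 file is a GPG-signed blob; the payload inside it is a small JSON document in the simple-signing format that ties the manifest digest to an image reference. A sketch of what that payload looks like, using the manifest digest from the push above (the docker-reference value is a hypothetical placeholder):

```shell
# Recreate the (unsigned) JSON payload carried inside signature-1
cat > payload.json <<'EOF'
{
  "critical": {
    "type": "atomic container signature",
    "image": {
      "docker-manifest-digest": "sha256:72c35f5bb1e00e48d74466bedffe303911d460efcc93aa70c141dd338cfac98d"
    },
    "identity": {
      "docker-reference": "registry.example.com/nginx:alpine"
    }
  },
  "optional": {}
}
EOF
# On pull, the verifier checks that this digest matches the fetched manifest
grep -o 'sha256:[0-9a-f]*' payload.json
```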

Let’s also verify that the image landed on the production registry:

# docker pull
alpine: Pulling from nginx
Digest: sha256:72c35f5bb1e00e48d74466bedffe303911d460efcc93aa70c141dd338cfac98d
Status: Downloaded newer image for

Our nginx image is now there, along with its GPG detached signature under /var/lib/atomic/sigstore/.... We can now grab the signature store, move it to our production host, and set up CRI-O with a production-crafted policy that verifies images against the signature store.
Simple Signing also allows you to host your signature store on a remote HTTP server and have CRI-O fetch your signatures when a pull happens. This post won’t cover that, but you can find more information in the official documentation.

For this example, we’ll configure CRI-O to run only images that are signed with our production key and stored on our production registry, and to reject all other images.

Setup the policy on your nodes

The policy for a node is managed by a file usually located at /etc/containers/policy.json.
For detailed documentation about the policy configuration file, you can read the official documentation here.

For our example, we’ll write a policy configuration that only allows images from our production registry that have been signed with our production key. All other images will be rejected on pull. We also need to import myproductionkey.gpg into our node, so copy it to every node that performs signature checks (just the public key!).

{
    "default": [
        {
            "type": "reject"
        }
    ],
    "transports": {
        "docker-daemon": {
            "": [
                {
                    "type": "insecureAcceptAnything"
                }
            ]
        },
        "docker": {
            "": [
                {
                    "type": "signedBy",
                    "keyType": "GPGKeys",
                    "keyPath": "/root/myproductionkey.gpg"
                }
            ]
        }
    }
}

If you have a multi-node cluster where you run containers with Kubernetes, you need to copy that policy to each node, along with the GPG public key.
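Before rolling the policy out, it’s worth sanity-checking that the file is valid JSON, since a malformed policy will break pulls on the node. A quick sketch (the policy content mirrors the example above with a hypothetical registry host; python3 is only used here as a JSON validator):

```shell
# Write a minimal policy and validate its syntax before copying it to nodes
cat > policy.json <<'EOF'
{
  "default": [{"type": "reject"}],
  "transports": {
    "docker": {
      "registry.example.com": [
        {
          "type": "signedBy",
          "keyType": "GPGKeys",
          "keyPath": "/root/myproductionkey.gpg"
        }
      ]
    }
  }
}
EOF
# Fails loudly if the JSON is malformed
python3 -m json.tool policy.json > /dev/null && echo "policy OK"
```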

Now, enable the sigstore in /etc/containers/registries.d/default.yaml:

default-docker:
  sigstore: file:///var/lib/atomic/sigstore
  #sigstore-staging: file:///var/lib/atomic/sigstore

Check it out and play!

Assuming you have a kube cluster with CRI-O as its container runtime, let’s verify what we went through:

root@ubuntu0:~# cat nginx-
nginx-trusted-image-pod.yaml nginx-untrusted-image-pod.yaml
root@ubuntu0:~# cat nginx-*
apiVersion: v1
kind: Pod
metadata:
  name: nginx-trusted-image
spec:
  containers:
  - name: nginx
    ports:
    - containerPort: 80
apiVersion: v1
kind: Pod
metadata:
  name: nginx-untrusted-image
spec:
  containers:
  - name: nginx
    image: nginx:alpine
    ports:
    - containerPort: 80
# kubectl get pods
NAME                    READY     STATUS             RESTARTS   AGE
nginx-untrusted-image   0/1       ImagePullBackOff   0          4m
nginx-trusted-image     1/1       Running            0          3m
# kubectl describe pod nginx
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4m default-scheduler Successfully assigned nginx to ubuntu0.vm
Normal SuccessfulMountVolume 4m kubelet, ubuntu0.vm MountVolume.SetUp succeeded for volume "default-token-f6nrw"
Normal Pulling 3m (x4 over 4m) kubelet, ubuntu0.vm pulling image "nginx:alpine"
Warning Failed 3m (x4 over 4m) kubelet, ubuntu0.vm Failed to pull image "nginx:alpine": rpc error: code = Unknown desc = Source image rejected: Running image docker://nginx:alpine is rejected by policy.
Warning FailedSync 2m (x10 over 4m) kubelet, ubuntu0.vm Error syncing pod
Normal BackOff 2m (x6 over 4m) kubelet, ubuntu0.vm Back-off pulling image "nginx:alpine"

You can see above in the logs:

kubelet, ubuntu0.vm  Failed to pull image "nginx:alpine": rpc error: code = Unknown desc = Source image rejected: Running image docker://nginx:alpine is rejected by policy.

This means the node isn’t allowed to pull the Docker Hub image, but it can run our production nginx image!

Today, the only downside of Simple Signing is that we only enforce signatures on pull. That means that if an image is already present on the host, the signature isn’t enforced again when you run a container from it. We’re working on that, so stay tuned!