Automation of building, signing, and verifying Docker images: Kaniko + Cosign + Kyverno

Trapezin Andrey
11 min read · Dec 8, 2023


Introduction

In the world of software development, delivering quality software quickly, safely, and reliably is crucial. Docker images, which simplify the deployment process, can help a lot. But, at the same time, they can come with their own set of challenges. In this article, based on our hands-on work at the biotech company BIOCAD, I will explore how to automate these procedures to maximize their benefits.

Building Docker images by hand can take a lot of time and may not always give consistent results. Tools like Kaniko help solve this problem by creating Docker images directly within Kubernetes, saving time and improving security.

Signing and verifying Docker images is another important task. It ensures that the images haven’t been tampered with and they are sourced from a safe location. Cosign is a great tool for this, but it can be time-consuming to sign and verify images one by one.

That’s where automation tools like GitlabCI come to our rescue. They allow us to automate these tasks so we can build, sign, and verify Docker images easily and reliably.

We also use Kyverno, a useful tool that helps us manage Docker images correctly in our Kubernetes setup.

In this article, we’ll explain how to automate these tasks using these tools. This way, you can work faster, have a safer workflow, and focus more on software development. Let’s get started!

Build Docker image

Kaniko, developed by Google, is a popular tool in the software development world. It’s great for building Docker images, but unlike Docker, it doesn’t need a Docker daemon. This makes it useful in environments that don’t or can’t use Docker, like standard Kubernetes clusters. Moreover, it’s much more secure because it reduces potential risks in the building environment.

Kaniko is also very reliable. It can build accurate Docker images just like Docker would. It does this by executing each command in the Dockerfile in userspace, making a new layer in the filesystem for each command.

So, using Kaniko in your Docker image building workflow can make it more accurate, secure, and efficient. It’s also simple to include in your GitLab CI/CD pipeline. Now, let’s set up the build stage using Kaniko in GitLab CI.

---
.image.Build.Kaniko:
  stage: "build"
  image:
    name: "gcr.io/kaniko-project/executor:v1.14.0-debug"
    entrypoint: [""]
  artifacts:
    name: "digest"
    when: on_success
    expose_as: 'Image digest'
    paths:
      - digest.txt
  script:
    - | # --- -- -
      # ✏️ Add extra build args
      if [ -z "${BUILD_DST:-}" ]; then
        export BUILD_DST=$( \
          printf "%s\n" \
            "${CI_REGISTRY_IMAGE}${IMAGE_SUFFIX:+/$IMAGE_SUFFIX}:${IMAGE_TAG:+$IMAGE_TAG-}${CI_COMMIT_SHORT_SHA}" \
            "${CI_REGISTRY_IMAGE}${IMAGE_SUFFIX:+/$IMAGE_SUFFIX}:${IMAGE_TAG:-latest}" \
            "${CI_REGISTRY_IMAGE}${IMAGE_SUFFIX:+/$IMAGE_SUFFIX}:${IMAGE_TAG:+$IMAGE_TAG-}${CI_COMMIT_TAG:-$CI_COMMIT_REF_SLUG}" \
        )
      fi
      for i in ${BUILD_DST:-}; do
        export BUILD_ARGS="--destination=${i} $BUILD_ARGS"
      done

    - | # --- -- -
      # 📄 Generate kaniko docker config
      cat <<EOF > /kaniko/.docker/config.json
      {
        "auths": {
          "$CI_REGISTRY": {
            "username": "$CI_REGISTRY_USER",
            "password": "$CI_REGISTRY_PASSWORD"
          },
          "$CACHE_REGISTRY": {
            "username": "$CACHE_REGISTRY_USER",
            "password": "$CACHE_REGISTRY_PASSWORD"
          }
        }
      }
      EOF

    - | # --- -- -
      # 🔨 Build image with kaniko
      time /kaniko/executor \
        ${CI_DEBUG_TRACE:+--verbosity=debug} \
        --image-fs-extract-retry=5 \
        --cache=true \
        --cache-repo="${CACHE_REGISTRY}/${CI_REGISTRY_IMAGE}" \
        --cache-dir="${CI_BUILDS_DIR}/.cache" \
        --registry-mirror="your-registry-mirror.com" \
        --image-name-with-digest-file="${CI_PROJECT_DIR}/digest.txt" \
        --context="${CI_PROJECT_DIR}/${BUILD_PATH:-}" \
        ${BUILD_FILE:+--dockerfile="$BUILD_FILE"} \
        ${BUILD_TARGET:+--target="$BUILD_TARGET"} \
        --label org.opencontainers.image.vendor="${IMAGE_VENDOR:-$GITLAB_USER_EMAIL}" \
        --label org.opencontainers.image.title="${CI_PROJECT_NAME}${IMAGE_SUFFIX:+/$IMAGE_SUFFIX}${IMAGE_TAG:+:$IMAGE_TAG}" \
        --label org.opencontainers.image.source="${CI_PROJECT_URL}" \
        --label org.opencontainers.image.ref.name="${CI_COMMIT_REF_NAME}" \
        ${CI_COMMIT_TAG:+--label org.opencontainers.image.version="$CI_COMMIT_TAG"} \
        --label org.opencontainers.image.revision="${CI_COMMIT_SHA}" \
        ${BUILD_ARGS}

This GitLab CI configuration file defines a build job that uses Kaniko's executor image to build Docker images.

The job's script contains three sections:

  1. The first section sets BUILD_DST. If it is not already defined, it combines the CI_REGISTRY_IMAGE, IMAGE_SUFFIX, IMAGE_TAG, CI_COMMIT_SHORT_SHA, and CI_COMMIT_TAG variables to construct the destination references for the built image in the registry. The loop then appends a --destination flag to BUILD_ARGS for each entry in BUILD_DST.
  2. The second section generates the Kaniko Docker configuration file in JSON format, using the GitLab CI predefined variables CI_REGISTRY, CI_REGISTRY_USER, and CI_REGISTRY_PASSWORD (plus their CACHE_REGISTRY counterparts) to set up authorization for the registries.
  3. The third section executes the build process with Kaniko using several arguments.

Also, note the use of artifacts in this job. The built image's digest is stored as an artifact in a file called digest.txt. This helps to identify the image uniquely.
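Downstream jobs can read this artifact and split the reference it contains. The following sketch is purely illustrative; the digest value is an assumed example, not real build output:

```shell
# Assumed example content of digest.txt, as written by Kaniko's
# --image-name-with-digest-file flag: <repo>@sha256:<digest>
echo 'registry.com/test/repo@sha256:4f53cda18c2baa0c0354bb5f9a3ecbe5ed12ab4d8e11ba873c2f11161202b945' > digest.txt

TARGET_IMAGE="$(head -n 1 digest.txt)"

# Split the reference with shell parameter expansion.
IMAGE_REPO="${TARGET_IMAGE%@*}"     # everything before the '@'
IMAGE_DIGEST="${TARGET_IMAGE#*@}"   # everything after the '@'

echo "repo:   $IMAGE_REPO"
echo "digest: $IMAGE_DIGEST"
```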

Let’s look at an example of extending this template.

Build Image:
  extends: .image.Build.Kaniko

Build Second Image:
  extends: .image.Build.Kaniko
  variables:
    BUILD_FILE: second.dockerfile
    BUILD_PATH: ./src/test
    BUILD_ARGS: >
      --build-arg FOO=bar
      --someFlag

In the "Build Image" job, the build runs with the defaults defined in .image.Build.Kaniko.

The “Build Second Image” job extends the same .image.Build.Kaniko template but overrides the BUILD_FILE, BUILD_PATH, and BUILD_ARGS variables. This flexibility allows you to customize each build job according to the needs of your Docker image.

Variables used in script:

  • IMAGE_SUFFIX: This variable is optional. It lets you specify a subdirectory path for the image on the GitLab Registry. An example path would be gitlab.example.com/group/project/${IMAGE_SUFFIX}.
  • IMAGE_TAG: This variable is used for image tagging. By default, the short commit SHA or the commit tag is used. If IMAGE_TAG is defined manually, the latest tag is not applied.
  • BUILD_PATH: This variable points to the build directory from the root of the repository. By default, it is set to the current directory.
  • BUILD_FILE: This helps Kaniko locate the Dockerfile within the repository. By default, it is set to $CI_PROJECT_DIR/Dockerfile, which points to the Dockerfile in the project's root directory.
  • BUILD_TARGET: This optional variable allows for the building of a specific stage in a multi-stage build process.
  • BUILD_ARGS: This variable lets you specify additional arguments for the build command.
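The parameter-expansion logic in the first script section can be hard to read, so here is a standalone sketch of how BUILD_DST expands, using assumed example values in place of the real CI variables:

```shell
# Assumed example values (in a real pipeline GitLab CI sets these).
CI_REGISTRY_IMAGE='registry.com/group/project'
IMAGE_SUFFIX='api'
IMAGE_TAG=''                 # empty: fall back to defaults
CI_COMMIT_SHORT_SHA='d42de8df'
CI_COMMIT_TAG=''
CI_COMMIT_REF_SLUG='main'

# The same three expansions used in the build job.
BUILD_DST=$( \
  printf "%s\n" \
    "${CI_REGISTRY_IMAGE}${IMAGE_SUFFIX:+/$IMAGE_SUFFIX}:${IMAGE_TAG:+$IMAGE_TAG-}${CI_COMMIT_SHORT_SHA}" \
    "${CI_REGISTRY_IMAGE}${IMAGE_SUFFIX:+/$IMAGE_SUFFIX}:${IMAGE_TAG:-latest}" \
    "${CI_REGISTRY_IMAGE}${IMAGE_SUFFIX:+/$IMAGE_SUFFIX}:${IMAGE_TAG:+$IMAGE_TAG-}${CI_COMMIT_TAG:-$CI_COMMIT_REF_SLUG}" \
)
echo "$BUILD_DST"
# → registry.com/group/project/api:d42de8df
#   registry.com/group/project/api:latest
#   registry.com/group/project/api:main
```

So with no IMAGE_TAG set, the image is pushed under the short SHA, latest, and the branch slug.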

Sign Docker Image


Cosign is an open-source tool used to sign, verify, and store containers in an OCI registry. It’s part of the Sigstore project, designed to make the software supply chain more secure.

Cosign plays an essential role in maintaining the safety of Docker images. It helps confirm where the Docker image came from and that it hasn’t been changed since it was signed. This ensures safe use of Docker images.

Cosign’s strength lies in its simplicity and how easily it fits into existing workflows. It doesn’t require significant changes to your current process, making it easy to adopt. It’s useful in a CI/CD environment as it can automate signing as part of the pipeline.

Getting Cosign ready starts with creating a key pair. You need a pair of private and public keys to sign your Docker images and check their signatures later. To create the key pair, you use the cosign generate-key-pair command.
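For illustration, a minimal sketch of that step (it assumes the cosign CLI is installed; the passphrase is a placeholder):

```shell
# Sketch: generate a Cosign key pair. COSIGN_PASSWORD encrypts the
# private key (cosign.key) at rest; cosign.pub is the public key.
if command -v cosign >/dev/null 2>&1; then
  COSIGN_PASSWORD='change-me' cosign generate-key-pair
  KEYS_CREATED=yes
else
  # cosign is not available in this environment.
  KEYS_CREATED=no
fi
echo "keys created: $KEYS_CREATED"
```

The resulting cosign.key and cosign.pub files are what we later expose to jobs as COSIGN_SECRET_KEY and COSIGN_PUBLIC_KEY.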

You can store the keys safely in GitLab’s protected environment variables or an external secrets manager like HashiCorp Vault. These tools let you securely store and use keys in your pipelines without exposing them in your scripts or logs.

Now, with the keys created and stored safely, we can set up a GitLab CI job to use Cosign for image signing.

---
.image.Sign.Cosign:
  image: bitnami/cosign:2.2.0-debian-11-r31
  stage: sign
  secrets:
    COSIGN_SECRET_KEY:
      vault: service/gitlab/COSIGN_SECRET_KEY@my-kv
      file: true
    COSIGN_PUBLIC_KEY:
      vault: service/gitlab/COSIGN_PUBLIC_KEY@my-kv
      file: true
    COSIGN_PASSWORD:
      vault: service/gitlab/COSIGN_PASSWORD@my-kv
      file: false
  variables:
    COSIGN_YES: "true" # Used by Cosign to skip confirmation prompts for non-destructive operations
    TARGET_IMAGE: "$CI_REGISTRY_IMAGE"
  script:
    - | # --- -- -
      # 📄 Generate cosign docker config
      cat <<EOF > /.docker/config.json
      {
        "auths": {
          "$CI_REGISTRY": {
            "username": "$CI_REGISTRY_USER",
            "password": "$CI_REGISTRY_PASSWORD"
          }
        }
      }
      EOF

    - | # --- -- -
      # 🔑📝 Sign image
      cosign sign ${COSIGN_SECRET_KEY:+--key="$COSIGN_SECRET_KEY"} -y "$TARGET_IMAGE"

    - | # --- -- -
      # 🔒✅ Verify signed image
      cosign verify ${COSIGN_PUBLIC_KEY:+--key="$COSIGN_PUBLIC_KEY"} "$TARGET_IMAGE"

This GitLab CI YAML script defines a sign job that uses the bitnami/cosign image.

First, it loads three essential secrets from a Vault instance; they are used throughout the script:

  • COSIGN_SECRET_KEY: The Cosign’s private key.
  • COSIGN_PUBLIC_KEY: The Cosign’s public key.
  • COSIGN_PASSWORD: Password for the private key.

After loading the secrets, a variables block sets two environment variables:

  • COSIGN_YES is set to true, which allows Cosign to bypass any confirmation prompts.
  • TARGET_IMAGE holds the registry path of the image that will be signed and verified.

The script consists of three sections:

  1. Generate Docker config: Generates a Docker configuration file.
  2. Sign Image: Signs the target image using Cosign. The command refers to the COSIGN_SECRET_KEY for signing the TARGET_IMAGE.
  3. Verify Signed Image: Verifies the signed Docker image to ensure its authenticity. Here the COSIGN_PUBLIC_KEY variable is used for verification.

The below YAML snippet demonstrates how to extend the default .image.Sign.Cosign configuration:

Sign Image:
  extends: .image.Sign.Cosign
  variables:
    TARGET_IMAGE: registry.com/test/repo:latest

The job "Sign Image" extends the .image.Sign.Cosign template, with an overridden TARGET_IMAGE variable pointing to the Docker image intended for signing.

It is recommended to use the Docker image digest instead of tags in TARGET_IMAGE. Tags can be reassigned, leading to potential inconsistencies, whereas the digest for a particular image always refers to the same image content.
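One way to enforce this in a job is to check that the reference is digest-pinned before signing. A minimal sketch, with an assumed example reference:

```shell
# Assumed example reference; a digest-pinned reference has the form
# <repo>@sha256:<64 hex characters> and always points to the same content.
TARGET_IMAGE='registry.com/test/repo@sha256:4f53cda18c2baa0c0354bb5f9a3ecbe5ed12ab4d8e11ba873c2f11161202b945'

if printf '%s' "$TARGET_IMAGE" | grep -Eq '@sha256:[0-9a-f]{64}$'; then
  PINNED=yes
else
  PINNED=no   # a mutable tag like :latest would end up here
fi
echo "digest-pinned: $PINNED"
```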

Combining Build and Sign

include:
  # --- Container Images ---
  - /extensions/image/kaniko.yml
  - /extensions/image/cosign.yml

stages:
  - image:build
  - image:sign

build-image:
  extends: .image.Build.Kaniko
  stage: image:build

sign-image:
  extends: .image.Sign.Cosign
  stage: image:sign
  needs:
    - build-image
  before_script:
    - | # --- -- -
      # 🔍 Get built image digest
      TARGET_IMAGE="$(head -n 1 digest.txt)"

This GitLab CI pipeline is compact and efficient: the build-image job extends .image.Build.Kaniko, and the sign-image job extends .image.Sign.Cosign. The needs: keyword ensures that signing runs only after the build completes and also pulls in the build job's artifacts. In before_script, TARGET_IMAGE is set to the first line of digest.txt, the digest reference produced by the build stage.

To maximize efficiency and reusability, this pipeline configuration can be triggered as a downstream pipeline from another configuration file:

.trigger.Build:
  trigger:
    include: /pipelines/pipeline-build.yml
    strategy: depend

The depend strategy makes the trigger job wait for the downstream pipeline to finish and mirror its status, so the upstream pipeline succeeds only if the downstream build-and-sign pipeline does.

Verify signed image

Kyverno is a tool made for Kubernetes that helps manage clusters using certain rules, or policies. It’s useful for ensuring you’re following best practices in your Kubernetes setting.

In particular, for Docker images, Kyverno offers the valuable feature of checking image signatures within Kubernetes. It allows you to set a policy that only allows properly signed images to be used.

A significant benefit of Kyverno lies in its Kubernetes-native design: policies are defined as Kubernetes resources themselves, so there is no separate policy language to learn.

To start using Kyverno’s features, you’ll first need to install it onto your Kubernetes cluster. A common way to do this is using Helm, a tool for managing Kubernetes applications, through Argo CD, a tool for continuous delivery.

Here is how you can proceed with the setup.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: kyverno
  namespace: argocd
spec:
  project: cluster
  source:
    # https://artifacthub.io/packages/helm/kyverno/kyverno/3.0.8
    chart: kyverno
    repoURL: https://kyverno.github.io/kyverno/
    targetRevision: 3.0.8
    helm:
      values: |
        existingImagePullSecrets:
          - regcred
        admissionController:
          replicas: 3
        backgroundController:
          replicas: 2
        cleanupController:
          replicas: 2
        reportsController:
          replicas: 2
        resources:
          limits:
            memory: 1Gi
          requests:
            cpu: 100m
            memory: 256Mi
  destination:
    server: https://kubernetes.default.svc
    namespace: kyverno
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

Pay attention to the existingImagePullSecrets attribute. If you're using a private Docker registry for storing images, this field comes in handy:

existingImagePullSecrets:
  - regcred

This attribute specifies that a pre-existing Kubernetes Secret named regcred of type docker-registry is utilized for authentication when pulling Docker images. It essentially stores the credentials necessary to access your private Docker image repository.
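For reference, a sketch of what such a Secret looks like; the credential value is a placeholder, and the same object can also be created with kubectl create secret docker-registry regcred:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: regcred
  namespace: kyverno
type: kubernetes.io/dockerconfigjson
data:
  # base64-encoded Docker config.json with your registry credentials
  .dockerconfigjson: <base64-encoded ~/.docker/config.json>
```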

Once Kyverno is installed on your Kubernetes cluster, the next step is to define the ClusterPolicy. This is a Kubernetes Custom Resource Definition (CRD) provided by Kyverno, and it allows you to specify rules that apply to resources across the entire cluster.

The following YAML snippet illustrates how you could set up a ClusterPolicy to verify images:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: check-image
spec:
  validationFailureAction: enforce
  background: false
  webhookTimeoutSeconds: 30
  failurePolicy: Fail
  rules:
    - name: check-image
      match:
        any:
          - resources:
              kinds:
                - Pod
      verifyImages:
        - imageReferences:
            - "registry.com/test/repo*"
          attestors:
            - count: 1
              entries:
                - keys:
                    publicKeys: |-
                      -----BEGIN PUBLIC KEY-----
                      YOUR_COSIGN_PUBLIC_KEY
                      -----END PUBLIC KEY-----

Here’s the configuration breakdown:

  • validationFailureAction: enforce - This means violating the policy will block the resource operation.
  • background: false - The policy is applied only at admission time (when resources are created or modified), not scanned against existing resources in the background.
  • webhookTimeoutSeconds: 30 and failurePolicy: Fail - If the admission webhook does not respond within 30 seconds, the request fails and the resource is blocked.
  • A rule named check-image is defined, matching any resource kinds of type Pod. In this rule, the policy dictates that the Pod's images should be verified.
  • imageReferences: lists the Docker images to be verified. The glob pattern registry.com/test/repo* means any images under that pattern will be subjected to verification.
  • attestors: - Defines who must have signed the image. Here a single entry (count: 1) with the Cosign public key must match.

This policy guarantees that any newly created or updated Pod in the cluster, which uses images following the specified pattern, is verified by the key before it's allowed to run.

Result

Let’s put our efforts to the test by creating Kubernetes pods.

First, we will attempt to deploy a pod using a properly signed Docker image:

kubectl run po --image=registry.com/test/repo:d42de8df

This command should return a successful creation message:

pod/po created

So far so good! The pod has been scheduled successfully, indicating that our signed image was verified and authorized by Kyverno as per our ClusterPolicy.

Now, let’s try to run a pod with an unsigned Docker image:

kubectl run po --image=registry.com/test/repo:c4c8a73a

This time, we encounter an error message:

Error from server: admission webhook "mutate.kyverno.svc-fail" denied the request:
resource Pod/test/po was blocked due to the following policies
check-image:
  check-image: |-
    failed to verify image registry.com/test/repo:c4c8a73a: .attestors[0].entries[0].keys: no matching signatures:
    invalid signature when validating ASN.1 encoded signature
    invalid signature when validating ASN.1 encoded signature

Kyverno checks the image against the policy and since registry.com/test/repo:c4c8a73a is not signed, admission is denied, and the pod creation is blocked.

By observing these results, we’ve confirmed that our setup correctly permits the use of signed images and prevents the deployment of unsigned ones, resulting in a secure Kubernetes setup.

Conclusion

In a nutshell, we’ve managed to create and execute a pipeline that automates the process of building and signing Docker images, integrating important security practices directly into our day-to-day work.

First, the pipeline uses GitLab CI and Kaniko to build Docker images right within our existing Kubernetes setup. This adds an extra layer of security, helping us avoid Docker-in-Docker builds and the elevated privileges they would require in our CI/CD pipeline.

Once the image is built, we use Cosign to sign it. This confirms that our images are genuine and haven’t been tampered with since they were signed, increasing trust in our Docker image supply chain.

Lastly, we boost security even further by adding Kyverno to our cluster. We set up a ClusterPolicy that only allows signed Docker images to be deployed, making sure we follow top security practices within our Kubernetes environment.

So overall, we’ve designed an automatic, secure, and effective pipeline that not only takes care of building and signing Docker images but ensures these signed images are used in Kubernetes. Indeed, this isn’t just theory — we’re putting it into practice at BIOCAD. By using these methods and tools in our everyday work, we’ve seen improvements in efficiency, security, and consistency in handling Docker images. This shows that this approach can really help others looking to make their software deployment process smoother and safer.

Feedback

If you have any ideas or advice to make this story better, please feel free to write them in the comments section, or ping me on LinkedIn. I truly value your engagement and contribution. If you enjoyed this article, don’t forget to follow me for regular news about my latest posts.
