KYC — Know your Container(image) with SLSA, SBOM and Binary Authorization

Daniel Strebel
Google Cloud - Community
13 min read · Nov 2, 2023

The term KYC usually stands for “know your customer” and refers to established processes used by financial institutions and other regulated industries to collect information about and verify the identity of their customers. Businesses follow KYC processes to mitigate counterparty risk and limit their exposure to potentially harmful business practices of their customer base, such as money laundering or identity theft. Last but not least, regulatory bodies in many jurisdictions require businesses to collect information about their customers and the legitimacy of their business activities.

In this article we are not going to look at KYC in the sense of “know your customer”. Instead, I want to highlight how the same themes of transparency and provenance apply to knowing your container images. Containers have long been treated as something of a black box, just as financial institutions once considered their business to be independent of that of their customers. Many developers appreciated the seemingly encapsulated nature and more or less clearly defined interaction contract of a container-based microservices architecture. Unfortunately, the reality is quite a bit different, and many of us learned one way or another that even containerized applications can pose a non-trivial security risk to the application and the broader runtime infrastructure. With this in mind, the industry has established a range of best practices for how we build and deploy containers. In recent years we have also seen increased regulatory interest in gaining transparency into how software is built. This manifests itself in government-mandated initiatives like the Executive Order on Improving the Nation’s Cybersecurity in the US and the Cyber Resilience Act in the EU.

In this blog post we want to take a practical look at container security and explore different ways you can address the transparency and attestation requirements for your container images. With the processes outlined here you will be able to prove the provenance of your container image and answer questions such as:

  • How it was built, by creating SLSA Level 3 build provenance
  • Which libraries went into building it, by automatically generating a Software Bill of Materials (SBOM)
  • Whether we approve of it, by providing our own cryptographic attestation that we assert via Binary Authorization when we deploy the image to GKE or Cloud Run

The following sections explain the motivation behind each of these measures and give instructions for building them into your CI/CD pipeline on Google Cloud. The commands are intended to be run in Cloud Shell or the default Cloud Workstation image. You can of course also execute them locally if you have the necessary tooling, e.g. docker, gcloud and skaffold, available in your environment.
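
A quick way to fail fast on missing tooling is a small shell helper. This is a sketch only; the helper name `require_tools` and the tool list are my own, based on what this walkthrough uses:

```shell
#!/usr/bin/env bash
# require_tools: print every missing CLI from the given list and
# return non-zero if any of them is absent.
require_tools() {
  local missing=0
  for tool in "$@"; do
    if ! command -v "$tool" >/dev/null 2>&1; then
      echo "missing: $tool" >&2
      missing=1
    fi
  done
  return "$missing"
}

# The CLIs this walkthrough relies on:
require_tools docker gcloud || echo "install the missing tools before continuing"
```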

Starting with a simple Application

Throughout the steps we will use Cloud Run Hello, a very simple Golang application. It is essentially a static website that shows the Cloud Run unicorn and some metadata about the Cloud Run service.

Cloud Run Hello Service

For this we first clone the repository from GitHub:

git clone https://github.com/GoogleCloudPlatform/cloud-run-hello
# Optional in case future changes in the repo break the examples described here:
# git checkout 74911220ee2bf88459bf7d83f0ea9cfc1e99bdc3
cd cloud-run-hello

We then prepare our Google Cloud project configuration by setting our specific project ID and primary GCP region:

export GCP_PROJECT_ID=<SET HERE>
export GCP_REGION=europe-west1

We then enable the first set of services and create and configure the container image artifact registry:

gcloud services enable run.googleapis.com artifactregistry.googleapis.com --project $GCP_PROJECT_ID

gcloud artifacts repositories create default \
--location $GCP_REGION \
--repository-format=docker \
--project $GCP_PROJECT_ID

gcloud auth configure-docker \
$GCP_REGION-docker.pkg.dev

With the services enabled and the artifact registry in place we can start to work on our first version of the Cloud Run application. For this we start simple and perform a local Docker build of the container image with a tag that we can later use to push to the artifact registry and indicate that the image was built locally.

docker build . -t "$GCP_REGION-docker.pkg.dev/$GCP_PROJECT_ID/default/cloud-run-hello:local"

To deploy that image to Cloud Run, we push it to the repository and then deploy it as a new service:

docker push $GCP_REGION-docker.pkg.dev/$GCP_PROJECT_ID/default/cloud-run-hello:local

gcloud run deploy cloud-run-hello \
--image $GCP_REGION-docker.pkg.dev/$GCP_PROJECT_ID/default/cloud-run-hello:local \
--region $GCP_REGION \
--allow-unauthenticated \
--project $GCP_PROJECT_ID

In the command-line output of the last command you will find the service’s public URL, which we can open in a new browser tab to see the unicorn dance party that we expect from the Cloud Run Hello service.

That’s great! Job done. The unicorn is dancing. What else do we want?!

How about SLSA Build Provenance?

When we look at the image that we’re now running on Cloud Run, how do we know how it was built? How do we record what the build pipeline looked like at the time the build ran? And how do we even know which infrastructure tooling was used to build the image in the first place?

This is where the SLSA (pronounced like the spicy sauce, salsa) framework comes in. It defines a series of levels that provide increasing integrity guarantees about the build provenance of an artifact. Our goal here is essentially to create a verifiable origin story of a particular container image that includes:

  • A link to the source code assets and content hash that went into this build
  • The builder and build recipe used to build this image
  • Timestamps of the build
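
For orientation, here is a heavily trimmed, hypothetical sketch of what such a provenance statement looks like. The field names follow the SLSA v0.2 in-toto predicate; the real document generated by Cloud Build is much larger, and the values shown here are made up:

```json
{
  "_type": "https://in-toto.io/Statement/v0.1",
  "predicateType": "https://slsa.dev/provenance/v0.2",
  "subject": [
    {
      "name": "europe-west1-docker.pkg.dev/my-project/default/cloud-run-hello",
      "digest": { "sha256": "061a..." }
    }
  ],
  "predicate": {
    "builder": { "id": "https://cloudbuild.googleapis.com/GoogleHostedWorker" },
    "invocation": { "configSource": { "uri": "gs://my-project_cloudbuild/source/..." } },
    "metadata": { "buildStartedOn": "2023-11-02T09:30:00Z" }
  }
}
```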

The good news is that in order to achieve SLSA level 3 in Google Cloud we only need to move our container build from our untrusted local build environment to the trusted build environment that is Cloud Build. For this we enable the necessary APIs for Cloud Build and Container Analysis:

gcloud services enable cloudbuild.googleapis.com containeranalysis.googleapis.com containerscanning.googleapis.com --project $GCP_PROJECT_ID

We then create a cloudbuild.yaml manifest that specifies the image to be built and sets the requestedVerifyOption to VERIFIED so that provenance generation is also performed on regional builds.

cat <<EOF>cloudbuild.yaml
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: [ 'build', '-t', '$GCP_REGION-docker.pkg.dev/$GCP_PROJECT_ID/default/cloud-run-hello:\$_TAG', '.' ]
images: ['$GCP_REGION-docker.pkg.dev/$GCP_PROJECT_ID/default/cloud-run-hello:\$_TAG']
options:
  requestedVerifyOption: VERIFIED
EOF

Finally we can run the Cloud Build from the local source like so:

gcloud builds submit . \
--project $GCP_PROJECT_ID \
--region $GCP_REGION \
--substitutions "_TAG=cloud-build"

Once the build has completed we can inspect the container image and its attestation. The SLSA summary can be seen directly from the Cloud Build run that we just executed. On the build details page you can click on Build Artifacts:

If you click the VIEW button for the image you have built you should see your SLSA level 3 indication.

We can also retrieve the SLSA level together with the full provenance via the gcloud CLI for our tagged image if we run:

gcloud artifacts docker images describe \
"$GCP_REGION-docker.pkg.dev/$GCP_PROJECT_ID/default/cloud-run-hello:cloud-build" \
--show-provenance

The output contains the full SLSA provenance including information about the builder and a reference to the cloud storage bucket that holds the source code that was used to build the artifact.
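
If you want to consume the provenance from a script, you can save the describe output and extract individual fields from it. The sketch below runs against a made-up, heavily trimmed stand-in for the real output; the actual JSON is much larger, and the nesting shown here is an assumption that may differ between gcloud versions:

```shell
# Made-up, heavily trimmed stand-in for the output of
# `gcloud artifacts docker images describe ... --show-provenance --format json`.
cat > provenance.json <<'EOF'
{
  "provenance_summary": {
    "provenance": [
      { "build": { "intotoStatement": { "slsaProvenance": {
        "builder": { "id": "https://cloudbuild.googleapis.com/GoogleHostedWorker" }
      } } } }
    ]
  }
}
EOF

# Pull out the builder id; jq would be cleaner, but grep/sed keeps it dependency-free.
BUILDER_ID="$(grep -o '"id": *"[^"]*"' provenance.json | sed 's/.*": *"\(.*\)"/\1/')"
echo "built by: $BUILDER_ID"
```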

For the image we built locally and then manually pushed to Artifact Registry there is no SLSA level, because a local image build doesn’t automatically record the provenance of how it was executed. You can verify this by running the same command for the locally built image:

gcloud artifacts docker images describe \
"$GCP_REGION-docker.pkg.dev/$GCP_PROJECT_ID/default/cloud-run-hello:local" \
--show-provenance

Understand the Dependencies with an SBOM

Having provenance gives us additional information and assurance about an image in our registry. However, even with the provenance in place, our image is still somewhat of a black box. Thanks to the provenance we now know where the image came from, but we still have to go back to the source that is linked in the manifest if we want to understand which dependencies were used as part of the build.

To get transparency on the dependencies we want to complement our image provenance with a Software Bill of Materials (SBOM). SBOMs have become increasingly relevant in releasing and shipping software and are also the subject of many emerging government mandates to enhance cybersecurity.

We export the SBOM for the new image with the following command:

gcloud artifacts sbom export \
--uri "$GCP_REGION-docker.pkg.dev/$GCP_PROJECT_ID/default/cloud-run-hello:cloud-build"

The SBOM is now available for inspection in Google Cloud Storage as well as in a pretty table render in the Dependencies Tab in Artifact Registry:

SBOM export in the Google Cloud Console

The link for the SBOM storage object in Google Cloud Storage can also be retrieved by running the following gcloud command:

gcloud artifacts docker images describe \
"$GCP_REGION-docker.pkg.dev/$GCP_PROJECT_ID/default/cloud-run-hello:cloud-build" \
--show-sbom-references --format json | jq -r '.image_summary.sbom_locations[0]'
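
The exported SBOM is an SPDX JSON document. As a sketch of how you might post-process it, the snippet below lists package names and versions from a made-up, minimal sample; real exports contain many more fields per package, and the exact layout is only an assumption here:

```shell
# Made-up, minimal SPDX-style sample standing in for the exported SBOM.
cat > sbom.spdx.json <<'EOF'
{
  "spdxVersion": "SPDX-2.3",
  "name": "cloud-run-hello",
  "packages": [
    { "name": "golang.org/x/text", "versionInfo": "v0.13.0" },
    { "name": "github.com/gorilla/mux", "versionInfo": "v1.8.0" }
  ]
}
EOF

# Print "name version" pairs; python3 is used here to avoid depending on jq.
python3 - <<'EOF'
import json

with open("sbom.spdx.json") as f:
    sbom = json.load(f)
for pkg in sbom.get("packages", []):
    print(pkg["name"], pkg.get("versionInfo", "?"))
EOF
```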

Zero trust for Containers with Binary Authorization

Now that we have a container image in our artifact registry that contains SLSA Level 3 provenance and has an SBOM for all of its dependencies, we want to turn our focus to the deployment of the image. Ultimately we want to ensure that from now on every image that we run in our Cloud Run service adheres to this requirement.

To achieve this we make use of Binary Authorization which is a deploy-time check on the container image that verifies the image’s authenticity via a cryptographic signature.

To prepare our project and existing Cloud Run service for Binary Authorization we need to run the following commands to enable the required APIs and to create the key ring and attestor.

gcloud services enable binaryauthorization.googleapis.com cloudkms.googleapis.com --project $GCP_PROJECT_ID

gcloud kms keyrings create binauth-keys --location="$GCP_REGION" --project $GCP_PROJECT_ID

gcloud kms keys create bin-auth-demo \
--keyring="binauth-keys" --location="$GCP_REGION" \
--purpose asymmetric-signing --default-algorithm="ec-sign-p256-sha256" \
--project $GCP_PROJECT_ID
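
The key we just created is an asymmetric EC P-256 signing key. To make it concrete what such a key will later be used for, here is a purely local openssl sketch of the same primitive: sign a payload (for attestations, essentially the digest-qualified image URL) with the private key and verify it with the public key. This is an illustration only, with a throwaway local key; the real private key never leaves Cloud KMS.

```shell
# Throwaway local P-256 key pair standing in for the Cloud KMS key.
openssl ecparam -genkey -name prime256v1 -noout -out demo-key.pem
openssl ec -in demo-key.pem -pubout -out demo-pub.pem 2>/dev/null

# The payload to sign: a digest-qualified image URL (made-up value).
printf '%s' 'europe-west1-docker.pkg.dev/my-project/default/cloud-run-hello@sha256:061a...' > payload.txt
openssl dgst -sha256 -sign demo-key.pem -out payload.sig payload.txt

# Anyone holding the public key can check the signature; prints "Verified OK".
openssl dgst -sha256 -verify demo-pub.pem -signature payload.sig payload.txt
```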

cat > ./create_note_request.json << EOF
{
  "attestation": {
    "hint": {
      "human_readable_name": "Demo attestation authority"
    }
  }
}
EOF

curl -X POST \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
--data-binary @./create_note_request.json \
"https://containeranalysis.googleapis.com/v1/projects/$GCP_PROJECT_ID/notes/?noteId=demo-attestor-note"

PROJECT_NUMBER=$(gcloud projects describe "$GCP_PROJECT_ID" --format="value(projectNumber)")

BINAUTHZ_SA_EMAIL="service-$PROJECT_NUMBER@gcp-sa-binaryauthorization.iam.gserviceaccount.com"

cat > ./iam_request.json << EOF
{
  "resource": "projects/$GCP_PROJECT_ID/notes/demo-attestor-note",
  "policy": {
    "bindings": [
      {
        "role": "roles/containeranalysis.notes.occurrences.viewer",
        "members": [
          "serviceAccount:$BINAUTHZ_SA_EMAIL"
        ]
      }
    ]
  }
}
EOF

curl -X POST \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
--data-binary @./iam_request.json \
"https://containeranalysis.googleapis.com/v1/projects/$GCP_PROJECT_ID/notes/demo-attestor-note:setIamPolicy"


gcloud container binauthz attestors create demo-attestor \
--attestation-authority-note=demo-attestor-note \
--attestation-authority-note-project=$GCP_PROJECT_ID


gcloud beta container binauthz attestors public-keys add \
--attestor="demo-attestor" \
--keyversion="projects/$GCP_PROJECT_ID/locations/$GCP_REGION/keyRings/binauth-keys/cryptoKeys/bin-auth-demo/cryptoKeyVersions/1"

To enable Binary Authorization on our Cloud Run project we run the following commands, which essentially require all images used in the project to have an attestation from the demo-attestor that we created above:

cat > ./binauth-policy.yaml << EOF
defaultAdmissionRule:
  enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
  evaluationMode: REQUIRE_ATTESTATION
  requireAttestationsBy:
  - projects/$GCP_PROJECT_ID/attestors/demo-attestor
globalPolicyEvaluationMode: ENABLE
name: projects/$GCP_PROJECT_ID/policy
EOF

gcloud container binauthz policy import ./binauth-policy.yaml --project $GCP_PROJECT_ID

gcloud run services update cloud-run-hello --binary-authorization=default --region $GCP_REGION --project $GCP_PROJECT_ID

To test if the binary authorization is working as expected let us first try to deploy our previous image with SLSA Level 3 and SBOM to our Cloud Run service:

gcloud run deploy cloud-run-hello \
--image "$GCP_REGION-docker.pkg.dev/$GCP_PROJECT_ID/default/cloud-run-hello:cloud-build" \
--region $GCP_REGION \
--set-env-vars "COLOR=darkcyan" \
--project $GCP_PROJECT_ID

This is expected to fail with an error message similar to the one below:

ERROR: (gcloud.run.deploy) Revision 'cloud-run-hello-...' 
is not ready and cannot serve traffic. Container image
'europe-west1-docker.pkg.dev/.../default/cloud-run-hello@sha256:061a...'
is not authorized by policy.
Image europe-west1-docker.pkg.dev/.../default/cloud-run-hello@sha256:061a...
denied by attestor projects/.../attestors/demo-attestor:
No attestations found that were valid and signed by a key trusted
by the attestor

The part of the error message reading “denied by attestor projects/…/attestors/demo-attestor: No attestations found that were valid and signed by a key trusted by the attestor” clearly indicates that the missing attestation is what prevents the image from being deployed to the Cloud Run service.

We can now add our attestation to the image, which signals to the policy enforcer that the image is approved to run in this context. Because the image attestation is enforced at the digest rather than the tag level, we first need to get the fully qualified image URL including the content SHA before we can sign and attach the attestation. To deploy the image to Cloud Run we execute the following commands (note that we also change the background color from white to darkcyan to visually verify that the running service is updated):

IMAGE_URL="$(gcloud artifacts docker images describe "$GCP_REGION-docker.pkg.dev/$GCP_PROJECT_ID/default/cloud-run-hello:cloud-build" --format="value(image_summary.fully_qualified_digest)")"

gcloud beta container binauthz attestations sign-and-create \
--project="$GCP_PROJECT_ID" \
--artifact-url="$IMAGE_URL" \
--attestor="demo-attestor" \
--keyversion="projects/$GCP_PROJECT_ID/locations/$GCP_REGION/keyRings/binauth-keys/cryptoKeys/bin-auth-demo/cryptoKeyVersions/1"

gcloud run deploy cloud-run-hello \
--image "$IMAGE_URL" \
--region $GCP_REGION \
--set-env-vars "COLOR=darkcyan" \
--project $GCP_PROJECT_ID

Because the image now has the demo-attestor attestation the deployment succeeds and we can see the new version running in our browser.
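
As an aside on why we resolved the tag to a digest above: a tag is a mutable pointer, while a digest is the SHA-256 of the image manifest, so an attestation bound to a digest cannot be silently redirected to different content. A toy illustration with a plain file standing in for a manifest:

```shell
# A plain file standing in for an image manifest.
printf 'layers: [sha256:aaa...]\n' > manifest-v1.json

# A digest reference pins the exact manifest bytes.
DIGEST="sha256:$(sha256sum manifest-v1.json | cut -d' ' -f1)"
echo "pinned: example-registry/cloud-run-hello@$DIGEST"

# Publishing different content under the same tag yields a different digest,
# which is why a digest-bound attestation no longer matches the moved tag.
printf 'layers: [sha256:bbb...]\n' > manifest-v2.json
NEW_DIGEST="sha256:$(sha256sum manifest-v2.json | cut -d' ' -f1)"
[ "$DIGEST" != "$NEW_DIGEST" ] && echo "tag moved: attestation would no longer apply"
```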

Grand Finale: Automate Everything

In the previous steps we explored how we can layer transparency onto our image. We added SLSA Level 3 provenance to our image, generated an SBOM, and required that only images with a cryptographic attestation are allowed to be deployed to our Cloud Run service.

Getting here took quite a few manual steps that we should definitely automate in a repeatable build pipeline so that we don’t have to painfully repeat them for every build we perform. We again use Cloud Build to automate our build and deploy steps. For this we create a second Cloud Build manifest called cloudbuild-cicd.yaml. The purpose of this build is to orchestrate all the steps that we previously performed manually:

  1. Define a unique image tag that we can later resolve into a digest. For simplicity we use the $BUILD_ID of the orchestrating build.
  2. Submit the Cloud Build we have been using before to create a SLSA Level 3 image in Artifact Registry.
  3. Export the SBOM for the image that was just created.
  4. Extract the fully qualified image URL and add an attestation.
  5. Deploy to Cloud Run and change the color to make the new revision visible.

cat <<EOF>cloudbuild-cicd.yaml
steps:
- id: Build and Push Container Image
  name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    gcloud builds submit . --region \$_GCP_REGION --substitutions "_TAG=built-by-\$BUILD_ID"

- id: SBOM Export
  name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    gcloud artifacts sbom export --uri "\$_GCP_REGION-docker.pkg.dev/\$PROJECT_ID/default/cloud-run-hello:built-by-\$BUILD_ID"

- id: Attest Image
  name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    IMAGE_URL="\$\$(gcloud artifacts docker images describe "\$_GCP_REGION-docker.pkg.dev/\$PROJECT_ID/default/cloud-run-hello:built-by-\$BUILD_ID" --format="value(image_summary.fully_qualified_digest)")"
    gcloud beta container binauthz attestations sign-and-create \
      --artifact-url="\$\$IMAGE_URL" \
      --attestor="demo-attestor" \
      --keyversion="projects/\$PROJECT_ID/locations/\$_GCP_REGION/keyRings/binauth-keys/cryptoKeys/bin-auth-demo/cryptoKeyVersions/1"

- id: Deploy to Cloud Run
  name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    IMAGE_URL="\$\$(gcloud artifacts docker images describe "\$_GCP_REGION-docker.pkg.dev/\$PROJECT_ID/default/cloud-run-hello:built-by-\$BUILD_ID" --format="value(image_summary.fully_qualified_digest)")"
    gcloud run deploy cloud-run-hello \
      --image "\$\$IMAGE_URL" \
      --region \$_GCP_REGION \
      --set-env-vars "COLOR=palegreen" \
      --project \$PROJECT_ID
EOF

We then give our Cloud Build service account the necessary permissions to perform the steps above by assigning it the required roles.

gcloud services enable serviceusage.googleapis.com --project $GCP_PROJECT_ID

PROJECT_NUMBER=$(gcloud projects describe "$GCP_PROJECT_ID" --format="value(projectNumber)")

gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \
--member serviceAccount:$PROJECT_NUMBER@cloudbuild.gserviceaccount.com \
--role roles/cloudkms.signerVerifier

gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \
--member serviceAccount:$PROJECT_NUMBER@cloudbuild.gserviceaccount.com \
--role roles/containeranalysis.notes.attacher

gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \
--member serviceAccount:$PROJECT_NUMBER@cloudbuild.gserviceaccount.com \
--role roles/serviceusage.serviceUsageViewer

gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \
--member serviceAccount:$PROJECT_NUMBER@cloudbuild.gserviceaccount.com \
--role roles/binaryauthorization.attestorsEditor

gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \
--member serviceAccount:$PROJECT_NUMBER@cloudbuild.gserviceaccount.com \
--role roles/run.developer

gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \
--member serviceAccount:$PROJECT_NUMBER@cloudbuild.gserviceaccount.com \
--role roles/iam.serviceAccountUser

Finally we trigger our end-to-end build and deployment to our Cloud Run service:

gcloud builds submit --config cloudbuild-cicd.yaml \
--substitutions=_GCP_REGION=$GCP_REGION \
--project $GCP_PROJECT_ID \
--region $GCP_REGION

The Cloud Run service should now be updated and look similar to the screenshot below:

Where to go from here?

If you look closely at our application you will see that the unicorn is partying a little harder than it did when we started our journey. The unicorn now knows that the container it runs on came from a trusted source with a verifiable provenance and a clear list of dependencies.

In this blog post we started with a very basic container build and then layered the provenance features one step at a time to be able to discuss the value of each step. If you are interested in adding the described features to your project then your next steps to productionize this setup could include:

  • Instead of manually enabling the required Google Cloud APIs, creating the auxiliary resources and assigning permissions to the service accounts, you might want to leverage IaC and turn all of these into a Terraform configuration.
  • Instead of using docker builds directly you could use skaffold to build your image so that you can align your inner and outer development loops. With skaffold you also have the opportunity to retrieve the SHA digest of your built images directly and feed it into the SBOM export and ultimately Binary Authorization.
  • You should also move away from the manual gcloud builds submit invocations and instead use a Cloud Build trigger to start a Cloud Build on source-repository push events.
  • For the deployment task you can leverage Cloud Deploy to get more fine-grained control over the rollout of the application and promote it across SDLC stages. Cloud Deploy is based on skaffold, so the previous step aligns very well with extending the deployment.
