Centrally Managing Artifact Registry Container Image Vulnerabilities on Google Cloud: Part One

Dan Peachey
Google Cloud - Community
6 min read · Feb 4, 2021

Both Artifact Registry and its predecessor, Container Registry, provide image scanning to detect vulnerabilities within the image. The results of these scans are stored with the associated container image at the project level. As such, any security engineer responsible for managing vulnerabilities across an organization requires access to each project. This can conflict with an organization's security policies regarding separation of duties and least privilege.

Many enterprise organizations have a centralized solution, such as Security Command Center or their own on-premises SIEM, for managing all of their threats, vulnerabilities and security incidents. Ideally, we would aggregate all image vulnerabilities into a single location for centralized management.

In part one of this two-part article, we will look at using Pub/Sub and Cloud Functions to store project-level container image vulnerabilities in a centralized service or location. For demonstration purposes we'll use a Google Cloud Storage (GCS) bucket. In part two, we will expand on this to see how we can use custom Security Command Center sources to create and manage vulnerability findings directly in SCC, as well as explore other possibilities such as BigQuery or a third-party SIEM.

Getting Started

Firstly, this post is designed for Google Cloud organizations and not personal GCP accounts. The difference is that organization accounts have an organization node at the root, under which all folders and projects exist. An organization is related to a single domain (e.g. my-company.com). For personal cloud accounts, projects exist independently and are not under a parent organization.

You can create a free organization account with any domain name you own by following the setup instructions for the free edition of Google Cloud Identity.

This post also assumes that you have basic Google Cloud Platform knowledge and are comfortable with creating projects and using the Cloud Shell and command line tools.

Create the Projects and GCS Bucket

We will create two projects: a shared project that will hold the GCS bucket where we will store all vulnerabilities, and a source project that will contain the images that get scanned for vulnerabilities. In reality, you would have multiple source projects that all write to the shared project, as shown in the diagram below. Additionally, you wouldn't typically use GCS as the storage mechanism in production, but for this demonstration it suits our purpose.

Open Cloud Console, create two projects and set up billing for them, noting the IDs that you use for each one. We will refer to the shared project ID as <project-id-shared> and the source project ID as <project-id-source>. Next, we create the GCS bucket in the shared project where we will store the vulnerabilities. Open Cloud Shell and run the following command, replacing <project-id-shared> with your project ID and choosing a globally unique value for <bucket-name>.

gsutil mb -l us-central1 -p <project-id-shared> gs://<bucket-name>

Set up the Source Project, Service Account, Artifact Registry and Pub/Sub

The source project is where our images will be stored and scanned. We will use a Pub/Sub topic that is automatically created by the vulnerability scanner, which then triggers a Cloud Function that reads each vulnerability and writes it into the bucket in the shared project.

In Cloud Shell run the following commands with your <project-id-source> value to enable the required services on the source project.

gcloud config set project <project-id-source>
gcloud services enable cloudresourcemanager.googleapis.com
gcloud services enable artifactregistry.googleapis.com
gcloud services enable cloudbuild.googleapis.com
gcloud services enable containerscanning.googleapis.com

Let’s create the service account that the Cloud Function will run under. It will need permissions to read container vulnerabilities in the source project and permissions to write to the bucket in the shared project. Provide your own service account value for <service-account-source>.

gcloud iam service-accounts create <service-account-source> \
--description="Service account to process image scan vulnerabilities" \
--display-name="Image Vulnerability Processor"

We grant permission to read vulnerabilities with the Container Analysis Occurrences Viewer role. Note the format of the email, which needs both the <service-account-source> and <project-id-source> values replaced. We will also grant the GCS Object Creator role on the bucket so the service account can write to it. To comply with the principle of least privilege, the service account does not need to read or delete objects, so we limit the permissions to create only.

gcloud projects add-iam-policy-binding <project-id-source> \
--member=serviceAccount:<service-account-source>@<project-id-source>.iam.gserviceaccount.com \
--role=roles/containeranalysis.occurrences.viewer
gsutil iam ch serviceAccount:<service-account-source>@<project-id-source>.iam.gserviceaccount.com:objectCreator gs://<bucket-name>

Next, let’s create an Artifact Registry in the source project that will store the container images. Provide your own value for <repo-name>.

gcloud artifacts repositories create <repo-name> --location=us-central1 --repository-format=docker --project=<project-id-source>

When Docker images are pushed to the repository, they will automatically get scanned by the Container Scanning API that we enabled earlier. In addition to showing the found vulnerabilities in the console, the Container Scanning API also publishes each vulnerability occurrence to auto-created Pub/Sub topics.

These topics get created when the first image gets scanned, which causes a problem for us. To trigger a Cloud Function from a topic, the topic must first exist, so we would not be able to attach the Cloud Function to the topic until we push our first image and have it scanned. Luckily, the solution is simple: we can pre-create the topic ourselves, enabling us to create and attach the Cloud Function right away. When an image is scanned, the scanner will detect that the topic already exists and will not try to recreate it.

gcloud pubsub topics create container-analysis-occurrences-v1 --project=<project-id-source>

Create the Cloud Function

We will now create the Cloud Function that will be triggered by the container-analysis-occurrences-v1 topic and will write the vulnerability to the bucket in the shared project. While this can be accomplished using the command line, it is much easier to use the console.

In the console, make sure you are in the source project, and navigate to Pub/Sub. Click on “container-analysis-occurrences-v1” and then click “+Trigger Cloud Function” at the top of the screen.

Change the function name to “image-vuln-cf-trigger”, leave everything else as default and click Save.

Expand the Variables, Networking and Advanced Settings section. For the Service Account, select the service account that we created earlier, then select the environment variables tab and add a runtime environment variable named “BUCKET_NAME”. The value should be the <bucket-name> you created earlier.

Click next and select Python 3.7 for the code runtime. Paste in the following code:
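The original embedded snippet is not reproduced here, but a minimal sketch of such a handler might look like the following. It assumes the Pub/Sub message data is base64-encoded JSON containing the occurrence resource name (e.g. `{"name": "projects/<p>/occurrences/<id>", ...}`), fetches the full occurrence via the Container Analysis client, and writes it as a JSON object under a per-project prefix in the bucket named by the `BUCKET_NAME` environment variable. The helper names and the object-path layout are illustrative, not the article's original code, and a sketch like this would also need `google-cloud-storage` in requirements.txt.

```python
import base64
import json
import os


def parse_occurrence_name(event: dict) -> str:
    """Extract the occurrence resource name from a Pub/Sub event.

    The message data is base64-encoded JSON such as:
    {"name": "projects/<p>/occurrences/<id>", "kind": "VULNERABILITY", ...}
    """
    payload = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    return payload["name"]


def object_path_for(occurrence_name: str) -> str:
    """Map projects/<p>/occurrences/<id> to <p>/<id>.json in the bucket."""
    _, project, _, occurrence_id = occurrence_name.split("/")
    return f"{project}/{occurrence_id}.json"


def image_vuln_pubsub_handler(event, context):
    """Entry point: triggered by container-analysis-occurrences-v1."""
    # Imported lazily so the pure helpers above have no cloud dependencies.
    from google.cloud import storage
    from google.cloud.devtools import containeranalysis_v1

    name = parse_occurrence_name(event)

    # Fetch the full occurrence details from the Container Analysis API.
    grafeas = containeranalysis_v1.ContainerAnalysisClient().get_grafeas_client()
    occurrence = grafeas.get_occurrence(name=name)

    # Write the occurrence as JSON into the shared bucket.
    bucket = storage.Client().bucket(os.environ["BUCKET_NAME"])
    blob = bucket.blob(object_path_for(name))
    blob.upload_from_string(
        type(occurrence).to_json(occurrence), content_type="application/json"
    )
```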

Set the entry point to be “image_vuln_pubsub_handler”.

Next, select the requirements.txt file and paste in the following:

google-cloud-securitycenter
google-cloud-containeranalysis

Click Deploy.

Push an Image to Artifact Registry and Trigger the Scan

We’re now ready to test. We’ll create a simple Docker image with some known vulnerabilities and push it to the Artifact Registry repository in our source project. This will trigger the vulnerability scan; each found vulnerability occurrence will be published to the Pub/Sub topic, triggering our Cloud Function, which writes it to the bucket in our shared project.

In Cloud Shell, open the code editor and create a file called Dockerfile with the following code:

FROM nginx

On the command line run:

gcloud auth configure-docker us-central1-docker.pkg.dev
docker build --tag nginx .
docker tag nginx us-central1-docker.pkg.dev/<project-id-source>/<repo-name>/nginx-test:staging
docker push us-central1-docker.pkg.dev/<project-id-source>/<repo-name>/nginx-test:staging

Now we can go to the console and view the pushed image in Artifact Registry in the source project. Click on the image name and notice the vulnerabilities found (the scan may be in progress).

You can click through and examine each of the vulnerabilities found.

Next, switch to the shared project and open Cloud Storage to browse your bucket. Inside, you will now see a folder structure for your source project that contains all of the vulnerability occurrences.
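As a quick sanity check of what landed in the bucket, a small script can list and summarize the stored objects. This is a hypothetical helper, assuming one JSON object per occurrence stored under a per-project prefix (e.g. `<project-id>/<occurrence-id>.json`); `summarize_paths` is a pure function, and the bucket name comes from a `BUCKET_NAME` environment variable for illustration.

```python
import os
from collections import Counter


def summarize_paths(paths):
    """Count stored occurrence objects per top-level (project) prefix."""
    return Counter(p.split("/", 1)[0] for p in paths if "/" in p)


def list_vulnerability_objects(bucket_name):
    # Imported lazily so summarize_paths has no cloud dependency.
    from google.cloud import storage

    client = storage.Client()
    return [blob.name for blob in client.list_blobs(bucket_name)]


if __name__ == "__main__":
    names = list_vulnerability_objects(os.environ["BUCKET_NAME"])
    for project, count in sorted(summarize_paths(names).items()):
        print(f"{project}: {count} occurrence(s)")
```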

Typically, you would repeat this for each of your projects that uses the Container Scanning API to aggregate all of your image vulnerabilities into a central location.

In part two of this article, we will look at how we can use custom SCC sources to publish image vulnerability findings into Security Command Center as well as discuss how you would approach integrating with your own SIEM.
