On Amazon EKS and Inspector

Dirk Michel
12 min read · Sep 14, 2023


The field of container vulnerability management is varied and full of options, and the cloud-native security ecosystem continues to innovate and grow. On the one hand, developers aim to regularly choose and update 3rd party open-source dependencies as part of application security workflows. Updating base images and operating system packages is also part of the supply chain security stream. We create transparency about these hardening efforts by attaching software bills of materials (SBOM) to our container images that catalogue their 3rd party contents. Additionally, we cryptographically sign container images and SBOMs, indicating that the publisher has created and approved their release. On the other hand, consumers increasingly mandate software inventory transparency and the use of signatures to enable verification of provenance and integrity.

Meeting organisational supply chain security objectives and aligning with customer expectations can be time-intensive, but we can often look to AWS-managed security services to help offload undifferentiated heavy lifting.

Amazon ECR and its integration with Amazon Inspector can support our container hardening efforts and help meet customer transparency and compliance requirements.

Over time, AWS native security services have increasingly incorporated container and Kubernetes features that help secure Amazon Elastic Kubernetes Service (Amazon EKS) clusters and their surrounding AWS services, such as the Amazon Elastic Container Registry (Amazon ECR). The concept of security in the cloud extends to and includes private or public container registries, which are the essential point of exchange between software publishers and consumers: publishers push and release container images into the registry, and consumers pull and download them from it.

Amazon Inspector’s SBOM transparency mechanism is based on Amazon S3, where exported SBOM files are stored. Alternatively, SBOMs can be added to a container image attestation and stored as an OCI-compliant artefact within Amazon ECR, consolidating the artefacts into one registry system. Consumers, such as Kubernetes clusters, pull the artefacts from a centralised registry and perform automated admission, verification and deployment workflows, as illustrated in the reference diagram below.

Container SCA scan and SBOM file generation with Amazon Inspector

For those on a tight time budget: The TL;DR of the following sections is to show how the Amazon Inspector managed service for vulnerability management can be introduced to provide software composition analysis (SCA) and SBOM repositories for container images to drive supply chain security efforts forward. The SCA scan results help identify vulnerable dependencies we may need to update or replace, and the bill of materials data accelerates our ability to react to new vulnerability notifications and locate container images that contain them. The final section shows how SBOMs can be externalised as attested OCI-compliant artefacts to meet compliance requirements and customer expectations.

To follow along, you can install Anaconda as your Python virtual environment, cosign, syft, finch or your favourite container management tools, and the AWS Cloud Development Kit (CDK). CDK lets us use supported programming languages to write compact code that generates AWS CloudFormation. The snippets provide working code to illustrate some of the configuration options.

Let’s do it.

1. Container vulnerability scans with Amazon Inspector

The Amazon ECR service is structured around the concepts of registries and repositories. Registries are regional resources and expose registry-level options, features and configuration parameters, such as pull-through cache and registry replication. Registries have container image repositories, each holding tagged container image versions. Repositories also expose their respective options, features and configuration parameters.

Amazon ECR is integrated with the Amazon Inspector vulnerability management service, and configuration takes place at the registry level, which is then applied to selected repositories via repository filters.

Amazon ECR offers two vulnerability scan options: basic scanning and enhanced scanning. The basic scan uses the “built-in” Amazon ECR scanning feature that provides manual and automatic scan triggers: a once-a-day manual scan quota per image and an automatic one-off scan on push. The enhanced scan leverages Amazon Inspector, a purpose-built, specialised vulnerability management service. The integration with Amazon Inspector also opens up supplementary use cases based on its integration with other AWS security services, such as AWS Security Hub.
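
For illustration, the basic scan’s manual trigger can be exercised against an image that has already been pushed, while the registry is still on basic scanning; the repository and tag values below are placeholders.

$ aws ecr start-image-scan --repository-name $REPOSITORY --image-id imageTag=0.01 --region $REGION
$ aws ecr wait image-scan-complete --repository-name $REPOSITORY --image-id imageTag=0.01 --region $REGION
$ aws ecr describe-image-scan-findings --repository-name $REPOSITORY --image-id imageTag=0.01 --region $REGION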

Enabling enhanced scanning at the Amazon ECR registry level initiates the creation of resources within Amazon Inspector, at which point charges apply. Amazon Inspector executes the scans and delivers the results back to Amazon ECR, from where they can be accessed. Notably, Amazon Inspector supports a wide range of operating systems and programming languages, providing improved findings and relevance compared to scanners backed by generic or un-curated vulnerability databases.

The following AWS CDK stack snippet illustrates the Amazon ECR registry-level properties for enhanced scanning.

"""Provision Amazon ECR enhanced scanning with Amazon Inspector"""
from constructs import Construct
from aws_cdk import (
Duration,
Stack,
aws_iam as iam,
aws_kms as kms,
aws_ecr as ecr,
custom_resources as cr,
RemovalPolicy,
ArnFormat,
)

class EcrInspectorStack(Stack):

def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
super().__init__(scope, construct_id, **kwargs)

### 1. Update ECR private registry-level properties
# a. Define the input dictionary content for the putRegistryScanningConfiguration AwsSdk call

onUpdateRegistryParams = {
"scanType": 'ENHANCED',
"rules": [
{
'scanFrequency': 'CONTINUOUS_SCAN',
'repositoryFilters': [
{
'filter': 'string',
'filterType': 'WILDCARD'
},
]
},
]
}

# b. Define a custom resource to make an putRegistryScanningConfiguration AwsSdk call to the Amazon ECR API

registry_cr = cr.AwsCustomResource(self, "EnhancedScanningEnabler",
on_create=cr.AwsSdkCall(
service="ECR",
action="putRegistryScanningConfiguration",
parameters=onUpdateRegistryParams,
physical_resource_id=cr.PhysicalResourceId.of("Parameter.ARN")),
policy=cr.AwsCustomResourcePolicy.from_sdk_calls(
resources=cr.AwsCustomResourcePolicy.ANY_RESOURCE
)
)

# c. Define a IAM permission policy for the custom resource

registry_cr.grant_principal.add_to_principal_policy(iam.PolicyStatement(
effect=iam.Effect.ALLOW,
actions=["inspector2:ListAccountPermissions", "inspector2:Enable", "iam:CreateServiceLinkedRole"],
resources=["*"],
)
)

### 2. Provision a new Amazon ECR private repository
# Create the ECR container image repository with the ECR construct

repository = ecr.Repository(self, "my-ecr-image-repository",
repository_name="my-ecr-image-repository",
image_scan_on_push=True,
image_tag_mutability=ecr.TagMutability.IMMUTABLE,
encryption=ecr.RepositoryEncryption.KMS,
)

# Apply a life cycle rule to the repository we just provisioned

repository_lcr = repository.add_lifecycle_rule(
max_image_age=Duration.days(30)
)

### 3. Create a KMS asymmetric signing key with an alias

key = kms.Key(self, "MyCosignSigningKey",
key_spec=kms.KeySpec.RSA_4096,
key_usage=kms.KeyUsage.SIGN_VERIFY,
alias="signingkey"
)

If you haven’t already done so, enable the AWSServiceRoleForAmazonInspector2 IAM service-linked role by activating the Amazon Inspector service before running the CDK stack.
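
One way to activate the service and create the service-linked role is, for example, a one-off CLI call; the account status call confirms that the ECR resource type has been enabled.

$ aws inspector2 enable --resource-types ECR --region $REGION
$ aws inspector2 batch-get-account-status --region $REGION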

The first block of the CDK stack defines the registry-level properties we need for enhanced scanning. The ECR Construct Library does not model registry-level properties, such as the putRegistryScanningConfiguration action we need; hence, we define a CDK custom resource that makes the AWS SDK API call for us. AwsSdkCall is great for small patches or configuration changes that AWS CloudFormation doesn’t support. The putRegistryScanningConfiguration call takes a dictionary as input that defines our desired registry configuration: the scan type is set to enhanced scanning, which uses Amazon Inspector; the scan frequency is set to continuous scanning, which automatically triggers re-scans for a definable period; and a repository filter selects which repositories should be scanned.

The continuous scanning option of Amazon Inspector is an important feature, as it is this mechanism that helps identify container images that are affected by newly disclosed vulnerabilities. Image re-scans are triggered when the threat intelligence feeds and vulnerability databases behind Amazon Inspector receive updates.

Sidebar: The continuous scan duration can be changed using the Amazon Inspector settings, not the Amazon ECR repository settings. Supported scan durations are Lifetime (default), 180 days, and 30 days.
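
As a sketch, the re-scan duration can be adjusted through the Amazon Inspector configuration API; the parameter shape below reflects the inspector2 CLI at the time of writing and should be confirmed against the current documentation.

$ aws inspector2 update-configuration --ecr-configuration rescanDuration=DAYS_30 --region $REGION
$ aws inspector2 get-configuration --region $REGION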

The second CDK block provisions the Amazon ECR repository, courtesy of the ECR Construct Library, and we declare some of the available configuration options, such as tag immutability. Turning on tag immutability for a repository stops image tags from being replaced and overwritten: image tags are otherwise mutable and do not uniquely identify an image version, because they can be replaced with another image bearing the same tag. Therefore, tag immutability helps ensure that the image version that was scanned and verified is, in fact, the version that consumers pull.

The third and final block provisions an Amazon KMS asymmetric key, which we can use to sign container images. Using an Amazon KMS key alias over a key ID has many advantages, including being a user-friendly name and enabling seamless key rotation, as we can change the KMS key associated with an alias at any time.
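
For example, rotating to a new signing key later is a matter of repointing the alias, without touching consumers that reference alias/signingkey; the new key ID is a placeholder.

$ aws kms update-alias --alias-name alias/signingkey --target-key-id $NEW_KEY_ID --region $REGION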

The following snippet uses finch to build and push a sample container image to the Amazon ECR repository we created with CDK, and then we retrieve the scan results through the AWS CLI. Any Dockerfile will do.

$ finch init vm
$ cd path/to/dockerfile
$ finch build .
[+] Building 3.8s (5/5)
[+] Building 3.9s (5/5) FINISHED
$ finch tag fab04ffa12dd $ACCOUNT.dkr.ecr.$REGION.amazonaws.com/$REPOSITORY:0.01
$ aws ecr get-login-password --region $REGION | finch login --username AWS --password-stdin $ACCOUNT.dkr.ecr.$REGION.amazonaws.com
Login Succeeded
$ finch push $ACCOUNT.dkr.ecr.$REGION.amazonaws.com/$REPOSITORY:0.01
pushing as a reduced-platform image
$ aws ecr describe-image-scan-findings --repository-name $REPOSITORY --image-id imageTag=0.01 --region $REGION
{
"imageScanFindings": {
"findings": [],
"imageScanCompletedAt": "2023-09-13T19:30:13+01:00",
"vulnerabilitySourceUpdatedAt": "2023-09-13T11:13:09+01:00",
"findingSeverityCounts": {}
},
"registryId": "123456789012",
"repositoryName": "myrepo",
"imageId": {
"imageDigest": "sha256:fab04ffa12dd20b67c4cad7aa47b153efbe606ab68e281c500f1518b81ad2b71",
"imageTag": "0.01"
},
"imageScanStatus": {
"status": "COMPLETE",
"description": "The scan was completed successfully."
}
}
$ aws ecr get-login-password --region $REGION | cosign login --username AWS --password-stdin $ACCOUNT.dkr.ecr.$REGION.amazonaws.com
$ cosign sign --key awskms:///alias/signingkey $ACCOUNT.dkr.ecr.$REGION.amazonaws.com/$REPOSITORY:0.01
$ cosign verify --key awskms:///alias/signingkey $ACCOUNT.dkr.ecr.$REGION.amazonaws.com/$REPOSITORY:0.01
[+] The following checks were performed on each of these signatures:
- The cosign claims were validated
- Existence of the claims in the transparency log was verified offline
- The signatures were verified against the specified public key

That’s it. The container image scan was completed on push, the describe-image-scan-findings resulted in a clean scan with empty findings and findingSeverityCounts, and the container image was signed with our Amazon KMS private signing key and verified with our Amazon KMS public key.

Using the Sigstore cosign utility with Amazon KMS signing keys helps avoid the challenges of generating, storing, securing, and managing public key infrastructure (PKI) ourselves. We can distribute our Amazon KMS public key to enable consumer-side signature verification workflows: For example, Kubernetes admission controllers can verify signatures before admitting container images into the cluster. Kyverno, a CNCF incubating project, deploys as an admission controller and supports cosign image verification through its cloud-native declarative policy-as-code implementation. The Amazon KMS public key can be stored within the cluster as a Kubernetes Secret, which can then be referenced by a Kyverno image verification policy to verify incoming container images.
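
As a sketch of that distribution step, the public half of the KMS signing key can be exported with cosign and stored as a Kubernetes Secret that a Kyverno verifyImages policy can reference; the secret name and namespace below are illustrative assumptions.

$ cosign public-key --key awskms:///alias/signingkey > cosign.pub
$ kubectl create secret generic cosign-pub --namespace kyverno --from-file=cosign.pub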

Sidebar: Alternatively, we can use the cosign keyless signing option, which generates ephemeral key pairs for every signing action, exchanges them for short-lived signing certificates from the Sigstore certificate authority, creates the image signature file, and pushes it to Amazon ECR. The signing process concludes with an entry of the public certificate in the append-only ledger of the Sigstore transparency log, making the signatures and public keys discoverable and verifiable beyond the expiration and deletion of the short-lived signing certificate and private key.

Amazon Inspector findings are then used to inform vulnerability assessment reviews and remediation cycles. Amazon Inspector accelerates remediation work by decorating its findings with additional descriptions, adding its own Amazon Inspector score and severity level, and providing curated remediation guidance. The bulk of the remediation work typically involves upgrading 3rd party packages and libraries to an appropriate version that addresses the vulnerability. Programming languages handle dependency management differently, but the general theme remains broadly the same. Then, the application artefacts are re-compiled with the updated dependencies, containers are re-built, pushed to Amazon ECR and re-scanned. The cycle repeats on a schedule or when new vulnerabilities are disclosed.
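
Findings can also be pulled programmatically to feed remediation backlogs; the sketch below queries the Amazon Inspector API for findings scoped to the repository created earlier.

$ aws inspector2 list-findings --filter-criteria '{"ecrImageRepositoryName":[{"comparison":"EQUALS","value":"my-ecr-image-repository"}]}' --region $REGION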

Sidebar: Individual scan results can also be accessed from Amazon Inspector, and its console user interface provides supplementary dashboarding, aggregation, and reporting functionality across all scans.

2. Container SBOM repository with Amazon Inspector

In addition to accelerating vulnerability analysis workflows, Amazon Inspector can generate SBOM files for container images residing on Amazon ECR. Amazon Inspector exports container image SBOM files into Amazon S3, creating an SBOM repository from which they can be further analysed, processed, and searched.

The following AWS CDK stack snippet creates the prerequisite Amazon S3 bucket for Amazon Inspector SBOM exports.

""" This stack provisions an S3 bucket for Amazon Inspector SBOM exports """
from constructs import Construct
from aws_cdk import (
Duration,
Stack,
aws_iam as iam,
aws_s3 as s3,
aws_kms as kms,
RemovalPolicy,
)

class SbomS3Stack(Stack):

def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
super().__init__(scope, construct_id, **kwargs)

### Create Sbom S3 bucket
# Provision the S3 bucket resource with desired configuration options

bucket = s3.Bucket(self, "SbomBucket",
bucket_name="sbom-bucket",
auto_delete_objects=True,
versioned=True,
bucket_key_enabled=True,
removal_policy=RemovalPolicy.DESTROY,
block_public_access=s3.BlockPublicAccess.BLOCK_ALL,
enforce_ssl=True,
encryption=s3.BucketEncryption.KMS,
intelligent_tiering_configurations=[
s3.IntelligentTieringConfiguration(
name="my_s3_tiering",
archive_access_tier_time=Duration.days(90),
deep_archive_access_tier_time=Duration.days(180),
prefix="prefix",
tags=[s3.Tag(
key="key",
value="value"
)]
)],
lifecycle_rules=[
s3.LifecycleRule(
noncurrent_version_expiration=Duration.days(7)
)
],
)

# Create S3 bucket policy for inspector sbom export permissions

add_s3_policy = bucket.add_to_resource_policy(
iam.PolicyStatement(
effect=iam.Effect.ALLOW,
actions=["s3:GetObject","s3:PutObject","s3:AbortMultipartUpload"],
resources=[bucket.arn_for_objects("*")],
principals=[iam.ServicePrincipal("inspector2.amazonaws.com")],
conditions={
"ArnLike": {
"aws:SourceArn": "arn:aws:inspector2:"
+ "eu-west-1"
+ ":"
+ "123456789012"
+ ":report/*"
},
"StringEquals": {
"aws:SourceAccount": "123456789012"
}
},
)
)

The first section of the stack uses the AWS CDK S3 Construct Library to define the S3 bucket resource and some of its property options, such as versioning, encryption details, and lifecycle management definitions. The second block of the stack creates an S3 bucket policy and attaches it to the bucket we provision in the first block. Replace the account ID and region used in the SourceAccount and SourceArn conditions before you tie this stack into your CDK application.

Now, we can have Amazon Inspector export SBOM files into our bucket in either SPDX 2.3-compatible (JSON) or CycloneDX 1.4 (JSON) format. The export definition requires the SBOM format choice, an Amazon S3 bucket destination URI, an Amazon KMS key ARN, and an optional filter identifying the Amazon ECR repositories for which we want the SBOM exports.

At this point, the SBOM export functionality is relatively new and is not yet reflected in the CDK constructs, but the available configuration options tend to grow as new features mature and gain adoption. For now, we can trigger SBOM exports on demand via the Amazon Inspector CreateSbomExport API call, as shown below.

$ aws inspector2 create-sbom-export --report-format=SPDX_2_3 --s3-destination bucketName=$BUCKET,keyPrefix=sbom,kmsKeyArn=$KMSKEYARN --region=$REGION

Remember to confirm that the Amazon KMS Customer Managed Key used to encrypt the Amazon S3 bucket has a Key Policy attached that allows Amazon Inspector to use it; otherwise, the SBOM files cannot be written into the bucket.
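
A hedged sketch of the kind of statement to merge into that key policy is shown below; the action set follows the Amazon Inspector export guidance and should be checked against the current documentation, and the account ID and region are placeholders.

$ cat <<'EOF' > inspector-kms-statement.json
{
  "Sid": "Allow Amazon Inspector to use the key",
  "Effect": "Allow",
  "Principal": { "Service": "inspector2.amazonaws.com" },
  "Action": [ "kms:GenerateDataKey*", "kms:Decrypt" ],
  "Resource": "*",
  "Condition": {
    "StringEquals": { "aws:SourceAccount": "123456789012" },
    "ArnLike": { "aws:SourceArn": "arn:aws:inspector2:eu-west-1:123456789012:report/*" }
  }
}
EOF
# Merge this statement into the existing key policy document, then apply it with
# aws kms get-key-policy / aws kms put-key-policy.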

Calls to the Amazon Inspector API can be used to trigger one-off SBOM exports, but we can also look towards Amazon EventBridge to automate the export triggers, for example upon Amazon ECR image push completed events.
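
As a sketch, an EventBridge rule can match successful ECR image pushes; the rule target, for example a small Lambda function that calls CreateSbomExport, is omitted here and would need to be added separately.

$ aws events put-rule --name sbom-export-on-ecr-push --event-pattern '{"source":["aws.ecr"],"detail-type":["ECR Image Action"],"detail":{"action-type":["PUSH"],"result":["SUCCESS"]}}' --region $REGION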

Sidebar: The SBOM repository can also help establish whether container images running on our Amazon EKS clusters contain potentially vulnerable 3rd party dependencies. The Kubernetes API server keeps track of the deployed container image tags, which can be extracted and then cross-referenced against the image contents catalogued in the SBOM repository.
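
For example, the deployed image inventory can be extracted from the Kubernetes API like this:

$ kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" | tr -s '[[:space:]]' '\n' | sort -u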

With SBOM files being exported into Amazon S3, we can search them for vulnerable dependency versions identified by CVE advisory and alert notification services, such as MITRE, or through Amazon Inspector scan result findings.
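
For example, once an export completes, the SPDX JSON documents can be pulled down and queried for a package of interest; the key prefix matches the earlier export call, and the package name is illustrative.

$ aws s3 sync s3://$BUCKET/sbom ./sboms --region $REGION
$ jq -r '.packages[]? | select(.name=="openssl") | [.name, .versionInfo] | @tsv' ./sboms/*.json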

3. Transparency and Compliance

SBOM repositories based on Amazon S3 can effectively accelerate internal vulnerability management processes for software publishers, but they are not optimal for providing external access. Amazon S3 SBOM repositories simply contain the exported SBOM JSON files, which in most cases won’t be sufficient to achieve the desired degree of compliance.

We’d instead want to distribute SBOM files alongside container images on Amazon ECR. Consumers can then pull container images and SBOMs directly from Amazon ECR repositories and integrate them into their verification and security workflows. One way to achieve that is to embed the SBOM as metadata into an in-toto attestation and attach it to its corresponding container image as an OCI-compliant artefact.

Sidebar: As described, cryptographic signatures prove that the holder of a matching private key signed the artefact. However, signatures alone do not provide the additional metadata and intent that compliance frameworks such as Supply-chain Levels for Software Artifacts (SLSA) require. SLSA is a compliance standard often adopted and referenced for its incremental compliance levels and milestones, allowing us to make directional improvements towards higher compliance maturity.

We can generate signed attestations that provide verifiable information required for SLSA with the Syft utility. The actions that Syft performs include: Pulling an image from Amazon ECR, scanning the layers to catalogue the operating system and application dependency package names, versions, licensing, and copyright information, producing the SBOM output, embedding it into an attestation, signing the attestation, and pushing it up to Amazon ECR as an OCI-compliant artefact. The below snippet illustrates this.

$ aws ecr get-login-password --region $REGION | syft login --username AWS --password-stdin $ACCOUNT.dkr.ecr.$REGION.amazonaws.com
$ syft $ACCOUNT.dkr.ecr.$REGION.amazonaws.com/$REPOSITORY:0.01 --scope all-layers -o spdx-json=sbom.spdx.json
$ syft attest --key /path/to/cosign.key -o spdx-json $ACCOUNT.dkr.ecr.$REGION.amazonaws.com/$REPOSITORY:0.01

Syft leverages the Sigstore cosign project to generate attested and signed SBOM files. Attested SBOMs can also be generated from within a build pipeline.
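
On the consumer side, the attestation can then be verified and its SBOM payload inspected with cosign, for example as sketched below, using the public half of the key pair that produced the attestation.

$ cosign verify-attestation --key cosign.pub --type spdxjson $ACCOUNT.dkr.ecr.$REGION.amazonaws.com/$REPOSITORY:0.01 | jq -r '.payload' | base64 --decode | jq '.predicateType'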

Conclusions

Adopting Amazon ECR with enhanced scanning can significantly accelerate teams operationalising their supply chain security efforts. Amazon Inspector is the underlying managed service for software composition analysis and provides the inputs into an effective vulnerability management workstream. Development teams receive curated scan results that reduce noise and false positives as part of the remediation cycles and can focus on updating or replacing 3rd party dependencies that address the essential findings.

Amazon Inspector can be a valuable contributor to creating SBOM repositories. Acting on new vulnerability notifications is a critical use case, and SBOM repositories enable quick identification of images containing a specific named package version.

Externalising SBOMs is not yet a managed option within Amazon Inspector, but SBOM generators such as Syft can help achieve that by embedding SBOMs into attestations and storing them as OCI-compliant artefacts on Amazon ECR.


Dirk Michel

SVP SaaS and Digital Technology | AWS Ambassador. Talks Cloud Engineering, Platform Engineering, Release Engineering, and Reliability Engineering.