Tips and Traps with Amazon Inspector v2

Matt Gillard
8 min read · Jan 2, 2022


Update 28 Jan 2022: The issues raised are mostly corrected now. Current state of play is documented in the summary at the end of the article.

Originally this was going to be a collection of tips on how the new and improved Amazon Inspector works in a large AWS organisation. Unfortunately, as it turns out, there are more traps than tips. In fact, I would go as far as to say the service has shortcomings that make it unsuitable for enterprise deployment at this time.

Photo by Elizaveta Dushechkina on Unsplash

What is Inspector?

For a few years now, AWS has had a service called Amazon Inspector. The original service provides an agent that you install on supported EC2 instances, which scans for known security vulnerabilities and misconfigurations based on CIS benchmarks. It also has an agentless component called Network Reachability that reports on potential network misconfiguration. Unfortunately, it had no integration with AWS Organizations, no container integration, and it seemed like a stagnant product, left to wither on the AWS vine.

With the re-launch during re:Invent, Amazon Inspector v2 is now the new Amazon Inspector (AWS actions inspector2:*), while the old Inspector still exists and is now known as Amazon Inspector Classic (AWS actions inspector:*).

The remainder of this post discusses the new, relaunched Amazon Inspector only.

Inspector v2 Overview

The new service almost feels like AWS started from scratch. There is full Organizations integration, Security Hub integration, continuous scanning of ECR repositories (replacing the less useful, but free, basic scanning capability), agentless scanning of EC2 instances, and a dashboard detailing scan results and highlighting risk areas.

When configuring an account for use with Amazon Inspector you select whether you want ECR and/or EC2 scanning. You can choose either or both on a per account basis.
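
For a single standalone account, enabling the service can also be done from the CLI. A minimal sketch, run in the target account and region (batch-get-account-status just confirms the result):

# Enable EC2 and ECR scanning for the current account
$ aws inspector2 enable --resource-types EC2 ECR
# Confirm the enablement status
$ aws inspector2 batch-get-account-status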

EC2 Scanning

Amazon Inspector v2 requires that your EC2 instances have the SSM Agent running, that they have an instance role that lets them report back to Systems Manager (eg: using the managed policy AmazonSSMManagedInstanceCore), and that you have set up Systems Manager in your AWS accounts.
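
If your instances are not already managed by Systems Manager, the bulk of the setup is attaching an instance profile that carries that managed policy. A rough sketch with the CLI follows; the role, profile and instance IDs are placeholders for illustration only.

# Create a role EC2 can assume, with the SSM managed policy attached
$ aws iam create-role --role-name InspectorSsmRole --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"ec2.amazonaws.com"},"Action":"sts:AssumeRole"}]}'
$ aws iam attach-role-policy --role-name InspectorSsmRole --policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
# Wrap the role in an instance profile and attach it to the instance
$ aws iam create-instance-profile --instance-profile-name InspectorSsmProfile
$ aws iam add-role-to-instance-profile --instance-profile-name InspectorSsmProfile --role-name InspectorSsmRole
$ aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 --iam-instance-profile Name=InspectorSsmProfile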

When EC2 scanning is selected, it is important to note that no additional agent is installed on your instances; the SSM Agent is leveraged. The service adds a new State Manager association that collects a software inventory via the SSM Agent and uses that as the basis for determining whether any CVEs are present on your instances. The association is configured to run every 30 minutes so that the CVE list stays up to date.
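
You can sanity-check this on a given instance with the SSM CLI. A quick sketch; the exact association name Inspector creates may differ in your account, so this simply lists them all, and the instance ID is a placeholder:

# List State Manager associations and the documents they run
$ aws ssm list-associations --query 'Associations[].{Association:AssociationName,Document:Name}' --output table
# Show the software inventory collected for one instance
$ aws ssm list-inventory-entries --instance-id i-0123456789abcdef0 --type-name "AWS:Application"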

ECR Scanning

If you choose ECR scanning, there is a new Enhanced Scanning mode:

By default ECR has had Basic Scanning for a while, with an optional Scan on push setting. When you enable the new Inspector service on your ECR repositories, continuous scanning is enabled, which gives instant protection across all your container repositories. Keep in mind that all versions of your container images are chargeable, so if you do this, ensure you remove old, redundant versions when they are no longer required to keep your costs down.
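
One way to keep on top of that is an ECR lifecycle policy that expires old images automatically. Below is a minimal sketch that expires untagged images after 14 days; adjust the rule to suit your tagging scheme ("test" is just the example repository name used later in this post):

# Expire untagged image versions older than 14 days
$ aws ecr put-lifecycle-policy --repository-name test --lifecycle-policy-text '{"rules":[{"rulePriority":1,"description":"Expire untagged images after 14 days","selection":{"tagStatus":"untagged","countType":"sinceImagePushed","countUnit":"days","countNumber":14},"action":{"type":"expire"}}]}'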

Configuring Amazon Inspector

This is pretty straightforward, and the instructions are clear for both the standalone and multi-account options. With multi-account, you follow the standard process of designating a delegated administrator account (DA) from within your AWS Organizations management account. Then you can switch to the DA to enable either EC2 or ECR scanning on your member accounts. Here is the first problem. The AWS doc for a multi-account environment says:

From the Account Management page, you can choose Enable scanning for all accounts from the top banner

However — this is not accurate. There is no such option.

If you log a ticket with AWS Support, request their CLI script that enables the service across your member accounts. It is a lot faster than clicking checkboxes in the console.
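
For reference, the moving parts such a script needs are all in the inspector2 CLI. The following is a rough sketch of the idea only, not AWS's actual script, and it assumes you have permission to list the accounts in your organization:

# From the Organizations management account: designate the delegated administrator
aws inspector2 enable-delegated-admin-account --delegated-admin-account-id 111122223333

# From the DA: associate each member account, then turn on EC2 and ECR scanning
for ACCOUNT_ID in $(aws organizations list-accounts --query 'Accounts[].Id' --output text); do
  aws inspector2 associate-member --account-id "$ACCOUNT_ID"
  aws inspector2 enable --account-ids "$ACCOUNT_ID" --resource-types EC2 ECR
done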

Centralised Dashboard

Update 6/Jan/22 — this issue is now confirmed as corrected!

Or, should I say, the lack of a centralised dashboard. After you have enabled the scanning you need on your accounts, all the findings summaries should flow back to your delegated administrator account.

While “All findings” is populated, the rest are not. I even waited a weekend for “eventual consistency”, but it turns out a whole lot of nothing is reported, so you have zero visibility across the enterprise, short of logging into each account separately, which is less than ideal.

AWS Support tell me that the service team are looking into this and hopefully it is resolved soon.
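
In the meantime, you can at least confirm from the DA that your member accounts are associated and enabled, which rules out an onboarding problem as the cause. A quick check (the account ID is a placeholder):

# List member accounts currently associated with the DA
$ aws inspector2 list-members --only-associated
# Check the scanning status of a specific member account
$ aws inspector2 batch-get-account-status --account-ids 111122223333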

DescribeImageScanFindings API call is broken

You get your image scan findings via this API call. It is actually broken in two ways.

Firstly, if you call it with imageTag, you get an exception:

$ aws ecr describe-image-scan-findings --repository-name test --image-id imageTag=latest

An error occurred (ScanNotFoundException) when calling the DescribeImageScanFindings operation: Image scan does not exist for the image with '{imageDigest:'null', imageTag:'latest'}' in the repository with name 'test' in the registry with id '123456789012'

Secondly, if you call it with imageDigest, the call executes but returns a SCAN_NOT_FOUND status while the scan is still in progress, then corrects itself when the scan finishes (so it is usable and is the official workaround for now):

$ aws ecr describe-image-scan-findings --repository-name test --image-id imageDigest=sha256:203fe11646adae86937bf04db0079adef295f4da68a92b40e3b181f337dab726

{
    "imageScanStatus": {
        "status": "SCAN_NOT_FOUND",
        "description": "Failed."
    },
    "repositoryName": "test",
    "registryId": "123456789012",
    "imageId": {
        "imageDigest": "sha256:203fe11646adae86937bf04db0079adef295f4da68a92b40e3b181f337dab726"
    },
    "imageScanFindings": {
        "imageScanCompletedAt": 1640171520.215,
        "vulnerabilitySourceUpdatedAt": 1640171520.215,
        "findingSeverityCounts": {
            "HIGH": 2,
            "CRITICAL": 1,
            "LOW": 1,
            "MEDIUM": 5
        }
    }
}

Then, when the scan is complete, it looks fine:

{
    "imageScanStatus": {
        "status": "ACTIVE",
        "description": "Continuous scan is selected for image."
    },
    "repositoryName": "test",
    "registryId": "111122223333",
    "imageId": {
        "imageDigest": "sha256:203fe11646adae86937bf04db0079adef295f4da68a92b40e3b181f337dab726"
    },
    "imageScanFindings": {
        "imageScanCompletedAt": 1640171520.773,
        "vulnerabilitySourceUpdatedAt": 1640171520.773,
        "findingSeverityCounts": {
            "HIGH": 44,
            "CRITICAL": 8,
            "LOW": 1,
            "MEDIUM": 47
        }
    }
}

This is also logged with AWS Support and the service team are aware of this bug.

Another tip: if you are using the CLI aws ecr describe-image-scan-findings command, ensure the calling role has the inspector2:ListFindings action so it can retrieve the findings results; otherwise the command fails when Inspector v2 is enabled:

An error occurred (AccessDeniedException) when calling the DescribeImageScanFindings operation: User: arn:aws:sts::123456789012:assumed-role/xxx is not authorized to perform: inspector2:ListFindings on resource: arn:aws:inspector2:ap-southeast-2:123456789012:/findings/list
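
The fix is just an extra IAM statement on the calling role. A minimal sketch using an inline policy (the role name matches the redacted one above and the policy name is made up; you may prefer a managed policy):

# Grant the role permission to read Inspector v2 findings
$ aws iam put-role-policy --role-name xxx --policy-name InspectorListFindings --policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":"inspector2:ListFindings","Resource":"*"}]}'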

Inconsistent CVE findings

I did some basic testing with the stock-standard Docker getting-started repo https://github.com/docker/getting-started. I found that with Amazon Inspector v2 enabled, I got no findings:

$ aws ecr describe-image-scan-findings --repository-name test --image-id imageDigest=sha256:3e693a2423e46d9cbe377017fb6f63a1d7f4d99cca428cddac868234116b34c6
{
    "imageScanFindings": {
        "findings": []
    },
    "registryId": "123456789012",
    "repositoryName": "test",
    "imageId": {
        "imageDigest": "sha256:3e693a2423e46d9cbe377017fb6f63a1d7f4d99cca428cddac868234116b34c6"
    },
    "imageScanStatus": {
        "status": "ACTIVE",
        "description": "Continuous scan is selected for image."
    }
}

But with Inspector v2 disabled, which reverts to the original ECR Basic Scanning behaviour, I got a single LOW vulnerability:

$ aws ecr describe-image-scan-findings --repository-name test --image-id imageTag=latest
{
    "imageScanFindings": {
        "findings": [
            {
                "name": "CVE-2020-28928",
                "uri": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28928",
                "severity": "LOW",
                "attributes": [
                    {
                        "key": "package_version",
                        "value": "1.2.2-r3"
                    },
                    {
                        "key": "package_name",
                        "value": "musl"
                    },
                    {
                        "key": "CVSS2_VECTOR",
                        "value": "AV:L/AC:L/Au:N/C:N/I:N/A:P"
                    },
                    {
                        "key": "CVSS2_SCORE",
                        "value": "2.1"
                    }
                ]
            }
        ],
        "imageScanCompletedAt": "2021-12-22T10:40:43+11:00",
        "vulnerabilitySourceUpdatedAt": "2021-12-22T03:11:45+11:00",
        "findingSeverityCounts": {
            "LOW": 1
        }
    },
    "registryId": "123456789012",
    "repositoryName": "test",
    "imageId": {
        "imageDigest": "sha256:3e693a2423e46d9cbe377017fb6f63a1d7f4d99cca428cddac868234116b34c6",
        "imageTag": "latest"
    },
    "imageScanStatus": {
        "status": "COMPLETE",
        "description": "The scan was completed successfully."
    }
}

I also found that Inspector v2 CVE results in general were not accurate.

For example, on a fully patched Red Hat Enterprise Linux 8 instance, Inspector EC2 scanning reported the CVE-2021-38645 (OMI) vulnerability (among 80 or so other findings), even though the latest package, which fixes this issue, was installed:

sh-4.4$ rpm -qa |grep omi
omi-1.6.8-1.x86_64

The Inspector findings summary shows the results (including the Critical OMI CVE):

You can also get this information with the command line:

$ aws inspector2 list-findings 
{
    "findings": [
        {
            "awsAccountId": "123456789012",
            "description": "An issue was discovered in the Linux kernels Userspace Connection Manager Access for RDMA. This could allow a local attacker to crash the system, corrupt memory or escalate privileges.",
            "findingArn": "arn:aws:inspector2:ap-southeast-2:123456789012:finding/00e846594ca19b13f8e2fc6417cc8b16",
            "firstObservedAt": "2022-01-02T17:39:48.825000+11:00",
            "inspectorScore": 7.8,
            "inspectorScoreDetails": {
                "adjustedCvss": {
                    "adjustments": [],
                    "cvssSource": "REDHAT_CVE",
                    "score": 7.8,
                    "scoreSource": "REDHAT_CVE",
                    "scoringVector": "CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H",
                    "version": "3.1"
                }
            },
            "lastObservedAt": "2022-01-02T17:39:48.825000+11:00",
            "packageVulnerabilityDetails": {
                "cvss": [
                    {
                        "baseScore": 7.8,
                        "scoringVector": "CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H",
                        "source": "REDHAT_CVE",
                        "version": "3.1"
                    },
                    [ ... rest of output removed ... ]
                ]
            }
        }

But only a handful of patches were actually outstanding when you check the system itself:

sh-4.4$ sudo yum updateinfo list security all
Updating Subscription Management repositories.

Last metadata expiration check: 3:14:29 ago on Mon 20 Dec 2021 09:44:40 PM UTC.
RHSA-2021:5082 Important/Sec. libsmbclient-4.14.5-7.el8_5.x86_64
RHSA-2021:5082 Important/Sec. libwbclient-4.14.5-7.el8_5.x86_64
RHSA-2021:5082 Important/Sec. samba-client-libs-4.14.5-7.el8_5.x86_64
RHSA-2021:5082 Important/Sec. samba-common-4.14.5-7.el8_5.noarch
RHSA-2021:5082 Important/Sec. samba-common-libs-4.14.5-7.el8_5.x86_64
RHSA-2021:5082 Important/Sec. samba-common-tools-4.14.5-7.el8_5.x86_64
RHSA-2021:5082 Important/Sec. samba-libs-4.14.5-7.el8_5.x86_64

And AWS’s own Systems Manager Patch Manager accurately showed zero outstanding patches for this instance in its compliance view.
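
If you want the same view from the CLI rather than the console, Patch Manager's per-instance compliance counts are available too (the instance ID is a placeholder):

# Show missing/failed/installed patch counts for an instance
$ aws ssm describe-instance-patch-states --instance-ids i-0123456789abcdef0 --query 'InstancePatchStates[].{Missing:MissingCount,Failed:FailedCount,Installed:InstalledCount}'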

Maybe the Inspector team needs to chat with the Patch manager team to ensure consistency 😀.

This appears to be a problem only with Red Hat Enterprise Linux 8, but your mileage may vary. Amazon Linux 2 appeared to be more consistent in reporting accurate CVE information.

Summary

In this post I described a number of issues that need to be resolved with the new Amazon Inspector service:

  • The delegated account not aggregating any data from enabled accounts — so you do not get a central view of potential vulnerabilities (now corrected as of 6/Jan/22)
  • The DescribeImageScanFindings API using imageTag doesn’t work and throws a SCAN_NOT_FOUND error (workaround: use imageDigest with this API instead for now) (now corrected as of 26/Jan/22)
  • Issues with the CVE reporting, both for containers and for Red Hat Enterprise Linux 8 (at least; other platforms may also have issues but are untested by me) (now partially corrected as of 26/Jan/22, except for inactive kernel CVEs, which is a feature request with the service team)
  • Deploying across the organization is challenging without the AWS support script workaround or writing your own AWS CLI script

As much as I love the idea of this re-launch of a much improved service, at this time I think waiting until the problems highlighted in this blog are resolved before deploying in large enterprise environments is probably the right way to go. I have logged a number of support cases with AWS, so as the issues are resolved I will come back and update this post.


Matt Gillard

Principal Cloud Solutions Architect / AWS Ambassador / AWS Community Builder / Digital Transformation / co-host Cloud Dialogues podcast