Testing docker CVE scanners. Part 2.5: Exploiting CVE scanners

Gabor Matuz
10 min read · Sep 2, 2020


After spending weeks with the CVE scanners, it would be hard to look in the mirror if I had not tried exploiting them! In the 2.5th (eventually, because of fixing times, the 4th) episode of my trilogy on Docker CVE scanners, let’s take a look at how secure the scanners themselves are. In this article I’ll focus more on the consequences and design-level analysis; if you are interested in the nitty-gritty details and PoCs, head on over to the repository.

All the issues have been disclosed to the vendors and have already been fixed or deemed not a security issue. I’m also including my experience with their responsible disclosure processes, which should give you a feel for how they think about the security of their product.

Furthermore, please do not consider this a comprehensive security audit of the products. I wanted to get a feeling for their security and have some fun!

Is security even relevant?

Most of these scanners started their career as command-line tooling. In fact, I’d expect that is still how most of them are used. In that case security is less of a concern: you would not expect the tool to defend itself against malicious input. You understand that running any build-related tool on code you have not looked at potentially means getting owned. It is practically curl to bash.

However, the use of these tools is getting more prolific: some would move the scanning to a service to speed up your CI scans. Maybe they hook into your Docker registry and automatically scan every image you push. You might connect them to a Kubernetes admission controller or run them to check your running containers. All these integrations alter the security model and attack surface.

I’d argue it is best if you still treat them as build tools. You should not trust that they defend themselves; you should expect that the part that runs the scan will get owned. Well, you should do this even if they try to defend themselves, but do they?

These scanners do complicated and error-prone things. They deal with Docker, extract layers and archives, interact with package managers and parse different formats. Defending them, while trying to accommodate all the use cases developers have, is very hard. Let’s see how the different tools attempt to do it, and how well they manage:

The scores on responsible disclosure reflect my personal opinion: I believe it is important for software providers to be responsive to security issues reported to them, and to be honest and transparent about vulnerabilities, so that people using their products are properly informed when deciding about updating. Most importantly, this includes signalling that an update contains security-relevant changes, opening a CVE to track and communicate the issue, and potentially notifying their clients. I would think this is especially reasonable to expect of a product that is itself about CVEs, i.e. providing information about vulnerabilities in software. Additionally, I’m reassured by quick responses, reasonable fixing times and open communication with the person disclosing the issue.

I highlighted whether the original tool is most likely used as a CLI; in that case I believe attacking it is less relevant. I have to highlight that some of these tools (e.g. Snyk or WhiteSource) have non-CLI versions as well, which likely changes their security context. However, I was unable to test those.

A number of these tools directly use package managers to resolve dependencies. This makes them particularly hard to defend: some dependency managers have configuration files that allow inclusion of shell code. Examples are gradlew for Gradle or the Podfile for CocoaPods.

Even if somehow these straightforward ways are dealt with, calling these package managers will inevitably mean shelling out. This, to put it mildly, does not make defending the application easier.
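To make this concrete, here is a minimal sketch (no vendor’s actual code, just the general pattern) of a scanner resolving Gradle dependencies by shelling out to the wrapper script committed in the repository it is scanning:

```python
# Minimal sketch of the pattern, not any vendor's actual implementation:
# the scanner executes ./gradlew from the scanned repository, so whoever
# controls the repository controls what runs in the scanner's context.
import subprocess
from pathlib import Path

def resolve_gradle_dependencies(repo_dir: str) -> str:
    wrapper = Path(repo_dir) / "gradlew"
    if not wrapper.exists():
        return ""
    # gradlew is just an executable file committed to the repo; its contents
    # can be replaced with an arbitrary script and the scanner will run it.
    result = subprocess.run(
        ["./gradlew", "dependencies", "--quiet"],
        cwd=repo_dir,
        capture_output=True,
        text=True,
        timeout=600,
    )
    return result.stdout
```

No amount of input validation saves you here: executing the project’s own build tooling is the feature.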

Specifics of scanners

Anchore: since the tool is designed to run as a standalone service, it does try to protect itself. It does not directly use package managers. However, it shells out to skopeo to manipulate Docker images. Getting around the slightly spotty shell-escaping blacklist, one only has to find an input taken from the image that ends up in that shell command to execute commands in the context of the analyzer.
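To illustrate why a blacklist approach to escaping is fragile, here is a purely hypothetical sketch, not Anchore’s actual code; the skopeo invocation is only a stand-in for an attacker-influenced string ending up on a shell command line:

```python
# Hypothetical illustration of a fragile blacklist, not Anchore's real code.
import subprocess

BLACKLISTED = [";", "&", "|", ">", "<"]

def naive_sanitize(value: str) -> str:
    # Removes a few shell metacharacters but forgets about command
    # substitution with $(...) or backticks.
    for char in BLACKLISTED:
        value = value.replace(char, "")
    return value

def copy_image(image_ref: str) -> None:
    ref = naive_sanitize(image_ref)
    # A reference like 'alpine:$(touch /tmp/pwned)' survives the blacklist,
    # and the $(...) is expanded by the shell before skopeo ever sees it.
    subprocess.run(f"skopeo copy docker://{ref} oci:/tmp/scan", shell=True)
```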

The command execution I found (not particularly easy to exploit) was fixed within a day, and the new version was already available within a week. Great response. That said, if I were you I’d run Anchore itself (or at least the scanner workers) in a sandbox and, per their recommendation, not as root.

What made me feel more confident about them is their response to the disclosure. They were honest and forthcoming in their communication, not trying to downplay the issue. They specifically made sure to open a CVE, which is only fair for a company with a product that searches for CVEs.

Clair: having a service setup, it seems to have been designed with security in mind. It is written in Go (pretty hard to shoot yourself in the foot), uses the shell only for git/tar and does not run package managers. One of the scanners I would trust for security.

FOSSA: a bit of a detour, since it was not included in the earlier parts. They focus largely on software dependency checking, without an offering for Docker image scanning. Unfortunately for them, I found out about them at the time I was doing this part of the research, so as part of trying it out I also jumped into looking at their security.

FOSSA, much like Snyk, runs their tool as a service, or you can run a scan locally with their CLI tool. Similarly to Snyk, parts of the product are open source. Looking through their code, their architectural decisions also seem to come close to Snyk’s: you can see that from the pervasive direct calls to package managers and a lot of shelling out. Indeed, it was not hard to find code execution, through gradlew or through Podfiles.

After I raised the issue they responded swiftly (within hours), stating that they do not consider this a security issue. They effectively (and I would argue correctly) expect that it will be possible to run commands in the context of the scanner even if this is not directly supported.

With this in mind they hardened the service; I would recommend you do the same if you run FOSSA yourself.

Snyk: nearly all plugins shell out to call package managers. As you would expect, using a gradlew file with an arbitrary script will do the job. There are some more exotic issues in their Docker image scanner plugin, where they run docker commands and attempt to execute binaries within the images.

They do try to make sure that the effects of executing docker commands are limited, for example by disregarding the entrypoint and removing networking. However, there is an interesting issue. They search for specific files in the image using `ls` recursively, feeding the output of the last call into the next one. I think you can see where this is going. You can create a directory name that includes shell command execution, or merely prepare a malicious Docker image with `ls` replaced by something that just prints back a shell command in the right format after the appropriate escape sequence.
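The pattern looks roughly like this; a sketch of the behaviour described above, not Snyk’s actual code:

```python
# Sketch of the described pattern, not Snyk's actual implementation: `ls` is
# run through a shell and its output is fed back into the next `ls` call, so
# directory names from the image become part of a shell command line.
import subprocess

def list_recursively(root: str, depth: int = 3) -> list[str]:
    paths = [root]
    for _ in range(depth):
        children = []
        for path in paths:
            # A directory inside the image named e.g. '$(id>pwned)' is
            # interpolated here and executed by the shell.
            out = subprocess.run(
                f"ls {path}", shell=True, capture_output=True, text=True
            )
            children += [f"{path}/{name}" for name in out.stdout.split()]
        paths = children
    return paths
```

The second variant mentioned above, shipping an `ls` binary in the image that prints attacker-chosen output, attacks the same interpolation point.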

This would potentially be the most interesting issue, since Snyk is executing docker commands, so it is essentially or directly running as root. (Un)fortunately I could not find any code path within the binary that calls this unless you specifically pass the right arguments, which makes the attack scenario ridiculously contorted.

A small positive thing that came out of hours of frustration trying to make the exploit work: now I know why Snyk wasn’t working on the hardened images. Not having `ls` just breaks the run.

When I contacted Snyk, they responded that they do not consider these security issues and do not plan to defend against them within the CLI tool.

Clearly that is something Snyk itself kept in mind when setting up their service. The examples I used do not work on their own backend.

Trivy: even though it is aimed at command-line use, it seems to be designed with security in mind. It does not directly run package managers and looks like a solid project altogether.

WhiteSource: I looked at the Unified Agent, which essentially also means CLI usage. WhiteSource does provide integrations where the attack model might be more interesting, but I was not able to check these.

The tool extensively uses the shell, so the usual method works: it runs gradlew files. That is, in case they fixed the bug they had with running gradlew files… On top of that, it makes some escaping mistakes when including Python package names in the shell call to pip. Similarly to Snyk, if we prepare an image with shell command injection in the directory names, the code gets executed.

The interesting thing is that the WhiteSource agent can also scan containers if it has access to the docker socket. This case is a good illustration of what could potentially go wrong: if an attacker is able to create a file in a running container that is scanned by WhiteSource, they could exploit the raw docker access and take over the host.

Even though the exploit works, this is hypothetical. Continuously scanning running containers with the agent is not how I expect people to use WhiteSource. Nonetheless, it helps to understand that trusting the content of Docker images, and especially containers, is not a good idea. In fact, even their own Kubernetes scanning setup scans images and not containers, I reckon for similar reasons.
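To make concrete why raw access to the docker socket is instant host takeover, here is a minimal sketch using the docker Python SDK; the image and the command are arbitrary placeholders:

```python
# Anyone who can talk to /var/run/docker.sock can start a container that
# mounts the host filesystem and therefore acts as root on the host.
import docker

client = docker.from_env()  # connects to the local docker socket
output = client.containers.run(
    "alpine",                                  # any image will do
    "chroot /host cat /etc/shadow",            # runs as root against the host fs
    volumes={"/": {"bind": "/host", "mode": "rw"}},
    remove=True,
)
print(output.decode())
```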

They quickly responded that the problems I found are considered security issues by them. They asked for the usual 90-day wait period, but even after that, going through their release notes, nothing was fixed. Similarly, they had not opened a CVE, which is strange for a company aimed at making software more secure by searching for CVEs. Good response in communication, not so great in following up.

Based on further correspondence with WhiteSource, they had silently fixed the issue and provided an update in version 20.6.1. That means they fixed the issue within two months of the report, which I think is a reasonable fix time given the severity of the issue. Especially since it turned out, while validating the PoCs, that there must have been a miscommunication and WhiteSource had only fixed the issue that requires specific configuration settings to be exploitable.

Once I raised the other vulnerability, which works with the default setup, they again responded very quickly and followed up once they had a proposed fix. Even after I found a bypass for the initial fix, they eventually solved the issue within two weeks. Once again, great communication throughout and in following up.

However, I disagree with their practice of once again not providing any information in the release notes (fixed in version 20.8.1) about a security update or opening a CVE for the issue.

I have to say I find it strange that a CVE scanner provider does not open a CVE for an issue in their own software. Especially with items like this on their site:

The CVE list is defined by MITRE as a glossary or dictionary of publicly available vulnerabilities and exposures, rather than a database, and as such is intended to serve as an industry baseline for communicating and dialoguing around a given vulnerability. According the MITRE’s vision, CVE documentation is the industry standard by which disparate security advisories, bug trackers and databases can obtain a uniform baseline with which to “speak” to each other, communicating and deliberating about the same vulnerability in a “common language”.

or

Unreported vulnerabilities remain hidden away in security advisory boards or issues trackers, where their discovering entity first published them. These vulnerabilities are “off the radar” for many developers who usually scout the main vulnerability databases and therefore are less likely to become known and properly patched even if patches or new version are available .

According to their response, it is not important to open a CVE in their case, since their product is not open source and is used in a different context than software dependencies. Furthermore, they encourage their clients to use the latest version (that is the link they add to their documentation instead of specific versions), and according to their data that is what clients most often use.

I both agree and disagree. I made it clear in Part 3 that my opinion is that people should update irrespective of CVEs. But they don’t. In my opinion this is exactly why we need CVEs.

In any case, if you are using WhiteSource, please update to the latest version and do NOT expect to get notifications on security-related updates.

What can go wrong?

Some providers also offered their scanners as a service, running on their own infrastructure.

I have generally found that they harden these services. The attacks either don’t work on them, or even if they do, the user is pretty much limited to attacking themselves. Exactly the setup I’d like to advocate for in this article.

However, I found one mistake in one case. I’m not going to name names, because the point is not to rip into the provider (in fact they were very quick to follow up and fixed the issue within hours). Everybody makes mistakes. It is more to illustrate what can go wrong:

They were running the scans in Kubernetes runners with all sorts of hardening and precautions. However, they forgot to lock down the network. This is probably not good, as I could reach a number of internal services. To make matters worse, it was possible to access the kubelet service on the Kubernetes nodes. People with some background in Kubernetes security know this is a classic problem and instant game over.
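A probe for this is trivial to write from inside such a runner. A rough sketch (the node address is a placeholder, and ports and auth settings vary between clusters):

```python
# Check whether the kubelet API on a node is reachable from the scan runner.
# An unauthenticated 200 from /pods (instead of 401/403) generally means the
# kubelet also exposes exec-like endpoints, i.e. game over.
import requests

NODE_IP = "10.0.0.12"  # hypothetical node address reachable from the runner

resp = requests.get(
    f"https://{NODE_IP}:10250/pods",
    verify=False,  # the kubelet serves a self-signed certificate
    timeout=5,
)
print(resp.status_code, resp.text[:200])
```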

Take this as a cautionary tale: whenever you include these tools in your architecture (with the possible exception of Clair and Trivy), make sure you do not trust the runners. Isolate them in all dimensions: rights, network, persistence; reduce their capabilities and potential blast radius as much as you can.
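If you run the scanner workers yourself, one way to apply this, sketched with the docker Python SDK (the scanner image and command are hypothetical placeholders):

```python
# Run the scan worker in a throwaway, locked-down container: no network,
# read-only filesystem, no capabilities, non-root user, resource limits.
import docker

client = docker.from_env()
client.containers.run(
    "my-scanner-image:latest",   # placeholder for whichever scanner you use
    "scan /target",              # placeholder scan command
    network_mode="none",         # nothing to pivot to
    read_only=True,              # immutable root filesystem
    cap_drop=["ALL"],            # drop every Linux capability
    user="1000:1000",            # do not run as root
    pids_limit=256,              # contain fork bombs
    mem_limit="1g",
    remove=True,
)
```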

