Testing Docker CVE Scanners. Part 3: Test It Yourself/Conclusions

Gabor Matuz
Published in The Startup
7 min read · Aug 10, 2020

In Part 1 and Part 2 I looked at the false negatives and detection rates of Docker image CVE scanners. This time I'm sharing the vulnerable test images and the detailed results from Part 1, so you can run the tests and analysis yourself. I'm also giving you my conclusions on the place of Docker image CVE scanners in security tooling.

When working on security automation I always try to avoid asking "What is this tool good for?". I believe that approach breeds inefficiency, because tools bring in false positives and false negatives in areas where they are not particularly effective. More than that, it often sidesteps the question: is this even a problem for us?

A better question, I think, is: "What is the issue specifically, and what is the best way of addressing that, and only that?"

With this in mind, in my conclusion I will break down the different use cases of Docker image CVE scanners to show exactly which ones they currently add value for.

Docker images with exploitable CVEs

As mentioned in Part 1, to sidestep the "is that CVE really exploitable?" question, I decided to do my initial research on Docker images with proven, important, exploitable CVEs.

All the credit for these goes to Vulhub, an open-source collection of exploitable environments and walkthroughs. Please support them; they are doing awesome work.

Vulhub has around 100 different Docker-based exploitable environments, with vulnerabilities that are mostly CVEs. Sometimes they even build complicated examples with docker-compose when the exploit requires multiple components.

So all I had to do was filter their builds and adjust them a bit:

  • I removed all environments where the issue is not a CVE.
  • I removed some where the component carrying the CVE is added in some funky way (say, in the entrypoint script). Not that people wouldn't do things like this, but it is natural to assume scanners will not pick these up.
  • Wherever they used multiple images in a complex docker-compose environment, I extracted the image that had the CVE.
  • Where they used docker-compose volumes to add, for example, a static web page, I adjusted the images to contain the files directly (see the sketch after this list). I did this to avoid criticism that something might not be exploitable; I was so shocked by the bad results that I felt I had to make sure everything was fine. In the end it did not make any difference.
  • On top of that, I created a few scripts that exploit the vulnerabilities in the images, partly to make sure they are in fact exploitable, and partly to be able to show the engineers working on the specific products that the versions are right and the vulnerabilities are real.
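To illustrate the volume adjustment mentioned above, here is a minimal sketch. The image name and paths are made up for illustration, not Vulhub's actual layout: instead of mounting the content at runtime via docker-compose, the files become part of the image itself.

```bash
# Hypothetical sketch: bake files into the image instead of mounting
# them with a docker-compose volume like "./html:/usr/share/nginx/html".
cat > Dockerfile <<'EOF'
FROM nginx:1.19
# Previously supplied at runtime via a volume; now copied in at build
# time, so the image alone is exploitable (and scannable).
COPY html/ /usr/share/nginx/html/
EOF
docker build -t cve-test-image .
```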

If you want to run your own scans, you can find the images in this registry, tagged by the CVE they contain. You can also build them with this script I added to the testing GitHub repo. As we know, you should be careful scanning random images!
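In case it helps, this is roughly what a test run looks like. The registry path below is a placeholder for the one linked above, and Trivy stands in for whichever scanner you are evaluating:

```bash
# Placeholder image reference; the real registry is linked above and
# tags images by the CVE they contain.
IMAGE=registry.example.com/cve-images/tomcat:CVE-2017-12615
docker pull "$IMAGE"
# Scan with the tool under test, e.g. Trivy:
trivy image "$IMAGE"
# Then check whether CVE-2017-12615 appears in the findings at all.
```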

You can check out the list of images, with the corresponding CVEs, Dockerfiles, and the results from each scanner, in the results in the repo.

I suspect I'm not the only one who was shocked by the results, so I have gone to quite some length to validate them and to find some explanation for how this happened. In Part 2 I talked about some of the reasons it is hard to find the components that have the vulnerabilities; now I'm going to be even more pessimistic.

Dark side of CVEs

I'd assume one of the upsides people expect from CVE scanners is being able to be more efficient and specific about picking which vulnerabilities to care about. In fact, this is the main marketing pitch for some of the commercial tools: they spend the time doing the sorting for you. However, having reviewed some of these CVEs, my opinion is that it is not possible to be selective about patching based on CVEs if you want to make sure you don't miss anything.

I already mentioned in Part 1 that RCE-type vulnerabilities are at times classified as less than High risk, so if you only go for High findings you will definitely miss some. For example, if you look at the results, Xray rates CVE-2016-4977 as a Medium, even though it is remote code execution without much of a setup. It is hard to blame Xray for this, though, since the other scanners didn't even find it.

But what if you, or, in case you get one of the intelligent/expensive products, the engineers at these providers, read the CVE description of CVE-2017-12615?

When running Apache Tomcat 7.0.0 to 7.0.79 on Windows with..

Aaand I stopped reading: mine is running on Debian (and possibly the version is, say, 8.5.19), so clearly I'm all good. Well, not so much. You can say I'm unfair (especially if you read Mandarin), because it is a bypass for the original patch. Maybe. But my point still stands: there is no separate CVE for this one, so how were you supposed to know, looking at CVEs, that you need to upgrade your 8.5.19 to 8.5.20 to fix it?

OK, then let's take CVE-2017-11610. Let's say you dig deeper this time and take a look at the exploit: https://www.exploit-db.com/exploits/42779. Ohh, it says "< 3.3.2" and I'm running 3.3.2, all good. Except that is a typo: in the code it says "<= 3.3.2".
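That one character changes the verdict for anyone sitting exactly on the boundary version. A quick way to see the difference on a Debian-based system, using dpkg's version comparison (it exits 0 when the relation holds):

```bash
# "< 3.3.2" would report version 3.3.2 as safe:
dpkg --compare-versions 3.3.2 lt 3.3.2 && echo vulnerable || echo safe   # -> safe
# "<= 3.3.2" correctly flags it:
dpkg --compare-versions 3.3.2 le 3.3.2 && echo vulnerable || echo safe   # -> vulnerable
```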

Now, I'm not saying this is how you do your job, but clearly, ruling out patches based on information you dig up on CVEs is not straightforward, especially if you expect to do it often, which leaves you less time for each one.

Are you saying CVE scanners are useless?

Nope.

I’m saying this:

The best approach is having good enough testing and rollback that you can confidently roll out patches without much triage, so you can simply bump versions automatically. Every time we succeeded with patching, it was done like this, be it Apple, WordPress, etc.

Eventually there won't even be another option: the window between a patch being released and it being exploited keeps shrinking, so patches will have to go out without triage. It is largely an illusion to think that you can patch much faster when it is necessary than you do when you merely should.

If you don't want to do this for some reason, here is what you should do instead:

CVE scanners definitely work and are useful for software dependencies. Due to the dynamism of the ecosystem, software dependencies tend to have more breaking changes, you will have more of them, and it is hard to keep up to date. On top of that, they are described in a way that is easy to detect and resolve. You should absolutely scan them.
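This also needs very little ceremony; the usual ecosystem tools are enough. A few examples (my tool choices for illustration, not tools covered by the tests above):

```bash
# Language-level dependency scanning works well because lockfiles pin
# exact versions that map cleanly onto published advisories.
npm audit                          # Node.js projects
safety check -r requirements.txt   # Python (pip install safety)
bundle audit                       # Ruby (gem install bundler-audit)
```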

Regarding operating system dependencies, you could use a CVE scanner, practically any of them, because they all do a good job looking at packages added by the OS package manager. But to be frank, I think most sysadmins have long gotten used to updating without a CVE scanner. There are fewer instances where an update might break something, so this is perhaps not the biggest value added if you could simply keep updating things. Given that we are talking about Docker images it is even easier: you can harden the images or, even better, use a base that doesn't come with much baggage, like distroless or Alpine (see the sketch below). Finally, not to say that privilege escalation is good, but most of the vulnerabilities that can be exploited over the internet are not going to be in these packages.
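For illustration, a multi-stage build along those lines, with a made-up Go app (distroless base images are published under gcr.io/distroless):

```bash
cat > Dockerfile <<'EOF'
# Build stage: full toolchain, never shipped.
FROM golang:1.14 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Runtime stage: no shell, no package manager, and almost no OS
# packages for a scanner, or an attacker, to find.
FROM gcr.io/distroless/static
COPY --from=build /app /app
ENTRYPOINT ["/app"]
EOF
docker build -t minimal-app .
```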

That leads us to what I perceive to be the main issue: what to do with nginx, Apache, Elasticsearch, and the like. These are clearly not picked up by the scanners unless you find a way to install them with package managers, yet they tend to have the issues that are readily exploitable. I don't see CVE scanners adding a lot of value here, so I think you should take special care with these.

For a smaller company this might be straightforward even without a sophisticated tool. I would argue it is better to create a manually maintained list of these components and simply keep track of their releases. Oldschool. Aka security with RSS feeds. Like an OG. If you want to do it based on CVEs, subscribe to CVEs specifically for the components you are interested in; this at least removes the detection problem. Receiving a mail or Telegram notification from Vulners on new CVEs for your list of products is not exactly fancy tooling, but it works. It might even be a good idea not to filter on specific versions, to make sure a miscommunicated version number doesn't cost you an alert.
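As a sketch of how little tooling this takes, here is the watchlist idea with nothing but curl and jq against NVD's public CVE API (the product list is made up; Vulners and similar services offer the same thing with nicer delivery):

```bash
# Poll NVD for CVEs matching a hand-maintained component watchlist.
# keywordSearch matches descriptions, so expect some noise; that still
# beats missing an issue because of a version mix-up.
for product in nginx tomcat elasticsearch haproxy; do
  echo "== $product =="
  curl -s "https://services.nvd.nist.gov/rest/json/cves/2.0?keywordSearch=${product}&resultsPerPage=5" \
    | jq -r '.vulnerabilities[].cve | "\(.id)\t\(.descriptions[0].value)"'
done
```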

These proxies, web servers, database servers, etc. are the 5-10 components in your infrastructure that matter most in terms of realistically getting hacked. Maybe it is worth spending some of your time every day being proactive about them.

And… drum roll… to start catching up to Douglas Adams, I will later add a fourth episode to this trilogy on CVE scanners: Exploiting CVE scanners, coming soon.

Security enthusiast into all things efficiency. Current project: https://inthewild.io/