Legal threats and vulnerability research: we have to talk about this again

Here at Blaze Information Security we strongly believe that security research is the lifeblood of the information security industry. As part of our day-to-day activities, we regularly conduct security assessments of third-party software on behalf of our clients, who want to quantify the risk and potential exposure of incorporating a piece of software into their IT infrastructure. We also look for security bugs during our lab research time, both to keep our bug-hunting skills up to date and to help improve the overall security of these products.

A few months ago we spent two weeks reviewing the security of several popular kiosk solutions. As we usually do, after first reporting the vulnerabilities to our clients, we contacted the affected vendors, providing all the technical details necessary to reproduce the vulnerabilities and offering our advice on how best to fix the issues, all pro bono and in accordance with our vulnerability disclosure policy, which follows industry standards.

This is a win-win situation for all involved parties: by improving the security of the affected products, we fulfill our mission of securing our clients, who in turn benefit from more robust and secure software, and the vendors get to fix the vulnerabilities before other actors with less noble intentions have the chance to exploit them, potentially damaging the reputation and market share of the products. Incidentally, it is well known that the latter concern was the tipping point that led Bill Gates and Microsoft to launch Trustworthy Computing back in 2002.

Back to our own story: after disclosing the issues to the relevant kiosk software vendors, all of them but one reacted very quickly, and most responded positively to our reports. The members of our team are no strangers to this process, having reported and coordinated the release of vulnerability advisories in the past, some since 2003, yet this was the first time we were taken aback by an extremely uncooperative and threatening vendor.

Framing it in layman’s terms, think of a vehicle recall: someone points out to the manufacturer that the tires of a particular car model are defective (or, for that cyber-matter, that the car can be hacked remotely, as Charlie Miller and Chris Valasek demonstrated). Now imagine that instead of taking the report seriously and acting promptly to fix a defect that could cause numerous accidents, the car manufacturer decides not to act on it and goes above and beyond, intimidating the person who found the defect by threatening them with legal action. It is hard to believe that a negligent, counterproductive and morally wrong act like this could actually happen, isn’t it? So why is it that similar stories are not unheard of, and seem to happen more often than not, in the IT industry?

By attempting to intimidate vulnerability researchers with legal threats and gag orders, all the vendor is doing is sweeping the dirt under the carpet. The product does not get any more secure just because a researcher is not allowed to disclose a vulnerability in it. What the vendor in question wanted with this intimidation was to create a smoke-and-mirrors situation that masks the poor security posture of their software.

While the absence of public security vulnerability reports may be a good indicator of the overall security of a product, this fact should be taken with a grain of salt to avoid a false sense of security — organizations with the budget and a low risk appetite should always perform security due diligence.

In the eternal vulnerability disclosure debate, it is not rare for vendors to quickly draw the gun and point the blame at security researchers for making the information available, in many cases armed with their legal team. The question is why this debate rarely, if ever, touches the other side: what are the responsibilities and liabilities of software manufacturers, especially when it can be proved they were negligent with the code they wrote, did not follow best practices or incorporated no security engineering at all into the project before putting the product on the market?

“But making vulnerability information public helps the attacker!”, they said. Please allow me to disagree…

It is a popular belief that by making information on a vulnerability public in a full-disclosure fashion, the researcher only helps the attacker. At Blaze we fundamentally disagree with this point of view and offer an alternative look at it: just because someone found an issue does not mean someone else will never find the same bug. In fact, it is not uncommon to see the same vulnerability found and reported by two or more independent researchers. Let’s face it: there are thousands of skilled security researchers and underground hackers who spend their days and nights looking for vulnerabilities in different products, and just because you found something does not mean someone else has not found it already, or will not find it tomorrow at 4:35am, when their fuzzer spits out a test case that triggers the same issue you found via fuzzing, code auditing or other techniques. Your bug is not a beautiful and unique snowflake.
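
To make the fuzzing reference concrete: a bare-bones mutation fuzzer is only a few lines of code, which is part of why independent rediscovery is so common. The sketch below assumes a hypothetical target function standing in for whatever component is under test; it is an illustration, not a production tool.

```python
import random

def mutate(seed: bytes, max_flips: int = 8) -> bytes:
    """Return a copy of the seed with a few randomly corrupted bytes."""
    data = bytearray(seed)
    if not data:
        return bytes(data)  # nothing to mutate
    for _ in range(random.randint(1, max_flips)):
        data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

def fuzz(target, seed: bytes, iterations: int = 100_000) -> list:
    """Feed mutated inputs to `target` and keep the test cases that crash it."""
    crashes = []
    for _ in range(iterations):
        case = mutate(seed)
        try:
            target(case)          # e.g. a hypothetical parse() under test
        except Exception as exc:  # any unhandled exception is a finding
            crashes.append((case, repr(exc)))
    return crashes
```

Two researchers running something like this against the same component, even with different seeds, will sooner or later trip over the same shallow bug.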

It is very easy to lean towards the common bias that disclosing vulnerabilities helps the bad guys and nobody else, but people often overlook the benefits it brings: it is only by making vulnerability information available to the public that many vendors feel compelled to rush and roll out a patch, as in the classic case of HP vs. Snosoft. Some vendors may think that if there is no public information about an issue, nobody will bother crafting an exploit — after all, paraphrasing Microsoft TwC’s John Lambert, defense is offense’s child. This common misconception among vendors is exactly what public disclosure helps to address.

Remember Heartbleed? It was all over the news, including media outlets for non-IT people. The media blitz around Heartbleed was so overwhelming that everyone heard about it (non-infosec friends even asked me what the deal was with this “virus”), and consequently most system administrators acted quickly to patch their systems, considerably reducing its window of exposure compared to other vulnerabilities that linger around IT infrastructures for months, with low priority in patching cycles.

How the situation unfolded and how we handled it

When we first contacted the vendor, informing them that we had found two vulnerabilities in their product and asking for their PGP public key so we could send the details via e-mail in encrypted form, the immediate response was very negative and did not seem to welcome any sort of vulnerability report coming from external entities.
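
For readers unfamiliar with the mechanics of an encrypted hand-off, here is a minimal sketch using the python-gnupg wrapper around GnuPG; the key file name and recipient address are placeholders, not the actual vendor's details.

```python
import gnupg

# Uses the local GnuPG keyring (requires the gpg binary to be installed)
gpg = gnupg.GPG()

# Import the vendor's public key; the file name is illustrative
with open("vendor-pubkey.asc") as key_file:
    gpg.import_keys(key_file.read())

report = "Technical details, affected versions, steps to reproduce..."

# Encrypt the report so that only the vendor's private key can decrypt it;
# the recipient address is a placeholder
encrypted = gpg.encrypt(report, recipients=["security@vendor.example"],
                        always_trust=True, armor=True)

if encrypted.ok:
    print(str(encrypted))  # ASCII-armored ciphertext, safe to send over e-mail
else:
    print("Encryption failed:", encrypted.status)
```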

At first we were obviously scared and intimidated by the legal threats we would face should we release any information about the vulnerabilities we found. Again, we did it all free of charge and wanted to give the vendor a heads-up about the vulnerabilities so they could fix them before other actors found them and took advantage to cause harm.

After being greeted with such a negative response, we did not disclose any details about the issues to the vendor and decided we would adhere to our vulnerability disclosure policy, regardless of these attempts at intimidation. The security advisory will be released in forced-disclosure mode 90 days after the grace period expires. We do not want this to be perceived as any sort of revenge; rather, we believe that only by making the vulnerabilities public will the vendor finally react, as they may be pressured by their own clients to fix the issues.

I don’t practice Santeria, I ain’t got no crystal ball, but the future looks bright

Just recently, Peiter “Mudge” Zatko, an original member of the legendary L0pht, reappeared in the media with a new initiative: the creation of a test lab similar to the century-old Underwriters Laboratories (which, in fact, has itself newly established a “cyber” lab). Although this idea was first proposed by L0pht in 1999, it only seems to have materialized now that cyber security is on the tip of everyone’s tongue and on the agenda of board meetings and governments. With the ubiquity of mobile applications and the emergence of the Internet of Things, ideas like this are not only very welcome but much needed. The increased scrutiny from independent experts and organizations helps customers make informed, risk-based decisions about the software they buy.

As initiatives like these gain popularity and become common practice, we expect to see vendors react more positively to external vulnerability reports: independent public reports comparing the security of products give customers better insight, and vendors who do not take product security seriously will likely lose competitiveness and see their market share diminish.

Conclusion

It is a sad state of affairs that, more than 15 years after the IT security community started the debate on public disclosure, some software vendors still act in such a counterproductive manner.

The overall consensus in the information security industry is that vendors should appreciate, and even encourage, vulnerability reports from third-party experts and independent organizations. After all, vendors are essentially getting free QA that bolsters the resilience and security of the affected products, and they should be thankful for that.

New initiatives that aim to establish quantifiable metrics to formalize cyber-risk scoring and the security quality of products are very welcome. Given the current cyber security landscape across industry, government and society, these initiatives may bring to the table arguments that support the position long held by the information security community.

Hopefully we will soon leave this kind of discussion only between security researchers and product teams, far away from legal departments and courthouses.