Vulnerability Disclosure 101

“Don’t hate the finder, hate the vuln” — @k8em0

Ryan McGeehan
Starting Up Security
7 min read · May 8, 2015

--

Uh oh! Someone has revealed a vulnerability.

Time to panic!

A developer and researcher are pointing fingers at one another and we don’t know who to believe. As bystanders to a blame game, let’s understand the disclosure from all sides.

Goals

By the end of this article, you should be able to critique both the fixer's and the finder's roles in a vulnerability disclosure and form an opinion on where it went well or where it broke down.

If you are taking part in a disclosure debate, the following are good questions for critiquing each side of the story. Depending on your view of “least harm” with vulnerabilities, you’ll arrive at a more informed opinion on the disclosure.

This WIRED article is an example of disclosure gone awry.

Assumptions

  • All software has numerous, undiscovered security vulnerabilities.
  • All developers trade off vulnerability against usefulness, whether they know it or not.
  • Disclosure is impossible to get exactly right. We can cut people slack when they honestly seek “least harm”, since both fixer and finder face challenges in getting there.

Terms

This article describes a fixer and finder.

A fixer could be a single developer, a group of maintainers, a person in a basement, a huge web company, or a startup. They wrote the software that includes the discovered vulnerability and would be responsible for fixing it. Historically, this role has treated its vulnerabilities as taboo.

A finder could be a security researcher, a hacker, a random engineer, or a five-year-old. They found the vulnerability and are disclosing it to a fixer. Historically, this role has been wrongly penalized for disclosure.

Ask These Questions

Is there clearly a fixer behind the bug?

When a fixer is clearly accountable, we can critique the finder’s decision to involve the fixer. We can also critique the fixer’s decision to cooperate with the finder.

However, if there is no clear owner, there is a greater burden on the finder to disclose in a way that minimizes harm to others.

Some open source projects, protocols, and crypto standards may complicate disclosure. The Kaminsky bug in 2008 is a good example: Dan Kaminsky coordinated disclosure in a way he felt would minimize harm to others. There was no single clear fixer, since patching the software itself did not mean DNS was fixed across the internet. Dan had to do a lot of work and accept some disclosure risks to prevent the greater risk of an early leak.

Did the fixer invite vulnerability research?

A fixer could have a disclosure policy and/or a bug bounty program to actively invite research. A finding that goes through established disclosure channels to the fixer is very different from a finding that fell on deaf ears.

Developers who actively encourage vulnerability research demonstrate an honest commitment to continually improving security, rather than making unsubstantiated claims.

Best Case: The fixer invited vulnerability research before the bug was even found.

Was the fixer accessible?

Some fixers want findings sent through a myriad of email lists, bug trackers, or customer service forms. How bug reports are received varies widely between developers, but what matters most is responsiveness. The vast majority of bug reports from well-intentioned finders fall on deaf ears, so we should treat responsiveness on a fixer’s part as a positive sign.

Ideally, an engineer will ultimately have eyeballs on a finder’s research, not a lawyer or a customer service rep who won’t know what to do with it.

Best Case: The fixer was accepting disclosures in a simple, easy-to-find manner.

Was the Finder a strong communicator?

Keep in mind that security and engineering teams face a signal-to-noise problem. At Facebook, we received many hundreds of reports a day, and things would fall through if there was a multi-page rant and preamble before getting to a proof of concept.

We even had a formerly-legit researcher waste our time with a photoshopped XSS.

If the finding itself can be inspected at all, you can tell whether the report should be taken seriously.

Best Case: The finder made a best effort to include a strong proof of concept and left no excuse for the report to be ignored. The fixer could clearly see it was a legitimate vulnerability.
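
For illustration only, here is a minimal sketch of what a reproducible proof of concept might look like for a hypothetical reflected XSS report. The endpoint, parameter name, and payload are all assumptions; the point is that the fixer can run it and see the problem immediately.

    # Hypothetical proof-of-concept sketch for a reflected XSS report.
    # The target URL, parameter name, and payload are illustrative assumptions.
    import requests

    TARGET = "https://example.test/search"   # hypothetical endpoint
    PARAM = "q"                              # hypothetical parameter
    PAYLOAD = '"><script>alert(1)</script>'

    def reflected_unescaped(url, param, payload):
        """Return True if the payload is echoed back without HTML escaping."""
        resp = requests.get(url, params={param: payload}, timeout=10)
        return payload in resp.text

    if __name__ == "__main__":
        if reflected_unescaped(TARGET, PARAM, PAYLOAD):
            print("Payload reflected unescaped; attach this request/response to the report.")
        else:
            print("Not reproducible with this payload.")

A report that opens with something like this is far harder to ignore than a multi-page preamble.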

Was the fixer responsive?

Once a valid submission is sent to a fixer, start a clock. If they’re a huge conglomerate with many products and reports to sift through, a reasonable lag shouldn’t be a big surprise. As examples, HackerOne suggests 30 days, CERT/CC permits 45 days, and Project Zero over at Google is a strict 90 days.
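
Keeping that clock honest is trivial. Here is a back-of-the-envelope sketch (not an official tool from any of these organizations) that turns the example timelines above into expected disclosure dates:

    # Rough sketch: compute public-disclosure deadlines from the report date,
    # using the example timelines mentioned above (illustrative, not authoritative).
    from datetime import date, timedelta

    POLICIES = {"HackerOne guideline": 30, "CERT/CC": 45, "Google Project Zero": 90}

    def disclosure_deadlines(reported_on):
        return {name: reported_on + timedelta(days=days) for name, days in POLICIES.items()}

    for name, deadline in disclosure_deadlines(date(2015, 5, 8)).items():
        print(name + ": disclosure expected by " + deadline.isoformat())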

Faster is better; Twitter, for example, has shown excellent turnaround times.

The fixer should also be a strong communicator and work with the finder on the timeline, suggested fixes, and whatever else is possible.

Best Case: The fixer got back to the finder in a reasonable time and kept them in the loop until a resolution was reached (fix / won’t fix / etc.). The fixer stuck to their expected response time.

Did the finder adhere to a disclosure policy?

Disclosure programs typically ask finders to confidentially submit vulnerabilities to the fixer. For instance, if a finder told all of their friends on Twitter or published a blog post before disclosing to the fixer, they aren’t entitled to any special treatment in terms of bounty or fixer recognition. They’re more or less on their own and should expect no reward from the fixer.

A public vulnerability disclosure increases the likelihood of exploitation. It gives bad guys a meaningful opportunity to weaponize an exploit and hunt for those who are still unpatched. A private disclosure plan (as with the Kaminsky bug or Heartbleed) helps mitigate the vulnerability at scale until it eventually must become public, but is typically reserved for internet-affecting bugs.

There is cause for concern when a fixer attacks a non-malicious finder by any means, when a finder discloses directly to “bad guys”, or when the process otherwise goes off the rails.

Best Case: The fixer had a simple disclosure policy that protects the finder from harm and requests reasonable confidentiality for the disclosure. The finder doesn’t need to break confidentiality until the fixer resolves the issue.

Did the fixer come through?

Assuming everything goes well up to this point, a fix is released as quickly as possible, the fixer keeps the finder informed of progress, and whatever was promised as far as a monetary reward, recognition, or anything else comes through.

Best Case: The finder’s expectations for follow-up after the disclosure are met.

Was / Is the vulnerability exploited?

For particularly nasty vulnerabilities, the fixer should ideally have some confidence about whether the vulnerability was taken advantage of by criminals. If the finder took advantage of it themselves (outside of their research), that is straight-up illegal.

That is usually not the case. If the vulnerability was indeed exploited, everything in the case you’re looking at should simply be faster and more communicative on the fixer’s part. For a low-severity issue that wasn’t exploited, a more relaxed timeframe is reasonable.

Best Case: An actively exploited vulnerability is fixed rapidly, within hours or days (short of deployment challenges).

Severity

We should only consider panicking over the highest-severity vulnerabilities. Unfortunately, many disclosures become popularized when they’re not really putting many people at risk.

There are many ways to classify the severity of a vulnerability. One common framework is the CVSS, but it’s complicated. Here’s a rule of thumb:

Probability

How simple is this to exploit, and what do I need to exploit it?

Impact

If exploited, what is the damage? Embarrassment? Stolen Credit Cards? Eavesdropping? Jail time for dissidents? Death?

Scale

How many people will be impacted by this, if exploited? Thousands? Millions? Ask about probable circumstances of exploitation.

Finders and fixers will likely debate severity, with finders pushing for greater severity and fixers downplaying it. This is healthy, as it narrows us toward reality, as long as each claim is fact-checked.
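
To make the rule of thumb above concrete, here is a rough sketch in code. The 1-to-5 scales, weights, and thresholds are arbitrary assumptions for illustration; this is not CVSS or any formal scoring system.

    # Rough severity sketch based on the probability / impact / scale questions above.
    # Inputs and thresholds are illustrative assumptions, not a standard.
    def severity(probability, impact, scale):
        """Each input is 1 (low) to 5 (high); returns a coarse severity label."""
        score = probability * impact * scale    # ranges from 1 to 125
        if score >= 64:
            return "critical: maybe worth the panic"
        if score >= 27:
            return "high: fix and communicate quickly"
        if score >= 8:
            return "moderate: normal timelines"
        return "low: no big deal"

    # Example: trivial to exploit, stolen credentials, millions affected.
    print(severity(probability=5, impact=4, scale=5))    # critical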

Calibration

Let’s talk about what sorts of vulnerabilities and disclosures should be cause for concern.

No big deal

Vulnerabilities that are discovered, reported, and fixed within a reasonable time, with a healthy relationship between the fixer and the finder, are no big deal. This happens all the time and (strangely enough) is a sign of an extremely mature security program.

Improvements Needed

Vulnerabilities are discovered and reported with some delays or misunderstandings. Patches and workarounds might not work the first time. The fixer may have been hard to reach at first but eventually became somewhat responsive. The finder refused to give a proof of concept or made demands before providing it.

Totally Broken

The fixer is hostile to the finder. A bug is not fixed within a reasonable time. Cats and dogs are living together. Patches are not available and people are being victimized or breaches are occurring at an enormous scale. The finder didn’t bother to look at the disclosure policy, or exploited the bug themselves instead of providing a proof of concept.

Conclusion

If you have developed technology that others depend upon — treat vulnerabilities as inevitable and have a process for resolving them promptly. Anything less is purely irresponsible. Many software developers have dismissed this responsibility in the past, but this is fortunately becoming much easier to manage.

Celebrate companies that treat vulnerabilities with respect — and defend finders that are attacked for their research.

@magoo

I’m a security dude. Former Facebook, Coinbase, Co-Founder of HackerOne, and currently an advisor and consultant for a handful of startups. Incident Response and security team building is generally my thing, but I’m mostly all over the place.
