Tabletops for Bug Bounty

Improving a bug bounty program with fictional problems.

Ryan McGeehan
Starting Up Security
8 min read · Mar 28, 2017


Bug bounty and disclosure programs are pretty sensitive operations. It’s important to plan ahead for their potential problems, but this is much harder without access to shared knowledge on what could go wrong.

Let’s tabletop some bad situations that have actually happened to longer-running disclosure programs. A few of these happened to me personally and have shaped my opinions on disclosure.

Role playing these situations will help form your initial scope and policies. It will improve team-wide interactions with researchers and the speed of internal decision making when corner cases are discovered.

Also, if you’re generally a fan of tabletop exercises, go follow @badthingsdaily on Twitter.

A researcher is grossly overstating a “won’t fix” severity bug to the press.

Right off the bat, it’s very important to state that a false media cycle criticizing your security is your job to deal with. A communications team should not handle this themselves. You may find engineers who view this as strictly “PR’s problem” at the last minute, when you’d rather have them instantly engage with the incident. Set this precedent with your team.

You’ll want to explore your boundaries on public engagement. Would you go as far as calling a researcher a liar? Probably not. But what if they are lying? Would you be able to restrain your team from calling them out?

What if a researcher is just confused, or young, or operating on limited information? These may seem like details and nuance, but they matter when crafting a message that softly denies a claim without attacking someone’s ego or reputation. You could easily make a situation worse.

Do you have someone on staff who is comfortable entering a debate on Reddit or Hacker News? Or talking to a reporter in non-technical terms? Do you have a PR contact who can make the relationship happen?

Or will you strictly avoid public debate altogether?

Personal story ahead!

On several occasions, I’ve had to publicly comment on a vulnerability and refute its impact. This can suck. It’s not fun. There’s always a risk you’re wrong and you’ll embarrass yourself.

A misleading press cycle can cause increasing amounts of harm if left alone. You have to calibrate your response based on how defensible your assessment of risk is, and whether a researcher's claims are going viral.

One of my most infuriating experiences was when an Android app developer claimed he had found a MITM vulnerability in my employer’s mobile product at the time. As it turned out, he had forcibly installed a new root certificate authority with Charles (a developer’s proxy) and was surprised to see plaintext traffic pass through it.

Of course, this is intended behavior when you’ve tampered with your own device. Developers do this all the time to debug HTTPS.
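
As a rough sketch of that debugging workflow, here is roughly what it looks like to route your own traffic through Charles in Python. The API hostname and certificate path below are illustrative assumptions; 8888 is Charles’ default proxy port.

```python
import requests

# A developer inspecting their *own* HTTPS traffic: requests are routed
# through Charles on localhost, and the Charles root CA is trusted so the
# decrypted traffic shows up in the proxy UI.
# The API hostname and certificate path are illustrative assumptions.
resp = requests.get(
    "https://api.example.com/v1/me",
    proxies={"https": "http://localhost:8888"},  # Charles' default proxy port
    verify="/path/to/charles-root-ca.pem",       # trust the proxy's root CA
)
print(resp.status_code)
```

Nothing about this indicates a vulnerability; it only works because the device owner chose to trust the proxy’s certificate authority.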

However, the sheer excitement of finding a bug catfished this engineer into believing he had found a critical MITM vulnerability, and he disclosed it to us fully convinced it was a critical issue.

We very kindly closed this as a “won’t fix” and explained how he had essentially hacked himself. When you have an adversary modifying your device physically or with local code execution, most bets are off.

But this didn’t stop him from later generating significant “This Company’s App Is Dangerous And Delete It Now!” coverage by shopping a disclosure blog post around to the press, claiming HTTPS interception.

Imagine how you’d impulsively want to respond. Imagine restraining an entire team from responding. Instead, you have to play it cool and be respectful of everyone involved.

The only thing that stopped this press from spiraling out of control was our ability to hit the “Public Disclosure” button on the thread. This showed evidence of the researcher’s bad faith: months-old proof that he knowingly went on a press spree with a refuted vulnerability he had already acknowledged as a mistake. Most follow-up press requesting comment backed off, and the media cycle disappeared.

Additionally, I had brought in an outside mobile expert to comment on the thread because my own mobile-fu was weak. I didn’t want to refute the researcher publicly unless I was perfectly confident in my assessment and tone.

Consider the threshold that had to be met for Moxie to step up and publicly refute claims against the Signal protocol. Should you have a policy on public responses for when a widespread misconception about your security takes hold?

A researcher has found an extra-critical vulnerability and your maximum value does not make sense.

I was once not clear enough about Facebook’s minimum and maximum policy, and a reporter misquoted me as declaring that we had a “million dollar bounty”. Don’t do that. Have clear standards for your minimum and maximum, and allow for a bonus at your discretion.

My favorite part of bug bounty has always been the critical bug that comes out of left field. Sometimes you want to reward more at your own discretion; HackerOne has a bonus feature for exactly this. You may want to plan a budget and set policy for this sort of additional reward.

  • Who gets to sign off on this bonus?
  • What is an appropriate bonus?
  • How much bonus money do we have allocated?

When someone like Reginaldo comes along, you’ll have a strong urge to reconsider your maximum. Over time, you’ll want entire classes of bug to become rarer, allowing you to become more comfortable with larger payouts.

You’ve confirmed a vulnerability that will take unusually long to fix.

It’s a damned nightmare having to push a breaking security change to external developers, especially changes that can literally make or break businesses built on your platform. A large refactor in legacy code can also drag things out quite a bit. Platform issues are notorious for having far longer fix windows than other issues.

This is a serious situation to tabletop. On the communications side, exercise it by practicing the phrasing and transparency needed to ensure a researcher knows exactly what a longer fix window will involve.

Internally, have a clear escalation policy for when a roadmap-breaking vulnerability comes along. If timelines will exceed public expectations, you’ll need extra help and you shouldn’t have to spend much time explaining why.

A researcher has impacted your users or customers.

Take, for example, this situation between a researcher and Facebook. After several miscommunications, the researcher intentionally exploited the bug against arbitrary Facebook users’ walls (including Zuck’s) to get the security team’s attention. This is a case where mistakes can be identified on both sides of the fence. How would your policies apply?

You may also find a researcher who has unintentionally impacted others. Applications usually behave somewhat unpredictably when bug conditions are met, so some tolerance should be expected when a problem occurs here.

Who will be your judge on intent?

Before jumping to a conclusion about misbehavior, be sure to thoroughly review whether they actually violated any good-faith policy you might have, and be clear on who can best exercise this judgment. Be very careful about the precedent you set, because it will become publicly known among researchers.

A researcher may have accessed confidential data.

This is another situation where intent matters quite a bit. A great example is a DFIR team getting fired up over a successful SQLi, discovered through a slow query log or a large amount of outbound data from a host that normally wouldn’t show exfil behavior.
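
For a concrete flavor of what trips the DFIR alarm, here is a minimal sketch that flags slow-query-log entries carrying classic injection markers. The log path and pattern list are illustrative assumptions, not a real detection product.

```python
import re

# Hypothetical detection sketch: flag slow-query-log lines that carry
# classic SQL injection markers. Log path and patterns are assumptions.
SQLI_MARKERS = re.compile(
    r"(union\s+select|information_schema|load_file\s*\(|sleep\s*\(|benchmark\s*\()",
    re.IGNORECASE,
)

def suspicious_queries(slow_log_path):
    """Yield slow-log lines that look like injection probes."""
    with open(slow_log_path, errors="ignore") as log:
        for line in log:
            if SQLI_MARKERS.search(line):
                yield line.strip()

if __name__ == "__main__":
    for hit in suspicious_queries("/var/log/mysql/mysql-slow.log"):
        print("possible SQLi:", hit)
```

An alert like this says nothing about intent, though, and the DFIR team’s first instinct rarely accounts for that.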

They’ll want to attribute and sue the person who caused it.

The reality is that sometimes a researcher will have no clue that they’ve successfully exploited a vulnerability. The DFIR side will generally lean towards thinking an attacker knows exactly what they’re doing, and they probably should. But this isn’t always the case, and in vulnerability finding, it’s often not.

Additionally, there are cases where a researcher is genuinely surprised by the disproportionate amount of risk they’ve uncovered with a small amount of effort, and may write to you in a panic about the exposure. They’ll be doing so to cover themselves under your policies, so treat them well.

You also have the case where a researcher has gone much farther than your team would be comfortable with, and you’ll have to assess whether your policies would support you in rejecting this as bad behavior.

In any case, this is the type of situation to tabletop with legal counsel. Ask what the plan would be if an outside researcher came across sensitive data. Would you ask them to sign a confidentiality agreement? Would this be part of your program’s policies? Would you be subject to any form of breach notification?

You’ve discovered a researcher’s finding before they could report it.

More sophisticated appsec programs will likely have canaries or tainting frameworks that will alert a security team about successfully exploited bugs.
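
As a minimal sketch of the canary idea, the concept boils down to planting a record that no legitimate code path should ever touch. The token value, data-access hook, and paging function below are all hypothetical.

```python
import logging

# Minimal canary/honeytoken sketch: a fake record that nothing legitimate
# should ever read. Token value and alerting hook are hypothetical.
CANARY_TOKEN = "usr_000000_canary"

def on_record_access(record_id, request_context):
    """Call from the data-access layer whenever a record is read."""
    if record_id == CANARY_TOKEN:
        # Nothing legitimate touches the canary, so this read likely means
        # a bug was successfully exploited.
        logging.critical(
            "canary record accessed: source_ip=%s path=%s",
            request_context.get("source_ip"),
            request_context.get("path"),
        )
        page_security_team(request_context)

def page_security_team(context):
    # Placeholder: wire this into your real paging/alerting system.
    pass
```

When something like this fires, you may know about an exploited bug before the researcher ever files a report, which is exactly the scenario here.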

Jump to the 22-minute mark of Zane Lackey’s talk on appsec at Etsy for good example scenarios where this could happen to you.

After an investigation, you may find yourself holding a vulnerability before it was ever reported to you. Your team may disagree about whether the researcher’s contribution still matters. The reward may also be up for debate: should they receive the full amount?

Overall, this is a good scenario to smooth those arguments out and explore your team’s opinions before the situation becomes reality.

A researcher has reported an 0day in an external dependency.

You may receive an issue that compels you to disclose a vulnerability upstream. This may put your organization in a place it wouldn’t normally be in, so be sure you have the ability to discuss this publicly or privately with the security team upstream.

As an example, the Greenhouse bug bounty program found two CVEs, in Solr and Rails. Uber had a vulnerability go upstream to Code42 (Crashplan).

Additionally, you may want to have an outbound disclosure policy on how you’ll react if the upstream fix does not occur. Will you be like Google, and disclose in 90 days? Or will you be more conservative/liberal in how strict you are? Who will manage these opinions on disclosure that will eventually be codified as policy?

A researcher has reported an extremely sophisticated duplicate bug.

It’s important to be strict on duplicates, but it’s also important to nurture a community that will disclose bugs to you. At some point, you may receive a duplicate of a sophisticated, critical, or otherwise “right where I want to receive bugs” issue from a hacker who is really effective.

Will you have exceptions to a duplicate policy to encourage good researchers to come back?

A high reputation researcher is a complete and total asshole.

Sometimes you have to protect the morale and sanity of your team, and no matter how valuable the researcher, they cannot be rewarded. Consider how your triage team will escalate atrocious behavior, and how you would ban rewards from going their way in the future.

Additionally, consider how you would need to explain the situation publicly if you were pushed to.

Conclusion

Bug bounty and disclosure programs bear fruit when you tend to them correctly. It’s useful to know how they may go wrong, so you can better manage getting it right.

@magoo

I write security stuff on Medium.
