Some shirts from Bugcrowd, HackerOne, and Synack — three popular bug bounty platforms

Bug Bounty Operating Principles

Jason Puglisi
6 min read · Apr 29, 2022


Bug bounty programs let companies engage with the security community and benefit from a wide variety of skills. While operating a few different programs, I’ve thought a lot about how to be fair, generous, and grateful to researchers.

Here are my personal operating principles to help others start or improve their programs. I hope they also give researchers a glimpse into how the other side makes decisions.

1. Define clear scope, but be lenient with exclusions

Program scopes define what researchers may test and how, but submissions sometimes tread the line of in-scope, or fall clearly out-of-scope. As long as these aren't intentional, repeated violations, I've found it's best to accept and process them like normal submissions.

Specifying in-scope services or attack types is easy, but it’s more difficult to exclude them. What if someone takes over an abandoned domain, hijacks a social media link, or exploits a known vulnerability in your outdated ticketing system?

I typically prohibit social engineering, denial-of-service (DoS), brute-force/spam, and other attacks that may be riskier for a third party to perform. This is pretty standard, but it's harder to say whether I should accept a report about a CVE in Jira. We didn't write the code, but maybe we haven't updated the system in months. Or maybe the CVE was released yesterday, and remediation is already in progress.

A good rule of thumb is that if someone internally isn’t already aware of the issue, it’s worth accepting. It’s okay to reject a report and note that your team is addressing yesterday’s CVE. When someone reports an issue that’s been open for a while though, I’ll often accept it like a new issue, and thank the researcher for bringing it back to our attention. This puts pressure on us to fix things, and keeps researchers happy.

The same principle applies when someone reports a property not explicitly in-scope, like an abandoned subdomain, or a social media profile they registered that was linked on your company’s homepage. Is it worth ignoring a legitimate security issue just because it was out-of-scope? Probably not; someone still put time into that finding and helped your company by reporting it.
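
The lenient approach above can be sketched as a simple decision rule: hard-reject only the risky attack classes, and route out-of-scope-but-legitimate findings to a human instead of dismissing them. This is a minimal illustration; the domain names and attack-type labels are hypothetical, not from any real program.

```python
# Hypothetical sketch: scope rules as data, with a lenient fallback.
# All names and categories here are illustrative.

IN_SCOPE_DOMAINS = {"app.example.com", "api.example.com"}
PROHIBITED_ATTACKS = {"social-engineering", "dos", "brute-force", "spam"}

def triage_decision(asset: str, attack_type: str) -> str:
    """Classify a submission against the program scope."""
    if attack_type in PROHIBITED_ATTACKS:
        return "reject"   # too risky for a third party to perform
    if asset in IN_SCOPE_DOMAINS:
        return "accept"   # clearly in scope
    # Lenient fallback: out-of-scope assets still get a human look,
    # e.g. an abandoned subdomain or a hijacked social media link.
    return "review"

print(triage_decision("app.example.com", "xss"))                 # accept
print(triage_decision("old.example.com", "subdomain-takeover"))  # review
```

The point of the `review` branch is that "not in scope" and "not worth reading" are different outcomes.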

2. Empower researchers, and accept testing in prod

You should strive to provide the best resources possible for researchers to learn about your systems and test them effectively. But no matter how easy you make it to test in staging, you'll be happier the sooner you accept that people will test in prod anyway.

I’ve put a ton of work into program briefs, scope documents, testing guides, and even custom staging environments for bug bounty testing. The lesson I’ve learned is that most people just want to hack. You may get lucky and see a few account signups in that special environment, but researchers will test in prod whether you like it or not. This is the nature of bug bounty testing, but it’s not bad when you realize people generally aren’t performing dangerous or destructive attacks.

Even for a small audience, I find creating those resources worth it. If just a few researchers benefit from them, you’ll end up with more impactful reports. Make it easy for everyone to test, including in prod, but empower those attentive people the most.

3. Do your own triage, and clean up reports for internal teams

Bug bounty platforms often offer triage services to reduce spam. These services can save a lot of time as a first line of response, but provide less value when validating issues or determining severity. External triage teams don't have the same knowledge as your teams, and it's worth having someone on the inside confirm issues and their impact. You may find limitations in how certain vulnerabilities can be exploited, or realize an issue is worse than reported.

Researchers also don't have the full picture of your systems, and usually offer generic advice on root cause and remediation. Have your internal triage person write a concise description of the issue, its root cause, and the appropriate remediation.

4. Practice gratitude, and give researchers the benefit of the doubt

Dismissing issues is easy when low-quality or vague reports come in. Challenge initial assumptions and take a second look at questionable submissions, even if just asking the researcher for more detail. If the report is still lacking, your team may need to do extra investigation, but could end up with a valid issue all the same. Knowing something is better than nothing.

Assuming positive intent usually provides the best results. Taking all reports seriously builds trust with the security community, and helps prevent embarrassing mistakes. Nobody wants to dismiss a report and see a blog post detailing not only the issue, but also the poor response to it.

5. Communicate with honesty and transparency, and on the same technical level

I've gotten a lot of compliments from researchers on transparency. As an engineer, it always seemed natural to provide technical details on what went wrong, how we triaged it, how we fixed it, and how we validated the fix. It never occurred to me that many reports end with nothing more than a triaged label and a reward.

Technical updates establish confidence in severity and reward decisions, and build trust with researchers. They show you have people who understand issues and care about fixing them. Even better is sharing when a report uncovers similar issues, or has impact the researcher didn't consider.

6. Set reward ranges based on scope maturity, and be consistent with decisions

Getting a flood of high-severity reports and paying out large bounties for each one is a scary thought. Keeping reward amounts competitive with industry peers is a noble goal, and attracts good researchers to your program, but it can spiral out of control if you're not prepared.

Consider how much security testing has already been conducted on your systems. If it’s a lot, you can confidently set bounties high, as researchers need to work harder to find issues. Assets with little prior security testing are easier to attack, making it okay to start off with lower bounties. As reports slow down, you can raise bounties to attract more researcher effort.

No matter the ranges, it's important to show researchers respect and stick to them. Try to use objective resources for deciding severity, like Bugcrowd's Vulnerability Rating Taxonomy or CVSS scores. That said, reward ranges give you some flexibility, and it doesn't hurt to award more for reports that are well-written, or issues that took a lot of effort to find. When downgrading severity, give the researcher a chance to appeal or come to an agreement.
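
One way to keep decisions consistent is to anchor each severity band to a fixed reward range and use the flexibility within the band for report quality. A minimal sketch, with made-up dollar amounts; the CVSS cutoffs are the standard v3.x qualitative bands, but everything else is an assumption:

```python
# Illustrative only: reward ranges per severity band. Amounts are invented.
REWARD_RANGES = {
    "critical": (3000, 6000),
    "high":     (1000, 3000),
    "medium":   (300, 1000),
    "low":      (50, 300),
}

def severity_band(cvss: float) -> str:
    """Map a CVSS 3.x base score to a band (standard qualitative cutoffs)."""
    if cvss >= 9.0:
        return "critical"
    if cvss >= 7.0:
        return "high"
    if cvss >= 4.0:
        return "medium"
    return "low"

def suggest_bounty(cvss: float, quality_bonus: float = 0.0) -> int:
    """Pick a bounty within the band's range; quality_bonus in [0, 1]
    rewards well-written reports or issues that took unusual effort."""
    low, high = REWARD_RANGES[severity_band(cvss)]
    bonus = min(max(quality_bonus, 0.0), 1.0)
    return int(low + (high - low) * bonus)

print(suggest_bounty(8.1))       # 1000 (bottom of the "high" range)
print(suggest_bounty(8.1, 0.5))  # 2000 (same severity, better report)
```

Keeping the bands in data rather than in someone's head makes it easier to stick to them, and to explain a decision when a researcher appeals.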

7. Find ways to make your program unique, and keep it fun for researchers

High bounties can get the best eyes on your program, but keeping things exciting helps as well. Think about what sets your program apart from industry competitors and everyone else on the platform. Can you send researchers IoT devices or other hardware (maybe once they’ve demonstrated a level of qualification)? Or host a live hacking session at a security conference?

Imagine letting someone take an electric scooter home if they manage to unlock it, or letting them collect a bounty from your point of sale’s cash drawer when they pop it open. The cost of this isn’t much considering the bounties you already pay. Even adding new systems to your scope and sending out announcements attracts attention, and may cause people to re-examine older stuff.

8. Work toward going public, with metrics to be proud of

Launching a public bug bounty program is great when you're confident in your systems' security. Many programs start off private, though, in order to ease into reports and payments. You can give access to more researchers as you become comfortable with the reports coming in.

Even if this takes a while, it's worth keeping the goal of turning your program public. Not only will this attract the widest range of skills, it will also display confidence in your security and show that your company has people who care about it.

Remember that going public makes it easier than ever for people to see and talk about your response metrics — things like how quickly you reply to submissions, pay bounties, and resolve issues. Letting these metrics slip can become an embarrassment, while keeping them low is worth bragging about. They might even add some positive pressure internally to fix issues!
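
Tracking those metrics is straightforward once you record a few timestamps per report. A small sketch of the computation; the field names and sample data are hypothetical:

```python
# Hypothetical report records: submission, first response, and resolution times.
from datetime import datetime
from statistics import mean

reports = [
    {"submitted": "2022-04-01T09:00", "first_response": "2022-04-01T13:00",
     "resolved": "2022-04-08T09:00"},
    {"submitted": "2022-04-03T10:00", "first_response": "2022-04-04T10:00",
     "resolved": "2022-04-20T10:00"},
]

def hours_between(start: str, end: str) -> float:
    """Elapsed hours between two ISO-ish timestamps."""
    fmt = "%Y-%m-%dT%H:%M"
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    return delta.total_seconds() / 3600

avg_first_response = mean(
    hours_between(r["submitted"], r["first_response"]) for r in reports)
avg_resolution = mean(
    hours_between(r["submitted"], r["resolved"]) for r in reports)

print(f"avg time to first response: {avg_first_response:.1f}h")  # 14.0h
print(f"avg time to resolution: {avg_resolution:.1f}h")          # 288.0h
```

Watching these numbers yourself, before going public, means the first people to notice a slipping response time are on your team rather than on Twitter.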

Closing thoughts

My advice comes from operating bug bounty programs and reflecting on how to create better experiences for researchers.

Recording these thoughts repeatedly in company wikis made me want to share them publicly. I’ve mainly worked on small/medium-sized programs, and other experiences may conflict with mine. Hopefully at least some of this resonates with people looking for help or different perspectives.

If you have counterpoints or advice of your own, feel free to leave a comment. I’m always eager to learn more and improve my processes!
