Part 5 — A Comprehensive Guide to Running a Bug Bounty Program

Julian Berton
14 min read · Jan 5, 2019


With the Year-of-the-Breach behind us (I feel like we say that every year), it’s important for businesses with publicly accessible assets storing sensitive data (websites, services, infrastructure) to set up a process for members of the general public to report security vulnerabilities discovered within their systems and applications.

The following post will cover:

  1. Benefits of a Vulnerability Disclosure Program (VDP).
  2. Options for running a Bug Bounty Program that allows security researchers to test your applications and collect rewards for vulnerabilities found.
  3. Lessons learnt from three years managing a VDP for SEEK Jobs and the associated risks.

What is a Vulnerability Disclosure Program?

A VDP provides a means for white-hat security researchers to safely and legally inform you about a security issue they discover within one of your public-facing web applications, services or infrastructure assets.

VDPs can start out as a simple static website that outlines how researchers can report an issue, usually by emailing a security@company address, accompanied by an internal policy on how to deal with the vulnerabilities reported. The policy may also cover things like which domains should not be tested, whether an issue is allowed to be shared publicly once fixed, and rules about allowed testing methods.
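One lightweight way to advertise your reporting channel is the proposed security.txt standard: a plain text file served from /.well-known/security.txt on your domain that lists how to reach your security team. A minimal sketch (the addresses and URLs below are placeholders, not SEEK’s):

```
Contact: mailto:security@example.com
Encryption: https://example.com/pgp-key.txt
Policy: https://example.com/responsible-disclosure
```

Some researchers check for this file before anything else, so it pairs nicely with the static VDP page.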

A VDP can also grow into something as complex as a system that rewards and recognises researchers who report valid security vulnerabilities, commonly called a Bug Bounty Program (BBP).

SEEK’s Vulnerability Disclosure Program landing page.

Bug Bounty Program

The concept of a Bug Bounty Program (BBP) has been around for a while and is catching on within the industry, with over 600 public bug bounty programs currently active worldwide according to Fire Bounty. BBPs have proven to be a great complementary control for your software security programme: they can be thought of as the last line of defence when other controls throughout the SDLC fail, and they are starting to gain traction in software security maturity frameworks such as the BSIMM.

BBPs become especially important for companies that deploy small, incremental code changes to production systems on a daily basis, or that are massive in size and have a large number of websites, services and infrastructure publicly exposed to the internet. In these environments it becomes impractical to perform manual security tasks for every code deployment (dynamic testing, code review, architecture analysis, threat modelling, etc.), so teams are forced to move towards automated security controls, which, if implemented effectively, can identify certain classes of vulnerabilities really well. Broadly speaking, however, automated controls will never be as good as humans at finding certain bugs, and they are often too slow or too false-positive prone to integrate into your CI/CD build pipeline.

In most cases, if implemented well (more on that below), a BBP provides a way to semi-continuously test your applications for security vulnerabilities by tapping into a pool of talented researchers with diverse skill sets from around the world.

Starting with a Vulnerability Disclosure Program

If you don’t have an internal policy and a publicly accessible website describing your process for disclosing a vulnerability, this should be your starting point before jumping into a BBP! Both the DOJ and Bugcrowd have detailed frameworks that will get you started :)

Can I run a Bug Bounty Program?

There are some considerations to running a BBP; some have workarounds, but others don’t. If your answer is “no” to most of the questions below, then you might want to seek some professional advice to figure out whether running a BBP is right for you:

  • Do you have security-aware people who can analyse researcher reports, determine the actual risk to the business and work with teams to fix the issues?
  • What is the security maturity of the websites you want to test? Have they been tested for security vulnerabilities in the past by a security professional? Do you have other security controls within the SDLC?
  • Could researchers test your production applications? If not, do you have a testing environment that is publicly available?
  • If testing will occur on production websites, can you block researchers if they are affecting customers or degrading service?
  • How fragile are your websites? Can they handle the extra traffic from researchers without falling over?
  • Do you have the budget and company buy-in to fix security issues in a timely manner?

Benefits you say?

If you are asked to explain why it’s worth taking on the extra risk (see the risks below), here are a few upsides:

  • Incentives are different. I’ve seen obscure and complex bugs reported that would likely have taken days or weeks to find, test and report, as researchers are less restricted by testing time constraints and are only paid when they find bugs, not for time spent testing.
  • Diverse skill sets. Some researchers specialise in a few bug classes and become really good at finding them (e.g. XSS, OAuth flaws, SQLi, subdomain takeovers).
  • Large pool of researchers. If you have several applications that need testing, or you want to rigorously test a certain website, then scaling up researchers can be straightforward, assuming you have the right level of incentives, a sensible scope and a well-managed program.
  • Fresh eyes. Rotating testing attention across a large pool of researchers can surface new issues: endpoints other researchers haven’t discovered, newly added functionality that was just introduced into the website, or bugs found with a new testing technique.
  • Public assurance. Having a VDP and running a public BBP is a good way to assure customers and the broader community that you take security seriously and are willing to back up that assurance with monetary rewards and a respectful response timeline.
  • Culture change. Vulnerabilities reported can help drive security culture change, buy-in and trust within your development teams and the company as a whole. Send regular updates on the value of the program, and feed the issues into your internal security training programme to prevent them from recurring.
  • Operational validation. Researcher traffic can look similar to real malicious traffic, so BBPs provide regular opportunities to validate your operational security alerts, controls and monitoring tools. Sometimes when you least expect it!

What are my options to get started?

Ok so hopefully the benefits above have sparked your interest. Now let’s look into the options:

  1. Run your own program completely from scratch like Google.
  2. Use a paid, managed service (Hackers-as-a-Service) to help you run the program (e.g. Bugcrowd or HackerOne).

For most companies it makes sense to use a HaaS provider, as it saves a lot of time, can be quicker to get started, makes the rollout less error-prone and will cost less to achieve the same outcome. Running a BBP from scratch means first thinking through the following:

  • How are you going to find researchers and entice them to test your site? If you are in the top 100 Alexa rankings like Facebook or Google, this might be a breeze. However, if you aren’t, then attracting talented researchers might require expensive marketing or higher reward payouts.
  • Once you have the researchers, how do they submit reports? You will need a standardised report format so you can get the details needed to easily replicate each issue.
  • Now that you have reports coming in, you will need someone to respond to each report, test its validity, check it’s not a duplicate and give it a priority.
  • It’s a valid bug! OK, we need to reward the researcher… What if they are in another country? How do you pay them? What if they don’t have PayPal?!

Hackers-as-a-Service

HaaS providers make running a BBP simpler by giving you the tools and guidance to set up and run a successful program quickly. If you can’t tell already, I highly recommend not starting from scratch! Here’s what most include within their service offerings:

  • A pool of researchers on tap, allowing the provider to instantly direct researchers to your applications without you having to market or promote your program.
  • A portal where researchers can submit bugs in a standard format, making replication and triage straightforward.
  • Security engineers who can help with issue validation, triaging, de-duping and communicating with researchers if you don’t have the people power in-house. Security-aware people are still required internally, but far less effort is involved.
  • Systems in place to pay researchers all around the world.
  • Experience running hundreds of programs, so they can advise on the best approach and strategy for your company’s requirements.
  • Different engagement options to help test the waters before diving into an ongoing program.

The main HaaS offerings as of the time of writing:

  1. A short invite-only program — Approximately 50 researchers with good reputation scores are invited to test a handful of your applications. After a short period of time, normally around two weeks, the program shuts down, researchers are paid and you are given a list of valid bugs to triage and prioritise.
  2. A private ongoing program — Runs continuously throughout the year. The number of researchers invited, the payout rewards and the scope are regularly tweaked to keep researchers engaged and motivated to test your applications, and to prevent your program from going stale.
  3. A public ongoing program — Similar to a private ongoing program, but it’s announced publicly and open to all researchers. The main reason to announce your program publicly is to get access to more researchers and, in turn, more eyes testing your apps.

What the brief!

Whether you go at it alone or with the help of a HaaS provider, you will eventually have to write down how researchers should interact with your program. This document is called a program brief. The brief is sent to researchers within the email invite, or published online if you are running a public program. Its purpose is to tell researchers what’s allowed, basically the rules of engagement, before they begin testing. The brief can include:

  • The websites you would like to put in scope for testing, and which are specifically out of scope. Usually a bigger scope or attack surface for researchers will lead to better results.
  • The types of testing that are not allowed (e.g. volume-based testing or automated scanning tools).
  • A mapping of bug types to reward ranges, and how you categorise the priority of the issues reported.

A helpful guide to writing a brief and what to include can be found here, and taking a look at what other companies have in their briefs is also a good way to see if you are on the right track.
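To make that concrete, here is a rough sketch of the key sections of a brief (the domains, amounts and rating taxonomy are made up for illustration):

```
In scope:      *.example.com
Out of scope:  legacy.example.com, third-party services
Not allowed:   DoS, volume-based testing, automated scanners, social engineering
Rewards:       P1 $5,000 | P2 $1,500 | P3 $500 | P4 $100
Priorities:    Rated against a public taxonomy (e.g. Bugcrowd’s VRT); final call at triage
Disclosure:    Coordinated; public disclosure only with written permission
```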

Researcher incentives — bounty vs kudos

One of the decisions you will have to make is what to give researchers for the bugs they report.

Offering money to incentivise researchers to test your applications is not the only option; however, I highly encourage you to use it liberally. If you want an effective program, offer cash, and lots of it. Simple as that :)

Here are the main motivators to get researchers testing your applications:

  • Money — An obvious motivator! The saying “you get what you pay for” certainly rings true for BBPs as well.
  • Swag — From t-shirts and stickers to things like free air miles.
  • Experience — Putting a list of bugs on their CV or publishing a blog post are good ways for researchers to show future employers that they know their stuff and are passionate about security testing. This may involve allowing researchers to publish their findings publicly, after the issue is fixed of course.
  • Reputation — Finding bugs improves a researcher’s ranking and score, giving them access to elusive private programs, and bragging rights!
  • Goodwill — An organisation’s mission statement and the work it contributes to society might inspire researchers to help the organisation succeed by making sure it’s secure, for the good of the community.
  • Challenge — Bug bounty hunting is similar to capture the flag (CTF) challenges, but on real applications!

Kudos-only programs are similar to receiving a gold star sticker in primary school… You might have been able to get away with it in the past, when there were only a handful of programs offering cash rewards. Researchers now have a lot more choice, with hundreds of programs offering cold hard cash. Not offering cash rewards will lead to fewer researchers participating in your program and will significantly reduce its attraction to top-quality researchers. This means fewer bugs reported, and engineering time wasted validating and responding to invalid or lower-quality bugs.

Testing on production systems

Should you allow researchers to test on your production systems or is it better to use a publicly available test environment?

Pros

  • You avoid building a separate test environment, which could require significant effort to set up, maintain and keep in sync with production.
  • Testing on production is a good way to verify that your current controls work as expected, and it allows you to practise your incident response playbooks, as testing traffic will emulate attackers and let you verify your monitoring and blocking controls.
  • If a valid bug is reported, you know it works in production, so you will have fewer false positives than with a testing environment that is not identical to production.
  • It improves security and performance awareness within engineering teams, as their alerts will often detect researchers when request rates spike above set thresholds while fuzzing APIs; this also tests the APIs’ resilience to higher loads (a rate-limiting sketch follows this list).
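On that last point, a per-IP rate limit at the edge is one way to make sure fuzzing traffic degrades gracefully instead of taking an API down. A minimal nginx sketch (the zone name, rate and upstream are illustrative assumptions, not tuned recommendations):

```
# http{} context: track request rates per client IP in a 10 MB shared zone
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    location /api/ {
        # allow short bursts, then reject the excess with HTTP 503
        limit_req zone=perip burst=20 nodelay;
        proxy_pass http://app_backend;  # placeholder upstream
    }
}
```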

Cons

  • Researchers are testing on a live system with real customers and real customer data. If your current controls cannot prevent researchers from affecting customers and their data, this may be an issue. Although keep in mind that production is already a black-hat hacker’s playground ;)
  • A test environment would have fewer testing restrictions, so it could allow testing of sensitive actions or parts of the website that are difficult to test in production (e.g. financial transactions, account deletion, posting messages publicly).
  • Researchers live and test all around the world. This makes it near impossible to enforce rules such as “testing during business hours only”, so it’s important to plan for 24/7 testing.

The Risks

When assessing the following risks, keep in mind that real attackers can already access, test and exploit issues within your production systems. Researchers are motivated by getting paid, increasing their HaaS reputation score and filling their CVs with experience, so it’s in their best interest to stick to the rules.

The following risks mostly apply if you are testing production systems and may differ depending on your business and program implementation.

1. A researcher could perform testing that brings down or disrupts production.

  • The program brief and most HaaS terms of service state that researchers must not perform a Denial of Service (DoS) attack on any of the customer’s assets.
  • Make sure you have controls in place to block researchers (e.g. an IP address block, or disabling their user account). These are only temporary fixes, but the researcher normally gets the hint (a sketch follows this list).
  • Make sure your systems are capable of scaling up to handle the extra load.
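For the IP-block control, something as simple as a deny rule at your web tier (or the equivalent rule in your WAF) usually sends the message. A hedged nginx sketch, using a documentation-range placeholder address:

```
server {
    # Temporarily block a researcher who is degrading service;
    # substitute the offending IP for the placeholder below.
    deny 203.0.113.42;
    allow all;
}
```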

2. A researcher could interact with and affect real customers or their data.

  • Researchers are told not to interact with real customers, at the risk of being banned from the service if they do.
  • Controls and processes can usually be put in place to detect when customers are being affected, so testing can be stopped.
  • Parts of the site that are too difficult to test without interacting with customers can be taken out of scope.

3. A researcher could exploit a vulnerability and steal sensitive data.

  • The brief states that researchers must never exploit the issues they find, must report them immediately, and must not exfiltrate any sensitive data.
  • Vulnerabilities that allow access to sensitive data can and should carry higher rewards, incentivising researchers to report the issue quickly and stop further testing.

4. A researcher could publicly disclose an issue without permission.

  • There are heavy penalties for this type of behaviour: the researcher risks receiving no reward, being kicked off the service and being banned from other programs.
  • Ensure that the business is able and prepared to remediate issues (especially high-risk ones) quickly, so that the impact is minimised if an issue does get released to the public.

Lessons learnt

Running a BBP for the last three years has not always been smooth sailing, with several memorable moments of learning. Here are a few that come to mind:

  • Not all testers follow the rules set out in the program brief. Researchers would report issues within out-of-scope applications. In most cases we still accepted the issues, as we agreed it was worth knowing about them. However, because it promotes bad behaviour, we often paid less and explained as much to the researcher.
  • When deciding which applications and URLs will be in scope, make sure you know who owns the applications and notify them about the program (this includes cloud providers like AWS). We didn’t notify a third-party provider that was hosting a few of our static pages, and this led to a short outage when their security controls blocked all customer traffic due to the researchers’ testing traffic.
  • It can be difficult to determine whether an attack was a real threat or a researcher testing. There are VPN options available that funnel most researcher traffic through a single IP address, but that puts up a barrier to testing our applications and risks losing researchers. Instead, we directed researchers to use their service-specific email address, so that we could at least distinguish their logged-in traffic from real threats (a sketch follows this list).
  • Make the brief short and sweet, containing only the most relevant information, and keep the language easy to understand for non-native English speakers. Researchers like hacking, not reading through boring instructions, so let them quickly get to what they do best!
  • Offer bonuses if you want to focus researchers’ attention on certain applications, URLs or vulnerability classes. For example, if the business really cares about vulnerabilities that allow access to certain types of sensitive data, then increase the rewards for those issues.
  • Respond to researchers quickly, respectfully and informatively. Imagine, as a researcher, spending a weekend identifying a vulnerability, writing a detailed report and submitting it, only to wait weeks for a response. This is a great way to get blacklisted by an entire community of researchers; word of a stale, unresponsive program travels fast.
  • If you are testing on production systems, researchers could trigger operational alerts, via say PagerDuty, to your on-call engineers. Prepare for this scenario: send out communications to all stakeholders and be responsive so you can help avoid similar alerts in the future. Sending regular email updates about high-risk issues found by researchers builds trust and articulates the value of the program, so the next 2am wake-up call might be forgiven. You don’t want to burn bridges with the teams helping fix the issues.
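On the traffic-attribution lesson above, tagging researcher accounts by a dedicated email domain makes the logged-in split easy to automate. A small Python sketch (the log columns and alias domain are assumptions, not our actual setup):

```python
import csv

# Placeholder: the email alias domain your HaaS provider (or you) issues to researchers
RESEARCHER_DOMAINS = {"researchers.example.com"}

def split_events(log_path):
    """Split application events into researcher traffic vs everything else."""
    researcher, unknown = [], []
    with open(log_path, newline="") as f:
        # Assumed CSV columns: timestamp, user_email, url
        for row in csv.DictReader(f):
            domain = (row.get("user_email") or "").lower().rsplit("@", 1)[-1]
            (researcher if domain in RESEARCHER_DOMAINS else unknown).append(row)
    return researcher, unknown
```

Anything that doesn’t match still needs to be treated as a potential real threat.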

Taking the first steps

Start by setting up your internal vulnerability disclosure processes and policies, including a publicly available static website describing how the community can report the vulnerabilities they discover.

Once this is in place, you can choose to mature your VDP by paying researchers. Depending on risk appetite, culture and security buy-in, it might take some convincing to prove the value of implementing a BBP. If you are finding it difficult, there are ways of introducing a BBP at an initially reduced risk, with the option to increase that risk over time. As an example, you could start by running a private, invite-only program that lasts two weeks, within a test environment, with only 5–10 researchers, scoped to one application and with testing forced through a VPN. Or some variation of the above.

Just keep in mind that increasing the barrier to entry and reducing the scope will limit what results you are able to achieve. Basically, the more attack surface the better!

BBPs are like any other security control or initiative: you will have to weigh up the risks, cost and effort, and decide if it’s worth investigating further.


Julian Berton

Security Engineer @seekjobs, OWASP Melbourne chapter lead, founder of appsecday.io. Twitter: @JulianBerton. Web: julianberton.com