Bootstrapping a Bug Bounty Program

Cheston Lee
Published in The Startup
Sep 25, 2019
“Why doesn’t mine look like that??” — Homer

Sometime after I arrived at Casper in the beginning of 2017, someone asked me why HackerOne was billing us. It surprised many that we even had a Bug Bounty, and nobody seemed to know who owned it or what had become of it. A recently departed colleague had taken the initiative and set up a bounty but had let it languish for nearly a year. This left us in a rough state, with a pile of reports and a bad reputation. I’ve always had an interest in application security, and at the time we did not have a formal security team, so I took the time to dig in and understand what we could do to get things back on track and establish our security program along the way. This article will cover what we did at Casper to revive a dead bug bounty, train the team and bring the organization along with us.

Getting to Square One

This was my first time managing a Bug Bounty, and though I was familiar with the arrangement, there was a lot more to do than clear out a queue of reports to get back to the starting line. First, we worked to align our organizational goals with the bounty program and with the HackerOne team to get our metrics in order and attract the right attention from researchers. If we were going to embark on rebooting our Bug Bounty, we were going to take it seriously, and that meant putting in extra effort to dig out of the hole we were in.

We committed to metrics and set our first goal around improving time to first response, which was over two weeks at that time, an abysmal rating that deterred many from engaging with our program. We also rewrote our posted policy, updated rewards for verified reports to meet industry standards, and clarified which projects were in scope for bounty. This is very important as it sets expectations between our team and the researchers.

This was a great start, but our teams were not used to seeing these issues come into their sprints and there was no security training at that time, so there were questions about what these issues were, how to address them and what best practices should be.

Setting the Groundwork

Working together with leadership across the Tech organization, we created processes for handling Bug Bounty reports, researcher response and delegation to individual engineering teams for resolution. As part of this rollout I led the Security Guild, which ran meetings for engineers, product managers and other team members covering topics such as “Intro to Bug Bounties” and “I was assigned an h1 ticket, now what?” on issue investigation and best practices. The Security Guild functioned as a means of education as well as a way of getting feedback from the teams. We started the meeting cadence weekly and then ramped down as teams became more comfortable.

We ran a separate meeting that we called simply “AppSec Triage” to regularly assess the status of the HackerOne queue, validate open reports, file tickets with the appropriate teams and check in on open issues from the prior sprint. We would meet prior to team planning to ensure valid reports were turned into issues and assigned to the right team. One member of the triage meeting would be assigned as the ‘on-call triage engineer’ to read each reported issue, attempt to verify it and assign an initial severity. If the issue was believed to be ‘Critical’ according to our severity chart, it was to be immediately escalated to the on-call engineer of the responsible team or the global engineering on-call. If it was not, then it was “ack’d” via an internal comment on the report and marked with an initial rating to be verified at triage time.
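To make that flow concrete, here is a minimal sketch of the initial triage decision. The Report type, the severity labels and the escalate/ack helpers are hypothetical stand-ins for illustration, not our actual tooling or the HackerOne API.

```python
from dataclasses import dataclass

@dataclass
class Report:
    id: int
    title: str
    initial_severity: str  # e.g. "low", "medium", "high", "critical"

def escalate_to_oncall(report: Report) -> None:
    # Stand-in for paging the responsible team's on-call
    # (or the global engineering on-call).
    print(f"PAGE: report {report.id} ({report.title}) marked critical")

def ack_report(report: Report, rating: str) -> None:
    # Stand-in for leaving an internal comment on the report.
    print(f"ACK: report {report.id} acknowledged with initial rating '{rating}'")

def handle_new_report(report: Report) -> None:
    """Initial pass by the on-call triage engineer."""
    if report.initial_severity == "critical":
        # Critical reports skip the weekly meeting and go straight to an on-call.
        escalate_to_oncall(report)
    else:
        # Everything else gets an internal ack and an initial rating,
        # to be verified at the next AppSec Triage meeting.
        ack_report(report, rating=report.initial_severity)

handle_new_report(Report(id=1234, title="Stored XSS in checkout", initial_severity="high"))
```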

Tracking Success

With such a commitment by the organization we had to set goals. We worked to establish success metrics, associated quarterly goals and SLAs to enforce via the AppSec Triage Team. Initially we aimed to clear the existing issue queue, triage issues once a week and decrease our response time from over two weeks to two days.

Response SLAs

  • Time to First Response, 8 hours, owned by the On-call Engineer
  • Time to Verification, 48 hours, owned by the On-call Engineer
  • Time to Bounty, 1 week, owned by the AppSec Triage Team
  • Time to Issue Resolution, 2 Weeks, owned by the individual Engineering Team
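For illustration, the SLAs above could be encoded as data and checked against open reports. The structure and field names below are purely a sketch under that assumption, not the tracking we actually used.

```python
from datetime import datetime, timedelta, timezone

# SLA targets from the list above, keyed by stage.
SLAS = {
    "first_response": timedelta(hours=8),   # On-call Engineer
    "verification":   timedelta(hours=48),  # On-call Engineer
    "bounty":         timedelta(weeks=1),   # AppSec Triage Team
    "resolution":     timedelta(weeks=2),   # Owning engineering team
}

def is_breached(stage: str, reported_at: datetime, now: datetime | None = None) -> bool:
    """True if a report has been waiting on this stage longer than its SLA."""
    now = now or datetime.now(timezone.utc)
    return now - reported_at > SLAS[stage]
```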

Initially there were a lot of reports to handle, and as we improved our response time, more researchers became interested in our program and were invited by HackerOne. This settled into a manageable number of issues to triage every week, though the cadence never seemed consistent. We had to learn that the program was a great way to get some initial eyes, but that it was up to us to retain interested talent in our program.

Working with the Community

To successfully create a bug bounty program there’s a lot to do inside your organization, but the researchers writing the reports are people too, and your team’s communication is key to getting good reports and retaining the right researchers. When we began cleaning up our queue, reducing our time to response and adjusting our policy, we saw a spike in reports, just what we were hoping for! The problem was that a lot of these reports were duplicates of already triaged issues, issues of low value or reports of poor quality. These quickly made up the majority of the reports, and the team felt like it was wasting a lot of time validating, researching and closing them.

To address these concerns we went back, again, to our policy and reviewed our program rules, secondary scopes and exclusions. We realized that there were entire classes of issues that we did not consider to be worthy of a bounty. You can take a look at the list, but it includes things such as reporting a vulnerable piece of software within 15 days of its disclosure. You don’t get to pick up every CVE and cash a bounty with every user of said software. We also decided to exclude scanner-generated reports, as these were oftentimes messy and lacked proper steps to verify.

Communication is 🔑

Of course that doesn’t cover communication with the researchers. Researchers have their own metrics they are striving for, and your organization’s ability to respond, decide a bounty and fix an issue in a timely manner impacts their standing on the platform as well; so they will be quick to remind you about a stale issue.

We made our best effort to reply promptly and with a professional attitude to everyone, but took special care with those who were abiding by the rules and providing useful reports. Those who returned got a first look, extra time and better bounties for sticking with the program. In general most researchers were professional and accepted the team’s decision, but sometimes you would get pushback or a rude researcher. If you feel you’re not being listened to, or are being disrespected, don’t waste your time. Cater to the researchers that are providing you with value and good feedback, even if you have to go out of your way. These folks are worth the effort and hopefully they come back again.

This is Critical, MUST ADDRESS!!!

When we started out with our program at Casper we wrote up our own severity brackets and used our judgement on anything that fell in between or didn’t fit in nicely. There was occasional disagreement between the team and researchers about the severity of particular issues, which led us to reconsider how we were classifying reports and what the bounty payout would be for a given issue. To that end we took a look at several popular classification schemes and settled on using CVSS, which gives us a way to evaluate the criteria for a particular vulnerability. We would then use that score to calculate, given a bounty range, where the payout should fall within that range.
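As a rough illustration of that calculation, the sketch below maps a CVSS v3 base score to a payout by interpolating within a severity bracket’s bounty range. The dollar amounts are invented for the example and are not our actual reward table.

```python
# (min_score, max_score, min_payout, max_payout) per CVSS v3 severity bracket.
# Dollar figures are made up for illustration only.
BRACKETS = {
    "low":      (0.1,  3.9,  100,  250),
    "medium":   (4.0,  6.9,  250,  750),
    "high":     (7.0,  8.9,  750, 2000),
    "critical": (9.0, 10.0, 2000, 5000),
}

def payout(score: float) -> int:
    """Interpolate a bounty within the bracket the CVSS score falls into."""
    for name, (lo_s, hi_s, lo_p, hi_p) in BRACKETS.items():
        if lo_s <= score <= hi_s:
            fraction = (score - lo_s) / (hi_s - lo_s)
            return round(lo_p + (hi_p - lo_p) * fraction)
    raise ValueError(f"CVSS score {score} is outside the 0.1-10.0 range")

# e.g. payout(7.5) -> 1079, about a quarter of the way into the "high" range.
```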

Is This in Scope?

On occasion we would receive a report that was technically out of scope according to our policy but was still valuable to us. This caused us to reconsider our stance and add what we were calling ‘secondary scopes’: properties that were not created or maintained by Casper but that held value to the organization. These secondary scopes came with a reduced bounty, as they were not where we wanted researchers to train their focus, but we realized it was important to reward good work that provides value to the company.

Getting in the Groove

Keep in mind that we were still digging ourselves out of a hole at the start. There was initially a lot to validate and triage from the nearly year-long pileup of reports. The upfront commitment was a big one for the teams, but we worked together to lay out a roadmap and, during our quarterly planning exercise, got buy-in from leadership and the individual teams.

We’d budgeted around 20% of each team’s time that first quarter to resolve security issues emanating from our bounty program. This was a pretty big chunk of time that the teams were used to having for other work. To their credit, and the organization’s, there was an awareness of the importance of security, but of course it meant saying ‘no’ to more things in the planning process. Regular feedback was gathered through our triage meetings, which were open to anyone to attend, the Security Guild and our bi-weekly all-hands Q&A.

After a while, once we had cleared the initial queue of issues, we were able to bring the per-quarter commitment down to around 10% per team. We also adjusted SLAs and meeting cadence as reports became a regular occurrence. We invited anyone who had an interest to attend our triage meetings so that we could learn together how to validate reports, award bounties and get a feel for how to interact with the community.

In the End

About three to four months in, the queue had been reined in (~80 verified reports) and we were seeing new reports in the single digits per week; sometimes a week would go by with no reports at all. At some point we became concerned that we were not getting enough attention; clearly we had not fixed ALL of the security issues there were to find. We reached out to HackerOne for some tips on keeping our program fresh and maintaining the interest of researchers. They recommended that we revise our bounty policy and rewards to capture more attention. This worked! We saw an uptick in interest as we raised our bounties to be competitive with larger organizations, and even a simple policy update will let researchers know that you are still engaged with the program.

When reviving the bug bounty program, the aim was not purely to fix more bugs but to increase security awareness across the company and develop a culture of security. To that end many conversations were sparked, discussions had, tools introduced, and we’d begun including security and threat modeling in our technical design review process. Not everything was easy; there were tense conversations, pushback from different places and budget/legal roadblocks to clear, but in the end security was a priority to our customers, to our partners and to ourselves as a company.

In the next installment we’ll get into taking the program from private and self-managed to a managed public program and what the outcome was for us.

Big shout-outs to the team at Casper, working hard with what they’ve got, and to HackerOne, for being good partners and answering our occasionally grumpy emails with good humor.
