Bugs Wanted Dead or Alive — A New Approach to Responsible Disclosure for All

Curtis Brazzell
Oct 10

“What’s all the hubbub, bub?”

In this blog I’d like to talk to you about hunting bugs in your environment. After all, a large part of proactive security is finding and eliminating bugs before your adversary can leverage them against you. It’s not a point-in-time task; it’s a constant battle, since new issues can surface at any time. We’re in an ongoing bug-squashing frenzy for the foreseeable future, and the outlaws can weaponize and script exploitation of these bugs in near-real time. It’s the wild, wild west out there, folks! After this you’ll be the “hootin’est, tootin’est, shootin’est, bob-tail wildcat, in the west!”

No matter who you are or what your organization’s security maturity level is, there’s always room for improvement. I also want to discuss responsible disclosure in the form of Vulnerability Disclosure Programs (VDPs), their current limitations, and a potential solution: a simple, standardized approach that everyone can adopt, no matter how small or large your business is.

Check Your Posture

Honestly, How Many People Just Sat Up Straighter in Their Chairs?

Depending on the security maturity level of your organization, you have some options when it comes to squashing bugs. Maybe your security posture is hardened and you’re ready to participate in a bug bounty program or red team assessment. Perhaps you’re just getting started and need to begin with vulnerability scans or penetration testing first. Wherever you are, you have options for continuing to strengthen that posture, and we’ll discuss the pros and cons of each. The end goal is the same either way: we want to exterminate these bugs first!

A common pitfall I see is when a client wants to put the proverbial cart before the horse. They’ve never done a penetration test before, but they want to go “all in” and start with an advanced Red Team. Don’t get me wrong, it’s wonderful that they’re ready to take this on and start taking security vulnerabilities seriously. However, I don’t think there’s a lot of value in running before you can walk. There may be more value in starting with something like a vulnerability scan to eliminate the low-hanging fruit before simulating a nation-state attacker. In my opinion, a red team should be leveraged to test an organization’s defenses after it feels it has fortified its environment as best it can. They’re seeing how they hold up against the storm and how good their defenses are at identifying and stopping potential threats. Otherwise it’s like shooting fish in a barrel: the tester gets in through the easy stuff without ever having to try, and you never become aware of the other techniques an attacker might have available. Tired of these metaphors yet? 😃

“Say your prayers, Varmint!”

  • Vulnerability Scanning

Everyone should be doing some level of external and internal vulnerability scanning on a recurring schedule. This is the easiest way to identify bugs and prevent scripted and other non-targeted attacks against your assets. It’s also a good way to verify that your patch management process is working properly (you are patching, right?). Lastly, there’s the added benefit of catching configuration mistakes and keeping change management in check.

There are a couple of different types of vulnerability scanning, but we typically talk about scanning networks and web applications. Either can be unauthenticated or authenticated, and in the case of AppSec, dynamic or static (code analysis). If internal resources are limited, you can have this managed for you as a service; if budget is a problem, you may opt to run your own scanner in-house. Either way, this is something everyone should be doing, whether they’re Swiss cheese or Fort Knox.

  • Penetration Testing

I have to say this again: Penetration Testing is not the same as Vulnerability Scanning. Pentesting expands upon the discovery of vulnerabilities by actively exploiting them in order to gain access to the environment and the sensitive data within. Many tools and manual methodologies are used that are not part of a typical vulnerability scan. Most organizations today perform pentesting at least once a year in addition to regular vulnerability scanning.

  • Red Team

Red teaming expands upon penetration testing by simulating a real-world targeted attack against your organization. Oftentimes more time is allocated for Open Source Intelligence (OSINT) and reconnaissance, and social engineering is added to the scope. Less information is given up front by the client, and the engagement is very much a “black-box” approach.

  • Bug Bounty

If you’ve been doing the above for a while and you’re confident in your stance, you may be ready for a bug bounty program to see how you hold up against a larger attack surface. There are private programs for those who want to test the waters and control the testing process, as well as public programs for those who are ready to open the flood gates to everyone. There are some pros and cons involved with bug bounties, which I’ll touch on briefly.

The benefits are pretty obvious, I think. Done properly, a program can be more effective at uncovering bugs and cost less than a traditional red team or pentest engagement. With a larger talent pool of financially motivated bug hunters, things are likely to be found that wouldn’t be within the constraints of a normal pentest. I know from personal experience: I’ve found hard-to-find bugs in software before the vendor had a bug bounty program, and when I revisited years later, the issues had been resolved. Private programs can also enforce a non-disclosure agreement (NDA), which limits bad publicity when a bug is discovered.

However, done improperly or before you’re ready, a program can be much more costly, depending on the reward system agreed upon and how public your program is. Don’t be Apple and set your payouts at $1.5M if you’re just getting started. Also, the flip side of that larger talent pool is that you’re likely to have many more inexperienced people testing your resources. This can be a problem if they ignore or fail to follow your scope or testing guidelines, and it can negatively impact the availability of your environment. It’s also a full-time job for someone to respond to and sort through submissions. Reports vary from tester to tester, and many don’t qualify as actionable security issues.

Another thing I’ve noticed is that an inexperienced tester may find a low-hanging vulnerability such as Cross-Site Scripting (XSS) but lack the knowledge and experience to recognize that it could be chained with another vulnerability to further an attack. You may respond to and fix that one issue while missing the bigger risk if it shows up elsewhere, or fail to properly understand the potential impact.

  • Responsible Disclosure / Vulnerability Disclosure Program (VDP)

Finally, the main point of this blog! Do you (yes, you!) have a VDP? In short, a VDP is a program or policy that defines your preferred process for a third party or researcher to disclose a vulnerability to you. Let me be clear: you WANT to encourage this. As a vendor, white hat researchers will likely come across issues over time, and you want them reported to you before the black hats find them (and they will!). You don’t have to pay out monetary rewards, but if you can swing it, that’s just added incentive to report. Even without rewards, you’ll hopefully have people responsibly disclosing bugs just to do the right thing. If you do this, you’re getting free security consulting!

History is filled with examples of responsible disclosure attempts handled poorly by vendors or researchers, with trust lost in the process. Mostly due to a lack of understanding, or from past experiences gone badly, vendors often feel threatened or even blackmailed by requests for rewards in exchange for proof-of-concepts (PoCs). Researchers have been prosecuted as a result of “hacking” without permission, and the scene has been an ugly one for some time, with the security community all but giving up on responsible disclosure. It’s hard to fault a researcher for being unwilling to take a (sometimes great) personal risk for little or no reward just to do the right thing.

I personally have disclosed dozens of bugs to vendors and open source projects because I believe it helps make the Internet a safer place for everyone. Corny, I know, but I genuinely feel passionate about this. Sometimes the motivation is selfish: the bug has the potential to impact my own information sitting in a database, and I want to help resolve the issue. I’ll often spend a lot of time sifting through social media, the “about us” sections of websites, and various other sources to find the best security contact to report the issue to. I’ll also check whether they have a current bug bounty. If they don’t, more often than not I’ll take the time to draft a report and send it via a third party like US-CERT, or I’ll send it anonymously from a disposable email account over a VPN so it can’t easily be traced back to me. This is sad, because it makes me feel like a criminal when all I want to do is help, and all of this is time consuming.

I often get asked by other white hats what to do when they stumble across something like this. I typically ask, “Well, are you hoping to get something out of this? How did you discover it?” I try to make them think through the risk versus the potential reward. If the way they found it was aggressive, the report is more likely to be received poorly. Oftentimes the researcher decides it’s simply not worth taking the risk at all, which serves no one in the end, all because there isn’t enough information to determine how the company will respond. Sometimes vendors are grateful and understand the benefit; other times they react defensively.

So what’s the solution? In my opinion, EVERYONE should have a VDP in place, no matter how small or large the organization. The VDP has to be easy to find, contain clear and concise information, make the process quick and easy for the researcher, and most importantly, reassure the researcher that they will not be prosecuted for reporting issues in good faith. This last point is called a “Safe Harbor” promise. In doing my research for this blog, I came across a wonderful GitHub repo run by https://disclose.io/, where they’re attempting to standardize safe harbor language and make it easy for anyone to apply a template. It seems like a wonderful resource for exactly that.

Also notice above how I said “quick and easy”, which is key when you want to encourage responsible disclosure, especially when there’s no financial incentive for researchers to report bugs. If you make the specifics of the VDP difficult to locate, such as the primary point of contact, or researchers feel there’s a risk of being prosecuted, the likelihood of responsible disclosure will be low. That means the bug will remain until someone else finds it, which could mean a breach for you. Just last week I reported a bug with a pretty detailed HTML report, only to have the vendor come back and ask for a proof-of-concept video. Because customer information was involved and had to be redacted, that would have taken considerable effort. I ended up opting to move on and offered to discuss it over a conference call if they needed further assistance.

The trouble with VDPs today is that hardly anyone implements them. The National Institute of Standards and Technology (NIST) incorporated VDP language into its Cybersecurity Framework in 2018, but speaking with my compliance team internally, they have yet to run into a client they’ve helped implement or review one for. In fact, according to HackerOne, 93% of the Forbes Global 2000 do not have a VDP. I personally believe one of the reasons for this is the lack of standardization. The National Telecommunications and Information Administration (NTIA) has a template and BugCrowd offers one as well, which are wonderful efforts, but there are no guidelines on where to host the policy consistently on each site, and it’s a lot for a researcher to read through. We need something better, which leads me to an idea I have and a concept of what the future could look like. In short, I’m hoping to create a movement that can be easily adopted by all in order to make the Internet a safer place by encouraging (not discouraging) responsible disclosure. It would be both standardized and parsable, making automation possible.

I’m glad you asked, Foghorn! Well, as a researcher, I want one place I can go to know for sure whether a VDP is in place, and as a former administrator, I’d want to be able to implement one easily. If a VDP is in place, I want to quickly determine whether there’s a safe harbor promise and who the primary contact is for reporting security issues. I also want an easy way to report, because in all honesty, if I have to spend a lot of my free time building PoC videos and custom documentation, it may not be worth my time.

As I think about the problem and potential standardized, automated solutions, a few spring to mind. An email alias everyone adopts, such as security@org.com, could work, but that would be difficult to keep consistent, and you’d still have to inquire about such a program first. Likewise, you could try to get everyone to reserve “/vdp” as an application path or subdomain for the VDP policy, but that’s not always practical and can’t easily be parsed. What may work is a centralized, non-profit database acting as an authority for everyone, but then you run into issues of trust, maintenance, and ownership, and things get complicated quickly.

As I thought more about this, I couldn’t help but be reminded of the robots.txt files hosted in the root directories of web servers everywhere. As most of you are already aware, a robots.txt file is a plain text file meant for search engine bots to parse so they know which contents of a web site should or shouldn’t be spidered. It isn’t seen by your typical user unless that user is technical and wants to view it. Go ahead, head on over to https://medium.com/robots.txt to see what I mean. Burp Suite Professional and other tools parse the contents and look for “disallowed” content that may be interesting from a security perspective. I believe this same approach can be adopted for VDP use. Something as simple as a file named vdp.txt could be placed alongside robots.txt, in a format that’s both human readable and script-parsable, such as YAML, XML, or JSON. The information contained within could be used by vulnerability scanners to make automatic reporting of new issues a breeze for researchers. See an example I created below:

vdp.txt (Not Quite YAML Syntax, But Could Be!)
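Something along these lines, where every field name is purely illustrative (there is no standard yet) and example.com is a placeholder:

    # vdp.txt -- hypothetical vulnerability disclosure policy descriptor
    # All field names are illustrative; no published standard exists yet.
    policy: https://example.com/security/vdp.html   # optional rich HTML policy page
    safe_harbor: true                               # promise not to prosecute good-faith research
    contact:
      email: security@example.com
      pgp: https://example.com/security/pgp.txt
    scope:
      - "*.example.com"
    out_of_scope:
      - "Denial of service testing"
      - "Clickjacking on static pages"
    rewards: "Swag and hall-of-fame recognition"
    report_format: "HTML or PDF with reproduction steps"

A researcher (or a tool) could read this at a glance: is there safe harbor, who do I contact, and what’s in scope?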

If the organization also wants to adopt a rich HTML page for their disclosure policy, they can reference it here for more information. They can also specify things such as scope, which issue types are excluded (for the scanners to reference), contact methods and their recipients, the safe harbor promise, and other requests and rewards for the program. Vulnerability scanners such as Burp Suite Professional and Nessus, to name a couple, could parse this information automatically based on the in-scope domain and in turn display reporting functionality for organizations with a VDP. Reporting could also be limited to issues that are in scope and haven’t previously been disclosed through known CVEs.
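To make that concrete, here’s a rough sketch in Python of how a scanner might discover and parse such a file. It assumes the vdp.txt location and the illustrative schema from my example above (again, not an existing standard), and uses the requests and PyYAML libraries:

    # Sketch: discover and parse a hypothetical vdp.txt for a target domain.
    import requests
    import yaml

    def fetch_vdp(domain):
        """Return the parsed VDP descriptor for a domain, or None if absent."""
        try:
            resp = requests.get("https://%s/vdp.txt" % domain, timeout=5)
        except requests.RequestException:
            return None
        if resp.status_code != 200:
            return None
        try:
            # The schema is my proposal above, not a published standard.
            return yaml.safe_load(resp.text)
        except yaml.YAMLError:
            return None

    vdp = fetch_vdp("example.com")
    if vdp and vdp.get("safe_harbor"):
        print("Safe harbor promised; report to", vdp.get("contact", {}).get("email"))

A scanner could run this check once per in-scope domain and, when a descriptor is found, surface the contact and scope details alongside its findings.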

Burp Suite Professional — Reporting VDP Concept
Nessus — Reporting VDP Concept

The idea here is that if the target has a VDP and both the bug and resource are in scope, a “Report Issues” tab could be adopted by vulnerability scanners to make bug reporting easy. In fact, I’m working on a Burp Suite Extender plugin right now to do just this, which anyone will be able to add. One button may allow a customized email with an attached HTML report to be drafted by pulling in the researcher’s contact information and the target’s point of contact. Another button could report through a third-party proxy, such as US-CERT, and another could automate and send a basic report without any user interaction. Finally, the last option could simply parse and display the target’s VDP policy without ever leaving the testing tool of choice.
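For the curious, here’s a concept sketch (not the actual plugin) of what that plumbing could look like in Jython, which Burp’s Extender API supports. The reporting logic itself is stubbed out; this only shows how a “Report Issues” tab would be registered:

    # Sketch of a Burp Suite extension adding a "Report Issues" tab.
    # The actual VDP parsing and email drafting are left as stubs.
    from burp import IBurpExtender, ITab
    from javax.swing import JPanel, JButton

    class BurpExtender(IBurpExtender, ITab):
        def registerExtenderCallbacks(self, callbacks):
            self._callbacks = callbacks
            callbacks.setExtensionName("VDP Reporter (concept)")
            self._panel = JPanel()
            self._panel.add(JButton("Draft report email", actionPerformed=self.draftReport))
            callbacks.addSuiteTab(self)

        def getTabCaption(self):
            return "Report Issues"  # tab label shown in Burp

        def getUiComponent(self):
            return self._panel

        def draftReport(self, event):
            # Stub: fetch the target's vdp.txt, pull the security contact,
            # and open a pre-populated email with the HTML report attached.
            pass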

“Th-th-th-that’s all, folks!”

Let’s Squash Some Bugs Together!

I’m not saying my solution is the only one, and I’m sure other people have had better ideas. HackerOne and BugCrowd may even have a free tier for organizations, but I haven’t checked. I know BugCrowd has an awesome GitHub resource to help you get started with your own program. Either way, I haven’t seen anything to date that’s universal, and I’m hoping to ignite the discussion and inspire something like this to happen. I’m hoping to finalize my vdp.txt template and Burp plugin soon and share them on my GitHub account so anyone can improve upon and implement them within minutes. I’d love to see a movement like this take off across the Internet, even if it’s not my method. My point in this entire article is simply that VDP is a great idea, but it’s broken in its current state. We need to encourage responsible disclosure in order to squash bugs, and we need to make it easy and safe for researchers and organizations, or it won’t happen. Also, aside from VDP, set a goal to improve your own internal methods for identifying and eliminating bugs and work toward it. Even now, I’m sure there are some hiding that you haven’t discovered. Thanks for tuning in. Until next time!

Curtis Brazzell

