Bug Bounties: the Dangers of Assumptions and Unmanaged Expectations

Adrian Sanabria
12 min read · Dec 18, 2015


When we make a website or application accessible on the Internet, it is an unfortunate but well-known fact that attacks will begin almost immediately and continue indefinitely. When we create a bug bounty, we go a step further — inviting attacks from experienced and skilled security researchers. Setting clear rules, guidelines, and expectations for these bounty programs, then, is very important.

In many ways, creating a bug bounty is a smart move. We know we’re already getting attacked anyway, and success is time-based: we have to find and fix the holes before malicious parties find and exploit them first. The bug bounty is an opportunity to offer an incentive for reporting flaws instead of exploiting them for profit. In the old days, the intent was to offer grey hats a win-win: the ability to benefit from their exploits in a way that didn’t hurt the company. These days, bug bounties have become organized and professionalized through bug bounty ‘brokers’ (BBBs for short — BugCrowd, HackerOne, SynAck, Cobalt, BountySource and Zerodium are the ones I’m aware of), and a recent spat between Facebook and a researcher acting independently is a shining example of why these brokers exist.

The classic scenario below doesn’t match this recent case exactly, but it is a basic illustration of why there is a market for these brokers.

  1. Researcher finds a bug, and does the right thing — reports it to the company that the software/systems/product belongs to.
  2. The company doesn’t like how the researcher went about finding the bug or doesn’t like the fact that the researcher was looking for bugs in the first place (depending on how progressive/experienced the company might be when it comes to dealing with these situations).
  3. The researcher, company or both aren’t satisfied with how the situation was handled.
  4. Drama. Accusations. Legal action.

The broker exists to smooth out this process for both parties, and does so in a few ways. The key is that the broker creates a framework based on experience with how issues in these situations usually arise. As a mediator, the broker can also step in to keep issues from escalating before they get out of hand.

  • They handle all the financial negotiations on both sides. People get weird around money, so this is for the best. The broker gets paid by taking a piece of the bounty for their services, so it is in their best interest to make everyone happy here.
  • They can handle direct communications or at least act as a mediator. This is also incredibly important, because the perspectives and languages (sometimes literally and metaphorically) spoken by both parties create serious challenges and can result in frustration and anger.
  • They set expectations for both sides and basic rules around how testing for vulnerabilities/bugs is conducted. Having a clear, common set of rules and expectations is the most important bit here, as we’ll see when we discuss the most recent public bug bounty incident.
  • In some cases, they are even in a position to remove the researcher’s access to the asset being tested, as a safeguard.

The current situation is unfortunate for those involved, but is incredibly useful as a learning opportunity for the rest of us. The background is that an individual who works for one of the BBBs I mentioned, SynAck, was testing Facebook independently of his ‘employer’.

UPDATE: It has been brought to my attention that Wes was and is an actual employee of SynAck rather than part of their pool of bug hunters. Wes makes several statements that support this in his blog, but also makes statements suggesting he is part of SynAck’s pool. Perhaps he does both — I’m still a bit unclear. Regardless, it doesn’t change my opinion that Facebook should have handled this matter according to how the bugs were reported — as a matter between Facebook and Wes.

A quick note before we continue: I put employer in quotes there, because most BBBs employ testers as independent contractors (via 1099, tax-wise, I assume). I’ve not heard of any BBBs that require exclusivity contracts, so it is not uncommon for ‘employees’ to work for several other BBBs, and outside of the BBBs entirely, at the same time. Such is the nature of freelance work. A steady paycheck isn’t guaranteed, so these testers keep many concurrent relationships, projects and opportunities.

The short version of the incident is that the researcher received a tip from a friend on a possible bug related to Facebook’s Instagram subsidiary (acquired in 2012 for $1bn). The researcher found a pretty serious bug, reported it and got paid for it.

Sounds pretty routine, right? So what was the problem?

The first point of contention here was that the researcher didn’t stop there.

The bug he found gave him access to an Instagram server, and he used that access to explore deeper, looking for more vulnerabilities. This is the point where Facebook says the researcher overstepped his role as a security researcher looking for bugs to report. Personally, I’d agree. As an ex-pentester, this is typically where the line is drawn between security/vulnerability research and a full-on penetration test. In the security world, the act of using one vulnerability to go deeper and look for more is usually referred to as ‘pivoting’. The researcher even put together a crude but helpful diagram of how he managed to eventually pivot to a point where a large-scale breach of most of Instagram’s services would have been possible.

Copied from the security researcher’s blog post: https://exfiltrated.com/research-Instagram-RCE.php
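To make that line concrete: the sketch below (Python with boto3, using entirely made-up credentials) shows the kind of step that turns bug hunting into a pivot. Once credentials recovered from the first bug are reused to enumerate assets that the bug itself never touched (in this incident, AWS keys and S3 buckets), you’re doing a penetration test, whether you meant to or not.

```python
import boto3  # AWS SDK for Python

# Entirely made-up credentials standing in for the kind of material a
# first-stage bug might expose. Reusing them at all is the pivot.
FOUND_KEY_ID = "AKIAEXAMPLEEXAMPLE"
FOUND_SECRET = "example-recovered-secret"

session = boto3.session.Session(
    aws_access_key_id=FOUND_KEY_ID,
    aws_secret_access_key=FOUND_SECRET,
)
s3 = session.client("s3")

# Each call below reaches systems the original bug report never covered:
# enumerating buckets and their contents is exploration, not bug hunting.
for bucket in s3.list_buckets()["Buckets"]:
    listing = s3.list_objects_v2(Bucket=bucket["Name"], MaxKeys=5)
    print(bucket["Name"], [obj["Key"] for obj in listing.get("Contents", [])])
```

Nothing in that snippet is sophisticated, which is part of the point: the line being drawn here isn’t about technical difficulty, it’s about scope.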

The second point of contention is that the researcher published the details of everything he discovered through his pivots, whereas Facebook didn’t want anything beyond the original bug made public.

It seems foolish that this researcher would have jumped into a full-on penetration test when he clearly should have only been looking for publicly-exploitable bugs, right? Well, it isn’t that simple, and that’s where communication, expectations and assumptions come in. I strongly believe the first point of contention here has everything to do with communication and human nature — NOT the tester’s ethics. The researcher’s decision to publish everything (the second point of contention) is a disclosure discussion, however, and disclosure is a deep, murky pit that has been covered more extensively than I care to repeat here. I must admit, however, that we wouldn’t have the opportunity to learn as much from this situation if this level of transparency hadn’t been made available to us. I’m grateful for the opportunity to discuss this issue openly, and since it is out in the open now, I hope that this post and the general discussion reduce the chance of repeat incidents.

What went wrong?

Assumptions were made, on both sides. In fact, I’m going to make more assumptions in my attempt to explain why I think things happened the way they did. I should give a bit of background here. I spent most of my career as a defender on the enterprise side, but have also spent much of it testing and hacking, both professionally and as a hobby. I have never participated in a public bug bounty, but have discovered and responsibly disclosed a handful of very critical security vulnerabilities (severity emphasis mine — feel free to challenge my definition of critical over a beer some time). I’ve spent considerable time talking to both bug bounty broker founders and participants. I’m not associated with any BBBs financially, as far as I’m aware (I work as an analyst for 451 Research and am not aware of every vendor that is or is not a client).

Facebook assumption #1: The difference between vulnerability discovery (i.e. bug finding/testing) and a penetration test is clear enough to everyone that it doesn’t need to be defined.

Clearly, this was not the case, as the researcher did not understand that he was only allowed to go ‘one layer deep’ with his testing. In his correspondence, one Facebook employee mentions that “taking additional action after locating a bug violates our bounty policy”. Like the researcher, I can’t find this rule anywhere, in official or unofficial sources such as one Facebook security engineer’s Bounty Hunter’s Guide. I can understand the frustration of trying to figure out where lines are drawn, only to feel like they’re moving around with each ‘clarification’ you get. The official policy could greatly benefit from a few specific examples of both good and bad scenarios. A list of actions and methods that are acceptable or unacceptable would also be an improvement. The list doesn’t have to be exhaustive, and as the researcher points out in a later communication, Microsoft has a great example of how to achieve this goal in a very concise way.

Facebook assumption #2: “Reasonable time to respond” doesn’t need to be defined in the disclosure policy.

This one isn’t a huge deal, but I think it can be improved somewhat. I understand that communication can get complicated, so a static SLA doesn’t work for all cases, but perhaps a description of the process would help. Also, a lack of response shouldn’t be used as an excuse, or taken as implied permission, for a tester to continue to directly attack/test systems.

Facebook assumption #3: The researcher does work for SynAck, so he must be doing this work on their behalf.

This could be chalked up to a lack of understanding of how BBBs, and the bug bounty industry in general, work. As I’ve previously stated, anyone familiar with BBBs will be aware that testers aren’t exclusive to a single broker, and that this would be a bad assumption. The other possibility is that Facebook was aware of this and contacted SynAck to intentionally put pressure on the researcher, which would be… ironic. Furthermore, most BBBs I’ve talked to have technical and procedural methods that allow clients to determine whether testers are operating in conjunction with a BBB or independently. At the very least, if the researcher had been working through SynAck, I assume he would have submitted the bugs indirectly through SynAck, not directly to Facebook. Though I don’t know the full details, SynAck can’t get paid if they’re not somehow inserted into the bug submission process, right?

Researcher assumption #1: The lack of a rule explicitly prohibiting an action can be interpreted as implicit permission to do it.

This is one of the first rules any pentester with a good mentor/education learns. CYA. If in doubt, don’t do it. Don’t go outside of the scope. If unsure, ask.

If unsure, ask. If you’ve been in this industry for over a decade and have spent time on the offensive side (you know what I mean, quit chuckling), this concept is probably deeply ingrained. So much so that it is easy for some of us to dismiss the researcher’s actions as unethical — end of story. I don’t believe it is that simple, though, and I don’t think that approach will do anything to prevent these sorts of incidents from recurring. I’m giving him the benefit of the doubt that he hasn’t been briefed or educated on such things, but ultimately, I don’t know. I would find it surprising if he hadn’t learned some of this through SynAck. I have no personal experience with onboarding at BBBs, but I’d be surprised if there isn’t some level of mandatory education for testers, including codes of ethics and understanding and respecting rules of engagement. Any tester, regardless of skill or experience, is a potential liability for a BBB if assumptions are made about a tester’s or client’s expectations.

We must remember that penetration testing, by its nature, is something that would clearly be illegal in any other circumstance. When I performed penetration tests, the only things that separated me from a criminal were my intent and ethics (which are all in my head) and a contract with the company. The researcher in this case, and bug bounty testers in general, have zero legal protections as far as I’m aware. It comes down to the word of the researcher and the company as to whether actions are criminal or fit within the guidelines of a bug bounty policy that carries no legal weight. Again, this is where a BBB can come in handy, as a shield and resource for the researcher. Also, just the thought of the size of Facebook’s legal team is enough to scare me away from ever doing any security testing on their assets without explicit liability protection.

Here’s a rule of thumb: don’t get pushy with a company you’ve hacked that employs more lawyers than you’ve ever met in your entire life.

Researcher assumption #2: A lack of clear answers to questions about the rules is implicit permission.

What’s the point of asking if you’re going to do it anyway?

The researcher repeatedly goes deeper into his testing without explicit permission. He even knows enough to ask if he’s allowed to do what he’s doing. He gets answers he finds confusing and unclear. Rather than wait for clarification, he keeps on going. I chalk some of this up to the excitement and adrenaline associated with hacking — especially penetration testing. Things get more and more exciting the deeper you go, and it becomes difficult to find a key for a door and not open it. I’ve been there and I understand. From my point of view, his actions seem reckless, but I can see how perspective can become skewed for someone who doesn’t know what they don’t know, and hasn’t had to deal with the consequences of going too far. What this researcher didn’t understand is that testing someone else’s live systems without explicit authorization is at best a liability, at worst a crime (regardless of intentions), and definitely not a right. Unfortunately, in the world of security testing, the lines can move in the best of times, and we can still inadvertently cross them.

Researcher assumption #3: Transferring data carte blanche from a large corporation is just part of the job

At one point, the researcher states he “queued up several buckets to download, and went to bed for the night”. This is another case of the researcher unfortunately being unaware of the ramifications of his actions. He not only downloaded nearly everything necessary to breach every part of Instagram that matters, he also listed everything he had access to in his blog post! As an extremely popular, globally used social network, Instagram is likely a target for nation-state hackers and other very sophisticated, well-funded adversaries. By advertising the fact that he has this data, this researcher potentially made himself a high-value target for military and criminal hacking groups.

Personally, I’m surprised Facebook simply asked him to delete the data he downloaded. As an ex-incident handler, I don’t think it would have been unreasonable for Facebook to seek a subpoena for all his hard drives and for access to the cloud storage providers he uses. I suspect his saving grace was that most of the data he gained access to happened to be authentication/authorization-related and fairly easy to revoke and/or change. Otherwise, did he securely delete the data? How can we trust that he did? What if he sells his laptop next week? Unchanged, the data he got his hands on is a dream come true for anyone wanting to leverage a social network for social engineering, phishing, spear-phishing or fraud. Worse, because social networks are designed to post to one another, the data he had access to also constituted a breach of some portion of user accounts on at least five other social networks.
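As a rough illustration of why revocable material was his saving grace, here is what rotating a leaked credential can look like. This is a minimal sketch with boto3 and a hypothetical IAM username, not a claim about how Facebook actually responded:

```python
import boto3  # AWS SDK for Python

iam = boto3.client("iam")

def rotate_access_keys(username: str) -> None:
    """Replace a user's leaked AWS access keys (hypothetical cleanup sketch)."""
    # Issue the replacement credential first, so dependent services can be
    # switched over before the exposed key goes dark.
    new_key = iam.create_access_key(UserName=username)["AccessKey"]
    print("replacement key issued:", new_key["AccessKeyId"])

    # Then disable every other key for the user; disabled keys stop working
    # immediately, and can be deleted once nothing depends on them.
    for key in iam.list_access_keys(UserName=username)["AccessKeyMetadata"]:
        if key["AccessKeyId"] != new_key["AccessKeyId"]:
            iam.update_access_key(UserName=username,
                                  AccessKeyId=key["AccessKeyId"],
                                  Status="Inactive")

rotate_access_keys("example-leaked-user")  # hypothetical username
```

Keys and tokens can be swapped out in minutes like this; anything like user data, once copied, cannot.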

Imagine a nation with no qualms about human rights violations and no concept of free speech getting its hands on some of these API and code-signing keys. I personally hope Facebook has some monitoring in place to tell them when private keys and certificates are downloaded or accessed in S3 buckets.
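That sort of monitoring doesn’t have to be elaborate. Below is a rough sketch that scans S3 server access logs for downloads of key and certificate files. It assumes access logging is already enabled and delivered to a made-up log bucket, and it prints alerts where a real deployment would page someone:

```python
import boto3  # AWS SDK for Python

s3 = boto3.client("s3")

# Assumptions: server access logging is enabled on the sensitive buckets
# and delivered to this (made-up) log bucket.
LOG_BUCKET = "example-s3-access-logs"
SENSITIVE_SUFFIXES = (".pem", ".key", ".p12", ".crt")

def flag_sensitive_downloads() -> None:
    """Print an alert for every logged GET of a key/certificate object."""
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=LOG_BUCKET):
        for entry in page.get("Contents", []):
            log = s3.get_object(Bucket=LOG_BUCKET, Key=entry["Key"])
            for line in log["Body"].read().decode("utf-8", "replace").splitlines():
                # Naive parse of the space-delimited access log format: the
                # bracketed timestamp splits in two, leaving the operation at
                # field 7, the object key at field 8, the requester at field 5.
                fields = line.split()
                if (len(fields) > 8 and fields[7] == "REST.GET.OBJECT"
                        and fields[8].endswith(SENSITIVE_SUFFIXES)):
                    print("ALERT:", fields[8], "downloaded by", fields[5])

if __name__ == "__main__":
    flag_sensitive_downloads()
```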

Facebook mistakes

  • Lack of details and clarity around bug bounty program rules
  • Poor initial effort to answer the researcher’s questions
  • Choosing not to communicate directly with the researcher, going instead to his employer, SynAck

Researcher mistakes

  • Basing assumptions on other companies’ bug bounty programs and rules
  • Going beyond the scope of the ‘surface-level’ vulnerability discovery before getting clear answers to scope questions
  • Failing to recognize the vulnerable position a lone researcher is in when going up against a large corporation (this is part of the point of using a company like SynAck in the first place!)
  • Downloading secrets and highly sensitive data of a billion-dollar social media company without considering the risks

The researcher found some jaw-dropping systemic and design-level security issues within Instagram. The truth is that Instagram and all its users will be better off with these issues addressed. It’s just unfortunate how the discovery of these issues came about. I understand the dilemma Facebook’s security chief, Alex Stamos, is faced with here — to pay a researcher for doing a penetration test instead of bug/vuln hunting would set a dangerous precedent.

At the end of the researcher’s blog post, he characterizes the situation as just another case of a security researcher being mistreated by a greedy, corporate Facebook. The truth, rather, is that a talented but inexperienced security researcher got in way over his head, completely unaware of when he crossed a line. As far as I can tell, no harm was done, so I suggest the primary lesson we take away is that we need better education and communication for open testing programs like bug bounties. Going through a BBB is one way to accomplish this, but it is no guarantee, since brokers aren’t required for open bug bounties.

In an unprofessionalized field, however, the difficulty is finding the choke point. No degree, certification or training is required to go after a bug bounty. Nor are researchers required to go through a broker. There aren’t, therefore, a whole lot of safe assumptions a company can make when creating a bug bounty, since there is no baseline of education or experience required to participate. In this case, at least, the only opportunity to have avoided this incident lay in Facebook’s own vulnerability reporting policy. That doesn’t make this Facebook’s fault or absolve the researcher of responsibility — for now, these efforts are voluntary on both sides. Hopefully, incidents like these don’t convince the government it needs to get involved…

--

Adrian Sanabria

Information security veteran blogging primarily about how technology can hinder or help productivity and progress here. Co-founder of Savage Security.