What Responsible AI Can Learn From the Race to Fix Meltdown and Spectre

Partnership on AI · Published in AI&. · Feb 26, 2021

By Jasmine Wang

AI is by no means the first field to face the challenges of responsibly publishing and deploying high-stakes, dual-use research. This is the second post in a series examining how other fields have dealt with these issues and what the AI community can learn. It is presented as part of the Partnership on AI’s work on Publication Norms. Visit our website for more information.

If you found out “one of computing’s most basic safeguards” was compromised, who could you tell without endangering the world’s data? That was the question the tech industry faced in 2017 after Jann Horn, a 22-year-old security researcher at Google, discovered two of the biggest bugs to ever afflict computer chips: Meltdown and Spectre.

Imagine everything you think is secure suddenly becoming open to attack: passwords, personal photos, private documents and messages. The two classes of vulnerabilities that Horn uncovered made this disturbing scenario possible. Worse yet, since the flaws affected decades of computer chips instead of a single piece of software, there would be no quick fix. Desktops, laptops, cloud computing, and even smartphones were at risk.

One researcher likened the issue to “every computer ever made” being broken. So, to start solving a problem like that, who do you tell?

You don’t need to be a cybersecurity expert to recognize the complex information flow challenge Spectre and Meltdown posed. News of these bugs needed to get to some parties immediately, shared with others only later, and deliberately kept from the public for longer to prevent exploitation by malicious actors. Fortunately, the field of cybersecurity had been exploring responsible disclosure mechanisms for many years by the time these flaws were discovered.

In comparison, the artificial intelligence (AI) community has just started to grapple with the public risks associated with the publication of high-stakes research. AI teams at major tech firms have only recently begun to build processes that take infohazards and malicious actors into account. But AI researchers don’t have to wait for their own Meltdown moment to catch up. By studying the Meltdown and Spectre case, AI researchers can learn how thoughtful norms about the flow of information played out after a real-life sensitive discovery.

Jann Horn, a researcher at Google’s elite Project Zero cybersecurity unit, wasn’t specifically looking for a world-altering security exploit in April of 2017. According to Bloomberg News, Horn was just reading Intel’s processor manuals to see if their hardware could handle some code he had written. Leafing through these tome-like texts, Horn started looking into a processing technique known as “speculative execution.” To improve their speed, modern processors can guess ahead and execute commands before knowing if the commands are correct or not. Although programs aren’t typically permitted to read data from other programs, Horn eventually realized that a malicious program could exploit speculative execution to access this information from a computer chip’s memory.
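To make the mechanism concrete, the pattern at the heart of these attacks is the bounds-check-bypass gadget described in the public Spectre writeups, sketched below. This is an illustrative fragment rather than working exploit code: the variable names follow the published proof of concept, and a real attack additionally requires training the branch predictor and timing cache accesses.

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative bounds-check-bypass gadget (the "Spectre variant 1" pattern).
 * If the branch predictor has been trained to expect x < array1_size, the CPU
 * may speculatively execute the body even for an out-of-bounds x, reading
 * memory it should not via array1[x]. The dependent load from array2 then
 * leaves a secret-dependent footprint in the cache, which an attacker can
 * recover with a timing side channel even though the speculative results are
 * architecturally discarded. */

size_t  array1_size = 16;
uint8_t array1[16];
uint8_t array2[256 * 4096];
uint8_t temp;  /* keeps the compiler from optimizing the loads away */

void victim_function(size_t x) {
    if (x < array1_size) {
        temp &= array2[array1[x] * 4096];
    }
}
```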

Had he discovered the bugs a few decades earlier, Horn’s first thought might have been to disclose them publicly. The history of cybersecurity is entwined with a difficult debate around how vulnerabilities should be disclosed in order to best protect the interests of the public. This ongoing conversation has been punctuated by a series of bad outcomes. In 2002, for example, bug hunter David Litchfield demonstrated an exploit against Microsoft SQL Server at a security conference after notifying Microsoft, which had already patched the flaw. His intention was to warn people of the defect and highlight the importance of patching. However, Litchfield’s demo code was later used to create the infamous Slammer worm. It took only 15 minutes for Slammer to spread worldwide, eventually taking millions of South Korean internet users and 13,000 Bank of America ATMs offline. Incidents like this made the cybersecurity community increasingly attentive to how vulnerability information is handled, and the norms of the field matured in response.

On June 1, 2017, Horn officially notified Intel of the exploit, as well as chipmakers AMD and ARM, whose products were also affected. His team, Project Zero, gave the three CPU vendors their standard disclosure period of 90 days. For almost three months, there would be an embargo on the vulnerabilities, but by the end of the summer, Project Zero planned to go public with the information.

“Please note that so far, we have not notified other parts of Google,” Horn wrote in his email to the chipmakers, according to The Verge. “When you notify other parties about this issue, please don’t share information unnecessarily.” Although the discovery came out of Project Zero, Google was not the first company to be notified. In fact, Project Zero is adamant about not giving Google privileged access to its findings, in the interest of being a fair and neutral discoverer. Thanks to Horn, knowledge of the vulnerabilities was now in Intel, AMD, and ARM’s hands to share with those they believed could address the problem.

Communication failures occurred almost immediately. The chipmakers notified Microsoft within days, but Matt Linton, a “chaos specialist” at Google, says it took them more than a month to share the issue with Google. According to Linton, the CPU vendors believed Google had already been informed since Project Zero had notified them of the vulnerabilities, despite a clear note to the contrary in the original report.

“The story of Google’s perspective on Meltdown begins with both an act of brilliance and an act of extraordinary miscommunication,” Linton told attendees of the Black Hat security conference in 2018, “which is a real part of how incident response works.”

The companies who were notified by the chipmakers in this second wave — which included Microsoft, Google, Apple, and Amazon — had to inform their own affected vendors in turn. Microsoft, for example, knew that third-party antivirus software might stop working on Windows machines after the company released its Spectre and Meltdown updates. The company needed to prevent this from happening without leaking word of the vulnerabilities to the world. Ultimately, Microsoft distributed a pre-release version of Windows to software makers for compatibility testing — without telling them what was changed about the operating system or why.

As the end of the disclosure period neared and the enormity of the problem became clearer, the companies involved asked Project Zero to extend the embargo to January 2018. According to Eric Doerr, who led the Microsoft Security Response Center, they had to assume, by default, that they wouldn’t get it. However, Project Zero granted the extension, citing the complex nature of the problem. Project Zero reserves “the right to bring deadlines forwards or backwards based on extreme circumstances.”

Even with the extra time, working on Spectre and Meltdown was intense and difficult. The turning point, at least emotionally, seemed to come in November when representatives of the main companies working to mitigate Spectre and Meltdown got together face-to-face to share strategies and collaborate.

“It’s funny how rare that kind of a meeting is. A bunch of people pushed back both inside of Microsoft and in other partners,” said Doerr at the same conference where Linton spoke. “People said, ‘we don’t do this,’ or, ‘the legal requirements are going to be hard.’” The group eventually worked through the legal agreements and got together.

“I was blown away by the collaboration in the room. […] It was a ‘leap of faith’ that [this] was the right thing to do to protect our shared customers and ecosystems. Like, who shares mitigations with competitors?” Doerr said, laughing. “In case you hadn’t noticed, Google and Microsoft don’t always get along. That was the turning point when the collaboration went up an order of magnitude, and I felt for the first time that we were really all pulling together towards a shared objective.”

Linton agreed, saying, “Up until then, we were all afraid to talk to one another. Getting to be engineers together in person was amazing.” According to Doerr, the approach from that point forward was more collaborative, and less of a “hub-and-spoke” model where communication largely flowed through the chip manufacturers.

In December, two more teams of cybersecurity researchers discovered the vulnerabilities and reported them to Intel. One was from the Graz University of Technology, led by Daniel Gruss, and the other from Cyberus, a German security firm. These teams were put in contact with each other after Intel told them they needed to maintain utmost secrecy about the vulnerabilities until the embargo end date. “We were not very involved in the coordination and resolution process since it was so late in the process already,” Gruss told me in a recent interview. “We mostly focused on the scientific evaluation and scientific discussion of the vulnerabilities. But in the disclosure process the attribution of finders was also an important topic for us.” Like many researchers, these teams were concerned with whether their months of labor would be acknowledged and credited as their discovery.

As these co-discoveries demonstrated, Meltdown and Spectre could not be kept secret forever even if Project Zero further extended the deadline. In the fall of 2017, the world’s biggest tech companies issued patch notices which indirectly hinted at the vulnerability, prompting industry speculation. People began to zero in on the patches. Patches, ironically, are an infohazard: A malicious actor can compare patched software with previous iterations to identify the bug being fixed, potentially exploiting it against as-yet unpatched systems.

The planned embargo end date was now January 9, but a careless slip by an AMD developer submitting a Linux patch in December made that impossible. The developer excluded AMD chips from his patch, explaining in a note: “The AMD microarchitecture does not allow memory references, including speculative references, that access higher privileged data when running in a lesser privileged mode when that access would result in a page fault.” The message narrowed the public’s search for the flaw specifically to speculative attempts to access kernel data from user programs. The end was nigh.

The final nail in the coffin came when tech news site The Register, unaware of the embargo, published a piece on the vulnerability on January 2. “[The public] went from not knowing December 29, to having a proof of concept on January 2,” said Linton. “[Project Zero] immediately disclosed, after panic-notifying vendors.” The full details of the two attacks were finally released to the public on January 3, revealed through a website created by the various teams that had independently discovered and reported the vulnerabilities.

In a later letter to U.S. Congress, Intel said that it had only disclosed information to companies that “could assist Intel in enhancing the security of technology users” and planned to brief government representatives shortly before the end of the embargo. For many crucial outside response groups, this meant that the public disclosure was the first time they had heard of these vulnerabilities. This included the U.S. government as well as the federally funded CERT Coordination Center (CERT/CC), which works with public and private organizations to issue warnings about cybersecurity hazards and coordinate responses. CERT/CC only found out about Meltdown and Spectre when the website went live, making it difficult to create accurate documents on the vulnerabilities. Initially, the coordination center released a report claiming that fully replacing CPUs was the only solution, causing panic.

A vulnerability analyst at the CERT/CC later said that this notification delay was “probably too long, particularly for very special new types of vulnerabilities like this.” For its part, Intel defended its limited disclosure process, noting there was “no indication that any of these vulnerabilities had been exploited by malicious actors” in that same letter to Congress.

Meltdown was largely resolved with operating system fixes made during the original embargo period, but over the next year, security researchers continued to find new exploits related to Spectre. The challenge Spectre posed to Intel was complex. Patches alone wouldn’t be enough. To defend against speculative execution attacks in future chips, they needed to redesign the chips themselves.

Over time, researchers began to believe that the problem was even more systemic than initially suspected, stemming from fundamental flaws in how all modern chips are constructed. As a 2019 paper by Google engineers put it: “This class of flaws [is] deeper — at the microarchitectural level of the processor — and more widely distributed — in essentially every high performance processor — than perhaps any security flaw in history, affecting billions of CPUs in production across all device classes.”

Work continues today. “We always expected this would keep us busy for years,” Daniel Gruss, who led the Graz University team, told Wired. And as fixes continue to be deployed, a catastrophic attack that always required a skilled adversary now requires significantly more knowledge to instigate.

As the field of AI begins to explore responsible norms around the disclosure of novel information, the task can feel daunting. With the rapid pace of AI research, the best way to prevent tomorrow’s harms is not always obvious today. In Spectre and Meltdown, however, the AI community can see how thoughtful policies performed when applied to a real-world crisis, one involving information that could be exploited by almost any skilled attacker with access to a computer.

This incident in particular leaves the AI community with four intertwined lessons: coordination policies are necessary for complex multi-stakeholder issues; cooperation is surprisingly possible; institutional support is required beyond bug bounties; and publication policies must address the issue of simultaneous discovery.

1. Developing coordination policies proactively is essential.

Coordination policies simplify highly complex, high-pressure, and time-sensitive scenarios. They are crucial common knowledge and coordination infrastructure. Had Spectre or Meltdown been discovered just a decade earlier, the coordination policy that the chipmakers followed would not have existed, making the situation even more difficult to navigate. Assumptions like the one that Intel made about Project Zero’s notification of Google would likely have multiplied, leaving even more vendors out of the loop for longer than ideal.

Over the past 30 years, as computer security became increasingly vital for a secure society, norms around public reporting quickly evolved. At first, the choice for a security researcher was largely binary: disclose or keep silent. The default choice was often full disclosure — where the discovered flaws are made public without the consent of the vendor — which would both win the discoverer social acclaim and force the vendor’s hand in writing a patch. The (potentially devastating) consequences of this approach are obvious, as it leaves every affected system immediately vulnerable to attack.

The Coordinated Vulnerability Disclosure (CVD) process that Project Zero and the CPU vendors followed was first introduced by Microsoft in 2010. A CVD exchange between discoverer and vendor is commonly parameterized by the amount of time the vendor has to address the vulnerability, the frequency and depth of communication expected between the vendor and discoverer, and the involvement of any third parties.

The Dutch government’s acclaimed guidelines for CVD could serve as a useful template for those hoping to establish similar processes. Their National Cyber Security Centre (NCSC) proposes a standard (but flexible) term of 60 days between the report and public disclosure, stresses the importance of transparent and regular communication between the two parties, and suggests having a neutral third-party intermediary. Additionally, the guidelines emphasize the importance of designing reward schemes that, beyond monetary incentives, provide finders with public recognition. The framework has been held up as an example for other EU member states to follow, though none have yet developed similar initiatives.

In the same spirit, a researcher who discovers a vulnerability in an AI system, or who has made a high-stakes breakthrough, could make a private report to the affected organization, specifying the time they have to address it before the researcher publishes the information. The specific parameters in such guidelines would evolve over time as the community, for example, assesses what timelines are reasonable for different kinds of vulnerabilities as well as the extent to which AI research knowledge has a defensive bias.
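As a rough illustration of how the parameters of such a report might be pinned down, here is a minimal sketch in C. The field names, the helper function, and the 90-day default are hypothetical, chosen only to mirror the parameters discussed above; they are not drawn from any existing standard.

```c
#include <stdio.h>
#include <time.h>

/* Hypothetical shape of a coordinated-disclosure report for an AI system.
 * Every field is illustrative; a real policy would need far more detail
 * (severity, affected deployments, agreed communication cadence, etc.). */
typedef struct {
    const char *finder;        /* who made the discovery                    */
    const char *affected_org;  /* who is responsible for mitigation         */
    const char *coordinator;   /* optional neutral third party, or NULL     */
    time_t      reported_at;   /* when the private report was delivered     */
    int         embargo_days;  /* agreed window before the finder publishes */
} disclosure_report;

/* The date on which the finder intends to publish, unless an extension is
 * negotiated (as Project Zero did for Meltdown and Spectre). */
time_t planned_publication(const disclosure_report *r) {
    return r->reported_at + (time_t)r->embargo_days * 24 * 60 * 60;
}

int main(void) {
    disclosure_report report = {
        .finder = "independent researcher",
        .affected_org = "model vendor",
        .coordinator = NULL,
        .reported_at = time(NULL),
        .embargo_days = 90,  /* Project Zero's standard window */
    };
    time_t when = planned_publication(&report);
    printf("planned publication: %s", ctime(&when));
    return 0;
}
```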

Researchers have noted that “knowledge of a software vulnerability is very useful for developing a defensive fix,” which can be easily deployed and scaled within a digital system. AI vulnerabilities, on the other hand, are more likely to be deeply entangled with social systems, making them significantly harder to “patch.” Overall, initial comparisons of AI and cyber vulnerabilities point to even stronger caution in AI around public deployment of high-stakes systems for security purposes, and therefore a stronger bias towards coordinated versus full disclosure.

We can also see from this case that even following a well-specified coordination policy leaves room for error. Many other, more implicit coordination policies were also in play as Intel shared information on a need-to-know basis with different teams at affected organizations. The missteps made in this case are a signal to the AI community that careful development of such coordination policies is crucial and prudent, but not a simple panacea for the extremely messy problem of information flow, especially as the capabilities of AI systems scale.

2. Collaboration is effective and possible, if legal issues are anticipated.

Legal issues (or rather the perception of such issues) seem to have initially blocked different affected vendors in the Spectre and Meltdown case from collaborating. As the quotes from Google’s Matt Linton and Microsoft’s Eric Doerr demonstrate, however, multiple vendors viewed the in-person meetings that were eventually organized as a watershed moment. The AI community should consider in advance what sorts of cooperation might be possible, anticipate the antitrust issues that may arise, and do the legal legwork in advance to unblock cooperation.

One recent example: When OpenAI initially released smaller versions of GPT-2, they invited researchers to do the AI equivalent of pentesting on larger models before a potential wider release to the public. Each of these collaborations involved signing a legal agreement specifying the terms of the relationship, and OpenAI subsequently published a template for such agreements. As OpenAI noted in their report on the topic, sharing with many partners would increase the likelihood of a leak or hack of the model weights, so legal agreements substituted for scalable technical infrastructure.

It would serve the AI community well to develop a taxonomy of cybersecurity-like vulnerabilities in AI to help envision the different types of collaboration that the field should develop infrastructure for. For example, a taxonomy could aid AI companies in deciding if they should develop vulnerability disclosure policies specific to AI and distinct from cybersecurity policies.

Here’s a possible starting point: a taxonomy covering three scenarios. First, one can easily imagine cybersecurity incidents affecting AI, given that AI runs on computers. Second, a vulnerability could be discovered in an instance or class of models. Wallace et al. (2019), for example, found input sequences they called “universal triggers” that could reliably cause generative language models to produce toxic (e.g. racist) output. A final scenario is when a vulnerability is newly created, rather than pre-existing and merely revealed. For example, if one company were to build a generative model that could generate deepfakes indistinguishable from original videos, they would have in effect introduced a serious bug in the ability of a video site’s recommendation algorithm to prioritize true, high-quality information.
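A minimal sketch of how that three-way split might be written down, with hypothetical names chosen only to mirror the scenarios above:

```c
/* Hypothetical starting-point taxonomy for AI vulnerabilities, mirroring the
 * three scenarios described above. The names are illustrative only. */
typedef enum {
    AI_VULN_INFRASTRUCTURE,  /* classic cybersecurity flaw in the systems an AI runs on         */
    AI_VULN_MODEL_BEHAVIOR,  /* weakness in a model or class of models, e.g. universal triggers */
    AI_VULN_NEW_CAPABILITY   /* harm newly created by a released capability, e.g. deepfakes     */
} ai_vulnerability_class;
```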

3. Institutional support is required for a systematic, full-stack approach to safety.

Because cybersecurity is so crucial, systems have been built both around and within companies to create a security culture. These systems include processes, like CVD, as well as incentives.

Although bug bounties were part of the context of the Spectre and Meltdown case (the discoverers were paid an undisclosed sum), it’s difficult to assess if the case would have turned out differently had there been no bug bounties. All of the finders who reported Meltdown and Spectre were supported by host institutions, both private (like Google Project Zero) and public (like Graz University). Facebook supports general cybersecurity research with “Secure the Internet” grants. Google runs the Vulnerability Research Grant and a complementary Patch Reward Program. The AI community should consider offering similar support for proactive security research. Ongoing support is the only way of guaranteeing that necessary security research will be conducted. Intel offers another lesson here: its 2019 Product Security Report states that more than 60 percent of the vulnerabilities addressed that year were a direct result of the company’s investment in ongoing product assurance through its own internal efforts.

However, bug bounties are still a useful mechanism for improving the safety of systems, as shown by wide adoption across the technology industry. AI organizations could use bug bounty programs to incentivize the identification of vulnerabilities in consumer-facing applications and services, but complex and fundamental security vulnerabilities require more significant resources and technical skills to uncover, and proactive resources should be dedicated to such efforts. Working for bug bounties is more precarious labor than most other types of gig work: a discoverer cannot guarantee the rate at which they will find bugs and, once a bug is reported, they are at the mercy of the vendor for severity assessment and payout.

Careful thinking will be required to adapt bug bounties to the realm of AI. In addition to not replacing institutional support, bounty programs cannot supplant proper CVD procedures. Imagine a scenario where a company requests an unconditional non-disclosure agreement in exchange for a higher bounty. For HackerOne’s former chief policy officer Katie Moussouris, this already occurs in some form. She says that “bug bounty platforms have become marketplaces where their silence is being bought and sold to prevent public exposure of insecure practices.”

The AI community should consider implementing bug bounty programs for AI while recognizing their inherent limitations. Financial rewards for discovering vulnerabilities serve dual functions: they support researchers whose labor may otherwise be unpaid and also serve as a credential and stamp of acknowledgement. They also act as a counterweight to the black market, where exploitable vulnerabilities can command prices upwards of $100,000.

One example implementation of bug bounties that may be of interest: The Internet Bug Bounty is a nonprofit managed by a panel of volunteers selected from the security community, and rewards finders of vulnerabilities in widely used software. If AI does indeed become a ubiquitous, fundamental technology, as some predict, such a mechanism would operationalize prioritizing its safety and security as a public good.

4. Accreditation alternatives to full publication are essential for limited disclosure.

Delaying publication is directly at odds with a core researcher motivation: earning credit. Credit accrues by what Michael Strevens and other philosophers of science call the “priority rule”: the first discoverer to publish in science gets all the prestige. The first person to make a discovery is rewarded in a myriad of ways, from being cited to even having phenomena named after them. Scientists live in fear of their research being “scooped,” or another researcher publishing a similar result more quickly. One does not typically get credit for intermediate steps, and there is no “proof” of work that matters until a peer-reviewed article is published, at least for the purposes of acclaim.

In the Spectre and Meltdown case, multiple teams discovered the same bug over the course of a few months, and all were unaware of each other. However, they all disclosed to the same vendor, Intel, and Intel was able to serve as an arbitrator. Many of the researchers’ discussions with each other revolved around accreditation of the discovery, since academic papers on the vulnerability could, of course, not be published until the embargo came to a close. In the end, Intel acknowledged them all in its external report, while noting that Horn was the first discoverer.

The problem of simultaneous discovery is known in cybersecurity as a “bug collision.” By one estimate, a third of all zero-days that are eventually disclosed were first discovered by the NSA.

“Cybersecurity research, just like any other kind of science, proceeds incrementally,” Daniel Gruss of the Graz team told me. “That means that at any point when you make a breakthrough, someone else may be close behind.”

It is easy to imagine a challenging analogous situation in AI: an AI organization decides to delay the release of their work until they have worked on harm mitigation methods, but a competitor takes advantage of this delay to publish their own similar system before harm mitigation measures are in place. Without a way to earn credit short of full publication, actors who err on the side of prudence would be penalized for doing so, or pushed to publish fully despite preferring not to.

One publication mechanism that was suggested in a PAI Partner consultation: When a researcher makes a discovery that they’d prefer to publish at a later date, they could publish a hash of the paper instead. Then, if another researcher were to publish something similar, the first researcher could reveal the original document, show that it matches the published hash, and thereby prove that they had made the discovery first. However, this does not offer all the benefits that the norm of coordinated vulnerability disclosure has in cybersecurity: at any point, any researcher might decide to publish, and there is no coordination or common knowledge between the researchers.
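A minimal sketch of that commit-and-reveal idea, assuming OpenSSL’s SHA-256 is available (link with -lcrypto). The draft text and workflow here are hypothetical; in practice, the researcher would also want an independent timestamp on the published digest.

```c
#include <openssl/sha.h>
#include <stdio.h>
#include <string.h>

/* Commit: hash the full text of the unpublished draft. Only this digest is
 * made public at the time of discovery; the draft itself stays private. */
static void commit(const unsigned char *draft, size_t len,
                   unsigned char digest[SHA256_DIGEST_LENGTH]) {
    SHA256(draft, len, digest);
}

/* Reveal/verify: anyone given the claimed draft can recompute the hash and
 * check it against the digest published earlier. A match shows the draft
 * existed, byte for byte, when the commitment was made. */
static int verify(const unsigned char *draft, size_t len,
                  const unsigned char published[SHA256_DIGEST_LENGTH]) {
    unsigned char digest[SHA256_DIGEST_LENGTH];
    SHA256(draft, len, digest);
    return memcmp(digest, published, SHA256_DIGEST_LENGTH) == 0;
}

int main(void) {
    const unsigned char draft[] = "full text of the unpublished result";
    unsigned char commitment[SHA256_DIGEST_LENGTH];

    commit(draft, sizeof draft - 1, commitment);  /* publish this digest now */

    printf("commitment: ");
    for (int i = 0; i < SHA256_DIGEST_LENGTH; i++)
        printf("%02x", commitment[i]);
    printf("\n");

    /* later, if priority is contested, reveal the draft so others can check */
    printf("match: %s\n", verify(draft, sizeof draft - 1, commitment) ? "yes" : "no");
    return 0;
}
```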

Cybersecurity is crucial as a case study for the AI community not only because it bears directly on AI security but because the community has a rich history of dealing with high-stakes and time-sensitive issues that require multi-stakeholder resolution. With careful, proactive consideration, the AI community will one day be able to adeptly handle its Meltdown moment.

This post would not be possible without remarkable journalistic and analytical work on this case published by Wired and Bloomberg. We are also deeply grateful to Dr. Daniel Gruss (Graz University of Technology), Dr. Gregory Lewis (Future of Humanity Institute, Oxford University), and other reviewers for their contributions and feedback on earlier drafts of this piece.

Jasmine Wang was a Research Fellow at PAI focusing on Publication Norms.

The Partnership on AI is a global nonprofit organization committed to the responsible development and use of artificial intelligence.