What the AI Community Can Learn From Sneezing Ferrets and a Mutant Virus Debate

Lessons on publication norms for the AI community from biosecurity

Partnership on AI · Dec 8, 2020

By Jasmine Wang

AI is by no means the first field to face the challenges of responsibly publishing and deploying high-stakes, dual-use research. This post is the first in a series examining how other fields have dealt with these issues and what the AI community can learn from them. It is presented as part of the Partnership on AI’s work on Publication Norms. Visit our website for more information.

In the spring of 2012, Ron Fouchier contemplated a decision that could put him in prison for up to six years or cost him over $100,000 USD in fines. A white-haired 45-year-old who spent most days in a concrete research facility in Rotterdam, the Dutch virologist had suddenly become the focus of an international debate about the potential weaponization of influenza. His work, which involved mutating the H5N1 virus to make it more transmissible, had already set off an uproar that reverberated from US national security circles to the halls of the World Health Organization (WHO). Now, if he published his research without the Dutch government’s permission, he was told he could actually go to jail. Increasingly nervous that people perceived him to be a “mad scientist,” Fouchier told the New Yorker at the time that he felt like the subject of “an international witch hunt.”

Fouchier’s predicament might seem unimaginable to most scientists, like something out of a nightmare. But the chain of events that led him to this moment is worth studying for anyone whose work could pose public risks. This is especially true for those in artificial intelligence (AI) grappling with how to responsibly disseminate research with the potential for misuse, an important facet of publication norms.

AI and Machine Learning (ML) are increasingly being applied across new domains, including ones with safety-critical applications, leading many to ask what responsible AI/ML research, innovation, and deployment look like. In answering these questions, the AI community can and should consider how other fields have approached comparable challenges. Fouchier’s story offers important lessons from the biosecurity community and its long history of debate about publication norms. In particular, the H5N1 case illustrates the benefits (and inherent limitations) of third-party governance bodies — and, by implication, the importance of individual researcher responsibility.

H5N1 influenza, otherwise known as bird flu, is a severe respiratory disease. The naturally occurring H5N1 virus, however, rarely infects people and is almost never transmitted to others when it does. According to Fouchier, some scientists believed that H5N1 could never become airborne between mammals — he wanted to prove them wrong. In his own words, his team first “mutated the hell out of H5N1.” They then squirted the mutated virus into the nose of a ferret and transferred that ferret’s nasal fluid into the nose of another ferret (and another, and another), making it sneeze. The virus spread.

Fouchier’s field of inquiry is known as “gain-of-function” research, which aims to enhance the disease-causing abilities of pathogens in order to prepare the health community for future outbreaks. In the case of H5N1, which has an exceptionally high case fatality rate of around 60 percent, the naturally occurring virus lacked a crucial prerequisite to becoming a real pandemic risk: transmissibility. By repeatedly passing the mutated H5N1 from one animal to the next, Fouchier had made it highly transmissible. His team had succeeded in making what he called “one of the most dangerous viruses you can imagine” airborne.

Fouchier presented his findings at an influenza conference in Malta in September 2011, announcing his intention to publish them in greater detail, which would enable others to replicate his research. After Fouchier submitted his paper to the academic journal Science, US health officials became aware of its existence and sent it to the National Science Advisory Board for Biosecurity (NSABB) for review. This advisory body was established in the wake of the 2001 anthrax attacks to provide oversight for dual-use biomedical research, defined by the WHO as scientific work “which is intended for benefit but which might easily be misapplied to do harm.” Fouchier’s ferret experiment — which effectively made a deadly disease even more dangerous — clearly fell into this category.

The NSABB and its H5N1 Working Group subsequently spent hundreds of hours discussing the research. In December 2011, the NSABB unanimously recommended that key methodological details which could enable replication of the experiments be withheld from publication in Science. In a press release announcing the decision, the National Institutes of Health (NIH) said that the US government was also working on a mechanism to grant researchers with a legitimate need access to the redacted information.

This recommendation received an immediate rejoinder. Science’s Editor-in-Chief said that the journal’s response would be “heavily dependent” on the creation of an explicit plan by the US government to share the omitted information with “responsible scientists who request it, as part of their legitimate efforts to improve public health and safety.” Fouchier himself told the New York Times that around 1000 scientists from more than 100 laboratories worldwide had a need to know this information.

A second expert panel of 22 health officials, scientists, and journal editors, convened by the World Health Organization, came to a very different (though not unanimous) conclusion from the NSABB’s, calling for full publication. Keiji Fukuda, the WHO’s assistant director-general of health security and environment, cited the difficulty and complexity of creating an information-sharing mechanism as a key rationale.

It was not just bureaucratic difficulties, but ambiguities about authority, control, and eligibility criteria that concerned the panel. “Who would hold on to the sensitive information?” Fukuda said at a press conference. “Under what conditions would that information be released? What are the other complicating factors? It was recognized that coming up with such a mechanism would be very difficult to do overnight, if not impossible.”

Anthony Fauci, the long-serving director of the National Institute of Allergy and Infectious Diseases who would become familiar to many during the COVID-19 pandemic, urged Fouchier and other H5N1 researchers to declare a voluntary moratorium on their work. Fauci told the New York Times that he viewed a moratorium as an act of good faith during a time of polarized opinion — an important one, given that the controversy could lead to excessive restrictions on future research. The scientists took his advice. In January 2012, 39 prominent influenza researchers from around the world, including Fouchier, announced they were voluntarily pausing H5N1 gain-of-function research for 60 days. This moratorium ended up lasting almost a year.

Just months after their unanimous recommendation, the NSABB reversed their position on Fouchier’s paper in March 2012, voting 12–6 in favor of a revised version being published in full. Their reasoning? While the research was still concerning, the revised manuscript did not appear to provide information that would immediately enable misuse. The board also cited the need for freely shared information among countries when responding to international pandemics. A majority of NSABB members still believed there was a critical need for a mechanism for disseminating sensitive scientific information. They acknowledged the complex legal questions involved in developing such a mechanism, but nonetheless held that a “feasible, secure mechanism for sharing sensitive scientific information” was essential, urging the US government to develop one.

This requested plan for disseminating the papers on a need-to-know basis didn’t materialize then and still does not exist now.

After getting a green light from the NSABB and the WHO, Fouchier was confronted with a new challenge: to publish, he needed an export license from the Dutch government, which considered the publication of his research a potential violation of E.U. regulations aimed at preventing the proliferation of weapons of mass destruction and dual-use technologies. At first, he declined to apply for the license, opposed to the precedent it would set, and intended to publish his research without one. In late April 2012, however, Fouchier decided to apply, and the license was granted.

In the end, Fouchier’s paper appeared in a special issue of Science in June 2012. Had he gone through with publishing his research without the export license, the potential penalties included up to six years in prison or a fine equivalent to $102,000 USD.

Even after acquiescing, Fouchier felt so strongly about the importance of free and unrestricted scientific expression that he challenged the licensing requirement in court. He lost his case, meaning similar papers by Dutch scientists would likely require export licenses, and spent years contesting the verdict in courts of increasing authority. The ruling was eventually annulled on procedural grounds in July 2015, meaning future research would still be considered on a case-by-case basis.

“I’m disappointed,” Fouchier told Science at the time, “They didn’t want to touch the hot potato and passed it on instead.”

Back in the US, similar virus research faced increased scrutiny in the wake of the Fouchier controversy. In October 2014, the White House announced an unprecedented “pause” on all federal funding of gain-of-function research involving influenza, MERS, or SARS. The pause was only lifted in December 2017 — with a new provision requiring gain-of-function proposals to be approved by a government panel. “We see this as a rigorous policy,” NIH Director Francis Collins told the New York Times. “We want to be sure we’re doing this right.”

For his part, Fouchier continued working on H5N1, and is currently the deputy head of the Erasmus MC department of Viroscience. A few years after US funding for his research stopped, the NIH began to support Fouchier’s research again.

While the H5N1 controversy did not settle every issue it raised, the incident as a whole does leave the AI community with four intertwined lessons: Third-party institutions can result in more well-considered publication outcomes; absent other action, these entities might only be created in response to crises; these entities, however, are inherently limited in their capabilities; and, thus, researchers must exercise some degree of personal responsibility.

1. Third-party institutions can lead to more well-considered publication outcomes

There are two main reasons why third-party institutions like the NSABB can lead to more thoughtful outcomes: They can counterbalance publishing incentives that bias researchers towards publication and they can provide additional expertise and critical context that individual researchers may lack.

Any researcher knows the desperate need to publish. Their reputation — and thus access to funding, collaboration opportunities, and publication venues — depends directly on the quality and quantity of papers they publish. There’s also the drive to advance scientific progress, and the very real possibility of societal benefits from their research. However, in high-stakes work there are inevitably trade-offs that need to be considered, and third-party institutions can counterbalance default publishing incentives, leading to more well-considered outcomes. In the case of H5N1, the NSABB brought up important publication considerations and proposed an alternative publication strategy. The WHO provided additional perspective and challenged the NSABB’s recommendations not out of individual interest, but as part of its mission to protect public well-being.

A third-party institution, if properly composed, can provide multidisciplinary and security-relevant context for publication decisions. The NSABB was uniquely positioned with “Secret”-level security clearance (ranked only below “Top Secret”-level clearance in the US), allowing its members to comment on issues of national security. They thus had additional decision-making context that enabled the assessment of security-relevant features of Fouchier’s research. The NSABB is also multidisciplinary, with as many as 25 voting members drawn variously from the microbiology, public health, national security, biosafety, and scientific publishing communities — in addition to non-voting ex officio members from 15 federal agencies.

The involvement of the NSABB thereby addressed two important issues in responsible research: researchers’ bias towards publication and their potential lack of the domain expertise or critical information needed to judge risks. Despite AI research becoming increasingly high-stakes, there is no comparable institution for AI. To provide essential balance to publication decisions, the AI community should explore the creation of a similar body.

2. Absent other action, such entities might only be created in response to crises

Despite the benefits we’ve observed, most countries do not have a public entity overseeing responsible publication practices. The founding of the NSABB was precipitated by the anthrax attacks of 2001. That the US has such an entity was therefore not inevitable, but entirely path-dependent — most other countries have no analogous body.

Rather than wait for reactive measures, the AI community should consider establishing a third-party panel of experts as a community resource, which would have the immediate benefits of offering neutrality and multidisciplinarity. Such a centralized entity would also build up useful institutional knowledge and history over time that could be later transferred to any successor entity, government-led or otherwise.

The NSABB was only established after a serious biosecurity crisis. We should not wait for such a near-miss with AI.

3. The powers of third-party entities are structurally limited due to the international nature of science and the autonomy of researchers

However, third-party institutions are not a panacea. As Fouchier’s case demonstrates, even a government body specifically created for the purpose of aiding publication decisions doesn’t completely solve related problems of coordination and the dissemination of information. A prominent philosopher of science, Heather Douglas, concluded upon analyzing the H5N1 case that “stronger institutional mechanisms collectivizing the responsibilities of scientists with respect to dual-use research and its potential to cause great societal harm are clearly needed.” In the case of AI, some practices particular to AI research may render (even strong) institutional mechanisms less effective than necessary.

The AI community should ensure that any publication norms entity it establishes is sufficiently resourced. Not only were the NSABB’s recommendations non-binding, the board also lacked the in-house capacity to execute a key condition of its recommendations being accepted: the ability to share redacted information with scientists who had a need to know. Notably, this would have been a novel form of publication — national policy on fundamental research previously specified that it should either be openly published or classified. The WHO’s main argument for full publication was the lack of a public agency to execute limited disclosure of Fouchier’s paper. The fact remains that if a case like H5N1 occurred again, we would still find ourselves without the institutional capacity or direct accountability to implement such a mechanism. To be effective, an analogous institution for AI should be able to quickly and flexibly allocate financial and engineering resources to respond adequately to unforeseen publication challenges.

Third-party institutions that are created by the state are geographically limited. A significant reason the NSABB had the influence it did on Ron Fouchier, who was Dutch, was that his research (as well as the institute he worked for) was NIH-funded. Additionally, he sought to publish his work in a peer-reviewed journal, Science, which had internal guidelines about what was straightforwardly publishable and what was not, and a procedure in place for escalation to the NSABB. Scientific research is inherently a global enterprise, with many interlocking cross-national procedures and components. Those cross-border interdependencies add bureaucracy but also act as partial safeguards in a system with no obvious central authority. As cutting-edge research work continues to become more globally distributed, state actors potentially become less influential.

Furthermore, some of the attributes of the scientific system that make such institutions useful do not generalize to AI. AI developers are more likely to publish their papers on arxiv.org, which doesn’t require peer review, and the most important developments in AI increasingly come from top industry labs, not government-funded institutes. Additionally, many AI researchers are employed in industry, where research often goes unpublished because of company interests. Thus, the capabilities of a state-led entity like the NSABB would be even more limited for AI, given the lessened importance of controllable levers like publication, shifting more responsibility to the community and individual researchers.

These reasons reinforce the earlier recommendation for the AI community to explore the creation of its own third-party entity, which may prove to be more suitable than a state-led one.

4. Researchers must take on some responsibility for carefully thinking through their publication decisions

Individual researchers cannot entirely offload the responsibility to consider publication impacts to outside entities. In the H5N1 case, actions taken on the part of the scientific community showcased two ways soft norms interacted with hard norms: the researchers’ voluntary moratorium allowed for the development of more well thought-out policy, while Fouchier’s individual actions weakened the impact of any future restrictions placed on his work.

Due to the limitations of third-party institutions discussed above, the AI community must accept that some responsibility to anticipate and mitigate the impacts of their work lies with researchers themselves. This is especially pertinent for AI, where publishing preprints on sites like arxiv.org (bypassing third-party review) is an established norm. It is not only undesirable, but also not possible, for a third-party entity to oversee a researcher’s work and publication decisions in their entirety.

The nature of scientific collaboration also limits the effectiveness of external mechanisms for information security. After the initial NSABB recommendation for partial redaction, many scientists noted that it might not effectively control the flow of information. The AI community should consider which stage of research would be an effective point to influence activity, and empower earlier management of research concerns. For example, the ML and neuroscience conference NeurIPS rejected four papers this year on ethical grounds and conditionally accepted seven others flagged for ethical concerns. If there were existing resources providing guidelines, the authors might have been able to preempt such concerns by proactively adapting their research.

With increased transparency about the criteria for ethical review, researchers could personally influence the direction, scope, and discussion of their research.

Drawing lessons from the field of biosecurity might seem like a daunting task for the artificial intelligence community, whose discussions of responsible publication practices are far more nascent. However, we believe insights derived from the H5N1 case are better understood as an opportunity, one to be proactive in advancing a community that supports the development of safe and responsible artificial intelligence for all.

This post would not be possible without remarkable journalistic and analytical work on this case published by the New York Times and the Nuclear Threat Initiative. We are also deeply grateful to Jack Clark (OpenAI), Dr. Heather Douglas (Michigan State University), Dr. Gregory Lewis (Future of Humanity Institute, Oxford University), and other reviewers for their contributions and feedback on earlier drafts of this piece.
