Automation in nuclear weapon systems: lessons from the man who saved the world

Nina Miller

International Affairs
International Affairs Blog
5 min read · Jul 9, 2021


Interior of the French Navy attack nuclear submarine ‘Saphir’, during training exercises on 28 February 2009 off Saint-Mandrier, France. Photo: Alexis Rosenfeld via Getty Images.

In 1983, the world came within a phone call of nuclear annihilation. When an alert of incoming ballistic missiles registered at an early warning command centre outside of Moscow, Lieutenant Colonel Stanislav Petrov had to decide whether or not to confirm the signal to his superior, an action which could have sparked a catastrophic nuclear exchange. Rather than escalate the report up to Soviet leadership, Petrov — who felt he ‘was sitting on a hot frying pan’ — decided the missile alert was a system malfunction.

Later called ‘the man who saved the world’, Petrov demonstrated an astute understanding of the limits of machine analysis. What can modern policymakers learn from Petrov’s experience about the impact of automation on accidents in nuclear weapon systems?

Available information indicates that US officials are integrating greater amounts of automation and potentially machine learning in nuclear command, control, and communications (NC3). Indeed, the fields in which increased automation is being considered range from predictive maintenance and data analytics to cybersecurity, tracking of adversary submarines, and early warning systems. Human operators often ‘overtrust’ automated systems in other high-consequence environments like civil aviation and medicine, yet it remains unclear exactly how automation misuse could increase the risk of nuclear accidents or escalation.

The existing US NC3 complex has been tested and developed to ensure that nuclear weapons are always available for use when authorized by the president, but never used accidentally or without authorization. US policymakers emphasize that humans will remain in the loop and machine learning will be used primarily for data analysis and system maintenance.

As the United States and other nuclear powers modernize and develop NC3 infrastructure, there are important lessons to learn from the Petrov case and from the broader psychological literature on automation. Next time, human supervision of machines might not be enough to prevent nuclear war.

Petrov’s ‘close call’

Did having a human in the loop actually prevent nuclear war? In interviews later in his life, Petrov gave three main reasons for his admittedly uncertain decision. First, Petrov had a gut instinct that ‘when people start a war, they don’t start it with only five missiles’. Second, ground-based radar offered no corroborating evidence that an attack was underway, although these data likely would have lagged by a few minutes anyway. Third, Petrov knew the system was unreliable and had been hurriedly deployed. Petrov’s ‘funny feeling in [his] gut’ was an emotional intuition based on his contextual knowledge, including the automated system’s unreliability. This knowledge and instinct informed a rational approach to the decision.

What we know about automation bias

Automation is not inherently risky or destabilizing. In other contexts, automation has improved system function by capitalizing on the speed and reliability of machines relative to human operators. Yet, automated systems have been implicated in deadly accidents, including the Patriot fratricides and the Air France Flight 447 crash. We know from other contexts that certain factors increase the chance of automation-related errors:

  • A higher level of automation, for instance systems that recommend courses of action rather than merely analyzing information.
  • Higher reliability and consistency of automated systems. This is known as the lumberjack effect, because ‘the higher they are, the farther they fall’.
  • Distraction and fatigue in human operators, which could result from multi-tasking and environmental factors.
  • ‘Learned carelessness’, resulting from iterated interactions with an automated system. When operators fail to pay attention and lose situation awareness without consequences, it increases the risk of future complacency.

These factors point to an inherent paradox when it comes to human-machine interaction: the more reliable and useful an automated system is, the less likely human operators are to critically assess and pay attention to its function. In other words, the probability of a catastrophic mistake caused by automation bias or complacency in NC3 will be highest for consistent, highly reliable systems with a high level of automation.

Lessons from Petrov

Within the context of automation psychology, Petrov’s decision is hardly surprising. Put simply, it was rational to distrust the system and to seek out additional information. However, Petrov had two political and organizational factors working against him. First, although Petrov did not face an official time limit, there was immense pressure to decide quickly and inform Soviet leadership before US ballistic missiles reached them. This limited his ability to pause and consult additional information beyond the computer systems. Second, Petrov went against protocol when he dismissed the false alert, a decision that led to an official reprimand by his superiors.

Will the next Petrov make the right decision? To decrease the risk of automation misuse and instability, next-generation command and control will need to reward vigilance, give operators the time and ability to consult additional information, and ensure that nuclear postures in the United States and elsewhere do not encourage over-reliance on machines in a crisis.

Decision-support systems that develop recommendations for human operators about the use of nuclear weapons are likely to involve the highest risk of automation misuse. Machine advice could be misinterpreted or uncritically trusted when systems perform well in peacetime and wargames, leading users to develop ‘learned carelessness’. The lumberjack effect is perhaps the most counterintuitive and dangerous paradox: if the Soviet early warning system had been highly reliable and vetted, Petrov might not have hesitated.

As US officials contemplate the proper role of machine learning in a modernized NC3 infrastructure, they should be careful not to take the wrong lessons from Petrov’s experience. Human supervision is not enough. Healthy human-machine teams need opportunities to train together and learn from mistakes, which is difficult or impossible for certain NC3 functions like early warning or force planning. Proposed solutions like explainable AI and enhancing trust in AI could actually be counterproductive if they create false expectations of machine reliability or inadvertently encourage complacency. Nuclear modernization in the United States and elsewhere should take as a starting point that the paradoxes of automation cannot be solved, only mitigated and managed.

Nina Miller is a PhD student in MIT’s Department of Political Science and currently a Research Associate with Lawrence Livermore National Laboratory’s Center for Global Security Research (CGSR). Her research focuses on the intersection of international security, political psychology, and technology.

The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States government or Lawrence Livermore National Security, LLC, and shall not be used for advertising or product endorsement purposes.

This blogpost is part of a collaboration between International Affairs and the Future Strategy Forum (FSF). FSF is an organization and annual conference series that seeks to elevate women’s expertise in national security, build mentorship, and connect graduate students to policymakers.

All views expressed are individual, not institutional.
