Six Unexamined Premises Regarding Artificial Intelligence and National Security

A guest post by Lucy Suchman

Strategic Defense Initiative (SDI), aka the “Star Wars” missile defense program

On March 1st, the National Security Commission on Artificial Intelligence (NSCAI) released its Final Report and Recommendations to the President and Congress. While the Report is the outcome of an extended period of discussion and consultation, the Commission’s recommendations rest upon a set of unexamined, and highly questionable, assumptions.

The Commissioners counsel that accelerated adoption of AI-enabled weapon systems is necessary to maintain US military advantage. They promise that AI can enable the achievement of a fully integrated, interoperable command and control system. For the Intelligence Community, Commissioner Jason Matheny states: “Decision-makers should be able to access a real-time dashboard of threats in the world with real time forecasts” (public plenary, Jan. 25, 2021). The thinking here is future conditional and wishful.

Screenshot from National Security Commission on Artificial Intelligence (NSCAI) Public Meeting recording

NSCAI members comprise current and former CEOs and other senior executives of Big Tech companies (Amazon, Google, In-Q-Tel, Microsoft, Oracle), current and former members of the Defense and Intelligence agencies, and senior members of universities with extensive DoD funding (Caltech, CMU). They have presumably been nominated to serve on the grounds that they have the relevant expertise, but without acknowledgment of their vested interests in increased funding for AI research and development. This is despite Chair Eric Schmidt’s statement at the NSCAI plenary of Jan. 25, 2021 that “We ended up with a representative team of America.”

The first sentence of the NSCAI Final Report reads: “Artificial Intelligence (AI) technologies promise to be the most powerful tools in generations for expanding knowledge, increasing prosperity, and enriching the human experience.” More faith-based than demonstrable, this statement also obscures the vagueness of just what AI is. While technologists understand ‘AI’ as a convenient (and highly saleable) shorthand for a suite of statistically based techniques and technologies for automating data analysis, the term falsely implies something singular and unprecedented.

The Report’s recommendations are aimed at dominance in what Commissioners posit as a global competition between the world’s two “AI superpowers,” the US and China. In the Introduction to the Report the Commission declares that “Americans must recognize the assertive role that the government will have to play in ensuring the United States wins this innovation competition. Congress and the President will have to support the scale of public resources required to achieve it.” There is no acknowledgement that the NSCAI itself helps to create this arms race by taking it as an unquestionable (not to mention self-interested) premise of its work, or any discussion of how such a race for dominance might be de-escalated.

Along with the premise of an unavoidable arms race, this framing takes any question of deciding not to pursue the development of AI technologies off the table. Whether the focus is on the threats or the limits of existing technologies, the proposed solution is always greater investment. This premise ignores the history of failures in AI research in domains that require real-time interaction with open and changing environments, a history showing that not all problems can be solved with more money.

While promoting AI’s incorporation into military systems, the Commission warns that “AI will compress decision time frames from minutes to seconds, expand the scale of attacks, and demand responses that will tax the limits of human cognition” (Final Report p. 25). The solution, it follows, must be increasingly autonomous weapon systems, based on the “AI promise — that a machine can perceive, decide, and act more quickly, in a more complex environment, with more accuracy than a human” (Final Report p. 24). Despite the lack of evidence to substantiate this promise, and the continuing international debate over the legality and morality of autonomous weapon systems, the Commission concludes that the US must pursue their development. On this basis and despite growing calls, the Commission argues that it would not be in the US interest to support a global prohibition on lethal autonomous weapon systems.

The conclusions of the NSCAI inquiry, in sum, are foregone: the self-reinforcing dynamic of an escalating arms race justifies massive investment of public funds into research and development in ‘AI’. There is no space devoted to considering alternatives to the expansion of a national security strategy based on US military and technological dominance — for example, through greater investment in humanitarian aid and international diplomacy. Given the unexamined premises of the report, it is imperative that Congress and the President’s Office of Science and Technology Policy appraise the Commission’s recommendations critically and subject them to debate, in a forum that opens the discussion to a broader range of expertise and visions for greater security.

Lucy Suchman is Professor Emerita of the Anthropology of Science and Technology at Lancaster University, a member of the International Committee for Robot Arms Control, and a member of the Advisory Board of the AI Now Institute.
