BOOK REVIEW

AI Narratives: A History of Imaginative Thinking about Intelligent Machines

Review by Stephen Hughes

--

In a research funding proposal to use the latest advances in neurocognitive artificial intelligence (AI) to develop touchless haptic technologies, the second line of the project’s objectives jumped out at me. It reminded me of the power of imagination in the justification of science: “Sadly, many people now feel like the society described in the 1990s science fiction movie Demolition Man, where physical contact was prevented and heavily sanctioned”.

Demolition Man, of all movies, was being used as the cultural reference point for a multi-million-euro European funding proposal. It is a wonderful action movie, but hardly a touchstone for our collective understanding of physical intimacy. And yet, there it was, justifying the development of novel AI technologies.

Stephen Cave, Kanta Dihal and Sarah Dillon (eds): AI Narratives: A History of Imaginative Thinking about Intelligent Machines, Oxford University Press, 2020; 424 pp

For those of us interested in the relationships between science and the public, this example raises a few questions: Which narratives are being used to guide research and innovation in AI? Who gets to choose them? What other kinds of narratives might we consider? AI Narratives is a collection of essays which explores these kinds of questions, examining how narrative representations of artificial intelligence have shaped the development of the technology itself, our understanding of ourselves as humans, and the social and political orders which emerge from these relationships.

Drawing on Sheila Jasanoff’s concept of sociotechnical imaginaries, the editors state that their aim is to address the gap left by Jasanoff’s work, namely, the important role that narratives play in driving sociotechnical imaginaries. In doing so, the collection seeks to document the dominant AI narratives of Western culture and to problematise them in light of diverse counter-narratives.

Distinguishing narratives from imaginaries is no easy task and the collection does not provide much insight on the distinction. For me, AI Narratives does less to carve out a unique space for narrative in the sociotechnical imaginaries of AI than to provide a powerful account of AI imaginaries, more broadly. The book does this precisely because its narratives traverse the various temporal and socio-material scales which typify sociotechnical imaginaries.

The collection begins with grand AI narratives which span from Antiquity to Modernity, encompassing everything from Homeric self-driving ships (chapter 1) and enchanted gold sentinels (chapter 2), to a magical speaking head (chapter 3) and a talking doll (chapter 5). This section convincingly demonstrates the power of enduring narratives about enhanced intelligence and how they were (and still are) employed to justify and police a collectively imagined social order. As Truitt observes in chapter 2, “then, like now, artificial intelligence was seen as a way to maintain, exercise, or consolidate authority”.

One of the central questions that studies of sociotechnical imaginaries attempt to address is why, at certain times, groups or communities choose to follow certain technological pathways rather than others. What is it that makes specific imaginaries stick? The second half of AI Narratives provides some interesting answers to this question. Stephen Cave’s chapter explores how narrative devices are used in speculative nonfiction to encourage the adoption of AI; these include promissory wish fulfilment, technological inevitability, and pronominal continuity across human and machine consciousness. Beth Singler (chapter 11) outlines how certain narratives successfully circulate through culture because they are counter-intuitive against a “background of intuitive expectations”. These “attention-grabbing” stories are ones we continue to tell.

In paying attention to the socio-material dynamics of imaginaries, the collection demonstrates that narratives are not simply ideas, but complex entities recruiting bodies, concepts, representations, and institutions with the power to persuade and affect us. These essays also show that things could be assembled differently, exploring the diverse ways that AI has been represented and problematised in science fiction novels and cinema (chapters 8–12), tech marketing (chapter 13), and the design of sex robots (chapter 15).

The book is at its strongest when it outlines the resistant narratives which problematise AI. It is here, also, that public engagement scholars might be most interested. The perspectives provided in queer, Afro-futurist cyberpunk (chapter 12) or in the design of desexualised, feminist robots (chapter 15) point to alternative innovation pathways for AI, grounded more in justice than in advancing grander fantasies of immortality and divine control.

These counter-narratives remind us that hypothetical arguments about master and slave dialectics or the rights of imaginary AIs occupy cultural space at the expense of discussions about the myriad ways in which machine learning algorithms oppress and discriminate, here and now, along the familiar cultural fault-lines of race, ethnicity, gender, (dis)ability, and class. Fingers crossed I encounter references to Afro-futurist cyberpunk in the next research funding bid I come across.

Stephen Hughes is a lecturer at the Department of Science and Technology Studies at University College London. He recently completed his doctoral dissertation, Love Leitrim/Hate Fracking: The Affective Technopolitics of Environmental Controversy in Ireland.

--

Public Understanding of Science Blog
SciComm Book reviews

Public Understanding of Science is a fully peer-reviewed international journal covering all aspects of the inter-relationships between STEMM (science, technology, engineering, mathematics and medicine) and the public.