The Spin Cycle—A Fog of Effects

Bruce Gilchrist
7 min read · Jun 22, 2023
“Rashomon” by Akira Kurosawa (1950). Press Photo of Toshiro Mifune and Daisuke Kato. Public Domain.

The general discussion around so-called artificial intelligence (AI) continues to be wrong-headed — those of us outside the scientific domain struggle with how to conceptualise and define it, and even those within the research field offer different definitions depending on their vested interests. Maya Indra Ganesh, a digital cultures researcher, characterises AI as a suite of technologies that includes machine learning, computer vision, reasoning, and natural language processing, among others. These technologies occupy an awkward and unique space as technology, metaphor and socio-technical imaginary. AI can be viewed as an assemblage of interacting things — not one black box, but several interconnected black boxes sharing information in some way.

Recent developments have introduced Multimodal AI as a new paradigm within the field. According to the Brave search engine summariser, “Multimodal AI systems are experimenting with driving text/NLP and vision to an aligned embedding space to facilitate multimodal decision-making,” where they “combine various data types, such as image, text, speech, and numerical data, with multiple intelligence processing algorithms to achieve higher performances.” Tools currently available to artists — including multimodal generative AI — are fiendishly difficult to comprehend in terms of the underlying data science and engineering. However, it may be beneficial to consider a series of cautionary tales, encapsulated as effects, that have accompanied the evolution of AI: the AI effect, the Eliza effect and the Clever Hans effect.
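
Stripped of the engineering, the "aligned embedding space" idea can be sketched in a few lines: separate encoders map each modality into one shared vector space, where proximity stands in for relatedness. The sketch below is a minimal toy; its random projection matrices are purely hypothetical stand-ins for trained networks such as those behind CLIP-style systems.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for trained encoders: in a real system these are
# deep networks; here they are fixed random projections into a shared
# 64-dimensional embedding space.
W_image = rng.normal(size=(64, 2048))  # maps 2048-d image features
W_text = rng.normal(size=(64, 512))    # maps 512-d text features

def embed(features, W):
    """Project modality-specific features into the shared space and normalise."""
    z = W @ features
    return z / np.linalg.norm(z)

image_features = rng.normal(size=2048)  # placeholder for real image features
text_features = rng.normal(size=512)    # placeholder for real text features

# Cosine similarity in the shared space is what lets a system compare
# a picture with a caption at all.
similarity = embed(image_features, W_image) @ embed(text_features, W_text)
print(f"image-text similarity: {similarity:.3f}")
```

Training replaces the random matrices with learned ones, pulling matched image/text pairs together and pushing mismatched pairs apart; the geometry of that shared space, not any single black box, is what "multimodal" names.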

The AI effect describes how something previously considered artificial intelligence loses its classification as such once it is realised. This phenomenon arises from the unfulfilled and exaggerated promises associated with AI, often resulting in cycles of hype and subsequent disillusionment; a story of ‘non-arrival’ creating a void in the imaginary. This aligns with the hype cycle, Gartner’s model of a recurring sequence: technology breakthrough, inflated expectations, disillusionment, practical use cases, and eventual adoption.

Originating from 1960s computer science and Joseph Weizenbaum’s seminal chatbot, Eliza, the Eliza effect refers to the tendency to attribute human-like qualities to computer behaviours, reading more understanding into computer-generated symbols than warranted. Individuals perceive the system as having intrinsic qualities and abilities which simply aren’t there.

The Clever Hans effect derives from an early 20th-century story of a German carnival act involving a ‘mind-reading’ horse called Hans, and has been adopted by the computer science community as a way of explaining the decisions of state-of-the-art ‘learning’ machines. Sebastian Lapuschkin and his co-authors intended it to add a voice of caution to the ongoing excitement around ‘machine intelligence.’ The term relates to the Clever Hans phenomenon from psychology, which is rooted in observer expectancy: the observer projects what they want to see onto the thing being observed, consequently affecting their perception of the outcome.

The prevalence and socio-technical implications of AI are endlessly discussed right now within a prevailing fog of these effects. To assert that a large language model (LLM), known to operate simply through next-word prediction, is actually demonstrating humanlike reasoning is an extraordinary claim, yet it is often made; validating it would require exceptional evidence that simply does not exist. Governments, the public, and some computer scientists are being misled into thinking that these models perform tasks they are incapable of, which perpetuates myths, fearful tropes and worn-out assumptions. A catalogue of misconceptions — highlighted by Daniel Leufer — in combination with a growing awareness of extractive modi operandi, the politics of data and its potential harms, generates a distinct form of anxiety.
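
It is worth being concrete about what next-word prediction means. The sketch below is a deliberately crude counting model, not an LLM; the corpus is invented for illustration. But the generation loop (score the candidate continuations, sample one, append, repeat) is structurally the same operation an LLM performs at vast scale.

```python
import random
from collections import Counter, defaultdict

corpus = ("the horse taps the board the trainer nods the crowd cheers "
          "the horse taps again").split()

# Count which word follows which: a bigram table, the crudest possible
# 'language model'. An LLM replaces this table with a neural network,
# but it still emits a distribution over next tokens and nothing else.
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def generate(word, length=8):
    out = [word]
    for _ in range(length):
        candidates = follows[out[-1]]
        if not candidates:
            break
        words, counts = zip(*candidates.items())
        out.append(random.choices(words, weights=counts)[0])  # sample next token
    return " ".join(out)

print(generate("the"))
```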

The terminology and conceptualisation of AI trace back to 1950s American academia and the emerging field of artificial intelligence research. Bruce Sterling, co-author of The Difference Engine (1990) with William Gibson, describes the “old-school artificial intelligence” of this period as a “rather tragic branch of computer science.” The adoption of the term artificial intelligence divided the research community. In her book, Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence, Pamela McCorduck describes how Claude Shannon favoured the rubric Automata Studies, while Allen Newell objected to the term artificial intelligence and continued to describe his work in the field as ‘complex information processing’. Allegedly, John McCarthy recognised the term’s marketing value and championed its use. The AI researcher Kate Crawford, in her book Atlas of AI: Power, Politics, and The Planetary Costs of Artificial Intelligence, identifies this moment as a critical juncture where computation was mistakenly equated with human intelligence, characterising it as the making of a “terrible error, a sort of original sin of the field.” This conceptualisation of minds as computers and computers as minds laid the groundwork for subsequent developments in AI. Marvin Minsky is on record as being particularly affronted by the use of the term artificial intelligence and its proximity to human thinking. Human intelligence, he said, is “more of an aesthetic question or one of a sense of dignity, than a technical matter…a complex of performances which we happen to respect but do not understand.”

In the context of current generative AI, there is another effect worth considering — the Rashomon effect, popularised through Akira Kurosawa’s film Rashomon (1950). The film is partly set in a crumbling, derelict city gate signifying dissolution in medieval Japan — a potent symbol in a post-WW2 Japan ruined by weapons of mass destruction. Rashomon presents contradictory accounts of events that took place in a forest, rendered as a mythical space symbolising an unknowable realm.

The Rashomon effect captures the unreliability of eyewitness accounts. Robert Anderson has described in his article, The Rashomon Effect and Communication, the way in which its academic use extends to thinking, knowing, and remembering within complex and ambiguous scenarios. This effect has permeated various fields, entering the lexica of filmmaking, journalism, ethnography and the judiciary, providing an explanatory framework for grappling with complexity.

The nature of internet search is changing, becoming unfathomably complex, as companies chase the AI hype cycle and race to incorporate generative AI into their products. Google’s search quality has been in steady decline for years and will likely only get worse, according to Cory Doctorow, who describes how we are becoming accustomed to stories about chatbots propagating falsehoods and tricking people, including experts, into accepting and repeating fabrications: “[…] when Google ingests and repeats a lie, the lie gets spread to more sources. Those sources then form the basis for a new kind of ground truth, a ‘zombie statistic’ that can’t be killed, despite its manifest wrongness.”

Perhaps we can call on this description of the degradation of search and the zombie statistic to support Antonio García Martínez’s speculation that future historians will struggle with factional histories of the current era, and that, ironically — given that the ubiquity of digital media makes it the most documented period of history — agreeing on factual events will become ever more arduous. Within this emerging media reality, characterised by atomised online experiences, the Rashomon effect becomes pervasive. Deepfake technology further destabilises familiar notions, confusing the boundaries between reality and synthesised fictions — Baudrillard’s concept of hyperreality retains its relevance as real and fabricated elements continue to blend seamlessly in new ways.

Without a foundation of data there is no artificial intelligence, and with generative AI having already ingested so much of the available information, it is uncertain where new systems should be trained next. There is concern that they might be fed synthetic data generated by another AI model, which raises the question: at what point does the information become so derivative that it stops being fit for purpose?
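
One way to make that concern tangible is a toy simulation of recursive training, loosely in the spirit of the ‘model collapse’ literature: fit a simple model to data, sample synthetic data from it, fit the next generation on that, and repeat. The Gaussian ‘model’ below is a hypothetical stand-in for a generative system.

```python
import numpy as np

rng = np.random.default_rng(7)

# Generation 0: a small sample of 'real' data from a standard normal.
data = rng.normal(loc=0.0, scale=1.0, size=20)

for generation in range(1, 61):
    # 'Train' a toy model: estimate a Gaussian from the current data.
    mu, sigma = data.mean(), data.std()
    # The next generation sees only synthetic samples from that model.
    data = rng.normal(loc=mu, scale=sigma, size=20)
    if generation % 15 == 0:
        print(f"generation {generation:2d}: estimated spread = {sigma:.3f}")
```

In this toy chain, estimation error compounds and the estimated spread tends to drift downward across generations: the tails of the original distribution, its rarest events, are the first casualties.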

In an epoch that begins to feel crushed and exhausted, there is a sense of being embedded in ambiguity and abstraction as a kind of machine-induced wreckage. As the familiar mutates and reformulates, displacement from a former reality emerges as a kind of continuum, intertwining with the realm of imagination. How can one creatively relate to the current displacement as a new media reality? Can doubt and lack of purpose in the face of complex technological opacity become a method, where relinquishing control and embracing disorientation become objectives?

Film still from “Rashomon” by Akira Kurosawa (1950).

When generative AI is harnessed for artistic experiments, the creative process itself is often discussed as a form of protagonist. But we need to develop clear alternatives to thinking about these tools as ‘collaborators’ in order to dispel the tropes of so-called artificial intelligence. When employing these tools, words and images are refracted through the medium of AI, becoming transfigured media dredged from semantic neighbourhoods within the vast latent spaces of statistical models. Things are parsed, abstracted, and reassembled into unfamiliar relationships, disjunctions, and unsettling hybridisations; objects are decontextualised from their original sources and purposes. These relationships may appear broken or fragmented, yet they can be imbued with artistic intentionality, akin to kintsukuroi, the Japanese art of repairing broken objects. Considered in this way, brokenness does not signify irreparable damage but becomes an opportunity to embellish and emphasise the fault lines, to align art with the grammar and syntax of its times.
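
‘Semantic neighbourhood’ also has a literal reading: in a model’s latent space every word or image is a point, and its neighbours are whatever training placed nearby. The miniature below uses invented three-dimensional vectors as stand-ins for learned embeddings, just to show the lookup itself.

```python
import numpy as np

# Invented toy embeddings (real models learn vectors with hundreds of
# dimensions); each entry is a point in a miniature latent space.
vocab = {
    "horse":    np.array([0.9, 0.1, 0.0]),
    "stallion": np.array([0.8, 0.2, 0.1]),
    "gate":     np.array([0.1, 0.9, 0.2]),
    "forest":   np.array([0.0, 0.8, 0.6]),
}

def neighbours(word):
    """Rank the rest of the vocabulary by cosine similarity to `word`."""
    q = vocab[word] / np.linalg.norm(vocab[word])
    scores = {w: float(q @ (v / np.linalg.norm(v)))
              for w, v in vocab.items() if w != word}
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(neighbours("horse"))  # 'stallion' ranks first in this toy space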

Image generated in Midjourney by the author (May 2023), using the narrative of Akira Kurosawa’s “Rashomon” as a source for the prompt text.

References

‘The Rashomon Effect and Communication’, by Robert Anderson (2016). Available at: https://www.researchgate.net/publication/314781888_The_Rashomon_Effect_and_Communication

‘Atlas of AI: Power, Politics, and The Planetary Costs of Artificial Intelligence’, by Kate Crawford (2021). Published by Yale University Press.

‘Anthropomorphic Agents: Friend, Foe, or Folly’, by W. J. King (1995). Available at: https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.57.3474&rep=rep1&type=pdf

‘Unmasking Clever Hans Predictors and Assessing What Machines Really Learn’, by Sebastian Lapuschkin et al. (2019) in Nature Communications, 10.

‘Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence’, by Pamela McCorduck (2004). Published by A.K. Peters.
