The Spin Cycle
The general discussion around so-called artificial intelligence (AI) continues to be wrong-headed: those outside the scientific domain struggle to conceptualise and define it, and even those within the research field offer different definitions depending on their vested interests. Maya Indra Ganesh, a digital cultures researcher, characterises AI as a suite of technologies, including machine learning, computer vision, reasoning and natural language processing, among others, that occupies an awkward and unique space as technology, metaphor and socio-technical imaginary.
Recent developments have introduced multimodal AI as a new paradigm within the field. According to the Brave search engine summariser, “Multimodal AI systems are experimenting with driving text/NLP and vision to an aligned embedding space to facilitate multimodal decision-making,” where they “combine various data types, such as image, text, speech, and numerical data, with multiple intelligence processing algorithms to achieve higher performances.” The tools currently available to artists, including multimodal generative AI, are fiendishly difficult to comprehend in terms of the underlying data science and engineering. It may nevertheless be helpful to consider a series of cautionary tales, encapsulated as effects, that have accompanied the evolution of AI: the AI effect, the Eliza effect and the Clever Hans effect.
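The idea of an ‘aligned embedding space’ can at least be made concrete. The sketch below is illustrative only, assuming the Hugging Face transformers library, the public openai/clip-vit-base-patch32 checkpoint and a hypothetical local image file; because text and image are projected into the same vector space, captions can be ranked against an image by simple similarity.

```python
# A minimal sketch of a shared text/image embedding space, assuming the
# Hugging Face `transformers` library and the public CLIP checkpoint
# `openai/clip-vit-base-patch32`; details vary by model and library version.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")  # hypothetical local image file
texts = ["a horse at a carnival", "a derelict city gate"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)

# Both modalities now live in one embedding space, so a similarity
# score can rank the candidate captions against the image.
print(outputs.logits_per_image.softmax(dim=-1))
```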
The AI effect describes how something previously considered artificial intelligence loses that classification once it is realised. The phenomenon arises from the unfulfilled and exaggerated promises associated with AI, often resulting in cycles of hype and subsequent disillusionment; a story of ‘non-arrival’ creating a void in the imagination. This aligns with the hype cycle, defined by Gartner as a series of cyclical events encompassing breakthroughs, inflated expectations, disillusionment, practical use cases, and eventual adoption.
Originating from 1960s computer science and Joseph Weizenbaum’s seminal chatbot, Eliza, the Eliza effect refers to the tendency to attribute human-like qualities to computer behaviours, reading more understanding into computer-generated symbols than warranted. Individuals perceive the system as having intrinsic qualities and abilities which simply aren’t there.
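To make the shallowness of the underlying mechanism concrete, here is a toy exchange in the spirit of Eliza. This is an illustrative sketch, not Weizenbaum’s program: a handful of regular-expression rules and pronoun reflections, with no model of meaning anywhere.

```python
# An illustrative toy in the spirit of Weizenbaum's Eliza (not his code):
# shallow pattern matching and pronoun reflection, nothing more.
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r".*"), "Please tell me more."),  # catch-all fallback
]

def reflect(fragment: str) -> str:
    # Swap first-person words for second-person ones, word by word.
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.match(utterance)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I feel nobody listens to me"))
# -> "Why do you feel nobody listens to you?"
```

A reply such as ‘Why do you feel nobody listens to you?’ can feel attentive, yet it is produced by string substitution alone; the gap between that mechanism and the impression it creates is the Eliza effect.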
The Clever Hans effect derives from an early 20th-century story of a German carnival act involving a ‘mind-reading’ horse called Hans, and has been adopted by the computer science community as a way of explaining the decisions of state-of-the-art ‘learning’ machines. Sebastian Lapuschkin and co-authors (2019) intended to add a voice of caution to the ongoing excitement around ‘machine intelligence.’ The theory relates to the Clever Hans phenomenon in psychology, rooted in observer expectancy, where the observer projects what they want to see onto the thing observed, consequently affecting their perception of the outcome. Hans appeared to count, but was in fact responding to unconscious cues from his questioner; likewise, a ‘learning’ machine can appear competent while keying on incidental correlations in its training data.
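A toy version of such a ‘Clever Hans predictor’ fits in a few lines. The sketch below is illustrative, assuming numpy and scikit-learn: the genuine features are pure noise, the label leaks through one artefact column (a ‘watermark’), and the model scores near-perfectly by learning the cue rather than the task.

```python
# A minimal sketch of a 'Clever Hans' predictor, assuming numpy and
# scikit-learn: the label is leaked through a single artefact feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
y = rng.integers(0, 2, n)               # binary labels
X = rng.normal(size=(n, 20))            # 'genuine' features: pure noise here
X[:, 0] = y + rng.normal(scale=0.1, size=n)  # artefact column tracks the label

clf = LogisticRegression().fit(X, y)
print(clf.score(X, y))                  # near-perfect 'performance'
print(np.argmax(np.abs(clf.coef_)))     # -> 0: the model keys on the artefact
```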
The prevalence and socio-technical implications of AI are discussed within a prevailing fog of these effects. To assert that a large language model (LLM), known to operate through next-word prediction, demonstrates human-like reasoning is an extraordinary claim, yet it is made routinely; validating it would require exceptional evidence that simply does not exist. Governments, the public, and some computer scientists are being misled into thinking these models perform tasks of which they are incapable, perpetuating myths, fearful tropes and worn-out assumptions. A catalogue of misconceptions highlighted by Daniel Leufer (2020), in combination with a growing awareness of extractive modi operandi, the politics of data and its potential harms, generates a distinct form of anxiety.
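Next-word prediction itself is mechanically plain. The sketch below is illustrative, assuming the Hugging Face transformers library and the public gpt2 checkpoint: at every step the model does nothing but score candidate next tokens and append the most likely one.

```python
# A minimal sketch of next-word (next-token) prediction, assuming the
# Hugging Face `transformers` library and the public `gpt2` checkpoint.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

ids = tokenizer.encode("The horse taps its hoof and", return_tensors="pt")
with torch.no_grad():
    for _ in range(10):
        logits = model(ids).logits[:, -1, :]  # scores for the next token only
        next_id = torch.argmax(logits, dim=-1, keepdim=True)  # greedy choice
        ids = torch.cat([ids, next_id], dim=-1)

print(tokenizer.decode(ids[0]))
```

Whatever fluency emerges, each pass of the loop is the same operation: rank the vocabulary, pick a token, repeat.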
The terminology and conceptualisation of AI trace back to 1950s American academia and the emerging field of artificial intelligence research. Bruce Sterling, co-author with William Gibson of The Difference Engine (1990), describes the “old-school artificial intelligence” of this period as a “rather tragic branch of computer science” (2023). The adoption of the term artificial intelligence met with division within the research community. In her book Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence, Pamela McCorduck describes how Claude Shannon favoured the rubric Automata Studies, while Allen Newell objected to the term artificial intelligence and continued to describe his work in the field as complex information processing. Allegedly, John McCarthy recognised the term’s marketing value and leveraged it. The AI researcher Kate Crawford, in her book Atlas of AI: Power, Politics, and The Planetary Costs of Artificial Intelligence, identifies this moment as a critical juncture at which computation was mistakenly equated with human intelligence, characterising it as the making of a “terrible error, a sort of original sin of the field.” This conceptualisation of minds as computers and computers as minds laid the groundwork for subsequent developments in AI. Marvin Minsky is on record as being particularly affronted by the use of the term ‘artificial intelligence’ and its proximity to human thinking. Human intelligence, he said, is “more of an aesthetic question or one of a sense of dignity, than a technical matter…a complex of performances which we happen to respect but do not understand.”
In the context of artificial intelligence development and current generative AI, there is another effect worth considering: the Rashomon effect, popularised through Akira Kurosawa’s film Rashomon (1950). Rashomon is partly set in a crumbling, derelict city gate, signifying the dissolution of truth in medieval Japan, a potent symbol in a post-WW2 Japan devastated by atomic weapons. The film presents contradictory accounts of the same events, set in a forest rendered as a mythical space.
The Rashomon effect captures the unreliability of eyewitness accounts. In his article, The Rashomon Effect and Communication (2016), Robert Anderson describes how its academic use extends to thinking, knowing, and remembering within complex and ambiguous scenarios. This effect has permeated various fields, entering the lexica of filmmaking, journalism, ethnography and the judiciary, providing an explanatory framework for grappling with complexity.
We can imagine future historians struggling with the factional histories of the current era. Ironically, although the ubiquity of digital media makes this the most documented period in history, agreeing on the integrity of its records will become ever more arduous. Within this emerging media environment, the Rashomon effect becomes pervasive. Deepfake technology further destabilises familiar notions, blurring the boundary between consensus reality and synthesised fiction as real and fabricated elements blend seamlessly in new ways.
Being engulfed in ambiguity and abstraction contributes to an epoch that begins to feel crushed and exhausted as the familiar mutates and reformulates. Displacement from former realities emerges as a continuum, intertwining with the realm of imagination. How can one creatively relate to the current displacement as a new media reality? Can doubt and lack of purpose in the face of complex technological opacity become a method, where relinquishing control and embracing disorientation become objectives?
When generative AI is harnessed for artistic experiments, the creative process is often discussed as though the tool were a protagonist. However, we need clear alternatives to thinking about these tools as ‘collaborators’ if we are to dispel familiar tropes. When these tools are employed, words and images are ‘refracted’ through the AI medium, becoming transfigured media dredged from the vast latent spaces of statistical models. Things are parsed, abstracted and reassembled into unfamiliar relationships, disjunctions and unsettling hybridisations; objects are decontextualised from their sources and purposes. These relationships may appear broken or fragmented, yet they can be imbued with artistic intentionality, akin to kintsukuroi, the Japanese art of repairing broken objects. In this way, brokenness does not signify irreparable damage. It becomes an opportunity to embellish and emphasise the fault lines, aligning art with the grammar and syntax of its times.
References
‘The Rashomon Effect and Communication’, by Robert Anderson (2016). Available at: https://www.researchgate.net/publication/314781888_The_Rashomon_Effect_and_Communication
‘Atlas of AI: Power, Politics, and The Planetary Costs of Artificial Intelligence’, by Kate Crawford (2021). Published by Yale University Press.
‘Anthropomorphic Agents: Friend, Foe, or Folly’, by W. J. King (1995). Available at: https://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=371822FA794727745AD7B3B83B1B5483?doi=10.1.1.57.3474&rep=rep1&type=pdf
‘Unmasking Clever Hans Predictors and Assessing What Machines Really Learn’, by Sebastian Lapuschkin et al. (2019) in Nature Communications, 10.
‘Bruce Sterling on the Art of Text-to-Image Generative AI’, by Geert Lovink (2023). Institute of Network Cultures Blog. Available at: https://networkcultures.org/blog/2023/05/17/bruce-sterling-on-the-art-of-text-to-image-generative-ai/
‘Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence’, by Pamela McCorduck (2004). Published by A.K. Peters.