The arts and humanities are already embedded in AI: Values, visions and the future university — Part 2

Terry Flew
Published in Mediated Trust
4 min read · May 6, 2024

[Image: robot at a laptop]

Citing Ayn Rand and the CCP may not have inspired you to believe that the arts and humanities are having much sway over the development of AI and other digital technologies. In fact, it may suggest the opposite: that critical thinking has been completely absorbed into the techno-capitalist machine. To better understand how critical theories of technology, informed by the arts and humanities, have emerged, it is important to look at humanities developments at universities that tend to be known more for their science and technology programs.

In the US, I am thinking of universities such as Stanford, UC Berkeley and MIT, more so than Harvard, Yale and Princeton. All of these institutions have programs in the arts and humanities, but they typically sit alongside programs in science and technology. In my field of communication and media studies, the relevant department at Stanford sits within the School of Humanities and Sciences, while at MIT it is the Comparative Media Studies program, known for its contributions to game studies, digital writing, computational media and histories of technology.

I am thinking here of the American philosopher Hubert Dreyfus, who engaged with early AI scholars such as Herbert Simon, and developed his critique of AI, What Computers Can’t Do, at UC Berkeley, first published in 1972. Dreyfus’s critique of the conceptual underpinnings of AI drew upon his reading of philosophers and political theorists such as Husserl, Merleau-Ponty, Heidegger and Foucault to question the assumptions of AI theorists about cognition and the nature of knowledge. My point is that, although Dreyfus was a philosopher, it was his proximity to the people and labs in which AI was being trialled (at MIT and the RAND Corporation) that acted as the stimulus to his work.

Dreyfus’s book has gone through many editions and has been consistently engaged with by the AI community. A reluctantly conceded consensus is that he demonstrated the limits of AI models based on what is known as cognitive simulation: the proposal that a computer can replicate the human mind and that computing processes can be made analogous to human thought processes. Whether more complex models of AI overcome Dreyfus’s critiques is frequently debated; the key point is that, from a philosophical perspective, Dreyfus identified context as the knowledge frontier that “thinking machines” of any kind would struggle with.

Put differently, the issue is not whether a computer can beat you in a game of chess. The issue is whether it will identify better applicants for jobs, be a better predictor of the capacity of borrowers to repay loans, or better determine the likelihood of crime in a neighbourhood. These are the contexts in which the questions of limits to AI-generated knowledge matter.

If we were to identify where the arts and humanities have something new to say about AI, one place to start would be with the two defining terms themselves (a very humanities move!). The data that drives machine learning is far from artificial. In fact, it is precisely the materiality of the data — the fact that it comes from real people doing real things in a digital format — that is at the core of claims as to its potential and its achievements. In that respect, the really interesting questions about the uses of data for the purposes of machine learning are absolutely tied to its materiality: how it was acquired; whether it was made available with the consent of its creators; whether there are biases in how it was gathered; and how such biases manifest themselves in how it is used.

The second point relates to intelligence. In a sense, Dreyfus’s distinction between replicable and learnable activities — which computers can do — and context-dependent forms of being-in-the-world — which he saw as marking the frontier of what they can’t do — reappears in the relationship of information to knowledge. What we call artificial intelligence involves moving the frontier of digital technology from processing information to creating new information through the complex reassembly of existing information. And this has been happening for a while — the evolution of spell checking would be one way of tracking it across the devices you routinely use.

The question, again, is one of determining when and how it matters. The creatures that roam Godzilla X Kong appear far more authentic than the shark that terrorized Amity Island in Jaws 50 years earlier. Yet whether or not we find them convincing relates more to our purpose in watching the films than to the technical properties of the creatures in question and how closely they mimic real ones. Such mimicry in other contexts, such as the deep faking of political leaders during election campaigns, puts us in different ethical territory. And the answers to the questions it raises are not intrinsic to the science that developed the technology.

Back to Part 1

Forward to Part 3



Terry Flew is Professor of Digital Communication and Culture and Australian Research Council (ARC) Laureate Fellow at the University of Sydney.