On generative AI and not being free

We need to slow the hell down and do some philosophy

James Plunkett
7 min read · Apr 21, 2023

What does technology have to do with not being free?

Langdon Winner, Autonomous Technology (1977)

It’s hard to read all the hype about generative AI without feeling we’ve got no grip on this thing — both practically and conceptually. No-one really has a clue what we’ve summoned.

This makes me think one of the best things we can do right now is to philosophise. As Heidegger wrote in What Is Called Thinking?:

We must first of all respond to the nature of technology, and only afterward ask how man might become its master.

If we’re going to put generative AI to use, and not have it put us to use, we need to start by knowing its nature.

That’s why I’ve been reading lots of philosophy about our relationship with technology, from writers like Jacques Ellul (The Technological Society, 1954) to Hannah Arendt (The Human Condition, 1958, and her 1960s essays) to Langdon Winner (Autonomous Technology, 1977).

If you want to understand our predicament with AI, there are few better places to turn.

So what was this work fundamentally about? It came in a wave of concern, from the mid-1950s to the mid-1970s, about how technology can inhibit human autonomy.[1]

This concern crested in popular culture in 1968 with two classic films of the genre: 2001: A Space Odyssey (‘I’m sorry, Dave. I’m afraid I can’t do that’) and The Graduate (‘I want to say one word to you: plastics’). Both repay rewatching, with one eye on generative AI.[2]

For me the central question posed by this work is something like this:

Is technology really the neutral tool that we often imagine it to be? That is, is technology something we invent and then decide how to use?

Spoiler alert: the answer is no.

Our relationship with technology isn’t a ‘tool-use’ relationship; it’s a two-way deal. Technology gives us great power but only if we live on its terms. And when it comes to negotiating and then enforcing those terms, technology is as insistent as hell.

This gives us a way to think about our predicament with generative AI.

These new technologies offer us alluring new powers, but those powers will come with stringent, far-reaching, freedom-inhibiting terms.

The challenge we have now — which is also well-trodden in the philosophy of technology — is that this new world is emerging faster than our ability to comprehend it.

As Winner wrote back in 1977, “we lack the ability to make our situation intelligible”. Or, as Paul Valéry wrote even earlier, in 1944: “our means of investigation and action have far outstripped our means of representation and understanding.”

So here’s another way to describe what’s taking place today at the frontier of innovation using generative AI:

A small and unrepresentative group of people are signing a contract with technology on behalf of humanity, with no break clause, written in a language that no-one yet understands.

So when people say we need to slow down AI innovation, I agree. But I also think we need to speed up our efforts to understand this new world and to negotiate a new contract with technology.

We need to learn how to read the contract that AI is offering us — so that we understand the powers it offers but also what it demands in return. And to do that we need to take Heidegger’s advice, trying harder to grasp the essential nature of these technologies.

What will this entail? In the next few weeks I’ll share some more insights from past writing on the philosophy of technology, which has a lot to offer.

But here are four quick reflections for now.

First, if AI is going to help us build a society that’s richer and happier and fairer and greener, we need to snap out of our weirdly submissive mode.

At the moment we’re acting like the best we can do is to understand what technology demands of us and accept — or at most reject, or mollify, or ameliorate — the terms on which it insists.

We should remember that we can formulate our own terms too — things we want to add to the contract. Things we want AI to do for us. Which is to say that we need not just mitigations and protections, but a positive vision for the kind of society we want to make possible with AI.

Second, I think the analogy of learning a new language is a good one. Because understanding AI and its implications, especially in a field like public policy, will require that we literally learn new words or concepts.

We need to address what Winner calls “a failure of both ordinary speech and social scientific discourse to keep pace with the reality that needs to be discussed”.

And this means we’ll need to unlearn some old words and concepts too — ones that used to be helpful but that aren’t any more.

(This need to ‘unlearn’ old words comes up often when we’re adapting to technological change, as I wrote here in the context of our language around inequality.)

We also need to realise that a new language will not be given to us by technologists. Or, rather, a new language will be given to us by technologists but it won’t be the right language with which to evaluate our situation in the round. Here is Winner again:

The same concepts used in building and maintaining a given technology are not those useful in understanding its broader implications for the human community.

So social scientists and philosophers need to assert themselves against technologists. Technology — and our understanding of technology — must work in service of our wider societal and ethical ambitions, and must be understood in the context of our wider, richer systems of understanding. We need to put technology back in its place in the order of things.

Third, we don’t just need to understand what’s going on; we also need to get better at imagining alternatives to what is going on.

This relates to the concept of path dependency, which is central to understanding what’s at stake in technological change.

As we accrete new technological discoveries, and put them to use, the lights of alternative paths — the paths we could have taken but didn’t — turn off forever. That is why, in a time of rapid change, our work to understand the paths on offer — and to stop a small, unrepresentative group of people from choosing paths for us — is urgent and high-stakes.

(I wrote about this idea of forking paths here, in a summary of W. Brian Arthur’s gem of a book, The Nature of Technology, and I wrote more directly about path dependency here, also touching on its relationship to power.)

Fourth, it seems to me we have a window within which the consequences of generative AI still seem weird, and this window is valuable — once it closes, it won’t open again.

As these technologies become mundane, we’ll forget what it was like before we had them. As E.M. Forster writes in The Machine Stops:

Above her, beneath her, and around her, the Machine hummed eternally; she did not notice the noise, for she had been born with it in her ears.

Returning to the idea of a contract that we sign with technology, we might say that the contract is binding even though it’s written in invisible ink. We’re held to its terms forever, even though, years later, we won’t remember what the terms were. So the act of signing is also an act of forgetting.

This informs the approach we should take. Specifically, it means we must take an actively critical stance, using the word ‘critical’ in the way it was used by Herbert Marcuse in his 1964 classic, One-Dimensional Man. We need to work actively to keep alternative ways of seeing the world alive.

Let’s end for now with two quotes that speak to this point better than I can. Here’s Lola Olufemi in Experiments in Imagining Otherwise:

There is a meaningful relationship between our capacity to imagine and the impetus to resist.

And here’s H.G. Wells, from his last published work, Mind at the End of Its Tether, with his parting warning to humanity:

Hard imaginative thinking has not increased so as to keep pace with the expansion and complication of human societies and organisation. That is the darkest shadow upon the hopes of mankind.

Footnotes

  1. I’m using the word ‘technology’ here in a broad sense, in keeping with Ellul’s concept of ‘technique’, or the idea of the technological society.
  2. As you can probably guess, the opening image in this post, showing Dustin Hoffman in 2001: A Space Odyssey, comes courtesy of Midjourney.

To stay in touch with my writing, you can follow me on Medium or support me on Substack. Meanwhile here are some links to other things I’ve written:

My book, End State, a big picture take on how we adapt the state for a digital age, running from the industrial revolution to radical ideas for the future.

A series of three essays I wrote for a Joseph Rowntree Foundation project, Social justice in a digital age. They explore platform capitalism, care in a high-technology society, and inequality in an intangible age.

A list of posts (not quite up to date) for a year-long series about how we should think about technology, and the way it’s changing what we need from the state.
