AI isn’t alive. Technology is.

We’re watching the wrong hole in the fence

James Plunkett
10 min read · May 5, 2023

Another week, another set of headlines about AI, warning justifiably of the technology’s dangers.

I still worry, though, that we’re having the wrong conversation.

We’re focused on the risks around intelligent AI. But what about the wider processes by which we develop and use technology? Are these broader processes meaningfully under our control?

I’m not sure they are, and it seems to me this is where the profound risks sit. So in this post I’d like to ask a different question to the one we normally ask. Not ‘is AI alive?’ but ‘is technology autonomous?’

The good news is, we’ve asked this question before. Technological autonomy, and its relationship to human agency, was a central theme in a surge of concern about technology from the 1950s to the 1970s.

So what does this writing say about our predicament today? Is technology autonomous? And, to the extent that it is, what does that mean for human autonomy?

[Image: a line drawing in black pencil on a white background of a human shaking hands with a robot.]

Technology is autonomous. So what are we?

The strongest version of the technological autonomy thesis came in Jacques Ellul’s The Technological Society (1954), which unpacks, over 500 pages, the argument that “technique has become autonomous”.

Let’s start with that word Ellul uses — ‘technique’, as opposed to technology. Ellul chooses ‘technique’ because he wants to zoom out, looking not just at individual technologies, like electricity or AI, nor even at technology as a whole — technology as a body of knowhow. Ellul wants to go even broader, looking at the whole class of systematised practices that we adopt as a society in order to put technology to use.

So let’s bring to mind not just machines, or machine learning models, but also the institutions in which, and through which, these technologies manifest in our lives.

These range from the business models that make use of the unprecedented power of digital platforms and AI to historic institutions like the practices of assembly-line mass production and the organisational form of the corporation: the ways we find we need to live and work in order to use technology well.

What does it mean to say technology, in this sense, is autonomous?

The claim isn’t quite that technology is alive (as per the slightly clickbait title of this post), nor that technology is conscious or acting with will.

A more precise articulation is to say that technology is not heteronomous. Meaning that the process by which technology advances ever deeper into our lives is not meaningfully subject to laws external to itself. It’s a process with its own momentum but, more than that, it’s a process with its own direction, or that insists on a certain form.

Here’s Langdon Winner, a more moderate proponent of the thesis, writing in his 1970s classic, Autonomous Technology (1977):

Technology in a real sense now governs its own course, speed, and destination.

The claim I’ll focus on here — because it feels to me more contentious — is that point about the destination. The idea that technological progress isn’t just hard to stop, like a car rolling down a hill, but also hard to steer.

What Ellul and Winner are saying is that the process of ‘invention’ might feel like something we’re directing, but really it’s systematised and heavily dependent on what came before; in W. Brian Arthur’s phrase, it’s combinatorial in nature.

So much so, in fact, that the technological discoveries we make were sitting there all along, waiting to be found. As Ellul puts it, “each new advance is latent in the existing technical ensemble”. So the forms into which technology unfolds aren’t really in doubt — it’s just a matter of time.

All of which prompts an obvious question: if technology is autonomous, what about our autonomy, as humans?

Ellul is again uncompromising on this — too much so for my liking. He writes that “there can be no human autonomy in the face of technical autonomy”. Winner is more moderate, using an evolutionary metaphor to picture humans as “participants in a selective environment”.

Under Winner’s approach, then, which I find more persuasive, the institutions we establish — including institutions like the market, and the ways in which we regulate the market — provide an environmental condition that guides how technology unfolds. They help to determine which technologies (and which uses) wither, and which ones thrive.

Still, it’s a provocative view, and one that leaves behind our old mental model of the pioneering inventor. There’s still a role in technological development for the human agent, but it starts to feel more like that of a chemical agent. As Winner puts it:

Human beings still have a nominal presence in the network, but they have lost their roles as active, directing agents.

Our relationship with technology becomes submissive in nature, leaving us to experience a loss of mastery, a waning of “intellectual, moral, and political command”.

So, those are the main claims. And personally I find it hard not to smile when reading these books — written between 50 and 70 years ago — and think ‘you haven’t seen anything yet’.

But is any of this actually right?

Let’s reflect on whether we buy the idea of technological autonomy and on why, regardless of whether or not we agree with these claims, this might still be the right conversation to have.

Personally, I find myself persuaded by the early parts of the argument — the idea that technology has daunting power as a system, which leaves human beings with diminished agency.

This feels to me consistent with how mainstream thinkers now conceive of technological discovery, and especially the way we’ve moved away from the idea of a lone inventor changing history with a flash of genius.

The full-fat, individual-agent view of discovery isn’t really in vogue any more — it’s been replaced by a more social view of the way in which technological knowhow accretes in a society over time. (See here for a summary of my favourite such account, from W. Brian Arthur).

We know, for example, that it’s common for two people to ‘discover’ the same invention at the same time in different parts of the world, which implies that the process of discovery has a systemic, unfolding quality.

And we also know that people at the frontier of technology — including Geoffrey Hinton, whose resignation from Google triggered this week’s headlines about dangerous AI — often feel unable to slow or redirect the forces at work. The most infamous example is the invention of nuclear weapons, described here by Werner Heisenberg:

The worst thing about it all is precisely the realisation that it was all so unavoidable.

So the idea of technology as a system unfolding with its own logic isn’t really all that contentious, and it’s a useful corrective to the sanguine view that AI is a tool that we can invent and then decide how to use.

Where I get off the bus, though, is just before it heads to Determinism-ville. Which is to say I’d rather not give up entirely on the possibility of human autonomy, even in a world in which we accept that technology is pervasive and insistent on inhabiting a certain form.

Maybe it’s wishful thinking, but my sense is that we can read this literature, and find it persuasive, but still exit with human autonomy intact.

The metaphor I find most helpful here, because I think it strikes the right balance, is the idea that our relationship with technology takes the form of a contract. Technology gives us great power but only if we live on its terms.

This image of a contract, or a deal, captures nicely, I think, the positives and negatives of technology. Even more so, it captures the ‘if’ that binds the two — the way technology says to us ‘you can have this incredible power, but only if you live or work in this way’.

This idea of a contract prompts us to think about how we should approach our negotiation with technology — including the specific instance of our contract with AI.

It also helps us to make sense of the freedom-inhibiting terms on which technology insists — the things technology demands of us in return for the powers it offers.

What do I mean by freedom-inhibiting terms? Let’s think back to Ellul’s word ‘technique’ — the idea that we can’t talk meaningfully about technology — or about a specific general purpose technology, like electricity or AI — without also talking about the systematised practices that we need to adopt in order to put technology to use.

Our experience in the 20th century offers a useful case study here. In the late 19th century, we unearthed electricity as a general purpose technology, but we didn’t secure the benefits of electricity — the benefits to productivity, for example — until we adopted new practices that meant we could put electricity to use.

To be specific, we had to change the way we produced things entirely so that it fitted the logic that the new technology required. We reorganised work around the assembly line, reconceiving the very notions of work and craft in the process. Later, we reorganised our lives around institutions like the modern corporation and the supermarket.

In this narrow sense, then, the contract we struck with technology in the 20th century worked as it had promised.

Technology delivered on its side of the bargain, giving us the power to raise productivity — so much so that, in 150 years, average real incomes rose sixfold as annual working hours nearly halved and life expectancy (because we later extended the contract into the field of health) more than doubled.

Meanwhile, though, technology was super insistent on enforcing those freedom-inhibiting terms. Which is to say that technology held us to those new ways of living and working, and even pushed those approaches into ever more domains of our lives (which is why Ellul also makes a big deal of saying technology is ‘totalising’).

All of which limited our freedom of manoeuvre, putting bounds around the way we live, constraining everything from the way we relate to work and to each other, to the way we conceive of our lives.

So here’s where I think the critical literature from this earlier period is right: when it comes to negotiating the deal with technology, we’re really bad at reading the terms. As Winner puts it, we never ask ourselves what we’ll be asked to pay for the power we take from technology. In the negotiation, we’re weirdly submissive.

A broader reckoning

Where does this all take us? It takes us to a broader debate, not just about specific technologies like AI but about technology and how it relates to freedom. A debate about what it would take to negotiate well.

One thing I like about this is that it’s inescapably a conversation about technology and capitalism, and the relationship between the two.

Or, more specifically, it’s a chance to reappraise the role that capitalism — and especially the institution of the competitive market — plays in our negotiation with technology.

Without getting into this too deeply now (it’s a big conversation!), this turns pretty quickly into a reevaluation of classical liberalism, but from a slightly different angle to the one we normally take.

With apologies for being horribly reductive, the general thrust of classical liberalism was that the institution of the market should be trusted to negotiate the contract with technology on our behalf. That is, we could trust the market to aggregate our preferences in ways that fostered technologies that supported human flourishing.

Maybe we could even say that classical liberals wanted us to give the market power of attorney in the negotiation of our contract with technology — they wanted us to trust the market to represent our interests in a negotiation that we didn’t have the capacity to negotiate ourselves.

All of which poses a timely question: is the market still doing a good job as our attorney? Is it negotiating well with technology on our behalf? Does our contract with technology, and the gradual extensions to that contract — the new uses to which technology keeps being put — seem to have our interests, as human beings, at its heart?

It won’t surprise you to hear that I’m not so sure it does. I’ve written about this quite a lot, especially in a piece entitled The Fable of the Bees, if the bees were all on Facebook, so I won’t repeat the argument here.

The short version is that digital capitalism seems to be weakening that old Smithian idea of alignment — the idea that markets do a good job of turning selfish behaviours into apparently benevolent behaviours. (We might say that the invisible hand has become the invidious hand.)

For what it’s worth, though — and here’s where I disagree with catastrophists like Ellul — I think we can turn this around. By which I mean I think we can imagine institutions that would equip us to negotiate a better contract with technology — and even institutions that could, to some degree, negotiate on our behalf. They’re just not the institutions we have now.

So the debate this all opens up — which seems to me precisely the debate we should have — is one about the design of those institutions.

What are the mechanisms through which we should negotiate with technology? Given a world in which technology seems close to autonomous, what mechanisms can we use to exercise freedom?

One last point to end on, which also feels like quite a big deal: it seems increasingly obvious that those mechanisms can only be collective.

Which is to say we will not negotiate a good contract with technology if we leave the negotiation to each of us as individuals, as we go about our daily lives — trying our best to protect our kids online, or trying desperately to resist compulsive or addictive behaviours, or trying in vain to protect our data. We can only assert our freedom together.

In all honesty, I’m still not 100% sure where I land on all this. My point for now is just that these are the right debates to have. It’s not that we’re wrong to worry about AI — it scares the hell out of me. But we should keep our eyes on the bigger hole in the fence: the question of whether we’re in control of technology as a whole, and what it does to us when we’re not.

To stay in touch with my writing you can follow me on Medium or support my writing on Substack. Or for the big picture take on how we adapt our governing institutions for a digital age, there’s my book, End State.

If you’re interested in these themes, here are some relevant pieces I’ve written before. On path dependency and why it heightens the stakes in our negotiation with technology; on the nature of technology; on the Fable of the Bees, if the bees were on Facebook; and an essay entitled The Invidious Hand, asking ‘what happens to capitalism when we discover a digital dimension?’
