Technology is political in more ways than one
This week I thought I’d use Langdon Winner’s 1980 article, Do Artifacts Have Politics?, to explore the idea that technology is political.
I came across Winner’s article after reading his 1977 book, Autonomous Technology, which I’ll write more about soon because it’s one of the most thought-provoking things I’ve read in a long time.
The article, though, is a nice introduction to the argument that technology is not a neutral tool that we use, like a hammer. Technology is political. Or, to use Richard Pope’s phrase, software is politics.
Winner explores the sense in which these claims can be true. When we assign political qualities to a technical artefact, what do we mean?
Winner doesn’t quite divvy things up in this way, but I’d like to suggest that there are four different ways in which technology can be political, which I’ll describe here in ascending order of significance.
First, a technology can be political because we use it in a political way.
Winner gives the example of the overpasses Robert Moses built in New York. In a racist attempt to keep his Long Island parks exclusive to wealthier white people, Moses made sure the overpasses on the parkways leading to the parks were too low to allow passage for the buses used by the city’s poorer African American communities.
This is an example, really, of technology being used in a political way, rather than of technology itself being political. Although I guess we could still say that the overpasses, like hostile architecture in general, end up being political, or at least that they’ve been politicised.
Second, a technology can be inherently political because its form or default implementation benefits some people — men, for example, or white people, or able-bodied people — over other people.
A contemporary example is algorithmic decision-making and the use of large datasets to train machine learning algorithms.
If we believe that there’s a history of systemic discrimination against marginalised people, and if we think the consequences of this discrimination have found their way into big datasets, and if those big datasets are now being used to automate decisions that affect people’s future life chances, then algorithmic decision-making is inherently political. By default, it will make historic discrimination stickier, so that it’s harder to free ourselves from its legacy.
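To make that mechanism concrete, here’s a minimal sketch in Python. The data and the hiring scenario are entirely synthetic, invented for illustration rather than drawn from any real system: a model trained on historically biased decisions rediscovers the discrimination through a correlated proxy feature, even when the protected attribute itself is withheld from the training data.

```python
# Synthetic illustration: bias in historic labels leaks back into a model
# via a correlated proxy, even with the protected attribute withheld.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Group membership (hypothetical protected attribute, purely illustrative).
group = rng.integers(0, 2, n)

# Underlying ability: identically distributed across both groups by construction.
skill = rng.normal(0, 1, n)

# A proxy feature (think postcode) that correlates with group membership.
proxy = group + rng.normal(0, 0.5, n)

# Historic decisions: driven by skill, but with a discriminatory penalty
# applied to group 1. This is the bias baked into the training labels.
historic_hire = (skill - 1.0 * group + rng.normal(0, 0.5, n)) > 0

# Train on the biased labels *without* the protected attribute itself.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, historic_hire)

# The two groups are equally skilled, yet the model's predicted hire
# rates differ: it has rediscovered the discrimination via the proxy.
for g in (0, 1):
    rate = model.predict(X[group == g]).mean()
    print(f"predicted hire rate, group {g}: {rate:.2%}")
```

In other words, simply dropping the protected attribute doesn’t make the system neutral; the history is still encoded in the labels and their correlates.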
This has implications for public policy, which tends to start from that old mental model of technology as a tool that can be used for good or bad.
Take as an example this work on algorithmic decision-making from the Digital Regulation Cooperation Forum. It’s good to see regulators taking seriously the question of how we use these technologies fairly. But if you read this work, you see lots of phrases like ‘operating with due care’ or ‘sharing best practice’. These strike me as the kinds of things you would say if you were advising someone on how to use a hammer. They’re not the kinds of things you’d say if you thought the technology in question was inherently political, and biased by default unless used and regulated in a proactively anti-discriminatory way.
Third, a technology can be political because it’s more easily compatible with some political arrangements or systems than others.
As an example, Winner references the work of Lewis Mumford, who distinguished between authoritarian and democratic technologies, a distinction the debate between solar and nuclear power helps to illuminate.
Putting aside all the environmental pros and cons of solar vs. nuclear power, Mumford was a fan of solar power because of its political qualities. Nuclear energy, he argued, is inherently centralising because of the way a nuclear power station has to be designed, built, and maintained, and the way its power is distributed. Solar power, by contrast, is decentralising. It’s easier, if expensive, to install solar power at a small scale, and solar reduces people’s reliance on a centralised grid. Solar power is therefore a more democratic technology.
Mumford isn’t claiming that technology forces us to adopt certain political arrangements. He’s just saying our choice of technology puts a finger on the scales. Maybe you could design, build, and organise a nuclear power station to be decentralising, but it’s easier to do this with solar. So whatever we think of this technological choice, it’s not neutral.
Fourth, there’s the most profound sense in which technology is political: it can require certain social and economic enabling conditions which are themselves inescapably political.
To understand this claim, we need to draw on the wider arguments Winner makes in his book, Autonomous Technology, in which he explores what technology demands of us as individuals and as a society. We can also bring in the broader claims of philosophers like Jacques Ellul, who explored the rise of technique in his book The Technological Society.
Technologies are a bit like plants, in that they need certain conditions in order to thrive. So when we adopt a new technology, it’s as if we strike a kind of deal: the technology agrees to enhance our powers, but only if we submit to its terms. We might, for example, need to live or work in a certain way, by adopting certain organisational forms or living arrangements, in order for the technology to work well. Or we might need to make use of certain methods, disciplines, or mindsets that cohere with the technology’s logic.
A specific example is the way a technology might require us to configure our workplaces in a particular layout. Electricity had requirements like this: it helped to deliver big increases in productivity, but only once factories had been reorganised around it, clearing the way for assembly-line mass production.
As this example suggests, when we’re talking about a general purpose technology like electricity or the Internet, the social and economic implications of the deals we strike with technology can be profound.
At this level, the claim that technology is political strays into the territory of Marxism and deterministic theories of social development. Even more so if we go further to claim that, since technological progress is compelled by the power of capital, we don’t have much choice in all of this. The system — capitalism — will be intent on putting technology’s enabling conditions in place in order to make the most of its productive potential. It’s therefore hard for us not to accept technology’s terms. Winner quotes Engels, who made the striking (and arguably not very Marxist) claim that “the automatic machinery of a big factory is much more despotic than the small capitalists who employ workers ever have been.”
Still, we don’t have to go all the way to Marx to appreciate the pertinence of these points today. We just have to believe a version of that softer claim we made above: that technology tilts the playing field towards certain social and economic configurations that are themselves political. So again, technology is anything but neutral.
One last point: remember that Winner was writing all of this more than 40 years ago, and Ellul nearly 60 years ago. If these claims were important then, surely they’re even more important now. It seems clear that digital technologies have exacting terms; that is, they make tight demands on the way we live, work, and think. They also seem more insistent and totalising than previous technologies, reaching deeper into our lives so that their terms of engagement are harder to escape.
All of which is really just to say: technology is political in more ways than one, and it’s increasingly important that we dig into these insights and work through their implications for society.
This post is part of a year-long series on how we govern the future. To read along, follow me on Medium here or support the project for £3 a month on Substack. For the big story behind all this, the paperback of End State is out now.