The war between technology & democracy
My father was the first person any of us knew who owned a computer. A hulking IBM AS/400 B10, sturdy, expensive and imposing, sat in his home office in the late 1980s. My brother, sister and I mostly ignored it of course — it was dad’s ‘work’.
The problem was that the monster didn’t ignore us. My father, a creative disciplinarian, used his International Business Machine to design and print out the weekly chores in a 7-day grid: wash up, tidy bedroom, make beds, and so on. We called it The Schedule. Each day he would check off the tasks, and each failure would result in a 10 pence deduction from our weekly pocket money.
At the end of the week, the monster would churn out the numbers, and there was no arguing with its accuracy or fairness. All decisions final. Six months in, I was just about breaking even. My more laid-back brother owed dad a small fortune.
Maybe, even at this very young age, I vaguely sensed that machines aren’t just neutral tools that make your life easier. Much depends on the existing power arrangements they slot into. They can also be remarkably good tools of surveillance for those in charge, and can make possible things that weren’t possible before.
Those childish thoughts are now a more serious preoccupation — both for me and society writ large. The last several years have been characterised by a succession of stories about how digital technology — especially the internet — is creating problems for our social, political and economic arrangements. You have doubtless read recently that our elections are being stolen, Russians are hacking our minds, fake news is duping us. If you’re feeling especially morbid, you’ll know robots are about to make us all obsolete.
Some of this hand-wringing is from liberals who are unable to understand how Trump won, and blaming the internet evidently makes them feel better about themselves. Some of the more outrageous news headlines — coming from both left and right — about Facebook and Google destroying everything are driven by the old barons losing ad revenue to these upstarts.
But nearly all of them miss the point. The recent stories of bots, trolls, hacking, crypto and stolen data are viewed in isolation, rather than as symptoms of a much bigger problem we are facing. That bigger problem is this: we have an analogue democracy and a new norm of digital technology. And the two don’t work very well together.
We rightly celebrate how the internet gives us a platform, allows new movements to form, and helps us access new information. These are good things, but don’t be blind to the other problems the same technology is creating. Our democracy relies on lots of boring stuff to make it actually work as a system of collective self-government that people believe in and support: a sovereign authority that functions effectively, a healthy political culture, a strong civil society, elections that people trust, active citizens who can make important moral judgements, a relatively strong middle class, and so on. We have built these institutions up over several decades — decades of analogue technology.
Now, however, we have a new set of technologies — digital technology — which is slowly eroding all of them. This isn’t to blame one side or the other; it’s simply to state that there’s an incompatibility problem.
This structural problem is far more important than billionaires in Silicon Valley or troll farms in St Petersburg. And if we don’t find a new settlement between tech and democracy, more and more people will simply conclude that democracy no longer really works, and look for something else. This being a lecture series about dictatorship, you won’t be surprised to learn that some new form of dictatorship — a sort of gentle, benevolent data dictatorship — is the most likely candidate for replacing it. Something a little like my father’s efficient but depressing Schedule.
I’ll take three recently reported problems and explain how they are symptoms of this tech / democracy tension. Let’s start with Cambridge Analytica, one of the biggest stories of 2018, and also one of the most misunderstood.
You’ve probably heard something like this: Cambridge Analytica manipulated millions of minds with a magical technique called ‘psychographics’ — where people’s personality types were calculated, and then used to send messages which played to those personalities. Mind control and subliminal messaging! Alexander Nix, CEO of Cambridge Analytica, called psychographics his secret sauce — while whistle-blower Christopher Wylie called it ‘Steve Bannon’s psychological warfare tool’.
I don’t think any of it worked. I’ve seen no evidence it was effective. My strong hunch is that most of it was salesperson’s bluster. The truth is at once simpler and more worrying. Cambridge Analytica, using perfectly legal means, bought or collected around 5,000 data points on each of about 200 million Americans from the huge data brokerage industry that trades data about you: magazine subscriptions, gun ownership, car ownership, web-browsing habits, credit rating, and so on. They combined this data with Republican Party data (known as ‘Voter Vault’), and modelled each voter: what they cared about, and how likely they were to be persuaded to vote Trump. They grouped these voters into ‘universes’, such as American mums who hadn’t voted before. They then designed specialised ads for each universe and targeted individuals with personalised adverts, based on what they’d pieced together about them.
Everything was tested, re-tested, re-designed. They sent out thousands of versions of fundraising emails or Facebook ads, working out what performed best. They tried donate pages with red buttons, green buttons, yellow buttons. They even tested which unflattering picture of Hillary worked best.
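The testing loop described here is, at its core, ordinary A/B testing: serve several variants, measure the response rate of each, keep the winner. A minimal sketch (the variant names and click numbers are invented for illustration; this is not Cambridge Analytica’s actual data or code):

```python
# Minimal A/B test sketch: pick the best-performing ad variant by
# click-through rate. All figures below are hypothetical.

def click_through_rate(clicks, impressions):
    """Fraction of impressions that led to a click."""
    return clicks / impressions if impressions else 0.0

def best_variant(results):
    """results: dict mapping variant name -> (clicks, impressions).
    Returns the variant with the highest click-through rate."""
    return max(results, key=lambda v: click_through_rate(*results[v]))

# Hypothetical results from serving three donate-button colours.
results = {
    "red_button":    (540, 10_000),
    "green_button":  (610, 10_000),
    "yellow_button": (480, 10_000),
}

winner = best_variant(results)
print(winner)  # the variant to roll out more widely
```

Run at scale, with thousands of variants and constant re-measurement, this simple loop is what “everything was tested, re-tested, re-designed” amounts to.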
A few weeks in, analysis suggested there were enough persuadable voters in Pennsylvania, Wisconsin and Michigan to bring these states into play, even though most commentators thought they were unassailable Clinton territory. Driven by the data, they started to bombard people in those three states with Facebook and television ads. (They spent tens of millions of dollars on targeted Facebook adverts, especially using the ‘custom audiences’ option, which allows you to target specific individuals.) A later internal study by Facebook found that the Trump team was far better than Clinton’s at running Facebook ads.
This sort of thing never changes everyone’s mind — but it can, in tight elections, make a difference. Trump won Pennsylvania by 44 thousand votes out of 6 million cast, Wisconsin by 22 thousand, and Michigan by 11 thousand. If Clinton had won these three states, she would now be President.
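Just how thin those margins were is worth spelling out. A quick back-of-the-envelope calculation, using only the rounded figures above (Pennsylvania’s six million is the only state total given):

```python
# Winning margins from the text, in votes. Flipping these voters
# would have changed the outcome of the election.
margins = {"Pennsylvania": 44_000, "Wisconsin": 22_000, "Michigan": 11_000}

total_flip = sum(margins.values())              # 77,000 votes across three states
pa_share = margins["Pennsylvania"] / 6_000_000  # margin as a share of PA votes cast

print(f"Votes that decided the presidency: {total_flip:,}")
print(f"Pennsylvania margin: {pa_share:.2%} of votes cast")
```

Roughly 77,000 votes, and a Pennsylvania margin under one per cent — well within the range that targeted persuasion could plausibly affect.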
The reason this is worrying is because everyone is doing it. Anyone working in online advertising will tell you it’s industry standard. Clinton was doing it. The Brexit campaign were doing it. The UK Labour Party is doing it.
Elections are becoming a data science, based on profile building and personalised adverts. Where does this take us? By 2020 there will be around 50 billion devices connected to the net — quadruple what there are now — each one hoovering up your data: cars, fridges, clothes, road signs, books. Within a decade your fridge will work out what time you eat, your car will know where you’ve been, and your home assistant device will estimate your anger levels from your tone of voice. Obviously all this will be gobbled up by hungry political analysts. By cross-referencing fridge data against the number of emotional words in your Facebook posts, a strat-comm team of the future will work out that you’re angrier when you’re hungry — and target you with an emotive, law-and-order candidate just as you’re feeling peckish. Just received a warm message plus donation page from the Greens? That’s because your smart bin shows you recycled that morning, and an analysis of your tweets suggests you’re in a good mood.
Politicians have always sought to understand and persuade citizens. The Republican Party boasted in the 1890s that it possessed a complete mailing list of voters, with names, addresses and ages. But elections run with industrial-scale data science throw up new challenges which we’re not really set up to deal with.
What happens when, in a decade or so, each person receives an advert that’s entirely unique to them? Is it still really an election if one candidate sends 1 million different adverts to 1 million different people? Aren’t elections meant to be about the broad debates of the day, thrashed out in public? How do you hold candidates to account in such a system? And how do regulators check on what’s being served up? During the UK EU referendum, voters were shown Facebook adverts claiming that the EU was trying to stop British people from drinking cups of tea! It is a miracle that the vote was so close.
It might even, in the long run, help certain types of politicians to thrive. If politics becomes a behavioural science of triggers and emotional nudges, it’s reasonable to assume this would most benefit candidates with the least consistent principles, the ones who make the most flexible campaign promises. Perhaps the politicians of the future will be those with the fewest ideas and the greatest talent for emotionally charged vagueness, because that leaves maximum scope for algorithmically targeted messaging.
I’ll let you decide whether this has already happened.
This is hard to stop with our current model because social media platforms are essentially ad firms. That’s where all the money comes from. Their incentive is to a) keep you on the platform for as long as possible, since that means serving you up more ads and b) build up a better profile of your hopes, fears, thoughts and feelings — because those ads can be better tailored to you. In addition to making us constantly distracted — the reason we check our phones so often is because the apps are designed to keep us hooked in — it also means the long-term plan is to know us better than we know ourselves. And that will open us up to new forms of manipulation. In other words, Cambridge Analytica is just the start.
These are the things — the challenge of ten years from now on the current track — that we should be thinking about.
Journalists often miss the longer-term trends that underlie the tech stories, because they are under pressure to meet insane deadlines and produce insane headlines. Here’s another example.
There is at present an understandable concern that social media has been exploited by fascists and bigots, who use it to spread their message of hate. There are good grounds for such concerns of course. But I think the bigger trend is not that fascists are good at social media: it’s that social media is turning all of us into fascists. Not in our ideology, but in the style of politics we adopt.
The fascist style of politics is one which creates alternative realities, prioritises reaction without thought, whips up rage and encourages tribal loyalty to the Great Leader. If Mussolini were to design a communications system to encourage a fascist style of politics, I suspect it wouldn’t look too dissimilar to some of our popular social media platforms.
Let’s take fake news, an obsession de nos jours. It is widely assumed that people like Tommy Robinson — former leader of the English Defence League — surround themselves with ‘fake news’ and conspiracy theories. It’s not quite that simple. I’ve spent a lot of time with Robinson (shadowing him for my second book, Radicals). He does read and share fake news, of course, but it’s more accurate to say he surrounds himself with cherry-picked true news that corroborates his world view of Islam and the West being incompatible. Over several years he has constructed a plausible and coherent version of this world view through careful one-sided selection of truth. This is not the same as ‘fake news’. This is a problem of selectively omitting certain truths.
The ability to construct believable alternative realities is an important component of any fascist mode of politics, because where there is no commonly shared truth, there is nothing upon which you can anchor political discussion and debate. All that remains is two groups screaming at each other.
This is something we are all doing, albeit in a less extreme way: selecting some truths and omitting others, in order to build our own plausible and coherent realities.
I’m not blaming Zuck or Dorsey or Brin or Page. It’s simply that certain technologies lend themselves to certain behaviours. Part of the problem stems from a major miscalculation repeatedly made (in good faith) in Silicon Valley since the 1990s. These techno-utopians believe that more information and connectivity will make us wiser, kinder, smarter: our politics will be more informed, they assume, if we have more information. In truth, we have too much information. We’re drowning in blogs and facts and charts and more facts. It’s too much to deal with rationally. All we can do is rely on gut instincts and heuristics: my guy / not my guy, that feels true, that confirms what I already thought. Essentially, these are all emotional responses.
That overload, in part, drives us to select our truths. (And to make matters worse, there is some evidence that social media platforms are incentivised to show more polarising, aggressive content, because that is more likely to attract our attention and keep us online. This is not even done consciously; it’s simply an algorithmic reflection of what we tend to click on.)
It also drives us to reaction without reflection. In a print-based society, for all its flaws, there is at least a cultural predisposition for an ordering and coherence of facts and ideas, something the scholar Walter Ong called “the analytic management of knowledge”. It lends itself to reflection. Social media platforms, however, are built to a very different logic: an endless, rapid flow of dissonant ideas and arguments, one after the other, without obvious order or sense of progression. They are designed for you to blast out thoughts or ripostes over breakfast, on the move, at the bus stop. They demand your immediate, ill-thought-through response. What’s on your mind, Jamie? asks Facebook. What’s happening? demands Twitter. I’ve noticed people rush to get their denouncements and public displays of outrage in quickly, without bothering to work out what they actually think.
Fascists have always worshipped action for action’s sake, because to think is to emasculate oneself with doubt, critical analysis, and reasonableness. “Action being beautiful in itself,” explains Umberto Eco in a famous essay about fascism, “it must be taken before, or without, any previous reflection.” It would be difficult to write a better description of a mad rampaging online mob. This tendency has been brilliantly exploited by Steve Bannon, who makes statements designed to provoke a frothing-mouthed response from liberals. They always oblige, which forces people in the middle to take sides — and that’s the goal. I’m not talking about left or right here, by the way. Both are guilty, since both are reacting to the same basic incentives and the same new information structure.
All this — the speed, the info overload, the emotive mode — is driving a very obvious re-emergence of tribalism in politics. In our hyper-connected, information-saturated world, we are encircled by enemies and protected by fellow travellers. Joining a tribe is the only way to survive. And online there is always a fact or a comment or a hot take to prove your side is right and the other side is utterly wrong. When was the last time you actually changed your mind after discussing something online? I’ll answer that for you: probably never, because who has time online for the long, careful, respectful discussion necessary to see the other side? In such a world, opponents can’t merely hold principled differences of opinion; they must have sinister motives. Our opponents are liars, cheats, Machiavellians. There’s no compromising with any of them.
These are of course perfect conditions for the tribal leader to arrive, promising to channel the rage and bring order to the chaos. Hannah Arendt warned us of this decades ago.
Is it all that surprising, therefore, that social media is helping politicians who embrace this style? Populists are far more in keeping with the philosophy and feel of today’s tech. They promise easy and immediate solutions to complicated problems, without compromise or failure. This is Tinder politics. (They are all, incidentally, in favour of some form of direct democracy — because they claim to represent the ‘real people’.)
Is it surprising that, despite this apparently being an age without deference, there is a newly found hero-worship and total leader loyalty in certain quarters? Whether Macron, Trump, Corbyn, Wilders, Trudeau — we await the anointed one to save us, and thus swear total loyalty and fealty to them.
Is it surprising that surveys find growing taste for authoritarian leaders? Is it all that surprising that, in these conditions, truth appears less important than loyalty to the side you’re on?
My final example is the artificial intelligence revolution that’s coming. As with my previous two stories, there are some ludicrous headlines about machines taking all our jobs. Or perhaps going sentient and turning on us. These stories are usually stupid and misleading. We’re very good at working out all the existing jobs we’ll lose, but very bad at imagining the ones not yet invented. And machine sentience is probably best left to the philosophers.
The actual problem is more subtle. Last year I travelled 150 kilometres on a driverless truck in Florida, built by a Silicon Valley start-up. Self-driving taxis in city centres are still a long time off — for both technical and regulatory reasons — but self-driving trucks are likely to disrupt the trucking industry fairly soon. Take this as illustrative of other parts of the economy. Hundreds of thousands of people drive trucks for a living. For many people who left school without qualifications, it’s a decent, reliable job.
The actual question is this: as artificial intelligence and software play a far bigger role in our economy, who wins and who loses? Will the losers — there are always losers in transitions — have opportunities to become winners? Will the people who have the skills, the assets or the networks to take advantage of the inevitable AI productivity boost get wealthier relative to everyone else? Will the next wave of tech turbo-charge inequality?
In addition to favouring more skilled workers, digital technology tends to increase the financial returns to capital owners over labour. Machines don’t demand a share of the profits, which means any machine-driven productivity gains accrue to whoever owns them, and that’s usually the already wealthy. The share of GDP going to labour relative to capital has been falling in recent years: for much of the twentieth century, the split of national wealth in the US between labour and capital was roughly 66/33. It is now 58/42.
With this in mind, I always asked the self-driving technologists — who have created some good, well-paid jobs, of course — what the truckers should do when the revolution arrives. I’d nearly always get the same answer:
They should retrain as machine learning specialists or robotics engineers.
I can’t decide if this is naive or devious. It’s certainly unrealistic. Some of them might; most will not. Far more likely, I suspect, is that they will smash these blasted machines up, as I used to imagine doing with the IBM. If you haven’t already done so, I recommend you read Ted Kaczynski’s ‘Manifesto’, written in the mid-1990s.
“…machines will take care of more and more of the simpler tasks so that there will be an increasing surplus of human workers at the lower levels of ability…”
“Technology advances with great rapidity and threatens freedom at many different points at the same time (increasing dependence of individuals on large organizations, propaganda and other psychological techniques, invasion of privacy through surveillance devices and computers, etc.)”
At the time these read like the ravings of a madman, because hardly anyone owned a computer. And his actions were detestable, of course: he murdered three people and injured many more. But you can now find very similar thoughts in editorials in our most prestigious newspapers.
If people come to see machines as a serious threat to their livelihoods, and without realistic means of replacement or routes to prosperity, they will try to sabotage them. Armed with white spray paint and leaked instruction manuals, displaced truckers will change the road markings in order to make them crash or malfunction.
Where does this all lead? I don’t believe democracy is on the verge of collapse. We’re not entering a world of crypto-anarchy, fully automated luxury communism or libertarian paradise.
The threat, I suspect, is more subtle. Over the next 20 years, on the current trajectory, growing numbers of people will conclude that democracy doesn’t work. Elections can’t be trusted. Jobs can’t be created. And everyone is getting furious and not listening to each other.
You have perhaps seen the various surveys showing that confidence in democracy is on the wane, especially among younger people. A recent survey in the Journal of Democracy found that only 30 per cent of US millennials agree that ‘it’s essential to live in a democracy’, compared to 75 per cent of those born in the 1930s, and results in most other democracies show a similar pattern. It is no coincidence that, according to the most recent Economist index of democracies, over the last couple of years over half have become less democratic. (In the 2017 Democracy Index the average global score fell from 5.52 in 2016 to 5.48; 89 countries experienced a decline, while only 27 saw an improvement.)
These stats won’t get any better if democracy can’t solve problems or deliver the things people ask of it. We need a new settlement. I’ve proposed some ways of achieving that in my book The People vs Tech. Democracy needs an upgrade — and we need to start re-shaping our institutions and expectations. Tech needs to be brought more under democratic control. And all of us need to change our behaviour, since it is, in the end, our swipes and clicks and shares that are constantly feeding the data machine.
The idea of democracy won’t disappear, especially in an age where everyone has a voice and a platform. It won’t be a return to the exact conditions of the 1930s — too much is different today. History rhymes but doesn’t repeat. I can’t predict exactly what might replace it, but one version is a techno-authoritarianism — populists armed with powerful tech, promising to use it to solve every problem. We could even still have plebiscites and MPs and the rest. But it would be little more than a shell system, where real power and authority was increasingly centralised and run by a small group of techno-wizards that no-one else understands. That could be in governments, which rely on increasingly technical solutions no-one can hold accountable, or the private sector owning all the data and the capital — with control over public attitudes and debate which is all but imperceptible.
This is hardly a catastrophic dystopia, but rather a damp and weary farewell to democracy. The worst part is that if a less democratic system delivered more wealth, prosperity and stability — many people would be perfectly happy with it. But at that point, it might be very difficult to get back what we’ll have lost.