Five reasons to be terrified of AI

AI will improve our lives immeasurably, but what are the downsides?

Russia’s F.E.D.O.R robot — Donat Sorokin / TASS

Artificial Intelligence (AI) is a hot topic right now, and rightly so. Chatting to Alexa makes me feel like I’m living in the future, and I’ll be first in line to pile into a driverless taxi or try out a robot doctor. Yet, like any technology, there’s inevitably a dark side. I’d argue there are as many reasons to be terrified of AI as there are to be excited about it. Here are a few that might get you thinking. Don’t have nightmares.

1. Terminator-style apocalypse

Bring up the risks of AI and this is where most people go. Philosopher Nick Bostrom sets out this scenario in his book Superintelligence. We create a machine that exceeds our intelligence in every way. It then decides to take control of life on Earth and bend it to its own wishes. Humanity drops down the pecking order from top dogs to about level with actual dogs.

Is this realistic? Toby Walsh, a real-life professor of AI, dismisses the idea of a malevolent and all-powerful AI as something “mostly believed by people not working in artificial intelligence”. Ouch.

It’s true that most AI being developed today is what’s known as Artificial Narrow Intelligence, meaning it’s dedicated to a specific purpose such as speech recognition. Before we could get to superintelligence, we would first need to develop artificial general intelligence: a system that can adapt itself to tackle any problem. Companies like DeepMind (owned by Google) are working on this, but it’s a long way off, perhaps 25 years or more.

Conclusion: Rest easy, Skynet won’t be taking over any time soon.

Terrifyingness: 10 — Likelihood: 2

2. Out-of-control AI

A superintelligence might threaten us by setting itself goals which conflict with humanity’s interests. A different scenario is that we give an AI a goal and it pursues it so zealously it inadvertently causes us harm.

For example, we develop an intelligent street-cleaning machine and it starts killing anything in its path in its drive for the cleanest possible streets. Engineers try to intervene, but the machine rationally sees any attempt to repurpose or deactivate it as an impediment to its goal and fights back.

There are a number of ways to avoid this. Hardwiring human control into safety-critical AI systems is one. Another is to put potentially dangerous AI systems into some kind of container so they can’t escape.

We could also build laws and ethics into AI systems. This is tricky, because rules written by humans tend to be open to interpretation. The classic example is the driverless car. Simply giving a car a mandate to protect human life is no good: at some point it will need to choose between passenger and pedestrian safety.

Big IT projects certainly do go catastrophically wrong on a regular basis. So it’s conceivable that we will create at least one AI that gets out of control before we learn our lesson.

Conclusion: An AI might go rogue one day but hopefully its powers will only be trivial.

Terrifyingness: 9 — Likelihood: 4

3. Lethal autonomous weapons

Military drones have been widely adopted because they let states dish out death and destruction at no risk to their own personnel. Lethal autonomous weapons (a.k.a. killer robots) take it up a notch, using AI to make their own decisions about who to kill. You could, in theory, unleash an army of drones or robots on a country, set them the goal of subjugating it, and leave them to it.

That scenario is not as far-fetched as you might think. Missile systems that automatically choose and engage targets are already in use. Human controllers are relegated to a kind of supervisory status: they can see what’s happening and can intervene to halt the weapon system if they deem it necessary.

A whole range of AI and robotics experts recently signed an open letter demanding urgent action to address the dangers of autonomous weapons. There’s no reason in principle why the international community couldn’t ban such weapons. Blinding laser weapons, for example, have been barred since 1995, and so far over a hundred nations have signed up to that agreement.

Many argue, however, that autonomous weapons are just too attractive to the military for international prohibition to gain any traction. Given the widespread deployment of drones, I’d be inclined to agree. The best we can hope for is effective regulation.

Conclusion: This train has already left the station. The best case scenario is that restraints will be put on the use of lethal autonomous weapons.

Terrifyingness: 9 — Likelihood: 9

4. Mass surveillance and state control

Sophisticated AI allows an unprecedented level of population surveillance and control. Take image recognition as an example. The UK may have 1.85 million CCTV cameras, but their effectiveness is limited because we largely rely on human operators. Image recognition software will supercharge this camera network: it can recognise one face in a large crowd, pick out a gun in someone’s hand, or spot someone acting as if they might attempt suicide.

In China, authorities are using image recognition to name and shame jaywalkers. Cross against the lights and your photograph and personal details are displayed on a huge screen above the road. Offenders can choose to pay a fine, take a course or spend 20 minutes helping a traffic officer.

Another way AI can be used by the state is by combing through data from disparate sources to find patterns of troublesome behaviour. This was a big element of the Snowden revelations.

China is again pushing the boundaries here with its plan to amass every bit of information available about each citizen and combine this into a single trustworthiness score. If you have a low score, good luck getting a well-paid job or a decent apartment.

Although Britain is a mature democracy, it’s also a big fan of both surveillance technology and data collection. For example, ISPs are legally required to store your internet history, whilst the police use number plate recognition systems to record millions of vehicle journeys. It’s a safe bet that AI will be used to analyse these treasure troves of information to identify different groups of people, be they political extremists, benefit cheats or illegal immigrants.

Conclusion: You probably find AI-driven surveillance either reassuring or scary depending on your political leanings.

Terrifyingness: Debatable — Likelihood: 9

“Factory Square” (CC BY-NC 2.0) by TunnelBug

5. Wholesale job losses

Mass unemployment is, in my opinion, the scariest risk posed by AI. Many tasks that could previously only be done by humans will be taken over by AI. For example, Google Translate is already nearing human levels of accuracy. Not great news if translation work puts food on your family’s table.

There is, in theory, no job that can’t eventually be automated. White-collar jobs can simply be swapped for software, while blue-collar jobs can be replaced by AI combined with other technologies like robotics and 3D printing.

It’s impossible to put exact numbers on the scale of job losses from this new wave of automation. One recent report by PwC suggests nearly 40% of US jobs could go by 2030. McKinsey argues that $2.7 trillion of wages in the US are at risk, covering about half of all the work activities people are paid to do.

Are you sitting pretty if you have a well-paid, professional job? Probably not. Any job can be broken down into discrete tasks. Automation at first takes over the simplest tasks. Over time it gobbles up more and more until you’re putting your belongings in a cardboard box.

Take legal work as an example. Software is already being used to take on the previously labour-intensive job of analysing millions of pages of case documents, and to review contracts. In future, it might conduct legal research, provide administrative support, generate legal documents and perform due diligence. As legal AI becomes more sophisticated, how many people will a big law firm actually need to employ?

Bill Gates argues we can solve this problem by imposing a robot tax on companies to slow automation and fund retraining for workers. But what are we going to retrain these millions of people to do? Sending them off to coding school is no answer when AI is already learning to write code itself.

There’s plenty of coverage of the coming jobs apocalypse in the media. Nevertheless, I don’t get the sense that political leaders are really engaging with it. President Trump wants to create more jobs by pressuring American corporations to build factories at home. Not much point if those factories are 99.9% automated.

I’m convinced we’re staring down the barrel of major social unrest. To take just one example, 3.5 million Americans drive trucks. If all trucks go driverless, that’s a big mob of angry people. Add in van drivers, taxi drivers, bus drivers, train drivers, even pilots, and you can see how the problem multiplies.

We could ultimately end up in a situation where the market economy can’t function. If nobody has a job, nobody has the money to buy the goods and services these clever machines are churning out. Maybe we’ll all get paid a basic income and spend our days writing poetry or building model railways?

Conclusion: No job is truly safe from AI. Start thinking right now about how it’s going to affect you and what career paths your kids should take.

Terrifyingness: 9 — Likelihood: 10

Thanks so much for reading. I’d love to get your thoughts and comments, and if you liked the article, keep holding that clap button so more people see it.