Do the benefits of artificial intelligence outweigh the risks? My entry for the Economist’s Open Future Essay Competition
Widespread fears of AI and automation taking away jobs are overblown, but so is the conceit that AI is good only for automating routine, low-level tasks. The truth, as always, lies somewhere in between.
AI is what Erik Brynjolfsson at MIT calls a “general purpose technology” — like electricity or the internal combustion engine before it, it has knock-on effects: its very presence leads to the creation of entirely new ways of producing goods and services, getting around and living.
In this case, it allows us to fundamentally rethink what we use computers for and what computers can do for us.
Before we go any further, it’s important to understand what exactly AI, and its dominant sub-field, deep learning, actually does. In essence, it allows computers to recognise patterns in the data presented to them.
The more data a deep-learning programme is given about whatever task it has been assigned — detecting tumours in MRIs, recognising faces in selfies — the better it gets at that task. It does this through a more than 30-year-old technique called “backpropagation”, which, among other things, allows the programme, or its programmers, to retroactively tweak the statistical parameters it uses to recognise patterns, in response to new information or as the use case changes.
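For readers who prefer to see the idea rather than read about it, here is a deliberately tiny sketch of the principle behind backpropagation: a single statistical parameter is repeatedly nudged in the direction that reduces prediction error. Real systems do this across millions of parameters, and the numbers below are invented purely for illustration.

```python
# Minimal illustration of the idea behind backpropagation:
# nudge a parameter in the direction that reduces prediction error.

def train(examples, steps=1000, lr=0.01):
    w = 0.0  # the single "statistical parameter" the programme tweaks
    for _ in range(steps):
        for x, target in examples:
            pred = w * x            # forward pass: make a prediction
            error = pred - target   # how wrong was it?
            grad = 2 * error * x    # gradient of squared error w.r.t. w
            w -= lr * grad          # backward step: tweak the parameter
    return w

# The programme "learns" the pattern y = 3x from the data alone.
data = [(1, 3), (2, 6), (3, 9)]
w = train(data)
print(round(w, 2))  # converges towards 3.0
```

More data about the task — more (x, target) pairs — gives the tweaking process more to work with, which is why such systems improve as they are fed more examples.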
But the question still stands: what can it do? To answer that, it is useful to think about what new technologies, more generally, can do for us.
Historians of science and technology note that new “general purpose technologies” bring about three kinds of benefits.
The first is “incremental benefits”, whereby societies accomplish ongoing activities more efficiently — in the case of AI, companies get better answers to the questions they are already asking, using data they already have.
One of the main implications of this, as Ajay Agrawal of the University of Toronto points out in “Prediction Machines”, is that the cost of making predictions — about consumer buying habits, traffic congestion, what viewers will watch next — will fall, much as the cost of light fell 400-fold from the 1800s to today.
Among other things, this enables companies to be leaner and more efficient on a scale that was previously unthinkable. Large stocks gathering dust in warehouses as unsold inventory, for example, could soon become a thing of the past as companies use cheap predictions to produce and ship goods just in time for consumers to buy.
The second kind, “transformational benefits”, gives people “new ways to access services and support livelihoods”. In the context of AI, this means companies can ask new kinds of questions of the data they already hold. A lawyer in the discovery stage of a complicated trial, who has to study various precedents and case law, might search for emails classified as “angry” or “anxious”, or for clusters of documents flagged as anomalous or relevant to the case at hand — work that previously took an entire team of paralegals weeks to accomplish.
The third kind, “production benefits”, arises where new categories of economic activity are created by a technology. These apply to AI in the sense that new data types open themselves up to interpretation and analysis by computers. Computers could not previously “read” audio, images or video — analyse the content for meaning and summarise what is happening — but increasingly they can.
At first glance, the implications are obvious, especially in people-heavy industries such as manufacturing and retail. Robots often carry a very high up-front cost, but over time they pay for themselves, since their only ongoing expenses are maintenance and electricity — unlike flesh-and-blood employees, who come with salaries, benefits and other recurring costs that businesses would rather avoid.
What many would-be AI adopters fail to realise, however, is that most deep-learning systems, powerful as they are, are also extremely limited and heavily dependent on the data they have already been fed. A neural network that recognises images can be totally stumped when a single pixel in the picture is changed — or when visual noise invisible to humans is added. Self-driving cars often fail to navigate environments or conditions they have not encountered before.
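The brittleness can be caricatured with a toy example. The rule below is not a neural network — real networks learn vastly more complex decision boundaries — but it shows how a statistical decision sitting near a threshold can be flipped by a change to a single input value, which is the spirit of the one-pixel failures described above.

```python
# Toy illustration of brittleness (not a real neural network):
# a classifier that labels a tiny "image" by its average brightness.

def classify(image, threshold=0.5):
    """Label an image 'bright' or 'dark' by its mean pixel value."""
    mean = sum(image) / len(image)
    return "bright" if mean > threshold else "dark"

picture = [0.6, 0.6, 0.5, 0.4]   # mean 0.525 -> "bright"
tampered = [0.6, 0.6, 0.5, 0.0]  # one pixel changed; mean 0.425

print(classify(picture))    # bright
print(classify(tampered))   # dark -- one pixel flipped the verdict
```

A human looking at both "pictures" would shrug at the difference; a system that only knows its learned statistics has no such common sense to fall back on.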
An AI system is best likened to an idiot savant or a particularly powerful pocket calculator. Deep learning as it is used today will not produce a sapient intelligence that reasons abstractly and makes generalised assumptions about the world. As it stands, it is unlikely to automate ordinary human activities by itself.
Further, a common mistake is to think of careers as single, monolithic entities instead of what they actually are — collections of discrete tasks, each differing in difficulty and susceptibility to automation. AI and machine learning will eliminate some of these “jobs to be done” while creating entirely new ones. Policymakers and governments need to understand this.
The fact is that careers involving some element of human creativity and imagination will very probably never be fully automated. People who can do things computers cannot will continue to command a premium in the job market. Education systems need to respond to this as well. Students don’t need to learn how to add and subtract, or to memorise times tables and historical dates; skills in communication, working well in remote teams and working independently of supervision will be increasingly vital.