The Benefits of Artificial Intelligence Outweigh the Risks

Ryan Khurana
6 min read · Sep 3, 2018


The following essay was written for The Economist’s Open Future competition on the question of whether AI will do more good than harm, and was long-listed in its category.

After decades of inertia, Artificial Intelligence (AI), the study of creating machines that can perform cognitive tasks such as problem-solving and learning, has experienced a resurgence. Funding has grown exponentially since 2012, when an AI method known as Deep Learning powered a breakthrough in machine image recognition. Development is accelerating: the current generation of AI has advanced in processing natural language, navigating terrain, and playing complex strategy games, as demonstrated by AlphaGo’s victory over 18-time world Go champion Lee Sedol in 2016.

This rapid advance in AI is playing a major role in the Fourth Industrial Revolution, in which the line between the physical and the digital blurs, bringing profound economic transformation. A recent study by the McKinsey Global Institute predicted that the productivity boost from AI adoption would add 0.6% per year to global GDP growth. With a global population that is ageing rapidly and fewer young people entering the workforce, AI will be needed to sustain economic growth and improve quality of life.

All technological revolutions nonetheless carry anxieties over the risks of innovation. The worries about AI range from the economic to the social to the existential. Headlines abound claiming that the automation of jobs will result in mass unemployment, that algorithms exacerbate the worst human biases, and that a Terminator-style AI may turn on its creators and destroy all human life. There are, of course, legitimate concerns about AI that require forward-thinking policy, but technological pessimism is thoroughly misplaced, amounting to a contemporary form of Luddism.

Just as the historical Luddites were most concerned with unemployment, technological unemployment remains the most prominent public fear about AI among their modern successors. A seminal Oxford Martin study claimed that 47% of the US labour force is susceptible to automation. While this remains the most cited study of AI’s impact on jobs, its methodology is questionable. The study looked at broad occupational categories, and its high automation rates result from conflating entire jobs with their primary task. Unlike the study’s model, almost all real-world jobs involve a complex combination of tasks, including learning new things and adapting to never-before-seen scenarios. By treating what workers spend the most time doing as equivalent to what is most valuable in their work, the study’s method, and that of the studies it subsequently inspired, obscures the importance of this combination of tasks.

History reveals numerous technologies that automated the most laborious aspects of jobs without destroying the jobs themselves: the flying shuttle reduced the physical strain on weavers operating looms by hand, and ATMs cut the time tellers spent on the rote tasks of counting money and updating balances. The value of workers cannot be neatly broken down into component tasks; it rests on the output they produce. In these examples, new technologies increased the demand for human labour through productivity gains, and even created entirely new categories of jobs. As tasks become automated, workers become more productive in the tasks that cannot be, and even with AI, such hard-to-automate tasks abound. Machines cannot form intuitions, make moral judgements, or exercise social skills, capacities that are universal to humans and play a role in most jobs.

Take, for example, truck driving. Although the job is predicted to be made obsolete by self-driving vehicles, it still requires human skills that are difficult to replace. Self-driving trucks will excel in most traffic conditions, but residential roads are tricky, and skilled driving there relies on intuition. The demand for truck drivers will fall with AI, but the job category will not disappear: people with driving experience will be needed to operate fleets, much as drones are operated today. This is why a burgeoning literature describes AI’s impact on jobs as augmentation rather than automation. Just as the machines of the industrial revolution did away with tasks that were dangerous and dirty, AI presents the opportunity to do away with the dull. The skills demanded will change, but with education and retraining, more fulfilling work will become prevalent.

Economic opportunity would nonetheless be insufficient to support AI if it increased social harms such as discrimination. AI bias, the problem of models learning from records of past human behaviour riddled with racist, sexist, and otherwise oppressive tendencies, presents a real social concern. Even so, the potential for harm from bias is limited, as it requires failure not just at the algorithmic level but at the deployment level. To prevent large private corporations and the public sector from making implementation mistakes, an entire industry has arisen around AI development, comprising consultancy divisions, such as at Tata Consultancy Services; academic research centres, such as the Future of Humanity Institute; and non-profits, such as OpenAI. The AI community’s awareness of the problem of bias, and of the need for ethical AI, has incentivised responsible behaviour. A plethora of research on AI ethics and on AI’s social impact is shaping future implementation.

Ethical abuses by humans are common, and the AI ecosystem is developing in a way that avoids entrenching such errors and instead improves institutional functioning. Governments are becoming aware that they need technical expertise to address the challenges AI brings, and are proactively reaching out for advice. The UK recently appointed Demis Hassabis, co-founder of DeepMind, the firm that developed AlphaGo, as the first adviser to its Office for Artificial Intelligence. OpenAI is working with governments to address a range of AI issues, including the risk of dual use, the application of consumer technologies to military purposes.

The ecosystem surrounding AI development makes predictions of existential danger implausible. The superintelligence hypothesised by the Oxford philosopher Nick Bostrom as an unstoppable threat to humanity does not represent a feasible technological creation. The limits of what AI can do, evidenced by its inability to automate jobs fully, are too restrictive to enable a superintelligence that surpasses human beings in all domains. The cataclysm seen in popular films such as the Terminator series and 2001: A Space Odyssey, in which an AI is given a goal and deems the human race a roadblock to achieving it, remains purely science fiction.

The current paradigm of AI development, which focuses on narrow applications of the technology to specific problems, is not moving in a direction that would facilitate the creation of this type of AI. Rodney Brooks, an AI pioneer, has pointed out that people conflate machine performance with competence, which obscures AI’s real abilities. An AI’s ability to master chess does not mean it understands what the rules of chess mean, or the concepts underlying the game; it simply knows how to win. This limits algorithms’ capacity to develop wholly new skills beyond what their narrow objectives require.

The development of a superintelligent AI would require not only a scientific breakthrough in the field but a financial reorientation as well. The vast majority of AI funding goes to context-specific applications, and these applications rarely require furthering the development of superintelligence. Even if existential-risk arguments about AI are logically sound, they apply to something so different from existing or foreseeable AI technologies as not to be relevant to regulatory discussions.

In the meantime, AI is already having a positive impact on the economy. The general-purpose nature of AI is improving productivity and driving down long-term costs in a wide range of sectors. Beyond the high rates of AI adoption in the automotive industry and in finance, implementation has already begun in retail, tourism, media, education, and healthcare. Eight of the ten largest companies in the world by market capitalisation are making large investments in AI, with an estimated $39 billion invested in 2016.

The profound human benefits of AI are readily apparent in healthcare. DeepMind is working with the NHS to ensure doctors and nurses are in the right place at the right time and to identify patients with urgent needs, thereby reducing mortality rates. Its Streams service is currently being used to target acute kidney injury, a condition responsible for 40,000 preventable deaths each year in the UK. In addition, AI-led drug discovery is seeking cures for previously intractable diseases by identifying new patterns in large medical data sets that are too complicated for individuals to analyse.

The opportunities AI creates should not obscure the risks the technology brings or the policy responses they require. Technological pessimism, however, only serves to hold back progress. The technologies powering previous industrial revolutions brought their own challenges, but the long-run effects of those transformations have been undeniably beneficial. There is no reason to expect the effects of AI to be any different, and it would be wise to encourage this fruit-bearing field of innovation.

