Sorry Elon Musk, but you are also wrong on AI

Elon Musk rebuked Mark Zuckerberg’s AI optimism yesterday, attributing it to a “limited understanding” of the subject. Full-blown optimism about Artificial Intelligence does carry a whiff of naivety; even the most ardent proponents acknowledge the ethical and economic risks of putting too much in the hands of machines. But Musk’s techno-pessimism is also misguided.

Elon Musk is probably one of the brightest minds in the tech industry today, and his fears may well be genuine. Those fears, however, rest on strong background assumptions that are not as philosophically or economically ironclad as Musk might think. He seems to be analysing the potential of AI from a purely scientific perspective, one that ignores the social realities of such a creation.

Musk’s AI fears are similar to those of Bill Gates, but with an accelerated timeline. He believes that Artificial Intelligence poses a two-fold successive threat: first through mass unemployment, and second through superintelligence. The accelerated developments within the field signal to him that these threats are closer than the public perceives.

His reading of the first issue can be traced to the noted futurist Martin Ford. In his book Rise of the Robots, Ford argues that the technological advance of Artificial Intelligence is unlike any advance in history, and that as a result it will wipe out the need for the vast majority of human work. The central assumption of his argument is that AI has “general-purpose” capabilities. Whereas historical innovations were task-specific, AI algorithms can find ways to solve problems that humans did not specify. As these techniques improve and become integrated into more areas of human life, labour becomes redundant, and economic catastrophe inevitably follows.

There are several problems with the feasibility of this view, not least that it is a purely technical reading of how innovation occurs. Social, legal, and economic barriers determine whether an innovation reaches its theoretical potential, and these barriers also shape the form that new technology takes. In the case of industrial robotics, one of the most productivity-boosting AI innovations, we see businesses wary of adopting the new technology because of consumer backlash. Innovation that would result in mass joblessness will not happen unless the public is willing to accept the benefits that machines bring in exchange. Whether AI makes human work redundant is therefore a political and economic question, not a technological one.

Even more fundamental, however, is the issue of conflating a person’s job with a task. Ford’s claim relies on the existence of General AI to become reality, a claim I will deal with later in the article, but at present this technology does not exist. What we currently have is Narrow AI, designed with a specific goal in mind. Such AI is already more advanced than the machines used to build the Ford Model T, in the sense that it is told what task to do but not how to do it. This allows greater room for innovation and productivity on the part of the machine, but it remains restricted to tasks within its remit.

A McKinsey Global Institute report from earlier this year acknowledged that rapid improvements are possible within the tasks AI already performs, such as pattern recognition, information retrieval, and navigation. It is much harder to develop AI that can accomplish tasks no current system can, such as social and emotional reasoning, multi-agent coordination, and creativity. As a result, any career requiring these capabilities is a significant way from being fully automatable. This means that while the majority of tasks within a person’s job can be automated, the job itself cannot.

An example well known among labour economists is the effect of ATMs on bank tellers. If the bank teller’s job were reduced to the task of counting and distributing money, an ATM would fully replace it. History, however, shows that the introduction of the ATM led to a boom in the number of bank tellers. ATMs shrank the space and staffing a branch needed for the task that consumed most of a teller’s time, counting and dispensing cash, which made it cheaper for banks to open branches in a wider variety of areas. At the same time, tellers gained time to spend on things that added value for the bank, such as upselling and good customer service.

This shows that the tasks that make up the majority of an individual’s day are not the tasks that necessarily add the most value. A lawyer spends a lot of time reviewing previous cases, though it is the legal advice and effective communication that provides the most value for clients. A doctor spends a lot of time interpreting diagnostics, but their value for their patients is in the empathetic guidance in treating their conditions. Using AI to reallocate resources to where they are most beneficial should not be seen as a threat to employment.

There are valid concerns, however, about whether these value-adding skills are widely distributed across the economy, and whether AI will therefore widen inequality. But these are not the worries Musk is trumpeting, and they are far more political in nature.

It is clear, then, that pure techno-pessimism ignores the interplay between the development of technologies and their social, legal, and economic environment. Musk’s worries, at least in the short term, amount to excessive fear of something that can bring significant benefit.

His concerns about General AI, however, seem to rest on far more of a consensus. Musk’s views on the existential threat of AI appear heavily influenced by the work of the Oxford philosopher Nick Bostrom. Bostrom has a strong following in the AI community, and his views on the potential harms of General AI are widely accepted. I see them as entirely reasonable, should General AI prove possible, but that possibility is itself a far more complex question.

It is one thing to accept the conditional “if General AI exists, it poses an existential threat to humanity”. It is another thing altogether to generalise this to all AI, or to assume it means that General AI will exist. The idea that an artificially intelligent machine can have the full cognitive capacity of a human being, and thereby surpass us intellectually, makes strong claims about the nature of the human mind and its replicability. A significant debate surrounds the identity of mind and brain, an identity that I myself reject, and one that is increasingly questioned by post-cognitivists.

Since General AI may itself be an impossibility, we have a significant amount of time to develop our understanding of the human mind and to lay the ethical groundwork, so that should General AI ever come about, we would be prepared. There is legitimate cause for worry here, but the timeline does not justify the depth of Musk’s pessimism.

Musk himself is guilty of a conflation: he treats General AI as the source of both our economic and our existential worries, as though mass joblessness will follow directly from today’s technological improvements in AI. This makes an issue that matters for the distant future look like an imminent threat, and it is why his criticism of Mark Zuckerberg’s optimism seems so out of place given the current capabilities of AI technology.

Zuckerberg’s optimism centres on the significant improvements to the quality of human life that AI can offer in the near future: health care, care for the elderly, energy efficiency, and many other fields in urgent need of innovation. Musk’s long-term fears describe a possible scenario, but that fear should not come at the cost of the immediate benefits to society.

The issues surrounding Artificial Intelligence are far more nuanced and interdisciplinary than any purely optimistic or pessimistic take on the technology allows. Elon Musk should keep that in mind going forward, lest he sacrifice enormous benefits out of disproportionate fear of the risks.
