Why We Are Scared of Artificial Intelligence

Cameron Smith
6 min read · Mar 11, 2017


Ex Machina (2015)

Some of the world’s biggest leaders in science and technology, including Stephen Hawking, Elon Musk, Steve Wozniak, and Bill Gates, have expressed concern about the rapid progression and evolution of Artificial Intelligence (AI).

In a CNBC interview, Elon Musk stated, “I think that the biggest risk is not that the AI will develop a will of its own… but rather that it will follow the will of people that establish its utility function.”

Movies like The Terminator, The Matrix, Ex Machina, and 2001: A Space Odyssey all point to the notion that artificial intelligence will evolve past human intelligence to the point that we will no longer be able to control our own creations. There is an ingrained narrative that AI will somehow reach a level of self-awareness and then inevitably attempt to destroy or replace the human race. Some of these ideas may seem excessive, or merely the product of film exaggeration in which AI leads to Terminator-like robots hunting humans like deer. Yet there is enough concern among thought leaders such as those mentioned above to make these fears highly credible, even if there is no certainty about them ever playing out.

While not all of the public shares the view that AI technologies are inherently dangerous, there is still enough fear in many people that it has led to a high level of distrust in these types of technologies. This lack of trust causes doubts about technologies that might otherwise be highly beneficial, including driverless and autonomous vehicles, expanded use of robots in manufacturing, and machine learning.

Humans commonly look for something to blame for their fears and frustrations. We often direct this blame at other humans, but we ultimately accept “human” mistakes. It seems unlikely that machines will ever be afforded the same kind of acceptance.

There is also the question of whether this distrust is purely a fear of the rise of Terminator-like machines, or whether there are deeper, underlying fears behind it. What exactly is it about these technologies that scares us? Is the deeper distrust really the fear that we may one day be replaced? If we can build machines capable of thought, feelings, self-awareness and perhaps consciousness, then where does that leave us? If we can develop technologies with all of our positive attributes but without our insecurities, prejudices and self-interest, then are we simply on a journey to build our eventual replacements?

The futuristic TV show Westworld raises some of these ideas particularly well, as articulated beautifully by actor Anthony Hopkins: “I read the theory once that the human intellect was like peacock feathers, just an extravagant display designed to attract a mate. Mozart, Michelangelo, Monet, the Empire State Building, just an elaborate mating ritual.”

If AI can eventually not only replicate what we do but improve on it, and without any of our negative human qualities, won’t they be a better version of us? Aren’t we totally replaceable?

Fear of replacement is not new. Around fifteen years ago, the effects of globalisation were being felt around the world, especially in Western countries such as Australia and America. Competition from cheaper Chinese labour costs resulted in large amounts of manufacturing moving offshore. Today, AI revives some of these fears, though now in the form of server farms in Texas rather than cut-price factories in China. Western countries fear the technology of the future because they are reminded of the negative consequences of globalisation from the near past.

Billionaire Peter Thiel, co-founder of PayPal and early backer of Facebook, has a different view: “Stop blaming technology for all of society’s problems”. He dismissed claims that super-intelligent machines pose an imminent threat to American workers. His views are based on different interpretations of what “intelligence” really means, and how it relates to the human psyche. Many of the complexities around the subject aren’t binary, because humans and machines are good at different things. Humans, for example, aren’t particularly good at making sense of enormous amounts of data. Computers are exactly the opposite; they excel at efficient data processing but can struggle to make basic judgments that would be simple for any human.

As AI advances, there are many ethical dilemmas and complex moral questions that are raised that may require different types of fail-safes. Large amounts of testing and rigorous safety measures need to be put in place before AI can begin to be widely used to replace areas where humans are currently accountable.

An article in The Economist raises some of these ethical dilemmas: should a military drone be allowed to automatically fire on a house where a target is known to be hiding, which may also be sheltering civilians? Should a driverless car swerve to avoid pedestrians if that means hitting other vehicles or endangering its occupants? Should a robot involved in disaster recovery tell people the truth about what is happening if that risks causing panic? These questions have led to the emergence of the field of “machine ethics”, which aims to give machines the ability to make such choices appropriately, providing an operating model for making important decisions ethically. However, this field is still in its infancy.

Are we now beginning to experience the consequences of the dangers identified by the science fiction author Isaac Asimov in 1942? In his fictional “Handbook of Robotics, 56th Edition, 2058 A.D.”, he outlined the Three Laws as:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

It is becoming clear that AI will be widely used in situations where a number of these ethical dilemmas arise. Regulation, governance, and fail-safe measures are all falling behind the light-speed pace at which technology is advancing, and the potential impacts of these dilemmas are emerging.

AI will replace jobs in the future. It will become more prevalent in everyday life, and we won’t be able to interact with some types of technology without engaging with it. Many future jobs will be very different in terms of their manual labour requirements, and the evolution of AI will affect many white-collar jobs as well.

A factory floor in North Carolina showing humans and robots working together. (AP Photo/Chuck Burton)

Research commissioned by technology company Infosys and presented at the World Economic Forum revealed that 72 per cent of workers whose jobs are affected by AI will be redeployed within the same area of their organisation (34 per cent) or retrained for another area (38 per cent). Infosys Australia and New Zealand senior vice president and regional head, Andrew Groth, says the research is encouraging; “Robots, automation, and artificial intelligence are not as scary for workers as they seem. It’s encouraging that the vast majority of Australian business leaders are planning to re-skill or redeploy their teams to new roles if and when their AI technologies become capable of mechanizing repetitive manual labour tasks. AI technology is really a platform to enhance workers, not replace them. Jobs will be evolving”.

Humans may not ever need to be replaced. Ideally we will continue to evolve in our own way, supported by the technologies we have created.

“Watson, Deep Blue, and ever-better machine learning algorithms are cool. But the most valuable companies in the future won’t ask what problems can be solved with computers alone. Instead, they’ll ask: how can computers help humans solve hard problems?”

― Peter Thiel, Zero to One


Cameron Smith

Innovative, technology-minded entrepreneur with a diverse background in technology, business and pushing the boundaries.