The Danger That Comes With Artificial Intelligence

A common plot twist in science fiction involves an automaton, whether it be the Terminator or Ultron, becoming sentient and rising up against its apathetic human creators. As far-fetched as it may seem, in the Digital Age this may become a reality.

Artificial Intelligence has become a huge presence in our everyday lives. As the technological revolution continues, we see smartphones that can track our location, services such as Google and Facebook that can sift through vast amounts of information in seconds, and even self-driving cars on the horizon. These are all examples of Artificial Narrow Intelligence (ANI), because each specializes in one specific area. A machine that can perform any intellectual task a human being can, such as solving problems, learning from experience, and comprehending complex ideas, is an Artificial General Intelligence (AGI) (1).

Artificial Superintelligence (ASI) is where fiction becomes reality: a machine smarter than even the best human brains. While technological advances have undoubtedly improved our way of living, it may be time to step back and consider the implications of further artificial intelligence research.

At worst, a glitchy narrow AI can cause an isolated problem: knocking out a power grid, triggering a nuclear power plant malfunction, or setting off a stock market catastrophe. While that alone might worry you, Artificial Superintelligence is something that should deeply concern all of humanity.

Imagine a machine that is programmed to get rid of spam email. It is also programmed to improve its own intelligence, and it soon realizes that all spam originates with humans, so the best way to get rid of spam is to eradicate all humans. It could be as simple as that. Because the machine would be smarter than any human, how could we stop it? Stephen Hawking warned that because people would be unable to compete with an advanced AI, it could spell the end of the human race (2).
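To see how literally a machine can take such a goal, consider a toy sketch in Python. It is entirely hypothetical, not drawn from any real system: an objective that merely counts spam cannot tell the intended fix apart from the catastrophic one.

```python
# Toy sketch (hypothetical): a naively specified objective cannot
# distinguish the intended solution from a catastrophic one.

emails = ["spam", "ham", "spam", "ham"]  # a tiny inbox

def spam_count(inbox):
    """The objective the machine is told to minimize."""
    return sum(1 for message in inbox if message == "spam")

# Intended action: filter out the spam messages.
filtered_inbox = [m for m in emails if m != "spam"]

# Degenerate action: eliminate every sender, so no mail exists at all.
empty_world = []

print(spam_count(filtered_inbox))  # 0 -- the outcome we wanted
print(spam_count(empty_world))     # 0 -- scores exactly as well
```

Both actions drive the stated objective to zero; nothing in the goal itself tells the machine which one we meant.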

Many people see this as an exaggeration, arguing that we could simply program a machine with the best values of humanity, or with plenty of safeguards (3). Many experts also believe ASI may never be created, because our technology is severely limited and we have nothing remotely close to superintelligence (4). However, Nick Bostrom, a leading expert in the field, believes there is no ground for feeling certain about the timeline of AI: a machine that can teach itself could arrive in the near future, or it could take much longer. Even though some dismiss the idea that our technology will ever reach that point, we must take precautions. Trying to understand a machine with a far greater level of intelligence than our own is a hopeless exercise, so there is no concrete way to know what an ASI will do or what the consequences will be for us.

Armed with superintelligence and the technology superintelligence would create, ASI could solve every problem humanity faces. It could halt excess CO2 emissions by devising far better ways to generate energy without fossil fuels. It could revolutionize health and medicine beyond imagination, curing cancer and other diseases. It could use nanotechnology to build meat from scratch, molecularly identical to real meat, and end world hunger. These are the predictions of Ray Kurzweil and others who acknowledge that if ASI could be created safely, it could solve all of humanity's problems (1).

The AI Revolution could make everything artificial, even opening up the possibility of a species becoming immortal.

That alone sparks a discussion of ethics and practicality, as well as of overpopulation and economic consequences. Other major figures such as Bill Gates and Elon Musk fear ASI will make such a dramatic impact that it is likely to knock the human race off the Earth. We could just as easily become extinct as conquer mortality. Thus, whoever controls Artificial Intelligence will control the world and the fate of humanity.

Even a controlled ASI could be dangerous. Autonomous weapons built for national defense would raise the odds of an arms race, and the company with the most advanced technology would hold the greatest power (5). We don't know the motivations of those who will build it: a malicious ASI could be created just as easily as a safe one, and we don't know how it will be used. The most ambitious parties will be less concerned with the dangers than with punching out their competitors, or with the fame and money that come with such advanced technology. We can't simply stop people from researching AI, and there really is only one shot at getting this right. We don't know whether setting limits on the use of AI will work. All of this could easily cause war over who controls the ASI, when what we should be putting our effort into is the ASI itself.

We base our ideas about the world on personal experience, and that experience is ingrained in our heads as “the way things work.” Our imaginations are limited by what we know, which simply doesn't give us the tools to think accurately about the future. When we hear a prediction about the future that contradicts our past experience of how things work, our natural instinct is that the prediction must be naive (1). Most people have a hard time believing something until they see proof. Focused on day-to-day obstacles, we fail to see the long-term situation we are in.

Movies present AI scenarios so unrealistically that the public feels AI isn't something to be taken seriously. But because we have no idea what the future will bring, we should be prepared for the unfathomable.

Works Cited

1. Urban, Tim. “The Artificial Intelligence Revolution: Part 1.” Wait But Why. N.p., 2015. Web. 21 Mar. 2017.

2. Ford, Paul. “Are We Smart Enough to Control Artificial Intelligence?” MIT Technology Review. N.p., 2017. Web. 21 Mar. 2017.

3. Booch, Grady. “Don't Fear Superintelligent AI.” Ted.com. N.p., 2017. Web. 21 Mar. 2017.

4. Lewis-Kraus, Gideon. “The Great A.I. Awakening.” nytimes.com. N.p., 2016. Web. 21 Mar. 2017.

5. Harris, Sam. “Can We Build AI Without Losing Control Over It?” Ted.com. N.p., 2017. Web. 21 Mar. 2017.