Is Artificial Intelligence Dangerous?

Daniel Grady
Dec 12, 2015 · 5 min read

--

As technology and funding for artificial intelligence advance at an alarming rate, the world stares in horror at the dangers to come.

Artificial intelligence has played a major role in modern civilization for decades. In its simplest form, artificial intelligence achieves its task by following the guideline of, “if this, then that.”

This basic structure allows coffee machines to turn off when the coffee is ready, and computers to store thousands of documents inside one small machine. Although artificial intelligence has had many uses throughout modern civilization, the dangers may outweigh the benefits once a superintelligent machine is in the picture.
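The “if this, then that” guideline can be sketched in a few lines of code. This is only an illustration of rule-based behavior; the coffee-machine function and its parameters are hypothetical, not taken from any real appliance.

```python
# Illustrative sketch of "if this, then that" rule-based behavior,
# modeled on the essay's coffee-machine example (names are made up).
def coffee_machine_step(coffee_ready: bool, power_on: bool) -> bool:
    """Return the machine's new power state after one rule check."""
    if coffee_ready:      # "if this..." — the condition the machine watches for
        return False      # "...then that" — switch the heater off
    return power_on       # otherwise, leave the current state unchanged

# The machine follows the rule mechanically; it has no goals of its own.
print(coffee_machine_step(coffee_ready=True, power_on=True))   # False
print(coffee_machine_step(coffee_ready=False, power_on=True))  # True
```

The machine never deviates from its single hard-coded rule, which is exactly why this simple form of A.I. is considered safe, in contrast with the self-improving systems discussed below.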

Local videographer and technical specialist Quincy Roullier stated, “A.I. itself isn’t dangerous, it’s the future that worries me.”

A superintelligent A.I. is a machine whose intelligence equals or surpasses that of a human. This level of intelligence is potentially dangerous for several reasons. A superintelligent machine could keep learning from itself, becoming ever smarter and creating a positive feedback loop. According to Bailey, ten percent of researchers believe that a destructive intelligence will emerge within ten years, and almost all A.I. researchers believe it will happen in less than 100 years (Par. 4).

In Bailey’s article, “Will Superintelligent Machines Destroy Humanity?” he notes that theorists have suspected the possibility of machine domination since the invention of the computer. I.J. Good argued that intelligence improving intelligence is a dangerous loop that cannot be counted on to end well (Par. 3). Because of this, many people believe it is humanity’s responsibility to keep this next step in technology at bay.

If humanity were to create a simple machine to make gumballs, it would make gumballs in every way that it knows how. If a superintelligent machine were assigned to make gumballs, however, it could learn new ways to make them. While some of these new ways could be harmless and more efficient, the machine could also identify roadblocks that impede the production of gumballs and seek to eliminate those first. This is where the dangers of a self-learning A.I. begin to surface.

Later in Bailey’s article, Bostrom argues that the “box” mentality won’t hold A.I. forever, and that we would not stand a chance against a superintelligence. He believes the safest way to research A.I. is to do it slowly; in his view, we are simply not ready for this technology (Par. 10).

Another way to “box” an advanced A.I. is to create an Oracle. An Oracle A.I., or O.A.I., is created for the sole purpose of answering any questions asked of it. The O.A.I. is not capable of acting on its own; it simply waits to be asked for its knowledge. Even so, an Oracle A.I. carries dangers of its own: it could give manipulative or deceptive answers in order to achieve other purposes.

Currently, the O.A.I. is the goal for researchers and developers throughout the world. Armstrong, Sandberg, and Bostrom state in their article, “Thinking Inside the Box,” that long, unsupervised conversations between the Oracle A.I. and its operators should be forbidden. The questions asked need to be narrow and specific in order to get answers that are equally so. Rationing the O.A.I.’s interactions with humans makes it harder for the O.A.I. to be manipulative (Pg. 3).

In the article, “Artificial Intelligence, Religion, and Community Concern,” Rossano states, “For selfish reasons, social organisms seek to establish and maintain good personal relationships” (Pg. 3). Rossano believes that community life may suffer because artificial intelligence could lead people to prefer interacting with machines over other people. Computers can be designed to be cooperative and to possess whatever qualities their owners desire, which may make them more attractive to interact with than the outside world (Pg. 3).

Local philosophy graduate student Jonathan Kanzelmeyer stated, “All of the things that tell us that we have intelligence and minds as humans can also be applied to robots.”

A new argument has arisen among researchers and developers as the age of A.I. approaches. Rossano states, “Current debates in A.I. have tended to focus on whether these more powerful and parallel machines will possess human-like consciousness” (Pg. 4).

“The idea of a machine possessing consciousness is a deep philosophical problem,” Kanzelmeyer said. “This is recognized as the mind/body problem.”

This brings a whole new level of complexity to the argument, as well as legal issues. Assuming these machines possess intelligence far beyond our own, is it far-fetched to believe that they have developed emotions and a consciousness of their own?

“If you say no, then the question is why not,” Kanzelmeyer states.

As the dawn of advanced artificial intelligence approaches, researchers hotly debate the production of a superintelligent A.I. Nations around the world are pouring resources into this technology, each trying to be the first to win the race. Should the U.S. rush advanced A.I. to prevent hostile countries from acquiring it first, or should we research it slowly to safeguard against possible consequences? There is no right answer to this question today, but the future of artificial intelligence draws ever closer.

Works Cited

Interview 1: Quincy Roullier | Cell: (775) 842–9631

Interview 2: Jonathan Kanzelmeyer | Cell: (775) 224–9793

Rossano, Matt J. “Artificial Intelligence, Religion, And Community Concern.” Zygon: Journal Of Religion & Science 36.1 (2001): 57. Academic Search Premier. Web. 13 Apr. 2015.

Kurzweil, Ray. “Don’t Fear Artificial Intelligence.” Time 184.26/27 (2014): 28. Academic Search Premier. Web. 13 Apr. 2015.

Bailey, Ronald. “Will Superintelligent Machines Destroy Humanity?” Reason 46.7 (2014): 20–23. Academic Search Premier. Web. 13 Apr. 2015.

Maney, Kevin. “You’re Scaring Me, Siri….” Newsweek Global 164.7 (2015): 44–45. Academic Search Premier. Web. 13 Apr. 2015.

Armstrong, Stuart, Anders Sandberg, and Nick Bostrom. “Thinking Inside The Box: Controlling And Using An Oracle AI.” Minds & Machines 22.4 (2012): 299–324. Academic Search Premier. Web. 13 Apr. 2015.

Miller, Keith, Marty J. Wolf, and Frances Grodzinsky. “Behind The Mask: Machine Morality.” Journal Of Experimental & Theoretical Artificial Intelligence 27.1 (2015): 99–107. Academic Search Premier. Web. 13 Apr. 2015.

Søraker, Johnny. “Continuities And Discontinuities Between Humans, Intelligent Machines, And Other Entities.” Philosophy & Technology 27.1 (2014): 31–46. Academic Search Premier. Web. 13 Apr. 2015.

Brundage, Miles. “Limitations And Risks Of Machine Ethics.” Journal Of Experimental & Theoretical Artificial Intelligence 26.3 (2014): 355–372. Academic Search Premier. Web. 13 Apr. 2015.

Müller, Vincent C. “Risks Of General Artificial Intelligence.” Journal Of Experimental & Theoretical Artificial Intelligence 26.3 (2014): 297–301. Academic Search Premier. Web. 13 Apr. 2015.
