Donald Blank
9 min read · Sep 24, 2015

The Ethics of Artificial Intelligence

In today’s world, technology is changing rapidly. Only eight years ago the first iPhone was released; today, smartphones can do more than most people would ever have imagined. They can record 4K video, a resolution of 3840 x 2160 pixels that was possible only with professional cameras until 2013 (Condliffe par. 1; Westaway par. 4). Furthermore, the processors of such devices are more powerful than the supercomputers we had ten years ago (Taylor par. 4). In this time of technological revolution, computer scientists are making into reality what humanity has been anticipating for years: artificial intelligence systems. These artificially intelligent systems will have the ability to act independently of humans and even to reprogram themselves. We finally have enough understanding of the processes of human thought, and the technology required to process the necessary data, to make this possible. According to computer scientist Peter Norvig, humanity has been considering the possibility of making an intelligent being since the time of Aristotle (Norvig 2). However, the creation of artificial intelligence raises several ethical issues, such as the possibility that an artificially intelligent being could destroy our race, or that humans might abuse these systems for our own gain. Therefore, as pioneers in this new field, it is important for humanity to consider what is and is not ethical in this area of research. To make sure that artificial intelligence technology is created ethically, laws must be formed that protect the rights of both humans and robots.

A fundamental concept to understand is what a truly artificially intelligent being is. According to Stuart Russell and Peter Norvig in their book Artificial Intelligence: A Modern Approach, the current study of artificial intelligence is “The study of the computations that make it possible to perceive, reason, and act” (5). One can therefore assume that an artificially intelligent being is able to act, and thus make decisions, independently of humans. However, since these robots are independent, there must be a way to program them so that they are not treacherous and erratic in nature. AI scholars have major concerns that need to be addressed.

The primary concern of many AI theorists is the liability issues that may arise with such devices. These liabilities are not only legal issues but also ethical ones. What should be addressed is the fact that we may soon have the ability to create an artificial intelligence system that can feel human emotion. According to Ray Kurzweil in his book The Age of Spiritual Machines, our technology is going to reach the point where we will be able to understand all of the inner workings of the human mind, including the algorithms behind thought processes like emotion. When this understanding is attained, we will be able to make software that has the same characteristics as a human being, and then give it a physical form (120–121). If this happens, there are certain realities that humans must face. The most pressing is how artificial intelligence systems will be treated, given that the only thing distinguishing them from humans is that we created them. What we must consider is whether robots should be bound to the same laws as humans and whether robots must be treated the same as us. The obvious answer is “yes”: we must treat robots the same as ourselves. They cannot be seen as inferior beings, because we are giving them the ability to think like us; therefore it is our responsibility to treat them like us.

Before it is ethical to create an AI system, we must create laws that make certain that robots are equal to us. Consider Isaac Asimov’s first and third laws of robotics:

  • 1st Law — A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • 3rd Law — A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

(Auburn University)

If a robot with AI programming can follow these laws, then it is certain that it will not harm a human or another robotic system. What these laws do not cover is how robots should be treated by humans. Therefore, the following change must be applied to the third law: “Robots and humans must protect the existence of robots as long as such protection does not conflict with the First Law”. These laws must be put in place before any AI technology is created, so as to protect the rights of humans and the rights of robots. This will ensure that three problems do not occur: confusion in legal disputes between robots and humans, mistreatment of robots, and the destruction of the world by robots.

First, let’s look at a scenario given in the book Artificial Intelligence: A Modern Approach on the issue of confusion in a legal dispute between a robot and a human. It is important to consider that an AI system might not be a humanoid, but instead a system that interprets data and experience so as to act as a human would. Imagine that, in the future, a doctor uses an artificial intelligence system to diagnose a patient with a health issue. Problematically, the system reaches a conclusion about the patient’s health that the doctor does not agree with, but the machine is programmed to be better at making a diagnosis than the doctor. Therefore, the doctor must follow the machine’s diagnosis. Unfortunately the machine was wrong, and the patient then sues the doctor (Russell and Norvig 848). Who is at fault? Is it the fault of the machine’s creator for making a bad product, of the machine itself, or of the doctor for not relying on his own intuition? This is a problem that many theorists have not been able to solve conclusively. However, in the future people will undeniably have to deal with such issues. That is why, before a problem such as this occurs, laws must be created to prevent the rights of people from being infringed upon. Of course, if a robot is bound to the laws stated before, a wrongful diagnosis would mean causing harm to humans. Thus the solution is simple: because the robot is considered equivalent to a human, it must be held equally as responsible as the doctor. It violated the law it was bound to, not to injure a human being, and therefore can be found at fault. Such scenarios can easily be averted if we simply agree on implementing these laws beforehand and then judge all subsequent legal issues by the standards we have set out for ourselves.

Next, let’s look at the mistreatment of robots. It is likely that robots will be mistreated by humans because of how we perceive them. According to a study done by a group of scientists at the University of Washington, a test group of humans treated robots differently than they treated other human beings. In their study, these scientists had 90 children interact with a robot called Robovie. A majority of the children were convinced, after they had interacted with Robovie, that he had “mental states”. However, when these children were asked if it was okay for Robovie to be mistreated by being put in a closet, only 31% said that this was not okay. Conversely, 74% of these children believed that it was not okay for a human being to be mistreated in this way. This shows that even children, who have little idea of the difference between a person and a robotic interface, perceive these creations as lesser than human beings. Furthermore, additional adult and child subjects in this study believed that Robovie should not be “entitled to […] liberty”, meaning that it was all right for him to be bought or sold at the will of his human masters. However, if humanity were to create an artificial being like Kurzweil suggests, a being that has the same ability as human beings to feel, then it would not be ethical for an intelligent robot to be bought or sold. Studies like this show that humans already perceive robots as mere objects that can be bought or sold at will, without any consideration for their feelings (Kahn, Gary, and Shen 1–2). Despite these findings, if humanity is required to protect the existence of robots and to consider these beings as equal to ourselves, then it can be assured that robots will be kept safe, which solves this dilemma.

Finally, let’s address another fear that many theorists of artificial intelligence have: the possibility that AI systems might decide to destroy humanity. According to Steve Phillips of Trinity International University, this scenario is very possible. A truly intelligent, independent system would have the ability to reprogram itself. What is there to stop it from deciding that humanity is a threat to itself or to the rest of the world? After all, many people believe that we are the earth’s biggest enemy, and perhaps new robots will want to secure the fate of their new home. Many people believe that these machines will not be programmed with a moral code and will therefore be beings that do not care about the suffering that would be inevitable if the entire human race were destroyed (par. 1). If AI systems are programmed with the laws mentioned before as the framework for their existence, and these codes are “robust against manipulation”, then they would never conceive the idea of trying to take over the world (Bostrom and Yudkowsky 3–4). Assuming that robots are initially programmed to believe that human life is important and not to be destroyed, they would never even be able to envision a scenario in which they take the world from us and destroy our race. Furthermore, because their code could not be manipulated, this view would never change. Also, if humans are forced to follow the laws of robotics and are then held justly accountable for their actions against robots, these robots would never form resentments that could result in a desire to destroy the human race.

In conclusion, artificially intelligent systems and humans must be bound to laws that protect both humanity and robots. Anything else would be unethical, because it would inevitably end in much suffering for either the human race or the robot race. Robots that are truly intelligent will be able to reprogram themselves; however, if their initial programming does not allow them to break Asimov’s laws, and if it allows them to have human morality, then it is ethical to create these beings. Human scientists cannot become so caught up in the desire to make a new race that they fail to consider the evident side effects. Until further discussions about the ethics of this issue are held, it will not make sense for us to continue our research, because there are no guidelines for researchers to follow, and this could end catastrophically. To protect the future of our race, we must recognize the inescapable reality that the technology to create these beings will soon be available. Hopefully the human race is capable of understanding and dealing with the god-like power, and the responsibilities, that this may create.

Works Cited

Bostrom, Nick, and Eliezer Yudkowsky. “The Ethics of Artificial Intelligence.” Machine Intelligence Research Institute. MIRI, n.d. Web. 22 Apr. 2015. <http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.363.1045&rep=rep1&type=pdf>.

Condliffe, Jamie. “This Is the First Phone With a 4K Video Camera.” Gizmodo. Kinja, 13 Sept. 2013. Web. 15 May 2015. <http://gizmodo.com/this-is-the-first-phone-with-a-4k-video-camera-1238809421>.

“Isaac Asimov’s ‘Three Laws of Robotics.’” Auburn.edu. Auburn University, 2001. Web. 10 May 2015. <http://www.auburn.edu/~vestmon/robotics.html>.

Kahn, Peter H., Jr., Heather E. Gary, and Solace Shen. “Social and Moral Relationships with Robots: Genetic Epistemology in an Exponentially Increasing Technological World.” Human Development 56.1 (2013): 1–4. Academic Search Elite. Web. 26 Apr. 2015.

Kurzweil, Raymond. “Reverse Engineering A Proven Design: The Human Brain.” The Age of Spiritual Machines: When Computers Exceed Human Intelligence. New York: Penguin, 2000. 118–24. Print.

Norvig, Peter. “Artificial Intelligence.” New Scientist 216.2889 (2012): i-8. Academic Search Elite. Web. 21 Apr. 2015.

Phillips, Steve. “Artificial Intelligence and Men without Chests.” Bioethics @ TIU. Trinity International University, 30 Oct. 2013. Web. 21 Apr. 2015. <http://blogs.tiu.edu/bioethics/2013/10/30/artificial-intelligence-and-men-without-chests/>.

Russell, Stuart J., and Peter Norvig. “27.3: What If We Do Succeed?” Artificial Intelligence: A Modern Approach. Englewood Cliffs, NJ: Prentice Hall, 1995. N. pag. Web. <http://www.cin.ufpe.br/~tfl2/artificial-intelligence-modern-approach.9780131038059.25368.pdf>.

Taylor, Nick. “A Modern Smartphone or a Vintage Supercomputer: Which Is More Powerful?” Phonearena.com. Meetgadget, 14 June 2014. Web. 14 May 2015. <http://www.phonearena.com/news/A-modern-smartphone-or-a-vintage-supercomputer-which-is-more-powerful_id57149>.

Westaway, Luke. “What Is 4K?” CNET. CBS Interactive Inc., 12 Dec. 2014. Web. 15 May 2015. <http://www.cnet.com/news/what-is-4k/>.