The Development of Ethical Guidelines and Social Practices for Robots and Artificial Intelligence

Traditionally, a robot is imagined as an anthropomorphic, autonomous entity that possesses intelligence and mimics human behavior. In reality, most robots are anchored to one point and consist of a single flexible arm (Murray). The purpose of robotics technology is essentially to carry out repetitive, physically demanding, and potentially dangerous manual tasks so that humans are relieved of them. Examples include factory assembly-line work, handling hazardous materials, and operating in hostile environments such as underground mines, underwater construction sites, and explosives plants. Industrial robots can work twenty-four hours a day without a break, maximizing productivity in manufacturing environments. Artificial intelligence (AI), on the other hand, is an area of computer science focused on developing intelligent systems that work and react like humans, performing tasks such as learning, planning, and problem solving. The difference between AI and a robot is that AI does not need a humanoid body; it can be purely a system that acts like a brain. These two established fields of research are making significant impacts as they move toward the mainstream. As robots and artificial intelligence continue to develop and enter people's daily lives, the collaborative development of ethical guidelines and social practices for these machines by people from different fields becomes essential to prevent future misconduct. In this article, "ethical guidelines and social practices" means a code of conduct, in any form, for humans to know how to interact with robots, and for robots to know how to interact with humans and with other robots. However, the main focus of this article is not the content of the guidelines but the necessity of them and how they should be created.
The importance of such guidelines is controversial because, even as robots become more present in society, robots intelligent enough to understand ethics and moral codes still exist only in science fiction.

The robot revolution in I, Robot by Isaac Asimov. Picture from The Signal Watch

In the current state of the art in robot and AI development, there has been a huge push in Japan to create industrial robots for factories and social robots for families. ASIMO, by Honda, is one of many famous examples of advanced robots created by Japanese companies. Since 2007, Japan has actively promoted the issue of human–robot interaction. National surveys found that Japanese people were more comfortable sharing environments with robots than with migrants and foreigners (Robertson). As Japan's population continues to shrink and age faster than that of other industrialized countries, Japanese people are relying on automated robot technology to safeguard their economy. This movement has grown through increasing support for roboticists and political scientists who educate Japanese people about robots. Japanese people have also been steeped in comics and animations that feature robots as a central component of the story. Some Japanese animations, such as Doraemon, portray a robot as a human's best friend, while other stories tend to depict robots as war machines. One famous Japanese animation, "Ghost in the Shell," shows a dystopian futuristic world taken over by a group of robots that want to revolutionize Japan and take revenge on humans. This kind of media spurs many questions about the future relationship between robots and humans. Even though we now have plenty of robots honestly working for us in industry, how we can maintain this kind of relationship as robots get smarter is a very important question.

The history of robot development in Japan. Picture from Robots Direct

To prevent conflict between robots and humans, ethics researchers are trying to develop ethical guidelines and social practices for these machines. Wallach, Allen, and Smit discuss directions for researching the value and limitations of morally intelligent agents (robots that value human morals) in their pioneering paper "Machine Morality: Bottom-Up And Top-Down Approaches For Modelling Human Moral Faculties." They establish two perspectives on integrating moral codes into machines: the top-down approach, which teaches machines ethical theories, and the bottom-up approach, which builds systems that aim at moral practices that are more practical and not specified by theoretical ideas. Wallach, Allen, and Smit also argue that implementing a moral code in an artificial intelligence's decision-making system is crucial for social mechanisms (the systems that allow robots to socialize with humans) because this process is very sensitive and largely involves humans. Because the robots of 2008 were not ready for morality to be encoded into their systems, the researchers could not implement their ideas in any actual robot, and they did not believe humans would soon be able to create robots smart enough for such testing. Nevertheless, they consider the concepts they established crucial for the engineers who will design future robots. In 2012, four years after their paper, Pritchard published a critical response to it. He argues that even though Wallach, Allen, and Smit did not suggest that humankind can create robots capable of learning morality in the near future, there is another crucial question the first paper did not address: how should humans measure the success of implementing a moral code in a robot?
Pritchard demonstrates this by comparing a robot to a human baby: "we as a human can only hope that the baby will develop into full-fledged moral agents" (p. 1), without a scale to measure that development. The question is how we should expect this to go with a less than fully formed robot. Pritchard's paper, together with Wallach, Allen, and Smit's, is a strong example of research that demonstrates human efforts to develop ethical guidelines for robots from a very theoretical standpoint, which in turn demonstrates the need to develop the guidelines ahead of time.

From an engineering and computer science standpoint, researchers in those fields also believe that machines need practical guidelines to ensure their actions do not harm humans. In his paper, Tonkens addresses the challenges of developing AMAs (artificial moral agents) by identifying an ethical framework that is implementable in machines. Tonkens articulates this issue through a critical analysis of Kantian AMAs, the most innovative computational ethical model implemented in real robots. His analysis shows that Kantian artificial moral machines turn out to be anti-Kantian, meaning the robots were not aware of the moral code; they merely memorized it. This leads him to suggest that other models of ethical frameworks should be implemented in robots, because robots are getting smarter. Instead of focusing on implementing moral codes in robots, Ashrafian argues that robot scholars focus only on human–robot interactions and so overlook one crucial point: the "interactions between intelligent robots themselves" (p. 4). Ashrafian worries that these exchanges may affect humans, pointing to the famous three laws of robotics by science-fiction writer Isaac Asimov. He believes that the rule stating that robots may not injure humans is no longer enough. He notes that in the United Kingdom, the Engineering and Physical Sciences Research Council and the Arts and Humanities Research Council have already introduced a new set of principles for robot designers. Ashrafian suggests that scientists, philosophers, funders, and policy-makers should pay more attention to robot–robot interaction and extend Asimov's three laws of robotics, proposing a fourth law: "all robots endowed with comparable human reason and conscience should act towards one another in a spirit of brotherhood and sisterhood" (p. 13).
Ashrafian's paper is a breakthrough in the sense that he treats robots as human beings and suggests a very interesting way that robots should interact with other robots. Both Tonkens's and Ashrafian's approaches to how robots should behave are examples of the kind of guidelines that will be very important in the future.

The need for practical guidelines for intelligent robots is not only a concern of people in the technology field; legal experts and leaders in society also argue that there need to be clear instructions and social practices for people to handle robots. Millner points out in his study that legal experts are concerned that the laws governing robotics have not caught up with the technology and need to be updated. Millner also argues that artificial intelligence has come of age and that people should begin tackling these problems before they arise. He thinks that current robots can achieve tasks in ways humans cannot anticipate and that "robots increasingly blur the line between person and instrument" (p. 4). Millner states that leaders in science and technology, such as Stephen Hawking and Elon Musk, are also concerned about the possible dangers of artificial intelligence to humans. In January, both of them signed an open letter to AI researchers stating that "without safeguards on the technology, mankind could be heading for a dark future, with millions out of work or even the demise of our species" (p. 7), which demonstrates a great need for people to develop guidelines for robots. Besides that, Petersen asks a fundamental question in his study: if one day humankind can create artificial creatures with intelligence comparable to humans, "Could it be ethical to use them as unpaid labor?" (p. 1). Petersen notes that there is little philosophical literature on this topic, "but the consensus so far has been that such robot servitude would merely be a new form of slavery" (p. 1). This controversy draws the conversation to a similar case study, the genetic engineering of humans. Petersen argues that "if designing eager robot servants is permissible, it should also be permissible to design eager human servants" (p. 1).
In the end, Petersen concludes that the concept of engineering humans to be slaves illuminates the concept of robot servants. These examples demonstrate the large scale of the concern humans have about robots, and the people involved include leaders of society and social and political scientists.

Picture from Huffingtonpost

Even though there is plenty of evidence demonstrating the pressing need for robot ethical guidelines, some research suggests that this concern is a case of analysis paralysis. Romm notes that when the first personal computers came out in the 1980s, some people found them so scary that the term "computerphobia" arose, and he connects this to the current emergence of robots. Romm points to research done by Chapman University, in which a random sample of 1,500 adults ranked their fears of 88 different items on a scale of one (not afraid) to four (very afraid). "The fears were divided into 10 different categories: crime, personal anxieties (like clowns or public speaking), judgment of others, environment, daily life (like romantic rejection or talking to strangers), technology, natural disasters, personal future, man-made disasters, and government" (p. 4). Christopher Bader, a professor of sociology at Chapman and one of the co-authors of the study, says, "People tend to express the highest level of fear for things they're dependent on but that they don't have any control over, and that's almost a perfect definition of technology" (p. 5). The oddest thing about the data is that the top fears, which include reptiles, robots replacing the workforce, and overpopulation, ranked higher than death, loneliness, and theft. By showing this data, Romm argues that people are concerned about robots only because of their ignorance of the topic, not because of any potential harm. He believes that people should not over-analyze the situation and come up with guidelines, laws, and regulations for things that do not yet exist. This kind of counter-argument is always important to consider.
However, the idea that we should not worry about the future is also false, because the things we imagine eventually come true, as Brooks argues: if people in 1985 had been told that they would have computers in their kitchens, it would have made no sense to them, because a "computer" was then a huge machine that was very difficult to use. But now computers are in microwaves, and they are totally different from what people in 1985 perceived as a computer. Brooks thinks that the emergence of robots is similar to the evolution of computers: they will morph and change over time, and so will people's attitudes. Brooks believes that "cars will certainly be more robotic. There will be many more robots in our houses, in our hospitals, in our factories, and in the military" (p. 3). It is safe to say that preventing problems before they happen is the best way to get ready for the future. That said, it is reasonable for people to continue thinking about guidelines for living peacefully with robots.

In conclusion, this article presents different perspectives from the different groups in society involved in the development of robots. Computer scientists believe that humankind is still far from creating intelligent robots that could eventually harm humans. Even so, ethics researchers, legal experts, members of society, and the scientists themselves are concerned, and they see ethical guidelines for robots as the best way to prevent potential problems in the future. Although everyone discussed in this article believes in the concept of guidelines, not all of them agree on what the guidelines should contain. For example, Ashrafian believes that robots should be viewed as a form of human being that can have humane relationships with other robots, while Brooks views the robot as an emerging technology like the computer. This variety of opinions on how humans should treat robots, how robots should treat humans, and how robots should treat each other is the key to successfully creating guidelines for robots: the more people engage in this discussion, the more ideas we gather, which leads us to a better understanding of the relationship between humans and robots, and potentially between humans and humans. It is also possible that in the future intelligent robots will themselves offer opinions on this topic.

Robots can live peacefully with humans. Picture from Horizon Magazine

This argument is very futuristic and out of this world. However, it spurs my curiosity as a writer to see how people think ahead and try to develop solutions for the future. If you are a robot or a human reading this article, at this point you might already see my point: variety is the key element of success. Diversity in opinions, races, and probably machines is what makes our world vibrant and exciting. No matter who we are, we can live together peacefully if we keep ourselves connected to each other and keep dreaming of and discussing a better world.

Works Cited

Ashrafian, Hutan. “Intelligent Robots Must Uphold Human Rights.” Nature 519.7544 (2015): 391. Academic Search Premier. Web. 14 Oct. 2015.

Brooks, Rodney. “The Robot Invasion Is Coming — and That’s a Good Thing.” Discover Magazine. Kalmbach Publishing Co, 13 Sept. 2010. Web. 26 Oct. 2015.

Beam, Alex. “Should Robots Have Rights?” Boston Globe. Boston Globe Media, 13 Feb. 2014. Web. 18 Oct. 2015.

Hanson, Hilary. Robot and Flowers. Digital image. Scientists Fear Sex Robots Could Be Bad For Society. Huffingtonpost, 15 Sept. 2015. Web. 30 Nov. 2015.

Henig, Robin Marantz. Death by Robot. Digital image. The New York Times. The New York Times, 10 Jan. 2015. Web. 30 Nov. 2015.

I, Robot. Digital image. The Signal Watch: I Finally Watch: I, Robot (2004). N.p., n.d. Web. 30 Nov. 2015.

Merrifield, Rex. Integrating Smart Robots into Society. Digital image. Integrating Smart Robots into Society | Horizon Magazine — European Commission. N.p., 16 Dec. 2014. Web. 30 Nov. 2015.

MIT’s Nexi MDS Robot: First Test of Expression. Dir. Cynthia Breazeal. YouTube. Bot Junkie, 1 Apr. 2008. Web. 30 Nov. 2015.

Millner, Jack. “Should Robots Have Human Rights? Act Now to Regulate Killer Machines before They Multiply and Demand the Right to Vote, Warns Legal Expert.” Daily Mail. Associated Newspapers Ltd, 20 July 2015. Web. 18 Oct. 2015.

Murray, Stephen. “Robots.” Computer Sciences. Ed. Roger R. Flynn. Vol. 2: Software and Hardware. New York: Macmillan Reference USA, 2002. 166–168. Gale Virtual Reference Library. Web. 9 Nov. 2015.

Petersen, Stephen. “The Ethics Of Robot Servitude.” Journal Of Experimental & Theoretical Artificial Intelligence 19.1 (2007): 43–54. Academic Search Premier. Web. 18 Oct. 2015.

Pritchard, Michael. “Moral Machines?.” Science & Engineering Ethics 18.2 (2012): 411–417. Academic Search Premier. Web. 14 Oct. 2015.

Robertson, Jennifer. “Human Rights VS. Robot Rights: Forecasts From Japan.” Critical Asian Studies 46.4 (2014): 571–598. Academic Search Premier. Web. 14 Oct. 2015.

Romm, Cari. “Americans Are More Afraid of Robots Than Death.” The Atlantic. Atlantic Media Company, 16 Oct. 2015. Web. 26 Oct. 2015.

“The History of Robot Development in Japan.” Robots Direct. N.p., n.d. Web. 30 Nov. 2015.

Tonkens, Ryan. “A Challenge For Machine Ethics.” Minds & Machines 19.3 (2009): 421–438. Academic Search Premier. Web. 3 Oct. 2015.

Wallach, Wendell, Colin Allen, and Iva Smit. “Machine Morality: Bottom-Up And Top-Down Approaches For Modelling Human Moral Faculties.” AI & Society 22.4 (2008): 565–582. Academic Search Premier. Web. 3 Oct. 2015.