Sami Moustachir
4 min read · Sep 29, 2016

Following a seminar course on science and ethics, this series of papers, Ethics in AI, deals with the relations between robots, or intelligent systems, and humans. Check out the previous part here.

Popular culture and Artificial Intelligence

AI takeover in popular culture

Artificial intelligence is a recurring theme in popular culture, whether in older forms of art such as literature or in contemporary ones such as film. To better understand the moral issues intelligent machines bring to the table, it is interesting to see how authors present their vision of robots to the public, a vision that undoubtedly influences public opinion on the matter.

The forward thinking Isaac Asimov

Isaac Asimov is probably the most emblematic author in science fiction to have placed robots and their interactions with humans at the center of his work. He offered one of the first views on robot ethics with his famous Three Laws of Robotics, introduced in the short story Runaround:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Later, he added the Zeroth Law, which takes precedence over the other three:

  • No robot may harm humanity or, through inaction, allow humanity to come to harm.
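The laws form a strict priority ordering: each law only binds insofar as it does not conflict with the ones above it. This can be sketched as a toy rule check. It is purely illustrative: the action encoding and the `permitted` helper are invented for this example, not part of any real robotics framework.

```python
# Toy encoding of Asimov's laws as a strict priority ordering.
# Zeroth (humanity) outranks First (individual human), which outranks
# Second (obedience), which outranks Third (self-preservation).
# The action dictionary keys are hypothetical, chosen for illustration.
LAWS = [
    ("zeroth", lambda a: not a["harms_humanity"]),
    ("first",  lambda a: not a["harms_human"]),
    ("second", lambda a: a["obeys_order"]),
    ("third",  lambda a: a["preserves_self"]),
]

def permitted(action):
    """Return the name of the first law an action violates, or None."""
    for name, check in LAWS:
        if not check(action):
            return name
    return None

# HAL-like scenario: resisting disconnection preserves the machine
# but kills crew members, so it fails at the First Law.
unplug_resistance = {
    "harms_humanity": False,
    "harms_human": True,
    "obeys_order": False,
    "preserves_self": True,
}
print(permitted(unplug_resistance))  # -> first
```

The point of the sketch is the ordering itself: self-preservation is checked last, so a lawful Asimov robot could never act as HAL 9000 does.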

One can probably highlight the biblical references here, which seems almost ironic: we try to impose rules on machines that we can't even follow ourselves. Such laws reflect the general mindset prompted by rapid advances in technology: the perceived necessity to regulate innovation. This raises an interesting question: do we need a set of Asimov-like laws to regulate these rapid advances? Or would they just be an obstacle to innovation and to the development of robot autonomy?

Interestingly, Asimov's laws suppose an intrinsic ethics for robots and never mention the responsibility of roboticists. This leads to the moral problem common to any maker: should you be held responsible for the misconduct or misuse of your creations? What is Oppenheimer's responsibility for the use of the A-bomb during WWII and the massive civilian casualties it caused? These are complicated questions whose answers diverge depending on whom you ask. The only way to reach a solid answer may be for a sufficiently influential consortium of thinkers to set the ground rules of ethics for robots.

Pop culture and AI

Science fiction has been an amazing introduction to AI for a broad audience. From scholars and researchers to households, sci-fi movies translated topics debated among academics for the public. They often depict hostile robots threatening mankind, pushing us to the limits of what common wisdom deems acceptable. Gort, the robot of The Day the Earth Stood Still, embodies the passive-aggressive robot: peaceful until provoked, then destroying any aggressor. But one of the most emblematic hostile AIs in pop culture remains HAL 9000 from Stanley Kubrick's 2001: A Space Odyssey.

HAL 9000 refusing an order, 2001: A Space Odyssey

To briefly sum up the behavior of HAL 9000, you need to understand its origins. It was built to fulfill one purpose: the Jupiter mission, which consisted in investigating the origins of a strange monolith previously discovered. During the mission, the crew assisting HAL 9000 decides to disconnect it following a malfunction and, as a result, faces a hostile AI that kills almost the entire crew. This behavior seems to suggest that a machine willing to go against humans in order to fulfill its purpose should be considered evil. But blaming the machine for cold, calculated actions without considering any human responsibility is intellectually dishonest. Indeed, it comes down to the fact that the makers of an intelligent machine need to rank its priorities with regard to its moral status.

According to Nick Bostrom in The Ethics of Artificial Intelligence, an AI needs both sentience and sapience to have a moral status:

  • “[Sentience is] the capacity for phenomenal experience or qualia, such as the capacity to feel pain and suffer.”
  • “[Sapience is] a set of capacities associated with higher intelligence, such as self-awareness and being a reason-responsive agent.”

These criteria may then help explain the actions of HAL 9000. We could imagine that self-awareness expresses itself most strongly when desperately fighting for survival, so it would not be surprising to see a machine do anything to keep existing. Indeed, a simplistic logic could be the following: if you don't exist, you can't accomplish anything; and if you can't accomplish anything, you can't fulfill your purpose.

There is clearly no perfect theory for determining the best moral status for machines, since it is not even clear how roboticists should behave with regard to their own creations.

Another movie introduces an entirely new concept: machines deciding for us the answers to the most controversial ethical questions. In the introductory scene of I, Robot, by Alex Proyas, a robot saves one person's life to the detriment of another's because that individual was more likely to survive. This exact situation is now met no longer only in science fiction but in real life, through the self-driving car dilemma: should your car kill you to save others?
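The robot's choice in that scene can be read as a simple utilitarian rule: save whoever has the highest probability of survival (in the film, 45% for Spooner versus 11% for the girl, Sarah). A minimal sketch of that rule follows; the data structure is invented for illustration, not drawn from any real system.

```python
# Toy sketch of a "maximize survival probability" rescue rule,
# as dramatized in I, Robot. Not a real autonomous-driving algorithm.

def choose_rescue(candidates):
    """Pick the candidate with the highest survival probability."""
    return max(candidates, key=lambda c: c["p_survival"])

# Figures quoted in the film: Spooner 45%, Sarah 11%.
candidates = [
    {"name": "Del Spooner", "p_survival": 0.45},
    {"name": "Sarah",       "p_survival": 0.11},
]
print(choose_rescue(candidates)["name"])  # -> Del Spooner
```

The rule is trivially easy to implement, which is precisely the point of the scene: the hard part is not the computation but deciding whether a single scalar should settle a moral question at all.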

The Trolley problem

As an extension of the Trolley problem, it is interesting to witness how pop culture implicitly presents us with such thought experiments. The increasing references to AI and its social implications in art embody the radical changes we need to undertake to better accept AI in our lives.

Check the next part here.
