Who is morally responsible when an autonomous robot takes a decision to kill a human being? (Response)

Maggie Zhang
A Study in AI Ethics
6 min read · Feb 5, 2020

Captain Judith Gallagher displays an anti-IED robot known as the “Dragon Runner” in London. The robot fits in a backpack and has a camera to look under and around vehicles and other obstacles. The images are sent to a control center in real time. Source: https://science.howstuffworks.com/robots-replacing-soldiers.htm

A response to “Killer Robots” (Sparrow, 2007) and “Responsibility for Military Robots” (Lokhorst and van den Hoven, 2012)

Killer Robots

Robert Sparrow argues that one day robots, drones, and other weapon systems will be fully autonomous: no longer remotely controlled, but making decisions "on their own." This raises an ethical problem, because autonomy is thought to confer moral responsibility. To be autonomous is to be in control of oneself, and it follows that if an autonomous agent decides to perform some action, then that agent is responsible for that action.

Suppose, for example, that an autonomous weapon system (AWS) decides to bomb a group of enemy soldiers who have clearly already surrendered. Who is responsible for this war crime?

Sparrow offers three answers to this question:

  • The robot itself
  • The person who programmed the robot
  • The commanding officer who ordered the use of the robot

However, Sparrow argues that none of these answers is correct, because each fails to meet a precondition for fighting a just war: that someone can be held morally responsible for each enemy death that occurs.

Sparrow argues that the programmer cannot be held responsible because the weapon system is autonomous: it can learn from experience and make new decisions that are not a direct result of its programming. The commanding officer also cannot be held responsible, because the weapon's autonomy means that orders do not determine its actions, and military personnel are not directly responsible for decisions they cannot control. Finally, the robot itself cannot be morally responsible for its actions because it cannot be punished or made to suffer for the deaths that result. Furthermore, even if an AWS were advanced enough to be sentient and capable of suffering, this would undermine the motivation for creating machines for war in the first place: the underlying reason we want to deploy machines is to reduce harm to our own soldiers, and a fully moral robot's death or suffering would be just as regrettable as a soldier's. Thus, Sparrow concludes, we are left with a responsibility void and cannot assign moral responsibility to any one agent.

Sparrow believes there is an ambiguous space in the spectrum of responsibility where an autonomous weapon may remove some of the responsibility of a commanding officer or soldier, yet not be autonomous enough for full responsibility to be conferred on the AWS. He agrees that one way of clarifying responsibility is to require that any decision to kill first be approved by a human overseer. However, he argues that AIs will one day be fully autonomous, and there will be cases where it is tempting to hand over control or rely on AIs to make split-second decisions.
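
To make the human-overseer safeguard concrete, here is a minimal sketch (in Python, with hypothetical names and data, not anything drawn from Sparrow's paper) of a control loop in which the system may act autonomously on non-lethal options but must halt and obtain explicit human approval before any lethal action:

```python
from dataclasses import dataclass


@dataclass
class ProposedAction:
    target_id: str
    lethal: bool        # whether the proposed action uses lethal force
    confidence: float   # system's confidence the target is a lawful combatant


def human_approves(action: ProposedAction) -> bool:
    """Block until a human overseer explicitly approves or rejects the action."""
    answer = input(
        f"Approve lethal action on {action.target_id} "
        f"(confidence {action.confidence:.2f})? [y/N] "
    )
    return answer.strip().lower() == "y"


def decide(action: ProposedAction) -> str:
    # Non-lethal actions may proceed autonomously; lethal actions never do.
    if not action.lethal:
        return "proceed"
    return "engage" if human_approves(action) else "abort"
```

The point of the sketch is simply that the lethal branch cannot execute without a human in the loop, which is exactly the arrangement Sparrow expects to be abandoned once split-second decisions are at stake.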

Responsibility for Military Robots

In response, Lokhorst and van den Hoven question Sparrow's assumption that all autonomous weapons are "killer robots," pointing out that they may in fact be designed to avoid killing as much as possible. Since this is a much more attractive option, they propose that such robots are actually morally superior to human soldiers, because they can temporarily incapacitate an enemy rather than being left with no option but to kill. They argue that "artificially intelligent military robots that save lives are preferable to humans (or bombs) that kill blindly."

Lokhorst and van den Hoven then go on to question Sparrow's initial assumption that the robot cannot be held morally responsible because it cannot be punished. They point out that it is not fair to assume robots will never be able to suffer, and that punishment may not be a desirable way of correcting unwanted behavior in any case. They question why punishment is even necessary, and advocate instead for "treatment" or rehabilitation of the robot's software or hardware to change its behavior.

They then argue that the robot and the commander are both causally responsible for outcomes that follow when a commander deliberately sees to it that a robot takes a certain action, but that only the commander is morally responsible, because the robot has no choice but to do what the commander requires. In doing so, they draw an important distinction between different types of responsibility.

Finally, the authors suggest that even though morally superior intelligent military robots are preferable to humans, we must think carefully about how to build them: ethical principles should be built precisely into the hardware or software in a way that constrains the robot's behavior. They also question whether ethics is a matter of logic or something modeled on human psychology and decision-making. They advocate that robots be designed to be transparent and to avoid killing to the maximum extent possible, rather than as inscrutable killer robots.
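
As a rough illustration of what "building ethical principles into the software" might look like, here is a sketch of explicit, inspectable rules that veto a proposed action before it is executed. The rules, field names, and structure are hypothetical illustrations of the idea, not a design taken from Lokhorst and van den Hoven:

```python
from typing import Callable, Dict, List

Action = Dict[str, object]  # e.g. {"lethal": True, "target_status": "surrendered", ...}


def forbids_lethal_force_on_surrendered(action: Action) -> bool:
    """No lethal force against anyone who has surrendered."""
    return bool(action.get("lethal")) and action.get("target_status") == "surrendered"


def forbids_unlogged_lethal_force(action: Action) -> bool:
    """Transparency: every lethal action must carry an audit record."""
    return bool(action.get("lethal")) and not action.get("audit_id")


CONSTRAINTS: List[Callable[[Action], bool]] = [
    forbids_lethal_force_on_surrendered,
    forbids_unlogged_lethal_force,
]


def permitted(action: Action) -> bool:
    """An action is permitted only if no ethical constraint forbids it."""
    return not any(rule(action) for rule in CONSTRAINTS)
```

Because each constraint is a named, readable rule, the robot's refusals can be explained after the fact, which is one way of reading the authors' demand for transparency over inscrutability.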

They conclude that human beings cannot transfer moral responsibility to their products when unexpected results occur, and cannot claim diminished responsibility for consequences brought about by their products. They claim that designers of autonomous robots are "design responsible" in all cases, and that "designers, producers, managers, overseers, and users are and remain always responsible," though it is difficult to apportion that responsibility.

Conclusions

While Sparrow's approach is cautious in assigning responsibility to any one entity, his assumptions are too simplistic and do not account for circumstances in which the robot is programmed to avoid killing as much as possible. Indeed, the hope that robots can make "safer" decisions than humans is the main reason behind promoting autonomous vehicles and artificial intelligence. Sparrow also refrains from apportioning responsibility in specific ways; as Lokhorst and van den Hoven show, one can be held responsible in moral, causal, and design-specific ways, which could help close the responsibility void. Sparrow further assumes, too narrowly, that punishment is the only way of holding an entity morally responsible; as Lokhorst and van den Hoven point out, punishment may actually be counterproductive when trying to correct unwanted behavior in autonomous systems. Rewriting software, performing tests in closed systems, and other forms of "rehabilitation" may be better ways of ensuring a system operates favorably.

Importantly, Lokhorst and van den Hoven highlight that everyone involved in a decision to kill, whether designer, producer, user, or robot, is responsible in some way, but only certain entities can be held morally responsible. The question of assigning "how much" responsibility, however, may become hairy. For example, when a commander controls a subordinate AWS, the commander has no control over the particular outcome the AWS produces, but does have control over a probabilistic outcome, and therefore over a risk of imposing harm. If we take this probabilistic approach, how do we define risk in terms of probabilities?
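
One naive way to make "control over a probabilistic outcome" concrete is to treat the commander's risk as the expected harm of deploying the AWS, summed over its possible outcomes. The probabilities and harm scores below are invented purely for illustration:

```python
# (probability of outcome, harm score if it occurs, description)
outcomes = [
    (0.90, 0,   "AWS incapacitates the target non-lethally"),
    (0.08, 10,  "AWS uses lethal force against a lawful combatant"),
    (0.02, 100, "AWS uses lethal force against a surrendered or protected person"),
]

expected_harm = sum(p * harm for p, harm, _ in outcomes)
print(f"Expected harm of deployment: {expected_harm:.2f}")  # 2.80
```

Even this toy calculation exposes the hard questions: who assigns the probabilities, who decides the harm scores, and at what threshold of expected harm does deploying the system become a choice the commander can be blamed for?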

Lokhorst and van den Hoven subscribe to the classical view that moral responsibility consists of causal responsibility plus intention. It is questionable whether machines will ever have anything like "intention," and on that view a robot could never be ascribed moral responsibility. However, there are other ways of viewing moral responsibility, for example as a continuum running from no responsibility to full responsibility. Until it is settled whether moral responsibility is an objective or a relative quality, there is no easy way of saying whether a fully autonomous robot is morally responsible for a decision to kill.

If the continuum view is adopted, however, then the boundaries between moral, task, role, and legal responsibility become blurred. Our willingness to assign moral responsibility to a robot depends on its degree of autonomous power, since autonomy encompasses the key factors of causality and intention mentioned above. A robot's behavior should therefore be judged for its moral quality, and that judgment should become one of the ways we evaluate robots. While it is still difficult to assign responsibility in objective measures, it is clear that robots should be designed with ethics-based control systems before they reach the point of fully autonomous power.
