On the Determination of Moral Responsibility Using Fully Autonomous Weapons Systems

Maya Guru
A Study in AI Ethics
5 min read · Feb 5, 2020

In determining where responsibility lies in cases involving artificial intelligence, particularly autonomous weapons systems, it is first important to define what is really meant by the term “autonomous”. A system that receives orders from humans to determine its targets, for example, is not an independently thinking system, even if it carries out its operations without direct human control. Suppose instead that a system is autonomous if and only if both its decision-making and its actions are carried out independently of any overseer. By this definition, it is only logical to hold the machine itself morally responsible for its own actions; however, the implications of moral responsibility for an autonomous system differ vastly from those for a human.

Firstly, it is necessary to justify why moral responsibility must be allocated at all in the case of autonomous weapons systems. Many authors argue that, under jus in bello, the ability to hold someone responsible is a necessary condition for waging war justly (Lokhorst & van den Hoven, 2012). While this remains true, a larger issue surrounding these systems is that without a consideration of morality and of the ethical implications of their actions, no one will be held morally responsible when mistakes occur and lives are unnecessarily lost, and it will be impossible to know how to prevent such cases. A thorough consideration of the ethics of systems that have the potential to take human lives is necessary for their implementation not simply because of societal convention, but also to serve the larger goal of more ethical decisions in war and fewer needless deaths.

It is also important to note why the definition chosen above for the term “autonomous” is the most apt. Consider an autonomous system that receives orders from a human about whom or what to target, yet carries out its actions independently. Now suppose that this system carries out a deed that violates the legal code governing military operations. Although the system carried out the action, it never had the option to refuse the order; it bears what Lokhorst and van den Hoven call “forced moral responsibility”. They argue further that it is the commander who Deliberatively Sees To It That (DSTIT) the target is attacked; the system, prior to receiving its order, had no option to decline it (2012). If an “autonomous” weapons system cannot control which actions it carries out but only how it carries them out, then nothing truly differentiates it from any other weapon commonly used on a battlefield, and it should be treated as any other weapon would be in terms of moral responsibility. To call a weapons system truly autonomous, it must be both causally and deliberatively responsible for all of its actions; in other words, all of its actions must be in accordance with its own free-thinking will.
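For readers who want the formal backbone of that claim, here is a compact sketch of the standard dstit (“deliberatively sees to it that”) operator from the stit-logic literature that Lokhorst and van den Hoven build on; the notation below is an illustrative paraphrase, not a quotation from their chapter.

[α dstit: φ] is true at a moment m and history h if and only if:
(1) φ is true on every history belonging to the choice that agent α makes at m (the positive condition: α’s choice guarantees φ); and
(2) φ is false on at least one history passing through m (the negative condition: φ was not already settled independently of α).

On this reading, a system that merely executes a commander’s targeting order does not deliberatively see to the attack: it is the commander’s choice, not the system’s, that settles the outcome, so deliberative responsibility stays with the commander.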

Given this definition, a truly autonomous weapons system must be responsible for its own actions by its very nature; it is illogical to hold another party responsible for the actions of an autonomously thinking party. Counterarguments tend to conflate moral responsibility with legal responsibility, which is not at issue in this argument. As Robert Sparrow states in his discussion of the ethics of autonomous weapons systems, “…the possibility that an autonomous system will make choices other than those predicted and encouraged by its programmers is inherent in the claim that it is autonomous” (2007). This conclusion is sound, given that an autonomous system must be operating from its own belief system.

Next, one must determine how to deal with cases involving the moral responsibility of an autonomous weapons system. One flaw in Sparrow’s reasoning is that he assumes moral responsibility implies legal liability, which in turn implies that punishment is in order. He states, “in order to be able to hold a machine morally responsible for its actions it must be possible for us to imagine punishing or rewarding it” (2007). However, the reasoning behind punishing anyone morally responsible for a crime is that humans are averse to pain and suffering, which deters them from a given act before it is even performed. Quantifying and programming a punishment and reward system is a lost cause; it achieves nothing in the case of autonomous weapons systems because it would not prevent the bad behavior before it begins. Society cannot be expected to treat autonomous weapons systems as it would humans when dealing with cases of conflict, because the concepts of emotion and empathy are absent. More importantly, the families of those who have died as a result of autonomous weapons systems deserve more than anything to know that cases such as their loved ones’ will not happen again; punishing the robot brings no justice to those who have actually suffered as a result of these systems, because humans cannot truly empathize with a programmed being, even one programmed to “feel”. Thus, the biggest implication of moral responsibility is the necessity of preventing similar cases in the future. This goes back to the reason moral responsibility was assigned in the first place: to progress toward a society with fewer unethical acts of violence.

In conclusion, the most rational way to assign moral responsibility for the actions of a truly autonomous system is to assign it to the system itself. Although it may seem strange that an inanimate being programmed by a human can be responsible for its own actions, the nature of such a system renders it illogical and unfair to assign blame to anyone else involved. Given this, a decision must then be made about how to resolve the moral conflict within the system, and it becomes the programmer’s responsibility to improve the system so that such conflicts do not arise again. By holding the machines responsible for their actions, society can work toward a common goal in which autonomous weapons systems have heightened accuracy and improved ethical decision-making, creating a new era of war in which lives are not lost due to human flaws.

References:

Lokhorst, G.-J., & van den Hoven, J. (2012). Responsibility for military robots. In P. Lin, K. Abney, & G. A. Bekey (Eds.), Robot Ethics: The Ethical and Social Implications of Robotics. Cambridge, MA: MIT Press.

Sparrow, R. (2007). Killer robots. Journal of Applied Philosophy, 24, 62–77. doi:10.1111/j.1468-5930.2007.00346.x
