IEET Ongoing Dissertation Award 2022

Photo by DeepMind on Unsplash

Artificial intelligence (AI) is currently used in numerous applications and deployed in diverse domains, such as healthcare, e-commerce and the legal sector. In my research, I focus on the military use of AI in the context of the development of autonomous weapons systems. The increase in the degree of autonomy in some decision-making systems has led to discussions on the possible future use of lethal autonomous weapons systems (LAWS). Debates around LAWS at the international level are often heated. Various parties, such as the Campaign to Stop Killer Robots, a group of 30 countries at the UN and numerous scientists, advocate a full preventive ban on LAWS without exception. Others, such as the United States, Russia and Israel, believe that the current International Humanitarian Law (IHL) framework is sufficient. A central issue in these discussions is the attribution of responsibility for bad outcomes caused by LAWS. The aim of my research is to contribute to the ongoing debate by exploring the morality of the use of LAWS and the problem of assigning responsibility.

Radical technology?

Throughout history, new weapons technologies have significantly influenced the way people wage war. With discoveries and improvements in AI, the possibility of lethal autonomous weapons systems came into view. Despite various attempts within the framework of the Convention on Certain Conventional Weapons (CCW), there is still no universal definition of such systems, so several definitions are currently in circulation. Although there are many differences, most understand LAWS to mean the following: systems that, once activated, can select and engage targets without further intervention by a human operator. The Campaign to Stop Killer Robots has released fictitious footage depicting their possible future use in order to warn about the danger. At first glance, these systems seem radically new, a revolution in the history of weapons technology. A closer look, however, reveals that LAWS are a gradual rather than a discontinuous development in the history of weapons technology, though they do have particular characteristics, such as scalability.

Let us first focus on the fact that there is a certain continuity with semi-autonomous weapons. Consider the many advantages cited for the introduction of LAWS that already applied to semi-autonomous weapons. These include operational benefits, such as the ability to remain airborne for long periods of time (allowing for more surveillance), the ability to conduct more stealth operations and faster execution of tasks, as well as economic benefits, with a reduction in the cost of warfare achieved by cutting operational costs through more efficient use of human resources. The biggest benefit of LAWS that is often put forward is the radical reduction in risk to the human operator, which would result in less physical and mental suffering for soldiers. Many of these arguments are also often used in the opposite direction. For example, a reduction in personnel costs could also result in a greater inclination to go to war, and there are studies that point to the psychological suffering involved in killing at a distance, because high-tech cameras make the details of a strike visible to the operator. The history of weaponry teaches us that humans have been increasingly kept out of the loop, from the bow and arrow and the gun to the advent of air and naval power and their corresponding systems, such as aircraft carriers, unpiloted aircraft, and cruise missiles. Even in the case of nuclear weapons, the argument of reducing the risk for soldiers was used. After the experience of the high casualties in the Battle of Okinawa during the Pacific War between American and Japanese forces, in which more than 12,000 Americans were killed, one of the central arguments at the end of the Second World War for the atomic bomb attacks on Japan was that they would save the lives of many (American) soldiers.

It is often argued that the development and future use of autonomous weapons are truly motivated by the goal of removing human error, and that this purpose distinguishes LAWS from other technologies. But in any war, in addition to protecting one's own personnel, the goal is to eliminate enemies as effectively as possible while minimizing collateral damage. To that end, human operators make use of a multitude of tools in both the planning and the execution of operations. These include tools for finding, identifying, and engaging targets, for processing incoming information, for calculating risks, and so on. All these tools reduce the likelihood of human error, increase accuracy, and contribute to IHL compliance. One might note that the reduction of human error has so far always been achieved by introducing better tools, not by replacing humans themselves. However, that replacement of humans is not an end in itself. Many of the features presented as unique to autonomous weapons, such as reducing risk to the human operator and eliminating human error, are inherent in the development of all weapons technology. The underlying goal of the development of LAWS appears to be the same as that of other weapons: to maximize damage to the target in order to achieve the expected military advantage, with as little collateral damage as possible. To achieve this, the means remain the same: observing for longer, improving response times, seeing more accurately, extending reach, and so on.

New ethical problems?

Where, then, does the concern come from that this technology represents a radical disruption and will create major new ethical problems? One answer lies in the power of scalability. Stuart Russell refers to LAWS as scalable weapons of mass destruction. Scalable here means that a system is able to perform more tasks not by deploying more human resources but simply by adding more hardware. Unlike semi-autonomous weapons, such systems no longer need human operators to operate or supervise them. Further, unlike nuclear weapons, the raw materials for their development are readily available and affordable, their development does not require prohibitive infrastructure, and it mostly involves combining existing techniques. Nuclear, chemical, and biological weapons of mass destruction are indiscriminate in nature and are prohibited under various international treaties. Because of their enormous and partly unpredictable consequences (e.g., due to their high dependence on weather conditions), they tend to be used mainly for deterrent purposes. In addition to scalability and savings in personnel costs, another advantage of autonomous weapons would be their ability to operate without a communications link while maintaining accuracy. This allows for operations in areas that were previously off-limits for reasons of connectivity, or when the communication link is jammed. In short, it seems possible to identify characteristics that distinguish autonomous weapons from earlier advancements and that could fundamentally change the nature of strategy and warfare. But will this also necessarily lead to new moral problems?

Systems with a high degree of autonomy have so far been used mainly in demarcated areas or in areas where obstacles are less likely, such as at sea and in the air. An urban environment such as a city or village, with many people and therefore a high probability of unexpected change, seems much less suitable for the deployment of autonomous weapons systems. Furthermore, the requirement for a high degree of precision is also an obstacle. A system that needs to recognize a highly uniform object that always approaches from the same direction is easier to develop than one that must distinguish between people. This becomes even more complicated in the case of individual or situational targeting, where the identification of enemies cannot be based solely on certain distinctive signs but must be inferred from a particular individual's role with respect to the hostilities or from observed behavior. The most advanced systems known so far have been capable of performing only relatively simple tasks in relatively simple environments. In the longer term, LAWS could overcome this paradigm. This has led, and continues to lead, to several debates in the international community about compliance with the rules of international humanitarian law, human dignity, and responsibility. I believe that society can thrive by implementing artificial intelligence in the right way, but the more functions artificial intelligence acquires, the more attention we must pay to the normative challenges of the technology. This is especially true in a field like the military. My doctoral research seeks to make a modest contribution in this regard.

Past research

One of the central questions is whether LAWS will be able to meet the jus in bello requirements of distinction, proportionality, and necessity. For example, there is concern that LAWS will not be able to distinguish between combatants and non-combatants. These issues are related to and highly dependent on future technological developments, and it seems that once the technology meets the required threshold in humanitarian law, there is no longer a legal obstacle to the future use of certain systems. In addition, the issue of the allocation of responsibility is often raised. The aim of my research is to contribute to the ongoing debate on attributing responsibility for errors made by LAWS by investigating whether it would be necessary to alter our current concept of moral responsibility.

The system’s autonomy and high degree of self-learning capacity seem to result in less control and thus less responsibility for human actors. Some authors even go so far as to claim that the increasing level of autonomy in weapons systems will lead to a so-called ‘responsibility gap’. According to this view, it is impossible to identify anyone who can be held responsible for harms caused by LAWS. Others emphasize that LAWS are not created ex nihilo and that it is therefore possible that some people always can, and should, be held responsible. To move forward in the research on LAWS and the problem of responsibility, it is important to increase our understanding of the different perspectives and discussions in this debate. To date I have sought to do so by disentangling the various underlying arguments.

Some ways in which autonomous and semi-autonomous systems can fail do not fit neatly into the current paradigm of individual moral responsibility, since in many cases the human beings interacting with the weapons systems would not satisfy the requisite intent requirement under existing doctrine. This is especially so in so-called ‘hard cases’, where no individual acts intentionally but LAWS nevertheless take an action that causes serious harm. In recent months I have been exploring ways in which human actors can be held responsible for mistakes made by LAWS. I have examined the responsibility of military commanders in detail, looking at how the powers commanders have to curtail the autonomy of LAWS differ from their powers over human soldiers, and how this difference would affect their responsibility.

Future research

So far, I have focused mainly on the concept of control and how it relates to moral responsibility. However, there is a second condition that is crucial to the attribution of moral responsibility but that may pose problems in connection with LAWS. This is often referred to as the epistemic or knowledge condition. According to an ordinary conception of the attribution of responsibility, it is fitting to hold someone responsible only if that agent could foresee that the device will, or is likely to, cause a certain kind of outcome. There is no agreement among philosophers on the necessary and sufficient conditions for the attribution of moral responsibility, but many agree that such an epistemic condition is necessary. In the case of LAWS, this poses difficulties for the attribution of responsibility to humans. The intrinsic complexity of the software operating on LAWS makes it difficult to determine (and prove!) the degree of knowledge a person should possess, and the degree of information available to them, such that it is possible (and fair!) to hold him or her responsible. In the coming months, I plan to look specifically at this condition and come up with a threshold or framework.

Besides investigating the equitable attribution of responsibility to all human actors involved in the use of LAWS, another option is to explore the assignment of responsibility to non-human actors. Until now, most legal systems and scholars have qualified AI as a ‘tool’ or ‘material good’ and therefore attribute responsibility only to human actors. However, the legal practice of criminal responsibility for corporations already exists in some countries, and parallels could be drawn for LAWS. In the second part of my research I therefore plan to focus on this point and examine the extent to which the system itself can be held responsible and what the implications of this would be for the victim’s right to a legal remedy.

Empirical research by Jobin et al. (2019) has shown that within existing guidelines, references to ‘responsible AI’ are frequent, but the term is rarely defined. Additionally, within those guidelines we see discussions about which actors are responsible for AI’s actions and whether AI itself can or should be held responsible. There is currently much discussion about regulating LAWS, including proposals in the EU. I will try to contribute to this debate with respect to military applications of AI.

--

Ann Katrien Oimann
Institute for Ethics and Emerging Technologies

Doctoral researcher in Philosophy focussing on the intersection of philosophy, technology and law.