AI machines as moral agents, Accountability vs responsibility (part 3)
Continuing from part 2, here I introduce Floridi and Sanders’ distinction between accountability and responsibility. In everyday use these concepts are close to synonymous, but here they are defined differently in order to separate who is to blame for an event from who caused it.
For a mission statement, see Part 1 — for an index, see the Overview.
2. How and why an artificial agent can be a moral agent
The main theme in this text is to argue that an artificial agent can be a moral agent. The set of moral agents is a subset of agents; moral agents are agents that perform moral acts. I will have an in-depth discussion about defining agency in chapter 3; in this chapter I want to argue for why it is a good idea to define an artificial agent[1] as a moral agent. This is needed because merely saying that the agent can be a moral agent could easily be rejected as a trivial fact that does not significantly affect our moral reasoning. To do this, I will first explain Floridi and Sanders’ distinction between accountability and responsibility, because this distinction enables the weaker claim that artificial agents can be accountable, which should not be confused with the stronger claim that they can be responsible. I will then analyze Floridi and Sanders’ reply to a counterargument against this distinction and conclude that the distinction is not only valid but also morally relevant.
2.1. Accountability vs responsibility
Imagine the following scenario:
Pedestrian crossing: A person is crossing the street on a pedestrian crossing and is hurt by a car (controlled by an agent) that failed to stop in time. The pedestrian was aware that a car was heading toward them, but the pedestrian was following the traffic laws, was clearly visible, and there was plenty of opportunity for the car to stop in time.
The responsibility for the accident is usually distributed among three agents:
1) The driver is responsible for how the car is controlled when it is moving.
2) The owner is responsible for the physical car, paying for parking, insurance and so on.
3) The manufacturer is responsible for making a product that behaves as they claim it behaves — making sure it is safe to use and specifying how it should be operated and maintained to keep it safe.
If the driver was inattentive and the accident could have been avoided had she been attentive, then it is reasonable to assign some responsibility to the driver. If the owner had skipped due maintenance of the brakes, then the responsibility shifts to the owner instead. If the manufacturer was aware that this car was equipped with substandard brakes but did not offer a fix (maybe because they did not want this fact to be publicly known), the manufacturer should bear some of the blame. Now, when we try to find out who is responsible, do we mean who caused the accident or do we mean who is to blame for it?
When it comes to blame, let us imagine that the owner knew the brakes were not working properly but, unbeknownst to the driver, lied and said the car was safe. In this case, if the driver tried to brake at an appropriate distance from the pedestrian but pushing the brake pedal had no effect, the accident was unavoidable for the driver. As stated before, the blame now shifts from the driver to the owner. How much of the blame shifts depends on the situation, for example what measures the driver took to make sure the car was safe and how convincing the owner was when stating it was safe; the shift could be from a low to a high degree. Granted that the blame shifts to the owner, can we really say that the owner directly caused the accident? The owner had no control over the speed the car was driving at, whether it kept to the road, or where it was going. The owner had lied about an essential fact to the driver, an act for which the owner should be blamed, but the owner did not cause the accident directly. The owner rather changed the terms for how the driver should evaluate the situation and made no effort to make the driver aware of this, which put the driver in an unfair situation, but this is not the same as directly causing the accident.
This is what Luciano Floridi and J.W. Sanders refer to when they say that accountability is different from responsibility (2004, pp. 366–367). Accountability in the pedestrian crossing scenario concerns the agent that directly caused the accident, while responsibility concerns which agent(s) (if any) should be blamed for the accident (or praised, if it was a praiseworthy event)[2]. This distinction is used by Floridi and Sanders to claim that an artificial agent can be an accountable moral agent without being a responsible moral agent. The distinction is important when it comes to artificial agents because “You do not scold your webbot[3], that is obvious” (2004, p. 366). Floridi and Sanders note that scolding a webbot seems ridiculous because blame is mostly thought of as reserved for humans.
Blame is an essential part of ethics, and there are at least two rival accounts of why one would praise or blame an agent: the merit-based view and the consequentialist view. The merit-based view says that one should praise or blame the agent because the agent “deserves” it. The consequentialist view says that one should praise or blame the agent if it leads to a desired change in the agent’s behavior (Eshleman, 2016, p. 5). If you blame a webbot, there is no mechanism in the webbot that can understand the concept of blame or what it means to “deserve” something, and if you want the webbot to change its behavior you should just change its programming; blaming adds nothing beyond this. Responsibility also implies legal and compensatory dimensions — if an agent is considered responsible it can be found guilty in a court of law and be sentenced to prison or to compensate for the act it was found responsible for, neither of which is applicable to artificial agents[4] (Floridi & Sanders, 2004, pp. 367–368). Whether or not it is, or will be, possible to produce an artificial agent that could be held responsible is a separate issue from whether it could be held accountable, and I will not explore responsibility further except to make this distinction clear. This should not be interpreted as saying that responsibility is less important than accountability. On the contrary, responsibility is a more important and relevant topic for moral discussions, but that does not mean that what is left to accountability is unimportant and should be ignored. Now that we have looked at some reasons why holding an artificial agent responsible does not seem appropriate, let us turn to what it takes to hold it accountable, and leave praise, blame, punishment, compensation and desert to the concept of responsibility.
Essentially, what is left is some of the defining features of being an agent. If an agent were to harm someone with a knife, the knife might in one sense be the cause of, or accountable for, the wound, but since the act did not originate from the knife, it cannot be held accountable for the act. The knife is not an agent, but a person holding it could be. This is true of most artifacts, but if the artifact is autonomous enough, artificial agency could be possible, depending on how you define agency. We will assume for now that we have accepted a definition of agency that enables an artifact to be defined as an agent, as I will argue in the next chapter. The argument then turns on whether any artifact can be “free enough” for this “thinner” type of moral agency: accountability.
To answer this question, let us consider one of Aristotle’s two conditions for responsibility[5]. Aristotle’s control condition is this: it must be up to the agent whether to perform the action — the agent must be in control of it, and it cannot be compelled externally (Eshleman, 2016, p. 5). My comment here is that it must also be ‘physically possible’ for the agent to perform the action. This is mostly a trivial point, because it seems silly, in e.g. the pedestrian crossing scenario, to consider options like “why did the agent not make the car fly over the pedestrian, or make it so light that it did not cause any damage” and so on. An unlimited number of such trivial actions could be said to be outside the control of the agent simply because they are not physically possible for the agent to perform. These cases are not of interest here; the interesting cases are those where it is not trivial to say that the agent is not in control, as when the car has no working brakes. This is because assuming that the action was within the agent’s control (the car has working brakes and using them would have avoided the accident) is not only a fair assumption, it is also a strong reason why we do not consider driving cars near pedestrians as morally bad by definition. The point is that it is fair to assume that the car has working brakes, but when we find out that the assumption is incorrect, we can state that ‘the act of braking’ was not within the agent’s control. Acts that are not physically possible should not even be included in the set of acts we evaluate as being within or outside the agent’s control.
However, the control condition is more about the agent being free enough to be in control of the action. As Eshleman notes, much philosophical debate since then has been about whether (and how) determinism is a problem for the control condition and thereby for moral responsibility (2016, pp. 5–6). I will separate this problem into two issues, concrete freedom and conceptual freedom, because I will argue that concrete freedom is applicable to accountable agents while conceptual freedom is not.
What I call conceptual freedom is the freedom an agent needs in order to be conceptually free enough to act at all, that is, the freedom that could be threatened by determinism. Determinism is the view that all events are completely determined by previously existing events, and incompatibilists say that if determinism is true, then an agent can never be free in the way the control condition demands (Clark et al., 2017). Now, the general problem raised by conceptual freedom is an important philosophical question, but it asks whether there can be any moral responsibility at all, not what it takes for a specific agent to meet the control condition. I am not taking any stand on determinism or incompatibilism here; my point is that this raises questions that are not relevant here. Also, it is more about whether we can hold the agent responsible, not accountable, since the agent still caused the accident in some sense.
By concrete freedom I refer to what Floridi and Sanders explain as the “practical counterfactual”: the agent could have acted differently had they chosen differently, and the agent could have chosen differently because the agent is advanced enough[6] (2004, p. 366). For humans, concrete freedom simply means having a choice and the ability to act on one of those choices. For programmed artificial agents, this poses a serious problem. Imagine that the car in the pedestrian crossing scenario is a simplistic version of an autonomous car. It is equipped with a radar that can detect obstacles and a brake system that is applied to avoid collisions. If the brake system listens to the radar system and initiates the brakes, the accident is avoided. If the brake system for some reason does not get the order to brake, the accident is not avoided. In neither of these cases does the car have concrete freedom. It just does what it is programmed to do. This is what is meant by the control condition’s ‘to be compelled externally’ and what Neely calls an agent that lacks autonomy, because the agent’s goals are completely determined by an outside source (2014, p. 102). I will set this problem aside for now and return to it in 4.1; for now it is enough to conclude that this problem needs to be addressed for a programmed artificial agent to meet the control condition.
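To make the lack of concrete freedom more tangible, here is a minimal sketch, in Python, of the simplistic autonomous car described above. It is not taken from Floridi and Sanders or from any real vehicle; the function names, the single distance threshold and the branching logic are all illustrative assumptions. The only point of the sketch is that, given the same inputs, the program always takes the same branch.

```python
# A minimal, hypothetical sketch of the "simplistic autonomous car" above:
# a radar reading and a brake controller that reacts to it. The names and the
# threshold are illustrative assumptions, not a real vehicle architecture.

BRAKING_DISTANCE_M = 25.0  # assumed distance at which braking must begin


def radar_detects_obstacle(distance_m: float) -> bool:
    """The radar 'decides' nothing; it just compares a reading to a threshold."""
    return distance_m <= BRAKING_DISTANCE_M


def brake_controller(obstacle_detected: bool, brakes_working: bool) -> str:
    """Return the car's action; the same inputs always give the same output."""
    if obstacle_detected and brakes_working:
        return "brake"         # the accident is avoided
    if obstacle_detected and not brakes_working:
        return "brake failed"  # the accident is not avoided
    return "continue"          # no obstacle within braking distance


# The scenario: a pedestrian 20 metres ahead.
print(brake_controller(radar_detects_obstacle(20.0), brakes_working=True))   # brake
print(brake_controller(radar_detects_obstacle(20.0), brakes_working=False))  # brake failed
```

Whatever the outcome, the car has not exercised concrete freedom: which branch is taken is fixed entirely by its programming and its inputs, so there is no point at which it could have chosen differently.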
The conclusion is that it does not seem appropriate to hold an artificial agent responsible, that is, to blame it, but holding it accountable could be appropriate. An accountable agent is an agent that originated the moral act, and to originate the act it must have concrete freedom in order to meet the control condition. Let us next consider a counterargument against the usefulness of this distinction.
Comments:
In retrospect, this subject is in itself too big to be only a chapter in a master’s thesis. On the other hand, I think it is mainly interesting when connected to artificial agents, since the concept of blame is problematic when you visualize the AI agents of today — they are simply too dumb. Maybe when we have domestic robots that can at least appear to be hurt by your comments or punishments, blaming them won’t seem as far-fetched. But shouting at your robot vacuum cleaner seems a bit silly.
Also, I think the distinction between accountability and responsibility would have needed to be more grounded in moral philosophy, but then I would never have been able to fit it all within a master’s thesis. Anyway, I still think it is interesting, and since blame for artificial agents is problematic, this is an interesting way of slowly accepting that some advanced agents should be considered “light” moral agents.
Go to part 4.
Footnotes:
[1] This of course assumes that an artifact can in fact be an agent. In this chapter I will assume that this is the case. More on this in chapter 3.
[2] It could be possible to be responsible without being blameworthy, and maybe even blameworthy without being responsible, but I will not take any stance on either of these claims. For my purpose here, responsibility is what is associated with blame and vice versa.
[3] A piece of software that performs a specific task on the internet, such as filtering spam; “bot” as in “robot”.
[4] There are serious discussions on ‘digital persons’, where artificial agents would be granted legal status, which might change some of this; it could make at least monetary compensation possible. To “punish” an artificial agent, we first need to establish that there is something the artificial agent cares about. For example, Jessica Neely (2014) suggests that ‘interest’ might be such a feature. However, these are suggestions and possibilities for the future, not established truths.
[5] Of course, Aristotle made no distinction between accountability and responsibility. This means that responsibility here, and when I reference the control condition henceforth, encompasses both responsibility and accountability. I will always use ‘accountability’ according to Floridi and Sanders’ distinction.
[6] Floridi and Sanders’ exact formulation is that the agent should be “interactive, informed, autonomous and adaptive” which I have summarized as “advanced enough” since I will get more into the details of these criteria in chapter 4.