AI machines as moral agents: A counterargument to the distinction (part 4)

H R Berg Bretz
6 min read · Nov 20, 2021


This section discusses a counterargument to Floridi and Sanders' distinction from part 3, which helps to explain the distinction in greater detail.
For a mission statement, see Part 1 — for an index, see the Overview.

Photo by Emily Morter on Unsplash

2.2. AI machines as moral agents: A counterargument to the accountability/responsibility distinction

Floridi and Sanders consider a counterargument to the distinction: that it would be conceptually improper to call an artificial agent a moral agent. The argument says that if an agent cannot be responsible, it is not a moral agent.

A first response, they answer, is that this confuses the identification of X as a moral agent with the evaluation of X as a morally responsible agent. Accountability here concerns the identification of the moral agent, and responsibility the evaluation: first we should identify the agent as a moral agent, then we can evaluate whether or not it is responsible. But Floridi and Sanders realize that this response is too quick, because the original objection actually says that the identity class of X without responsibility is empty. It does not matter whether the artificial agent is accountable, because if it cannot be responsible, it is not a moral agent. A similar version of the claim with the same implication would be that being a moral agent is synonymous with being a morally responsible agent (2004, pp. 366–367).

An example might make the objection clearer. I believe Neely inadvertently makes this claim in a footnote, when she declares that “I leave open the question of what it would take for a machine to have moral responsibilities and thus be a moral agent”, implying that moral responsibility is needed to be a moral agent and that, without responsibility, something is not a moral agent[1] (2014, p. 98).

Floridi and Sanders reply to this stronger version of the objection by saying that it is a mistake to reduce moral agency discourse to responsibility analysis. One example is that we consider it good parenting to identify children as moral agents even when they are not (yet) evaluated as responsible moral agents. A second example is search-and-rescue dogs that track missing people and can thereby save lives. The dogs play an important part in the moral game, but we still cannot consider them morally responsible because they most likely do not understand the moral implications of their acts. The consequences of their acts might be morally significant, but their intent is not in line with the moral result. “Whether they mean to play it, or they know that they are playing it, is relevant only at a second stage, when what we want to know is whether they are morally responsible for their moral actions” (2004, p. 365). The third example is the tragic hero Oedipus from Greek mythology who, unbeknownst to himself, marries his mother and kills his father. He might not be responsible for these acts because he was not aware that they were his parents, but he is still accountable for committing both acts. That is why the distinction is useful (2004, pp. 368–369).

I find two of these examples problematic. The first example, with children, is not that clear-cut and involves degrees of responsibility. The younger the child is, the less responsible she is. Responsibility also seems to be correlated with understanding: the less understanding the child has, the less responsibility she has. She also has less ‘agency’, less control over her actions, which affects her accountability. Causal understanding and control seem to affect her accountability, and autonomy should also be a matter of degree; below a certain threshold she would not be considered accountable at all, because she is not an agent. If the levels of accountability and responsibility track each other, then the degree and the threshold could be identical, entailing that accountability without responsibility is not possible. Whether or not this is plausible, the fact that children's autonomy slowly increases as they grow older makes their levels of responsibility and accountability very complicated to assess, and therefore I do not believe this to be a clear example of accountability without responsibility.

In the example of Oedipus, the fact that he was not aware that the person he killed was his father may not lessen his responsibility for the murder, in which case it would also not be a good example of accountability without responsibility[2]. Nonetheless, the examples of Oedipus marrying his mother and of the search-and-rescue dogs are better. If we consider ‘Oedipus marrying his mother’ an act that is morally wrong[3], then even his total unawareness of this fact cannot justify not holding him accountable for the act. Assuming that no one forced or conned him into marrying her, Oedipus should be held accountable for it: he was in control of the act of marrying a woman. What he was unaware of was that the woman was also his mother, so he was not responsible for that outcome, marrying his mother. This would make him accountable, but not responsible.

Search-and-rescue dogs are interesting because they mirror the argument about artificial agents: we generally consider neither dogs nor artificial agents morally responsible, yet both participate in moral situations when they help save human lives. I believe most find it intuitive that neither is responsible because, for one thing, dogs and artifacts do not understand what morality is or which acts are morally good or bad. But both could be accountable in the sense that they perform acts which we consider to have moral effects, and as long as they are the agents that directly caused those effects, we could also hold them morally accountable. For an agent to be considered the direct cause, it must have had concrete freedom to perform or not perform the act in question.

If we accept that it is possible to call an artificial agent an accountable moral agent, why would we want to? Does it serve any purpose? If it does not, then it seems only to confuse our current concepts of moral responsibility and to create a class of moral agents that is not very interesting for moral philosophy. In the next section I will try to justify its usefulness.

Comments:

The example above with children is particularly interesting when compared to the progress of artificial agents. How does that process really work? When children are very young, they are morally equivalent to dogs in the sense that we know their moral awareness and understanding is very low; we can excuse immoral behaviour, or we reprimand them not mainly because they did something wrong, but so that they learn. And at some later point, they are full-fledged moral agents. How would such a process work for an artificial agent? There are of course many ways to argue that artificial agents are substantially different from humans, but do these arguments really hold up? More on this in coming sections.

Go to part 5

Footnotes:

[1] The purpose of Neely’s statement was to say that she will focus on moral patiency rather than moral agency, meaning her intent was not to make claims about agency. But since she still brings up both agency and responsibility, and a clear connection between the two, it at least shows that she intuitively connects them in this way.

[2] It seems more important to question whether it was permissible to kill someone in an altercation that arose when his father almost ran him over with his carriage. If it was, then the fact that it was his father should not play a significant role when distributing responsibility. However, I have been made aware that parricide (killing one’s parent) was considered a much more serious act than simply killing someone in ancient Greece. This might be what Floridi and Sanders meant, thereby lessening my complaint, although I personally still find it unpersuasive. Nonetheless, it is not important for my argument here. https://en.wikipedia.org/wiki/Oedipus (2019–05–16)

[3] I am not trying to argue that marrying one’s mother is morally bad; I am only trying to justify the original claim using Floridi and Sanders’ own example. In the next section I have an example I find more appropriate, that of biased learning in AI.


H R Berg Bretz

Philosophy student writing a master’s thesis on criteria for AI moral agency. Software engineer of twenty years.