AI machines as moral agents, Why artificial moral agents? (part 5)

H R Berg Bretz
Dec 4, 2021 · 5 min read


In Part 4 I discussed a counterargument to Floridi & Sanders’ distinction. Here is a summary of why AI machines should be considered as moral agents. After this, in the next part, I will dive into the subject of Agency — what is it?
For a mission statement, see Part 1 — for an index, see the Overview.

Photo by Jamie Street on Unsplash

2.3 What is the purpose of saying that artificial agents can be accountable moral agents?

In 2.1 I concluded that the distinction between accountability and responsibility is valid, but here I want to address the usefulness of that distinction in moral reasoning. If we cannot find it useful, then it is more a vacuous academic definition than an important distinction. Floridi and Sanders argue for its usefulness by saying “Promoting normative action is perfectly reasonable even when there is no responsibility but only moral accountability and the capacity for moral action” (2004, p. 376). If we take the example of Oedipus marrying his mother, the argument can go something like this. Suppose we consider people marrying their mother a moral problem that needs to be addressed, and it is the case that (i) it is not uncommon that people do this, and (ii) they are not aware that the person they are marrying is also their mother, so that they are accountable but not responsible for the act. Would that not still be a serious moral issue to be concerned about?[1] This might not be as convincing as it could be, since ‘people marrying their mother’ is not that common and might not even be considered a moral issue, so let me give an example that involves artificial agents.

A recent concern with AI technology is that there have been several instances of biased systems[2]. One such discovery is that the companies producing these systems train them on data that contains many more images of white men than of women of color, causing the systems to correctly identify the gender of white men up to 99% of the time, while for women of color the rate was as low as 35%. Imagine, for example, the artificial agent controlling the car in the pedestrian crossing scenario (see Part 3). Let’s say it uses machine learning to identify whether an obstacle is a pedestrian or a non-pedestrian, and it incorrectly identifies women of color as non-pedestrians. This could cause the agent to engage in negative moral behavior by having more accidents with women of color than with white men. Now, finding out who is responsible for this is important, mainly to make sure it does not happen again. But even if no one can be found responsible, the agent still engages in morally bad behavior. And that is an ethical problem that needs to be addressed and discussed. Even if one does find out who is responsible, as long as there are accountable but non-responsible agents out in the world engaging in such moral activities, that activity should be addressed. There is also a difference between addressing something ‘caused’ by non-agents and something caused by agents, because non-agents cannot be the source of the act, and this changes the ethical discussion. This makes the distinction between accountability and responsibility important, and it also makes it possible to call an artificial agent a moral agent.
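To make this concrete, here is a minimal sketch (in Python) of the kind of audit that reveals such a gap: measuring a classifier’s accuracy separately for each demographic group. The function name, group labels and numbers are my own illustrative assumptions, not taken from the systems cited above.

```python
from collections import defaultdict

def per_group_accuracy(examples, classify):
    """Compute classification accuracy separately for each group.

    examples: iterable of (features, group_label, true_label)
    classify: the model under audit, mapping features -> predicted label
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for features, group, truth in examples:
        total[group] += 1
        if classify(features) == truth:
            correct[group] += 1
    return {group: correct[group] / total[group] for group in total}

# Hypothetical usage: auditing a pedestrian classifier trained mostly on
# images of white men might return something like
# {"white men": 0.99, "women of color": 0.35},
# i.e. the kind of disparity described in the text above.
```

The point of the sketch is only that the disparity is measurable: once accuracy is broken down per group rather than averaged over everyone, the morally relevant behavior of the system becomes visible even when no individual can be singled out as responsible for it.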

An objection to this could be that even if these artifacts are highly autonomous, they are still not agents by definition; the artifacts are merely part of a phenomenon that has moral implications that should be addressed. This is essentially what Johnson argues in the following quote:

“Computer systems are components in moral action; many moral actions would be unimaginable and impossible without computer systems. When humans act with artifacts, their actions are constituted by their own intentionality and efficacy as well as the intentionality and efficacy of the artifact which in turn has been constituted by the intentionality and efficacy of the artifact designer. All three — designers, artifacts, and users — should be the focus of moral evaluation” (2006, p. 204)

Much of this discussion turns on whether we define the artifact as an agent. As artifacts become more and more autonomous, there could be some point at which they reach a threshold that makes them agents, and if they are considered agents in their own right, then there seems to be no reason why they should be treated differently from any other agent (Johnson, of course, argues against this, as I will explain further in 5.2).

However, definitions of agency from other areas of philosophy suggest that artificial agency is impossible, or at least that the criteria for agency are so demanding that they require consciousness or properties associated with consciousness. This is a more fundamental problem when trying to argue that moral agency for an artificial agent is possible, because if artifacts cannot be agents at all, then artificial agency becomes a very esoteric and speculative topic[3]. Conversely, if the artifacts have to be so advanced that they are more or less equivalent to humans, then the question of whether an artifact can be a moral agent turns into an argument about what it takes for an advanced artifact to be human, which is not the same argument[4]. That is why I will now turn to the question of defining agency and artificial agency, to address this looming problem.

Comments:
A problem that I find more and more troubling when rereading this is the question: where does one artificial agent begin and another end? If you have a fleet of self-driving cars that all interact with the same server, are they one entity or individual agents? In the case of humans this is not a problem (yet), but when discussing AI machines the answer is not obvious, and it might require completely updated moral reasoning. Anyway, defining agency is still a very interesting topic in itself, as you will see in the next part.

[1] To be convincing, it also needs to be epistemically hard for the person to find out whether their future spouse is in fact their mother, because if it is easy, then there is no reason why the practice should be common. And to say that it is epistemically hard in most cases is not very convincing. Let us not spend any more time on this example.

[2] Here are some examples: a legal sentencing guidance system is said to predict a higher risk of recidivism in black men than what research shows is accurate; a Google image search returned 89% male CEOs although that number is 73% in the US today; and Facebook’s automatic translation chose an aggressive narrative over a mundane “good morning” when translating an Arabic user’s image.

https://www.newscientist.com/article/2166207-discriminating-algorithms-5-times-ai-showed-prejudice/ (2019–05–16).

[3] An analogy would perhaps be to discuss how ‘married bachelors’ might behave.

[4] One can, for instance, argue that humans are becoming more and more artificial since we rely so much on artifacts (cars, smartphones, the internet, software in general), and that, in conjunction with future developments in biotech, future humans will be so intertwined with their artifacts that the two are inseparable, making that future agent an artificial agent. However, since that involves a lot of speculation (and science fiction), it is not the type of artificial agent addressed here.

Written by H R Berg Bretz

Philosophy student writing a master thesis on criteria for AI moral agency. Software engineer of twenty years.