AI machines as moral agents: Is consciousness required? Johnson’s arguments (part 12)

H R Berg Bretz
Mar 3, 2022


In Part 11 I delved into Himma’s arguments for why consciousness is a necessary part of agency and moral agency. Now it’s time for Deborah G. Johnson’s arguments.

For a mission statement, see Part 1 — for an index, see the Overview.

Photo by Yuyeung Lau on Unsplash

5.2. Johnson’s arguments

In the paper “Computer systems: Moral entities but not moral agents” (2006) Deborah G. Johnson states that there are five conditions for human behavior to be considered actions that can be morally evaluated. The five conditions are:

1) The entity has internal mental states.

2) There is an outward embodied event.

3) The mental state is the cause of the event.

4) The outward event has an effect.

5) The effect affects a patient. (2006, p. 198)

“In short, computer behavior meets conditions 2–5 as follows: when computers behave, there is an outward, embodied event; an internal state is the cause of the outward event; the embodied event can have an outward effect; and the effect can be on a moral patient.” (2006, p. 198). It is the first condition that computers cannot meet, because it requires the internal states to be mental states, one of which must be an intending to act. An intending to act arises from the agent’s freedom, a freedom the computer does not have. A free agent has a non-deterministic character, and although that character is in some ways mysterious, it is distinct from the ways a computer system can be non-deterministic. More accurately, we have no way of knowing whether a computer can be non-deterministic in the same way a human can, and therefore the computer cannot be said to be free (2006, p. 199–200). While computer systems therefore do not have ‘intendings to act’, they do have intentionality[1], since the system was programmed to behave a certain way by a designer. Although this is not enough to make them moral agents, such artifacts are closer to being moral agents than natural objects are (2006, p. 201–2).

As an example of this, she considers a landmine. The landmine’s moral impact on a child who is killed by it is the result of the intentionality of the designer, the artifact and the user. It is a mistake to think of the behavior of the landmine as unconnected to human behavior and human intentionality. Now, a landmine is a simple artifact, but the same principle applies to computers too, even if they are more complicated. Adding a learning feature to the artifact makes it harder for the designer and user to predict the artifact’s future behavior, but this does not break the connection to the fact that it was designed for a specific purpose. Even when it learns, it learns as it was programmed to learn (2006, p. 203).

Johnson’s point that it is the triad of designer, artifact and user that is morally significant has great relevance for ethical discussions, especially when it comes to modern technology. Nonetheless, the specific question at hand is whether some artifacts can break that connection and become moral agents on their own. Johnson’s argument applies to most artifacts today, maybe all, but can she really say that it is never possible for an artifact to break the connection to the designer and the user? It is hard for me to see how she can.

As she states herself, the landmine is a simple artifact. No one is claiming that simple artifacts are moral agents, or even minimal agents. Her addition that this “applies to more complex and sophisticated artifacts” is true for most artifacts, but I would say that my distinction between directly and indirectly programmed artifacts is an example of a level of complexity that could make a difference. The design is a multi-stage process: first you produce a neural network with the best potential to learn, then you tell it what it should learn. The second step, as Davenport argued in the previous section, is similar to how parents bring up their children. The analogy suggests that if the artifact’s moral behavior is shaped by input from the environment in a way similar to how a human’s behavior is shaped by it, then at some threshold level the connection with its designer is broken. For sure, Johnson is correct in saying that artifacts will always have been designed by humans and that this will always differentiate them from humans, but does this fact really exclude them from the possibility of being moral agents? Could there not be a point where the difference between human agents and advanced artifacts is one of mere history, and where that history alone should have no bearing on whether they are considered moral agents? I conclude that this argument is not strong enough to exclude this possibility.
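
To make the distinction concrete, here is a minimal sketch of what I mean by directly versus indirectly programmed artifacts. It is my own hypothetical illustration, not taken from Johnson or Davenport: the directly programmed artifact has every response written down by its designer, while the indirectly programmed one is given only a learning rule, so its eventual behavior depends on the “upbringing” the environment provides.

```python
# Hypothetical illustration only: two artifacts built from the same design,
# where the second one's behavior is shaped by the input it happens to receive.

import random


def directly_programmed_artifact(situation: float) -> str:
    """Every response is explicitly fixed by the designer."""
    return "act" if situation > 0.5 else "refrain"


class IndirectlyProgrammedArtifact:
    """The designer supplies only a learning rule; the decision boundary
    emerges from whatever examples the environment provides."""

    def __init__(self) -> None:
        self.weight = 0.0
        self.bias = 0.0

    def learn(self, situation: float, feedback: str) -> None:
        # Perceptron-style update: nudge the parameters toward the feedback.
        target = 1.0 if feedback == "act" else -1.0
        output = 1.0 if self.weight * situation + self.bias > 0 else -1.0
        if output != target:
            self.weight += 0.1 * target * situation
            self.bias += 0.1 * target

    def decide(self, situation: float) -> str:
        return "act" if self.weight * situation + self.bias > 0 else "refrain"


# Two copies of the same design diverge because they were "brought up"
# on different experiences.
random.seed(0)
a, b = IndirectlyProgrammedArtifact(), IndirectlyProgrammedArtifact()
for _ in range(1000):
    s = random.random()
    a.learn(s, "act" if s > 0.3 else "refrain")  # one upbringing
    b.learn(s, "act" if s > 0.8 else "refrain")  # another upbringing

print(a.decide(0.5), b.decide(0.5))  # same situation, likely different responses
```

The point of the sketch is of course not that a perceptron is a moral agent, only that once a significant part of the “programming” comes from outside, the designer’s intentionality underdetermines the final behavior.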

When it comes to the argument about freedom, I find that it too lacks strength. Agreed, the non-deterministic nature of artificial agents is a complicated question, and it is plausible that it is quite different from human non-determinism, but since human non-determinism is “mysterious”, it seems a poor starting point for explaining this necessary condition. This is similar to Himma’s argument that we first need a better understanding of free will before we can technologically model it. If the instrumental stance of the belief/desire model is correct, then what needs to be determined is whether the artifact is best described by beliefs and desires or not. Assuming that computers need to be non-deterministic in the same way humans are seems to be an anthropocentric argument, and as long as human freedom is mysterious, it is not clear why something needs to be mysterious in the same way to be free. Floridi and Sanders argue along similar lines when they comment that artificial agents “are already free in the sense of being non-deterministic systems” (2004, p. 366), i.e. they do not have to be non-deterministic in the same way.

The conclusion of this chapter is that Johnson and Himma find the freedom of the agent’s deliberation to be vital for moral reasoning, and that this freedom is mysterious and poses tremendous philosophical difficulties. From this they draw the conclusion that artificial moral agency, or artificial agency, is impossible. I argue instead that these philosophical difficulties cast doubt on whether this freedom really exists and, if it does, on what it consists of. Johnson argues that making a simple artifact more complicated does not give it the freedom it needs to be an agent, but she does not show that this is impossible. Also, indirect programming is a plausible enough analogy for how an artifact could achieve this freedom. I used Himma’s argument that ‘fully determined systems can never achieve the freedom needed for moral agency’ to give a counterexample of how a saturated indirectly programmed system could fail to be free. I then argued against this counterexample by saying that it is actually a case of direct programming, and that as long as a system relies on indirect programming (a significant degree of the artifact’s “programming” relies on external input), it is analogous to how humans develop their moral compass, and the system can be a moral agent.

Comments:

Non-determinism is important in this section, and it is intuitive why. But there are some traits of it that I find a bit confusing. If a system is deterministic, it is just a set of instructions and has no agency; in that sense, humans, who do have agency, are non-deterministic. But the normative condition of agency implies that the agent tries to achieve some goal, and in a sense that makes the agent “more” deterministic. And the non-determinism here cannot be pure randomness either, since complete randomness is not agency. Maybe it is just that the concept ‘non-deterministic’ is too wide.
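
As a rough illustration of why the concept feels too wide (my own toy example, with entirely made-up behaviors), consider three trivial agents: a deterministic one, a purely random one, and a goal-directed one. Only the random one is non-deterministic in the strict sense, yet the goal-directed one, which is the most predictable of the three, is the only one that looks anything like an agent.

```python
# Toy example only: three "agents" choosing the next position on a line.

import random


def deterministic_agent(position: int) -> int:
    """Always moves right: a fixed set of instructions, no deliberation."""
    return position + 1


def random_agent(position: int) -> int:
    """Non-deterministic in the strict sense, but it pursues nothing."""
    return position + random.choice([-1, 1])


def goal_directed_agent(position: int, goal: int) -> int:
    """Moves toward a goal; largely predictable precisely because it is
    trying to achieve something."""
    if position < goal:
        return position + 1
    if position > goal:
        return position - 1
    return position


print([deterministic_agent(p) for p in range(3)])          # [1, 2, 3]
print([goal_directed_agent(p, goal=5) for p in range(3)])  # [1, 2, 3] until the goal is reached
```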

The next and last part is the conclusion!

Footnotes:

[1] Here ‘intentionality’ is used not in Brentano’s sense of the word, but in its ordinary sense.

