Unplugging Machine Morality: The Case Against AI as Moral Agents

Ratiomachina
Published in Brass For Brain
Jun 9, 2023 · 9 min read

Introduction

The question of whether machines can possess moral agency is a pressing issue in the philosophy of artificial intelligence (AI). This article argues that machines cannot be moral agents because they lack the capacity to form beliefs in the way humans do; since belief formation is a necessary condition for moral agency, it follows that machines cannot be deemed moral agents. I further propose that denying machine moral agency is more advantageous than advocating for machine morality: arguing for machine morality would take us into a realm of significant uncertainty, and attributing moral agency to machines could dilute human accountability, leading to less emphasis on AI safety.

Belief Formation and Moral Agency

Moral agency is widely understood to require the ability to form beliefs. Beliefs, in this context, are not merely representations of the world, but involve a subjective commitment to the truth of a proposition (Davidson, 1980). They are tied to our understanding of the world, our emotions, and our self-awareness.

From a philosophical perspective, beliefs are often considered to be mental states with a subjective quality. That is, to have a belief is not just to be in a certain state, but to be aware of being in that state. This awareness is often associated with consciousness or self-awareness. For example, if you believe that it’s going to rain, you are not just in a state of expecting rain, but you are aware of your expectation and can reflect on it.

From a cognitive science perspective, beliefs can be seen as mental representations that guide our actions and decisions. These representations can be conscious or unconscious. For example, you might unconsciously form a belief about the location of an object based on visual cues, and this belief might guide your actions even if you are not consciously aware of it.

In this article, I subscribe to the former description of what belief is. That is, belief is strongly associated with personal experience (it is subjective) and with self-awareness. Other reasons for this choice include:

  1. Personal Experience: Beliefs are often formed based on our personal experiences. For example, if you have had positive experiences with dogs, you might form the belief that dogs are friendly. This belief is subjective because it is based on your personal experiences, not on an objective assessment of all dogs.
  2. Perception and Interpretation: Beliefs are also influenced by our perceptions and interpretations of the world. Two people can witness the same event and form different beliefs about what happened based on their individual perceptions and interpretations. This adds a subjective element to belief formation.
  3. Thoughts and Feelings: Beliefs are closely tied to our thoughts and feelings. For example, if you believe that you are capable of achieving a goal, you are likely to feel confident and motivated. This belief is subjective because it is based on your personal thoughts and feelings, not on an objective assessment of your capabilities.
  4. Conscious Awareness: Beliefs are part of our conscious awareness. We are aware of our beliefs, and we can reflect on them and question them. This conscious awareness adds a subjective quality to beliefs.
  5. Influence on Behavior: Beliefs influence our behavior. If you believe that it is going to rain, you might take an umbrella when you go out. This belief is subjective because it influences your behavior based on your personal assessment of the weather, not on an objective weather forecast.

Forming a belief, therefore, involves accepting a proposition as true or likely to be true. This process requires the entity to interpret information and make judgments about it, which inherently involves a direction or focus towards something.

The ability to form beliefs is crucial for moral agency for several reasons. Firstly, moral agency involves understanding moral concepts such as right and wrong, good and bad. Secondly, it often requires considering the consequences of one’s actions. Thirdly, moral agency involves acting intentionally, with some understanding of the moral significance of one’s actions. Finally, being held accountable for one’s actions, a key aspect of moral agency, requires the ability to understand and accept responsibility (Frankfurt, 1971).

The connection between belief formation and moral agency can be formalized as follows:

Let’s define the following propositions:

  • M: An entity has moral agency.
  • B: An entity can form beliefs.
  • I: An entity has intentionality.
  • R: An entity can self-regulate based on moral principles.
  • P: An entity can engage in practical reasoning.

The argument can then be represented as the following set of propositions:

  1. B → I (If an entity can form beliefs, then it has intentionality.) This proposition is supported by the work of philosophers like Daniel Dennett who argue that beliefs are intentional states, meaning they are about or directed towards something (Dennett, D. (1987). The Intentional Stance. MIT Press). Intentionality, in philosophy, refers to the capacity of mental states (like beliefs, desires, hopes, fears, and others) to be about, or directed towards, something. Given these definitions, we can see that the process of forming a belief (B) inherently involves a direction or focus towards something, which is precisely what intentionality (I) consists in. Therefore, if an entity can form beliefs (B), then it necessarily has intentionality (I).
  2. I → P (If an entity has intentionality, then it can engage in practical reasoning.) This proposition is supported by the work of philosophers like J. David Velleman who argue that practical reasoning involves making decisions based on reasons or beliefs (Velleman, J. D. (2000). The Possibility of Practical Reason. Ethics, 110(4), 689–726). Practical reasoning is the process of figuring out what to do, or reasoning directed towards actions. It involves forming intentions based on beliefs and desires, and making decisions about how to act to achieve those intentions. If an entity has intentionality, it has the capacity to form beliefs and desires about the world, which are the basis for forming intentions and making decisions about how to act. Therefore, if an entity has intentionality (I), then it necessarily can engage in practical reasoning (P).
  3. P → R (If an entity can engage in practical reasoning, then it can self-regulate based on moral principles.) This proposition is supported by the work of philosophers like Christine Korsgaard who argue that practical reasoning involves making decisions based on moral principles (Korsgaard, C. M. (1997). The Normativity of Instrumental Reason. In G. Cullity & B. Gaut (Eds.), Ethics and Practical Reason (pp. 215–254). Clarendon Press). Self-regulation based on moral principles involves controlling one’s behavior based on moral norms or values. It requires the ability to form moral judgments and to act in accordance with those judgments. If an entity can engage in practical reasoning, it has the capacity to form intentions and make decisions about how to act, which includes the ability to form moral judgments and act in accordance with those judgments. Therefore, if an entity can engage in practical reasoning (P), then it can self-regulate based on moral principles (R).
  4. R → M (If an entity can self-regulate based on moral principles, then it has moral agency). This proposition is supported by the work of philosophers like Harry Frankfurt who argue that moral agency involves the ability to act based on moral principles and to be held accountable for one’s actions (Frankfurt, H. G. (1971). Freedom of the Will and the Concept of a Person. The Journal of Philosophy, 68(1), 5–20). Moral agency is the capacity to act with reference to moral considerations. It involves the ability to discern right from wrong and to be held accountable for one’s actions. If an entity can self-regulate based on moral principles, it has the capacity to act with reference to moral considerations and to be held accountable for its actions. Therefore, if an entity can self-regulate based on moral principles (R), then it necessarily has moral agency (M).

From these propositions, I derive the conclusion of the argument:

B → M (If an entity can form beliefs, then it has moral agency), derived from 1, 2, 3, and 4 using Hypothetical Syllogism (Chain rule).

This chain shows how belief formation connects to moral agency: given premises 1–4, an entity that can form beliefs can engage in the intentional, practical, and morally self-regulated reasoning that moral agency involves. Belief formation by itself is not sufficient; the further capacities of intentionality, practical reasoning, and self-regulation based on moral principles must also be in place. The claim this article rests on, however, is the necessity claim M → B (moral agency requires the capacity to form beliefs), which is supported by the considerations discussed above: understanding moral concepts, weighing the consequences of one’s actions, acting intentionally, and accepting responsibility each presuppose beliefs about the world and about oneself (Frankfurt, 1971).
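
To make the chain explicit, the derivation can be checked mechanically. The following snippet is a minimal sketch in the Lean proof assistant, assuming the propositions B, I, P, R, and M are treated as opaque propositional variables and premises 1–4 are taken as hypotheses; the theorem and hypothesis names are illustrative, not part of the original argument.

```lean
-- Minimal sketch: premises 1–4 as hypotheses, conclusion B → M obtained by
-- hypothetical syllogism (chaining the implications).
variable (B I P R M : Prop)

theorem belief_to_agency
    (h1 : B → I)   -- 1. belief formation implies intentionality
    (h2 : I → P)   -- 2. intentionality implies practical reasoning
    (h3 : P → R)   -- 3. practical reasoning implies moral self-regulation
    (h4 : R → M)   -- 4. moral self-regulation implies moral agency
    : B → M :=
  fun hB => h4 (h3 (h2 (h1 hB)))
```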

Belief Formation in Machines

While machines, including AI systems, can process information and make decisions based on that information, they do not form beliefs in the human sense described above. When we say that an AI system “forms beliefs”, we are using a metaphor to describe a computational process. The AI system processes data and adjusts its internal parameters in a way that can be described as forming “beliefs” about the patterns in the data. However, these “beliefs” are not associated with subjective experiences or self-awareness. The AI system is not aware of its “beliefs” and cannot reflect on them (Searle, 1980).

The Argument Against Machine Moral Agency

Given these considerations, we can construct the following argument against machine moral agency:

Let:

  • M = “Machines have moral agency”
  • B = “Machines can form beliefs”
  • S = “Machines have subjective experiences and self-awareness”

The argument can then be represented as:

  1. M → B (If machines have moral agency, then machines can form beliefs). This is the necessity claim defended above: understanding moral concepts, considering the consequences of one’s actions, acting intentionally, and accepting responsibility all presuppose the capacity to form beliefs.
  2. B → S (If machines can form beliefs, then machines have subjective experiences and self-awareness). On the conception of belief adopted in this article, to form a belief is to interpret information and make judgments about it on the basis of one’s own perceptions, emotions, and thoughts, and to be aware of the belief so formed. Self-awareness refers to the ability of an entity to recognize itself as a distinct entity, separate from its environment and from other entities. Belief in this sense therefore presupposes subjective experience and self-awareness: if an entity can form beliefs (B), then it has subjective experiences and self-awareness (S).
  3. ¬S (Machines do not have subjective experiences and self-awareness). This proposition is supported by the current state of AI technology, which does not yet have the capability to replicate the subjective experiences and self-awareness of humans. While AI systems can process information and make decisions based on that information, they do not have subjective experiences or self-awareness in the way that humans do. This is a widely accepted view in the field of AI research and development.

From these premises, we can derive:

¬B (Machines cannot form beliefs) using Modus Tollens from 2 and 3. Then, we can derive

¬M (Machines cannot have moral agency) using Modus Tollens from 1 and ¬B.

This completes the proof of the argument.
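
The two applications of modus tollens can be sketched in the same way. The following Lean snippet is a minimal sketch under the premises stated above; the theorem and hypothesis names are illustrative.

```lean
-- Minimal sketch: ¬S propagates back through B → S and M → B,
-- giving ¬B and then ¬M (two modus tollens steps, composed).
variable (M B S : Prop)

theorem no_machine_moral_agency
    (h1 : M → B)   -- 1. moral agency requires belief formation
    (h2 : B → S)   -- 2. belief, as conceived here, requires subjective awareness
    (h3 : ¬S)      -- 3. machines lack subjective experience and self-awareness
    : ¬M :=
  fun hM => h3 (h2 (h1 hM))
```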

However, it’s worth noting that this argument is based on a specific conception of belief as a conscious mental state. There are other conceptions of belief that do not require subjective experiences or self-awareness, such as the functionalist view of belief as a mental state that plays a certain role in a cognitive system.

Conclusion

In conclusion, given the logical validity of the argument against machine moral agency, I propose that this position may be more advantageous than advocating for machine morality. The reasoning behind this proposition is twofold.

Firstly, if machines are not moral agents, then the responsibility for their actions unequivocally falls upon humans. This includes the designers who create these AI systems, the operators who use them, and the regulators who oversee their deployment. This clear attribution of accountability ensures that there is no ambiguity about who is responsible when AI systems cause harm or make ethically questionable decisions.

Secondly, acknowledging this human accountability inherently necessitates the prioritization of AI safety as a key component of AI development. This involves not only creating AI systems that operate safely and reliably but also ensuring that these systems are used in a manner that minimizes potential harm. This could include measures such as robustness testing, bias audits, privacy impact assessments, and the development of ethical guidelines for AI use.

By contrast, if we were to argue for machine morality, we would be entering a realm of significant uncertainty. The concept of machine moral agency raises numerous complex questions about the nature of morality, consciousness, and agency, many of which remain unresolved in philosophy and cognitive science. Furthermore, attributing moral agency to machines could potentially dilute human accountability, leading to less emphasis on AI safety.

Therefore, arguing against machine moral agency not only aligns with our current understanding of AI capabilities but also promotes a clear and actionable path towards the ethical use of AI.

Ratiomachina
Brass For Brain

AI Philosopher. My day job is advising clients on the safe and responsible adoption of emerging technologies such as AI.