AI machines as moral agents, introduction (part 2)

H R Berg Bretz
5 min read · Oct 21, 2021


As explained in my mission statement (part 1), this is the bite-sized version of my master’s thesis in philosophy, with comments. The introduction, however, needs no comments. For an index, see the Overview.

Photo by Guilherme Stecanella on Unsplash

1. Introduction

I argue that an artificial agent can be a morally accountable agent. By artificial agent I mean an artifact, a man-made object (physical or virtual), that is advanced enough to act independently and to interact with the world around it, usually by means of software. An accountable moral agent is an agent that is the origin of a moral act, an act that affects a moral patient.

The rise of artificial intelligence (AI) technology has spurred many discussions on this subject: for example, David Davenport (2014) argues for moral mechanisms in artificial agents, Steve Torrance (2014) compares a realist with a social-relation perspective on the moral status of artificial agents, and Johnny Hartz Søraker (2014) sketches an alternative way to define moral agency. AI is a broad term that refers to a number of computational practices allowing machines to perform one or more cognitive functions, including perception, learning, and language use. The word intelligence in artificial intelligence is problematic, since it is unclear what exactly defines intelligence. Nevertheless, artifacts like machines are becoming more and more advanced, partly because of machine learning, and some of them are now fairly autonomous. Their behavior therefore increasingly resembles that of intelligent agents like humans, which suggests that they can, or could, be considered agents in their own right.

Furthermore, the very concept of agency, the property of being an agent, is sometimes said to require consciousness, or at least very advanced psychological traits exhibited (so far) only by humans. This would mean that simply by calling something an ‘artificial agent’ you effectively say that the artifact has, or needs to have, these properties. On the other hand, we sometimes refer to artifacts as ‘being the active cause’, e.g. ‘the robot broke the vase’, which implies that the robot is an agent. This shows the need for a clear and unambiguous definition of what is meant by agenthood and by artificial agent.

Today there are many examples of advanced artifacts: systems that can diagnose medical patients[1], identify humans and objects in still and moving images, play recreational games like chess and Go, drive cars, and so on. Autonomous vehicles (self-driving cars) often come up in discussions of moral reasoning and AI, but then it is usually the morality of the programmer or the guidelines of the manufacturer that is being discussed, not the actual agent. For example, Amitai and Oren Etzioni (2017) argue that self-driving cars have no agency and that claiming the contrary is misleading, and Deborah G. Johnson says that an artifact can never be a moral agent, although artifacts are “components in human moral action” (2009, p. 195). Floridi and Sanders, on the other hand, argue that an artificial agent can be a moral agent (2004).

I will argue as follows. In the next chapter (chapter 2) I will try to show how it is possible to say that an artificial agent can be a moral agent, and that this can facilitate important ethical discussions. This argument is enabled by Floridi and Sanders’ distinction between accountability and responsibility, which I will explicate in some detail.

In chapter 3, I will address the concept of agency. If by ‘agency’ what is meant is ‘intentionality agency’, with consciousness as a necessary feature, then ‘artificial agent’ is either an empty class or refers only to artificial agents that are conscious. I will argue that agency should not be defined by Brentano’s intentionality, then explicate Barandiaran et al.’s conditions for a wider definition of agency and show that this ‘minimal’ agency is preferable in discussions of artificial agency.

In chapter 4 I will compare the minimal agency definition with Floridi and Sanders’ criteria for moral agency, and I propose that the learning criterion is what makes the minimal agent also a moral agent.

In chapter 5 I will address the concern that, although agency might not be defined by consciousness, consciousness is nevertheless necessary for moral agency, as suggested by Kenneth Einar Himma (2009) and Deborah G. Johnson (2006). I will argue against this; rather than engaging in contentious philosophical debates, I will try to show that the argument for a reduced agency can avoid these problems.

I will talk of AI mostly as a technology, since it is a very wide and loosely defined term, but an artificial agent could be (and is primarily thought of as) an agent that is the product of AI technology[2]. I will sometimes clarify by using ‘mere agent’ to denote an agent that is not necessarily a moral agent, simply an agent. Lastly, I want to note that although the question of whether collective agents[3] can be moral agents is related to this topic and is used as an argument in some of the texts referred to here[4], I will leave it out, because I believe it is too complicated to do it justice as an ancillary argument.

Here is Part 3!

Footnotes:

[1] Whether such systems in fact exist is challenged by IBM’s failure in clinical tests at the Giessen and Marburg hospital in Germany. However, Zhang Kang has shown that an AI could predict childhood illnesses with 91–97% accuracy; it outperformed junior pediatricians, but not senior ones. https://www.nature.com/articles/s41591-018-0335-9 and https://www.newscientist.com/article/2193361-ai-can-diagnose-childhood-illnesses-better-than-some-doctors/ (2019–05–18)

[2] See Johnson (2006), pp. 196–198, for an overview of the challenges in defining natural and human-made entities/artifacts, and of the distinction between artifacts and technology.

[3] E.g. groups, institutions, and corporations that consist of several agents.

[4] Floridi and Sanders claim that their definition of a moral agent allows groups to be moral agents (2004, p. 376). Himma argues that, strictly speaking, groups cannot be moral agents in their own right; the members of the group should be morally evaluated individually (2009, p. 27). Barandiaran et al. do not address it, but consider their minimal agency the groundwork for future discussions of the topic (2009, p. 12).


Written by H R Berg Bretz

Philosophy student writing a master’s thesis on criteria for AI moral agency. Software engineer of twenty years.