When should AI have moral status?

Kentaro Toyama · Published in AI Heresy · 9 min read · Dec 5, 2023


Image of a robot contemplating the trolley problem. Generated by DALL-E 3. The author does not assert copyright, in line with the non-copyrightability doctrine.

In a recent talk he gave at a top AI conference, David Chalmers, the philosopher of consciousness, mentioned, almost offhand, that “Conscious systems have moral status.” But, is that really the case? I’m not so sure.

Moral status and consciousness are two separate things, though in the everyday world we’ve been familiar with so far, they always occur together in human beings. And, while moral status might seem the kind of philosophical issue that has little practical impact, it matters with the rise of AI. Getting it right sharpens what we need to prove about AI systems before we start suggesting, say, that it’s morally wrong to turn them off.

First, it’s worth clarifying what’s meant by both “consciousness” and “moral status.” I use Chalmers’ own definition of consciousness as “subjective experience.” This is the same as Thomas Nagel’s notion that “there is something that it is like to be” a given entity, as noted in his famous article “What Is It Like to Be a Bat?” Conscious entities can feel things and experience things. They might, for example, experience the color blue in a way most human beings presumably do but that cameras (also presumably) do not. I don’t assume that consciousness requires intelligence, self-awareness, self-consciousness, or any of a number of other attributes often associated with consciousness, but which I believe go beyond simple subjective experience.

What about “moral status”? Philosophers talk about at least two kinds of moral status, and it’s important to treat each separately, because they are very different. The first kind is about whether a particular entity deserves moral consideration. I call this recipient moral status, because it’s a status bestowed on an entity by others. We give other people moral consideration. We tend to give animals some moral consideration, though how much depends on the person and the animal. We don’t generally give inanimate objects, such as rocks or digital cameras, moral consideration.[i] This is the sense in which Chalmers made his claim: “Conscious systems have moral status. If fish are conscious, it matters how we treat them.” (To be clear, though, I believe Chalmers’ claim is insufficient for moral status, as I argue below.)

The second kind of moral status is whether an entity should be expected to act morally. Let’s call this agentic moral status, to distinguish it from recipient moral status. An entity with agentic moral status is a potential provider of moral consideration and carries moral responsibility. We generally assume, for example, that adults have this kind of moral status,[ii] but are disinclined to believe it of newborn babies or factory robots. If we would blame something or someone for causing harm, then we are acting on the belief that it (or he/she/they) has agentic moral status. The philosopher Seth Lazar appears to have been thinking about this kind of moral status in an article he wrote arguing that “To have moral status is to be self-governing.”

So then, how does consciousness differ from moral status? There are separate answers for each type.

For recipient moral status, the question is whether the entity can feel pain. Can it suffer? In this, I agree with the philosopher Peter Singer, who argues clearly and forcefully in his book, Animal Liberation, that we should give recipient moral status to animals because they presumably feel pain. To my knowledge, however, Singer doesn’t consider the question of whether conscious entities that don’t feel pain deserve moral consideration. (He wasn’t considering AI when arguing for animal liberation.) Feeling pain requires consciousness, but it’s conceivable that a conscious entity never feels pain. There could, in theory, be an entity that experiences color and vision in much the way that human beings do, but without ever experiencing pain — much in the way that, most of the time, our vision doesn’t cause pain per se.[iii] I don’t personally believe AI will ever be conscious, but if it ever becomes conscious, it seems possible that it could do so in this way, without an experience of pain.

If it’s not obvious, pain matters because it hurts. It’s only when there’s the possibility of pain that questions of morality arise. Morality is about good and evil, and as Jeremy Bentham wrote, pain is, “without exception, the only evil.”[iv] Everything that we think of as evil, or even mildly ethically questionable, is so only because it causes pain or increases its likelihood, however minor and of whatever type, for some entity down the line. No pain, no evil. No evil, nothing worthy of recipient moral status.

For agentic moral status, Lazar gestures in the right direction. Moral responsibility accrues to entities that are “self-governing,” that can make autonomous decisions of their own free will. Now, it’s not clear exactly what free will is, as it is associated with its own philosophical morass.[v] But, I contend that whatever autonomy and free will are, they do not require consciousness.[vi] The ability to act freely, to make decisions autonomously, does not seem predicated on having subjective experience. Philosophers conceive of “zombies,” which are in all ways like human beings, including in our presumed autonomy, but without any subjective experience. It seems right that they have agentic moral status and should be expected to act morally. (Conversely, it seems entirely possible that one could have subjective experience without having free will, much in the way that when we watch a movie, we are simply experiencing something without direct control over its course. Some interpretations of the Buddhist doctrine of pratītyasamutpāda also suggest that experience without free will is the human condition.[vii])

To summarize the argument so far: for recipient moral status, the primary criterion is the ability to feel pain, for which consciousness is a necessary but insufficient condition; for agentic moral status, it’s autonomous decision-making, which seems possible without consciousness. We human beings, incidentally, have both, because we both feel pain and make autonomous decisions.

These claims have a direct consequence for the question of whether and when AI deserves moral status. In particular, they sharpen what we need to prove before we start believing in moral consideration or responsibility for AI. To begin, it’s important to acknowledge that it is, at least with current philosophy and science, simply impossible to prove objectively whether either of the above conditions for moral status has been met. Does an entity feel pain? We have no means to verify this, such that others can know for sure.[viii] And, while it makes some sense for us to operate on the basis that others like us (i.e., other people, some animals) experience pain, those are just assumptions — assumptions that are much less credible for things like digital circuitry that are very different from us. Whatever the case, we have to have more than mere reports of pain, however explicit, or the appearance of great suffering as communicated through powerful words, piercing shrieks, or seemingly frantic attempts to escape it. All of these can be easily simulated by even fairly unsophisticated code or robotics, which very, very few people would claim experiences actual pain.[ix] And as for whether an entity has free will, i.e., autonomous decision-making power… philosophers and scientists can’t even agree on its definition, to say nothing of verifying whether we have it ourselves.

All of that means that we are nowhere close to being able to prove whether AI should have moral status. But separating moral status from consciousness does, I think, lower the bar, or perhaps sharpen it, for determining moral status. For example, it feels somewhat easier to build an instrument that tests for the existence of subjective pain alone than one that tests for a less focused conception of consciousness: I could imagine, for example, some cyborg instrument with a numeric display showing the pain experienced by an entity in the immediate vicinity, validated by comparing it against people’s reports of pain they experience near the device.[x] Of course, the existence of pain implies the existence of consciousness, and so a pain detector performs to some extent as a consciousness detector — but, somehow, the focus on pain alone feels technically easier to achieve. (In contrast, a device that indicates whether an entity can see the color blue seems less credible a priori.) And of course, if we can agree on a conception of free will that does not require consciousness, then it might make the task of determining moral responsibility easier.

Conversely, if we can’t even meet these lower bars for assigning moral status, then perhaps humanity is simply not ready to be assigning moral status to non-animals just yet. In a world in which we are radically unable to ensure baseline relief from routine physical distress for all human beings, whose moral status is generally not in doubt, why should we be concerning ourselves with the merely possible (and highly unlikely) moral status of things we didn’t even have to create?

Notes

[i] People sometimes appear to give inanimate objects moral consideration. For example, parents often chide children for not taking care of their belongings, especially the more expensive ones, and these admonitions are undeniably tinged with moral concern. It could thus be interpreted that such objects have recipient moral status, but I think it can readily be argued that these are instances in which the ultimate object of moral harm is, again, people. In the case of a child and her belongings, the harm ultimately falls on the child (who might suffer from losing a valued toy), or the parents (who might feel hurt by a gift being treated badly, or who might have to cough up the funds to purchase a replacement), or future acquaintances of the child, whose things may be damaged if the child doesn’t learn the lesson early enough. Another context is when we anthropomorphize things and then give them apparent moral consideration. Again, children do this with their dolls and stuffed animals. Adults do it, too, e.g., in their interactions with items they hold sacred. Here, too, though, either the ultimate concern is for people, or the question of moral consideration still resolves into the one discussed in the main text.

[ii] However, we place a lot of conditions on this for adults: They should be mentally healthy; they should be free of coercive pressures;

[iii] What I mean is that the ability to see color, shape, and motion doesn’t in and of itself cause pain. Of course, if someone sees their lover cheating, that causes pain, but because of the surrounding emotional context and the act, not because of the visual perception itself. And yes, it’s possible for exceptionally bright lights to cause pain, but for most of us, that is a rare phenomenon. In any case, it’s easy enough to conceive of a consciousness that experiences only non-painful visual perceptions.

[iv] Though I think Bentham was right about this, at least if properly interpreted, and though I do think utilitarianism cannot be easily dismissed, either as a philosophy or as a practical guide, I am not espousing utilitarianism here. The arguments in this article don’t require utilitarianism as any kind of assumption.

[v] Most of us have the profoundly strong intuition that we have free will, but it is not at all clear that free will actually exists. Philosophers from Hume and Spinoza to, more recently, Galen Strawson and Derk Pereboom have made strident arguments against free will.

[vi] There is a possibility that empirically, consciousness and free will cannot be separated, i.e., because of something like the soul. Maybe, the thing in us that experiences pain and pleasure is also the thing in us that decides what we do. This is, of course, the intuition that most of us have about ourselves, but proof of it is hard to come by.

[vii] This is just one interpretation, however. It’s more likely that Buddhist doctrine is compatibilist, as philosopher Mark Siderits has argued.

[viii] … and, then, too, only for the very present moment. Did we actually experience the pain of the toe we stubbed last week? No way to verify.

[ix] A powerful thought experiment about this appears in Terrel Miedaner’s book, The Soul of Anna Klane, in a chapter called “The Soul of the Mark III Beast,” which I first read in Douglas Hofstadter’s book, The Mind’s I.

[x] Note, however, that such a device would presume that some form of telepathy is possible. And, in the absence of good evidence for telepathy, it would seem that my ability to imagine such a device should not be taken too seriously.


Kentaro Toyama

W. K. Kellogg Professor, Univ. of Michigan School of Information; author, Geek Heresy; fellow, Dalai Lama Center for Ethics & Transformative Values, MIT.