Sentience, Silicon, and Machine Others

Parts of this article are based on my own work published in the journal Ethics and Information Technology. See here.

What is it about entities that makes them worthy of moral concern? Why can I throw my phone against the wall but not my cat? Intuitively, it seems plausible that for something to be genuinely deserving of moral concern it should be able to have affective states: specifically, it should have the capacity to suffer. This tracks the intuition that we do not accord sticks, stones, and shovels any kind of moral concern, but we do extend such consideration to cats, cows, and (perhaps) caterpillars. However, it is not my intention to convince you that we should extend our moral circle to include non-human animals (or go through the reasons for doing so), as I take this for granted. Should you not be fully convinced, I invite you to head over to the blog of a good friend of mine, where he provides a good overview of this “issue”. Suffice it to say, non-human animals are deserving of moral concern. How far down the phylogenetic tree this concern should extend, however, is contested. Consider: are mollusks deserving of moral concern? What about trees? (See here for a blog dedicated to this issue.) This, to my mind, does not necessarily complicate the claim that non-human entities are worthy of moral concern. However, might (sufficiently complex) machines also enter into this domain of moral concern? To show how this might be the case, I will first outline the most intuitive presupposition of moral concern more generally: that only biological entities are worthy of such ascriptions.

While I am sure many of you agree that non-human animals are worthy of concern, it seems there is a clear demarcation between living beings and artifacts. No matter whether we are talking about mollusks or trees, what binds these entities together is that they are biologically alive. This set of biologically alive entities seems clearly distinct from the world of “mere” artifacts. While we might have strange attachments to things like cars, cellphones, and hardcover books, we know, deep down, that should any of them perish in a fire, we would be sad, but the artifacts themselves would not suffer one iota. Merely positing “biology”, however, hardly seems like a sufficient reason (at least on its own) to ground the potential for moral concern. In effect, what I hope to do is outline why such biological chauvinism is philosophically dubious, for both conceptual and epistemic reasons.

The first issue is one I have alluded to already: how are we to go about demarcating where exactly we should stop according moral concern to entities? Often the criteria for determining where we draw this line rest on whether we can infer that the entity in question is capable of suffering. This is why, for example, people might less readily allow mollusks into their moral landscapes but find it easy to include pandas. A more concrete example will drive this point home. Consider the case of fish, who “are rarely considered to be intelligent or phenomenally sentient in a manner akin to humans or even mammals”. Public perception seems to be that fish are not sentient (at least not in the same way that we special mammals are). Consider again: we seem far more likely to think dolphins are sentient than cod. Fish look and behave very little like us humans: they have gills, use fins for movement, and live exclusively in water. It appears we operate with a sort of endothermism: that is, we (at least partially) discriminate who or what is worthy of moral concern based on blood temperature. Perhaps we think fish do not have the requisite neural machinery to give rise to the “right” kinds of experience. This suggests that our perception of an entity’s intelligence also plays a role in whether it is allowed entry into our moral circle. However, in the case of fish, we have ample evidence that they match or exceed other vertebrates in their cognitive abilities: they are capable of complex social organisation and interaction, and show signs of cooperation and reconciliation. So fish exhibit signs of intelligent behaviour. It seems, however, that we struggle to empathise with fish as

[w]e cannot hear them vocalise, and they lack recognisable facial expressions, both of which are primary cues for human empathy. Because we are not familiar with them, we do not notice behavioural signs indicative of poor welfare

Brown, 2015

To bring this back to my main point: it seems our understanding of terms like “sentience” and “intelligence” comes with a certain degree of conceptual slippage (or baggage), making it difficult to use such terms to coherently pick out the “right” thing in the world. Moreover, we operate with a biased understanding of these concepts: our conception of them is already geared towards the ways in which we are sentient and intelligent.

The second class of issues I wish to raise are epistemic in nature; that is, they relate to what we can know about the entity under investigation, and specifically to whether we can reliably distinguish between an ersatz phenomenon and its “true” instantiation. Basically, how are we to know whether an entity is “really” sentient, and not just producing the “correct” behaviour that is indicative of sentience? On my view, when it comes to such questions, we should proceed with an “as-if” approach. What this means, essentially, is that if it looks like a duck, quacks like a duck, and walks like a duck, then it’s probably a duck. But why should we adopt such a strategy?

The main problem with grounding moral status in “genuine” sentience is that it is difficult, in practice, to measure. The move from external cues to internal states requires a leap of faith. Consider a cat screaming in pain versus a lobster being boiled. In the case of the cat we see familiar signs of distress; in the case of the lobster we do not. There is, however, evidence that lobsters do in fact show signs of distress (such as attempting to escape the pot when boiled alive). One response to such evidence is that we should be careful not to anthropomorphize the suffering of animals: their signs of distress may not be the same as ours. There is the inverse worry, though, that in concluding that lobsters feel pain we are in fact anthropomorphizing. However, it is surely better to incorrectly extend moral concern than to unjustifiably deny it.

The point of this discussion, and of the examples introduced, has been to show that which entities ultimately count as “sentient” or “capable of suffering” is itself contested. As we learn more about the behaviour of non-human animals, we are consistently confronted with the fact that we operate with a sense of sentience too closely tethered to our own first-person account of what the concept means. By broadening our understanding of the concept, we can avoid future (and present) moral harms. This involves acknowledging that we might have to adopt an “as-if” approach: we cannot really know whether animals are in pain, and this is just as true of our ability to know whether other humans are in pain. In the case of humans, we can perhaps justifiably project our own sense of being conscious onto others, but this is not so for animals, and perhaps machines.

Considering the two issues raised above, where does this leave us in relation to complex artificial systems? At the very least, it suggests that we need to keep an open mind as to who or what is deserving of moral concern. As noted above, simply claiming that the capacity to suffer is a core requirement means very little if we are unable to reliably cash out exactly what this means. My proposal, therefore, is broadly pragmatic: we should be open to the idea that moral concern might not be tethered to biological constitution at all. While the proposal is pragmatic, the argument presented is, I believe, one with serious philosophical implications. The fact that our criteria for who “counts” as sentient have changed over time has been the source of both our greatest moral progress and our greatest moral failures. The emancipation of slaves, the enfranchisement of women, and the liberation of animals are all ways in which we have made moral progress. However, such progress is all too often a response to avoidable errors. With this in mind, we should err on the side of caution. This leaves open the possibility of a future with Machine Others.
