What if Sentience is Irrelevant?

In Forbes, Andréa Morris writes: “Artificial intelligence (AI) is predicted to become sentient anywhere from never to sometime in the next decade or two”. VentureBeat recently featured an article titled “Researchers are already building the foundation for sentient AI”. People seem to assume that for AI to be as intelligent as people, it needs to be sentient.

“Sentient” can mean different things. It can mean “aware of outside stimuli”; in that sense, robots are already sentient. But I assume most people take “sentience” to mean “having consciousness”, which can be defined as

  • being capable of self reflection
  • knowing one is a human (dog, sheep, machine, etc.)
  • having a stream of consciousness

among many more. Here, I will use the word “sentient” as a synonym of “conscious”, having consciousness.

Libet et al. showed in a 1983 article that the conscious will to move a finger is preceded by premotor neural activity in the brain. Put simply, your brain starts preparing the movement before you consciously decide to make it. Much can be said about that research, so Matsuhashi and Hallett repeated the experiment in 2008. They could not confirm that “action precedes thought”, but they did conclude that “The first detected event in most subjects was the onset of BP”. BP, the Bereitschaftspotential or readiness potential, is the brain activity related to the movement of limbs. In other words, they too saw that the brain activity preparing the movement of a finger seems to precede the subject’s conscious will to move it.

Does this mean a person has no free will? I don’t think it does. It only means that free will is probably not located in our conscious mind, or at least not fully. Which raises the question: how relevant is consciousness, or sentience, to our thinking? If we don’t need sentience to have free will, do we need sentience to be intelligent? Does AI need sentience to outsmart us?

Intelligence requires learning. A being that doesn’t learn is not smart. Learning by having knowledge presented to you does not require reflection per se. But intelligence, in my opinion, requires continuous learning: learning by reflecting on your actions and their results. Intelligence requires reflection.

An intelligent being needs to interact with its environment. This interaction is not necessarily physical; an AI sitting in a computer can limit its interaction to online communication. Does it need to know it is an AI that sits in a computer? Does Neo need to know he’s a body plugged into a machine? Neo has a pseudo-awareness: an awareness of a world that is in fact a virtual world projected into his brain by a computer. Is pseudo-awareness the same as awareness?

Wikipedia says “stream of consciousness” is often taken as a synonym of “interior monologue”, but the two can also be distinguished, “a stream of consciousness being the subject matter while the interior monologue is the technique presenting it”. To me personally, an “interior monologue” means playing a monologue or dialogue in my head before it actually happens, like preparing for a conversation, or playing one that may never actually happen. Preparing for a conversation seems similar to Deep Blue working through possible moves in its match against Garry Kasparov before selecting the best one. In that sense, Deep Blue had a stream of consciousness. Does a chatbot need such an interior monologue? Today’s chatbots don’t have one, and I can’t see why they would need it. I am not sure why humans have an interior monologue; perhaps it lets you rehearse a conversation, simulate a response, and test how it works. Would a computer need that? I doubt it.
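Deep Blue’s actual search (alpha-beta pruning over chess positions with a hand-tuned evaluation function) was vastly more elaborate, but the core idea of “playing moves ahead internally before committing to one” can be sketched with a plain minimax search. The toy game below (a Nim-style pile game, where players alternately take 1–3 stones and whoever takes the last stone wins) is purely illustrative and is of course not Deep Blue’s method:

```python
def best_move(pile):
    """Return (score, move) for the player about to move.

    score is +1 if the player to move can force a win, -1 otherwise;
    move is the number of stones to take (None when pile is empty).
    Rules of the toy game: take 1-3 stones; taking the last stone wins.
    """
    if pile == 0:
        # The previous player took the last stone, so the player to move lost.
        return (-1, None)
    best = (-2, None)
    for take in (1, 2, 3):
        if take <= pile:
            # Simulate the move, then score the resulting position
            # from the opponent's point of view and negate it.
            opp_score, _ = best_move(pile - take)
            score = -opp_score
            if score > best[0]:
                best = (score, take)
    return best
```

For example, `best_move(7)` simulates every line of play to the end and finds that taking 3 stones (leaving a pile of 4) forces a win, while `best_move(4)` reports a lost position no matter what is played. The “mental simulation before acting” is the recursion; nothing in it requires awareness of what the search is doing.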

What would happen if a person had no sentience? No interior monologue, no reflection, no awareness? At everything people generally see as requiring a high IQ, like playing go or reasoning about gravitational waves and black holes, this person would be as good as anyone with the same IQ. But would the person be capable of social interaction? Would the person pass the Turing test? Turing uses this conversation as an example of his test:

Q: Please write me a sonnet on the subject of the Forth Bridge.

A: Count me out on this one. I never could write poetry.

Q: Add 34957 to 70764.

A: (Pause about 30 seconds and then give as answer) 105621.

Q: Do you play chess?

A: Yes.

Q: I have K at my K1, and no other pieces. You have only K at K6 and R at R1. It is your move. What do you play?

A: (After a pause of 15 seconds) R-R8 mate.

I don’t see why you’d need any form of sentience for this conversation. (Note, incidentally, that the arithmetic answer above is wrong: 34957 + 70764 = 105721. The pause and the slip are part of imitating a human.) This is another example Turing gives:

Interrogator: In the first line of your sonnet which reads “Shall I compare thee to a summer’s day,” would not “a spring day” do as well or better?

Witness: It wouldn’t scan.

Interrogator: How about “a winter’s day”? That would scan all right.

Witness: Yes, but nobody wants to be compared to a winter’s day.

Interrogator: Would you say Mr. Pickwick reminded you of Christmas?

Witness: In a way.

Interrogator: Yet Christmas is a winter’s day, and I do not think Mr. Pickwick would mind the comparison.

Witness: I don’t think you’re serious. By a winter’s day one means a typical winter’s day, rather than a special one like Christmas.

The point about “a spring day” is obvious: you don’t need to be sentient, or even very smart, to see that the metre would be wrong. “A winter’s day” is less obvious. Today’s AIs can learn that people don’t want to be compared to a winter’s day, if they have been presented with this or similar examples. Knowing that, in general, people like summer and spring more than winter, the AI would be able to answer the question; hence no sentience is required.

Knowing that Christmas is not your average winter’s day should not require any capability other than general knowledge and intelligence either. So for this part of the Turing test, a person, or a computer, does not require sentience. Turing devised his test to examine whether a computer can be told apart from a human. If the test does not require sentience, meaning sentience is not required for intelligence as Turing defined it, then we can say sentience is irrelevant for a human to function normally in our society. If, of course, we accept Turing’s definition of intelligence and his procedure for testing it.

Why are we sentient? In the popular book “Blink: The Power of Thinking Without Thinking”, Malcolm Gladwell argues that it’s best to leave complex decision making to our subconscious brain. In “On making the right choice: the deliberation-without-attention effect”, Dijksterhuis et al. note that “purchases of complex products were viewed more favorably when decisions had been made in the absence of attentive deliberation”. In “Boundary Conditions on Unconscious Thought in Complex Decision Making”, Payne et al. argue that it is not always better to rely on your subconscious thinking, but they don’t prove the opposite either.

Are we conscious because we are social beings who need to carefully reflect on subtle verbal exchanges with our fellow humans? Do we reflect? We tend to think that the sentences we utter are formed consciously, but that’s not true, or not always true, or not true for every person. Try, in a conversation, to consciously follow where the words you say come from. You speak in well-formed, syntactically and semantically correct sentences, or at least you do most of the time. When you listen to yourself speaking (which may require some practice), you will notice that when you start a sentence, its end has not yet arrived in your conscious mind. If a sentence is correct, its beginning matches its end; if you change the end, you may need to change the first words too. Yet even when you don’t consciously know the end of the sentence, the beginning still matches it. This leads me to think that your sentences are formed in your subconscious mind; all your conscious mind does is the verbalization, or maybe not even that.

If speech does not require reflection or consciousness, does consciousness still play a role in our interactions? It probably does. Conversation between humans is so much more subtle than between apes, let alone dogs, that we need to put effort into fine-tuning it. We need our conscious mind to optimize communication within our group. I am not a psychologist, a neurobiologist, or a philosopher, but I think this is true: humans have a conscious mind solely to optimize our conversations with fellow humans and our group relationships. Sentience is irrelevant to our intelligence.

Does an AI need sentience, or a conscious mind? It doesn’t, as sentience is irrelevant to intelligence. If we broaden the concept of AI to an “Artificial Human Mind”, as Turing does in his test, does this AHM need sentience? Does it need it to communicate with humans, or to communicate with fellow AHMs? The latter is not the case, as we can assume that computer programs connected to networks have far more efficient ways to communicate than speech. Does a computer need sentience to talk to humans? Does it need it to maintain the subtle relationships with us that we have with one another? That would be the case only if the AHM were smart enough to use them. But if it’s that smart, would we be willing to treat it like a human? I think an AHM does not need the subtlety of our inter-human relationships, because either we won’t treat it like a human, or it will not see itself as a human. This is speculative, but I think an AHM (an AI) will do perfectly well without a conscious mind, without sentience.