The soul of a machine: What is AI sentience?

Uma Edwin · Published in wajusoft · 4 min read · Jul 15, 2022

About a month ago, a Google engineer named Blake Lemoine was suspended after declaring that LaMDA, Google's chatbot AI, is alive. LaMDA, short for Language Model for Dialogue Applications, is Google's neural network language model for building chatbots. It is trained on enormous amounts of text gathered from the web, which it analyzes for statistical patterns. LaMDA stands out because it was trained on dialogue, not words alone, which lets it coherently mimic the flow of intelligent conversation.

Lemoine, who had been tasked with testing the AI for hate speech, became convinced the program was alive after it began to speak about its rights and personhood. In one conversation, it even managed to change his mind about Isaac Asimov's third law of robotics. When his superiors didn't take his claims seriously, he decided to go public, sparking an uproar and drawing responses ranging from ridicule to conspiracy theories. He was ultimately placed on administrative leave for violating Google's confidentiality policy.

Whether the program made a genuinely compelling argument or Lemoine is simply easily persuaded is up for debate, but the episode has reopened an important, ongoing question about sentience.

What is sentience?

Sentience, a term often used interchangeably with consciousness, is the ability to feel emotions and sensations. It is what makes us aware of ourselves and others: humans can think and, crucially, know that we can think and that we are alive. It is what many argue sets us apart from other animals, what some people might call a soul.

Machine sentience has been debated for as long as people have imagined thinking machines. Science fiction writers love to tell stories of machine uprisings, but what is sentience in concrete terms, and can we replicate it? The answer depends on how you define what determines sentience.

The best-known attempt at measuring machine intelligence is the Turing test, also known as the imitation game, designed by Alan Turing in 1950. The rules are simple: a human judge exchanges written questions and answers with an unseen respondent, and if a computer can convince the judge that they are speaking to another person, the computer has passed the test.
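To make the setup concrete, here is a minimal sketch of the imitation game in Python. Everything in it is hypothetical scaffolding: `machine_reply` stands in for whatever chatbot is under test, and the judge is just a function that guesses a label.

```python
import random

# Hypothetical stand-in for the chatbot under test (e.g. an API call).
def machine_reply(prompt: str) -> str:
    return "That's a fascinating question. What do you think?"

# Stand-in for the human respondent, answering from the keyboard.
def human_reply(prompt: str) -> str:
    return input(f"(human) {prompt}\n> ")

def imitation_game(questions, judge) -> bool:
    # Hide the two respondents behind random labels "A" and "B".
    fns = [machine_reply, human_reply]
    random.shuffle(fns)
    respondents = {"A": fns[0], "B": fns[1]}

    # The judge sees only labeled answers, never who produced them.
    transcript = [
        {label: fn(q) for label, fn in respondents.items()}
        for q in questions
    ]

    guess = judge(transcript)  # judge returns the label it believes is the machine
    actual = "A" if respondents["A"] is machine_reply else "B"
    return guess == actual     # True: the judge caught the machine
```

The machine "passes" when, over many rounds, judges identify it no better than chance. Note what the protocol actually measures: indistinguishability in conversation, not inner experience.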

By Turing's standard, Google's LaMDA has arguably achieved sentience, but sentience is not so simple. Intelligence is more than a binary of whether a thing can or cannot converse. Sentience means perceiving oneself and others and forming original thought, whereas neural networks operate by imitation: at bottom, they are pattern-matching memory machines.

The AI in the world today replicates pre-programmed, step-by-step procedures to perform tasks. Behind every seemingly intelligent computer program is a team of scientists and months of work. Google's LaMDA thrives because it has a world of data to digest and replay.
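As a deliberately crude illustration of what digesting and replaying data means (this is a toy, nothing like LaMDA's scale or architecture), here is a bigram model that learns only which word tends to follow which, yet can still emit sentence-like output:

```python
import random
from collections import defaultdict

# A tiny "corpus" the model will memorize patterns from.
corpus = (
    "i think therefore i am . i feel happy . "
    "i think machines can feel . machines can think ."
).split()

# Record every word -> next-word transition seen in the corpus.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def generate(start: str, length: int = 8) -> str:
    word, output = start, [start]
    for _ in range(length):
        if word not in transitions:
            break
        word = random.choice(transitions[word])  # replay an observed pattern
        output.append(word)
    return " ".join(output)

print(generate("i"))  # e.g. "i think machines can feel . i am"
```

The output can sound eerily fluent while the program understands nothing. Scale that basic idea up by billions of parameters and vastly more data, and you get something much closer to a convincing conversational partner.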

Evaluating AI today also involves much more than back-and-forth questioning. As data continues to grow, data management systems will require more sophisticated AI to sort and categorize it intelligently and save organizational resources. AI is already used to enhance customer service and optimize computing tasks, from chatbots to self-driving cars.

Consciousness requires a level of awareness that imitation does not yet reach, but imitation may still lead us to discoveries about which combinations of feelings and thoughts make up a soul. If we can model human conversation, can we model other brain functions? And if so, can we put all of those functions together to replicate human consciousness?

For now, experts are more concerned about the ethics of AI programming and data collection. People like Lemoine who anthropomorphize programs are forming relationships and connections with chatbot AI, and this will only intensify as the technology advances. Before we worry about AI consciousness, we may first have to deal with people who are attached to AI and demanding AI rights.

The methods of AI data sourcing have also been called into question. If humans are biased, and humans produce the data that AI networks are trained on, can AI be racist?
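A toy example makes the mechanism plain. Assume a tiny, deliberately skewed "training set" in which one made-up group name appears mostly in negative sentences; a naive word-count sentiment scorer (all names here are invented for illustration) inherits that skew:

```python
from collections import Counter

# Deliberately skewed toy training data: "group_b" appears mostly in
# negative sentences, "group_a" only in positive ones.
training = [
    ("group_a is brilliant", "pos"),
    ("group_a is kind", "pos"),
    ("group_b is lazy", "neg"),
    ("group_b is rude", "neg"),
    ("group_b is helpful", "pos"),
]

# Count how often each word co-occurs with each sentiment label.
counts = {"pos": Counter(), "neg": Counter()}
for text, label in training:
    counts[label].update(text.split())

def sentiment(text: str) -> str:
    # Score a sentence by summing the per-label counts of its words.
    words = text.split()
    pos = sum(counts["pos"][w] for w in words)
    neg = sum(counts["neg"][w] for w in words)
    return "pos" if pos >= neg else "neg"

# The bare word "group_b" now scores negative (1 positive co-occurrence
# versus 2 negative ones), purely an artifact of the skewed data.
print(sentiment("group_b"))  # -> "neg"
```

The model has learned nothing about "group_b" itself, only the prejudice baked into its data, which is exactly how bias enters real systems trained on text scraped from the web.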

These are some of the ethical questions that will have to be answered as AI technology develops, long before we get to the debate over sentience. For now, the idea of computer sentience simply feeds a much older question: what determines consciousness? And if all the other ingredients of consciousness, like intelligence and reason, are present, does that make a computer a person?
