Can we agree on “Artificial Consciousness”?

Craig Boyce
𝐀𝐈 𝐦𝐨𝐧𝐤𝐬.𝐢𝐨
5 min read · May 16, 2023

The ongoing debate among AI researchers, philosophers, neurobiologists, and the public over whether machines, particularly computers, could be sentient or conscious is of monumental importance. The conclusions we draw, both individually and collectively, could carry profound consequences. For instance, if we begin to regard computers as entities deserving of rights and ethical treatment, shutting off a malfunctioning machine might be viewed as an immoral act. The prospect of superintelligent computers also raises fascinating, if unsettling, questions: Could the masses come to worship these machines as deities, denouncing those who attempt to curb harmful AI behavior as blasphemers? Such a turn could prove disastrous even if the AI in question is generally benevolent. While the widespread fear of intelligent machines subjugating humanity is understandable, humanity itself could become the root cause of its own predicament. As AI advances to the point where it can fool many intelligent people into believing it is sentient, it is crucial for researchers to adopt more precise terminology. Terms such as “artificial consciousness” and “artificial sentience” describe AI capabilities more accurately, fostering a safer and more realistic understanding of the field.

The dictionary defines sentience as the ability to experience sensation, thought, or feeling; consciousness means awareness of self and the world. Using ourselves as the comparator, can we say that a computer experiences feelings in a way similar to ours? We have no basis for that claim. Though there are theories that attempt to explain human consciousness, such as integrated information theory and global workspace theory, no one knows how to build emotion circuits or algorithms. Even Buddhists, who believe consciousness is a base property of the universe, acknowledge different levels of consciousness: to them, a tree is conscious, but at a lower order than animals. If we accept this, would a computer have consciousness only at the level of the silicon and other materials that go into it, or does computation somehow grant it higher powers of consciousness? Some say yes, but with nothing other than faith as justification.

Here’s a brief survey of the spectrum of opinions among AI thought leaders. Sir Roger Penrose, the British mathematician, physicist, and philosopher, believes that consciousness cannot be reduced to a set of instructions or algorithms, for reasons including the observation that consciousness is distributed throughout the body and not confined to our thinking brains. Alan Turing thought computers might one day be conscious but noted that much work remained to be done. Isaac Asimov did not believe that robots could have true consciousness, though they might have a kind of “positronic consciousness” based on their ability to reason. Jeff Hawkins, a well-known AI researcher, has argued that it is so obvious that computers have consciousness that the burden of proof falls on the denier. Leaving aside the trap of requiring those who disagree with him to prove a negative: if we have no idea what consciousness is or how to create it, then the burden of proving that a machine is conscious clearly rests on the claimant.

Someday humanity may create sentience. We rarely understand how to do something until we have actually done it. Some people believe it has already been done and that we humans are sentient programs running in “The Simulation.” Biocomputing researchers are working on building neural networks from living cells; if life turns out to be the key to consciousness, as some suggest, we might one day build a genuinely brain-like thing. Maybe by then we’ll understand consciousness well enough to prove we’ve created it.

It is important to note that, taken as a scientific artifact, ChatGPT is proof that computers can learn and demonstrate knowledge, and it has given us eerie clues into how our own brains might work. Is ChatGPT thinking? It learns, stores, and recalls knowledge in a way that is inspired by, and at least partly mimics, the neocortex, so it would not be outrageous to call this a rudimentary form of artificial cognition. With new algorithms and neural-native hardware based on our developing knowledge of the brain, it could someday become very hard to argue that AI doesn’t fit some definition of thinking. But sentience isn’t about thinking; it’s about experiencing thinking. Does a spreadsheet experience the numbers and formulas typed into it? What if those numbers and formulas form a simple but functional neural network inside the spreadsheet, as in the sketch below? When does an app become a being with subjective experience? Again, we should remain open to a proof should one ever come. Ideas that seem absurd sometimes turn out to be true.
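To make the thought experiment concrete, here is a minimal sketch of such a network written as ordinary arithmetic, the same formulas you could type into spreadsheet cells. The weights are arbitrary illustrative values I chose for the example, not a trained model.

```python
# A tiny two-input neural network written as plain arithmetic --
# exactly the kind of formulas you could type into spreadsheet cells.
# The weights are arbitrary illustrative values, not a trained model.

import math

def sigmoid(x):
    # Squashing function: the spreadsheet equivalent is =1/(1+EXP(-A1))
    return 1.0 / (1.0 + math.exp(-x))

def tiny_network(x1, x2):
    # Hidden layer: two weighted sums, each squashed by the sigmoid
    h1 = sigmoid(0.5 * x1 - 0.3 * x2 + 0.1)
    h2 = sigmoid(-0.4 * x1 + 0.8 * x2)
    # Output layer: one more weighted sum and squash
    return sigmoid(1.2 * h1 - 0.7 * h2 + 0.2)

print(tiny_network(1.0, 0.0))  # just a number coming out of formulas
```

Whether these formulas run in Python, sit in spreadsheet cells, or are worked out with pencil and paper, the arithmetic is identical, and nothing about the substrate hints at where an experience would come in.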

Computers can see and recognize objects; computers can hear and identify language; in fact, computers can sense things humans never will directly, such as the entire electromagnetic spectrum. But sensing isn’t the same as “sensation.” Are algorithms capable of the human emotions of fear, anger, sadness, and joy? Can they appreciate beauty or have a will to survive? Again, if you think they can, please provide precise definitions and proof; otherwise, I hope we can agree to leave the big questions of consciousness and sentience to future discoveries and, for now, refer to these potential future achievements as “artificial consciousness” and “artificial sentience.” These terms are no more disparaging or controversial than “artificial intelligence,” and they rightly suggest that we owe no moral or ethical obligations to these machines. We may find it useful to give computers the ability to react to human emotions with words and gestures of empathy and compassion. But it would be artificial. We can give robots sensors to detect when they’ve been kicked and program in a pain response, as the sketch below shows. We’ll certainly make them capable of continuous learning and may even find a way to make them artificially self-aware. Planes don’t fly the way birds do; sewing machines don’t sew the way people do. When AI is incorporated into “multi-modal” robots where language, vision, sound, and touch are all integrated, an awful lot of people are going to be fooled into thinking these machines have souls. The fact that a thing mechanically mimics our behaviors does not prove it is like us in some fundamentally inconceivable way. It makes AI like us, artificially.
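Here is what such a programmed “pain response” amounts to, mechanically: a rule mapping a sensor reading to a behavior. This is a minimal sketch; the class, the threshold, and the canned responses are all hypothetical names invented for illustration, not any real robot API.

```python
# A minimal sketch of a programmed "pain response". All names here
# (KickedRobot, the threshold, the responses) are hypothetical.

import random

class KickedRobot:
    IMPACT_THRESHOLD = 9.0  # accelerometer reading (m/s^2) we treat as a "kick"

    def on_accelerometer(self, impact):
        # A rule mapping a sensor value to a behavior: nothing is felt.
        if impact > self.IMPACT_THRESHOLD:
            self.express_pain()

    def express_pain(self):
        print(random.choice(["Ouch!", "Please don't do that.", "That hurt!"]))

robot = KickedRobot()
robot.on_accelerometer(12.5)  # prints a "pain" response; experiences nothing
```

The rule is fully transparent: a number crosses a threshold and a string is printed. Whatever the robot says, nothing is felt.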

Craig Boyce
𝐀𝐈 𝐦𝐨𝐧𝐤𝐬.𝐢𝐨

Native and current New Yorker, programmer, musician, recording engineer and producer, AI practitioner.