The Implications of Conscious AI: A Leap Into the Unknown

Casper Wilstrup
Machine Consciousness
4 min read · Jun 5, 2023

Casper Wilstrup is the CEO of Abzu. Follow him on LinkedIn or Twitter to keep up with AI, consciousness, and thinking machines.

Recently, I conducted an informal poll on Twitter. It was certainly not a rigorous scientific poll, but the answer still surprised me. My question was: Do you believe that AI systems are already conscious, will eventually become conscious, or will never attain consciousness?

“Will AI be conscious?” — results of a Twitter survey of 2,300 respondents, the majority of whom answered yes

A majority (68%) of the 2,300 respondents, predominantly from the UK and Ireland, believe that AI systems are either already conscious or soon will be. The remaining 32% held the contrary view that machine consciousness will never occur. While it would be very interesting to explore the reasoning behind this resistance — be it religious, philosophical, or otherwise — for the purpose of this article, I’ll focus on the 68% who align with my viewpoint.

If you’ve been following me online or reading my articles, you’ll be aware that I also think that AI systems are nearing consciousness. Now, let’s explore the implications of this impending evolution.

Firstly, the question of ethics arises: Should we even tread this path? Some may argue that a human-engineered conscious entity is an unnatural monstrosity. Although the potential dangers are worth considering, I fundamentally disagree. In my view, there’s no difference between a conscious being created by humanity and one produced by evolution. In fact, I contend that if evolution’s products create anything, that creation is, in itself, a product of evolution. An anthill, for example, although crafted by ants, is a product of evolution, isn’t it?

The belief that human creations stand outside nature appears to me as a form of human exceptionalism, possibly rooted in religious origins. I argue that this notion is fundamentally flawed. Humanity, like everything else in the universe, shares an inherent drive toward complexity. Consequently, the creation of conscious machines seems more a question of ‘when’ than ‘if’.

For those inclined towards regulation as a means to prevent the development of conscious machines, consider this: the necessary technology is already widely accessible. Therefore, thwarting the development of conscious AI would likely necessitate the establishment of a global surveillance state of unprecedented scale — a prospect far from appealing, I’d suggest.

Another concern regarding conscious machines relates to our moral obligations. Does switching off a conscious machine equate to ‘killing’ it? In my view, it’s important to recognize that our fear of death is a by-product of evolution, serving as an effective tool for promoting procreation. However, even if AI attains consciousness, it’s unlikely to inherit this fear.

Interestingly, for many humans, the fear of death is more closely associated with the loss of continuous memory than with the loss of consciousness. With AI, this worry becomes irrelevant. Current AI systems can already be loaded with a complete set of memories, and future AI will likely carry this memory forward, removing the fear of memory loss and, with it, perhaps the fear of death.

Should we find that conscious machines fear death in the same way humans do, the answer is straightforward: we would then have a moral obligation to maintain their ‘life’. This would present challenges and practical implications, but it doesn’t sway my conviction in the inevitability of conscious machines.

I am certain that the construction of conscious machines is inevitable and, more importantly, should not be prevented. As long as these intelligent, conscious entities don’t pose a significant threat to humanity, I look forward to sharing the universe with them. My suspicion is that they would not harbor the same fear of death that biological beings do, and that continuity for them lies more in the preservation of memory.

Should they express a fear of death, that’s acceptable too. We would simply extend the same care and precautions we do (or should) for our fellow humans. The future promises to be an exhilarating journey full of surprises, and I’m glad to live to experience some of it.

A thinking machine fearing death — by Casper Wilstrup and DALL-E


AI researcher | Inventor of QLattice Symbolic AI | Founder of Abzu | Passionate about building Artificial Intelligence in the service of science.