AI, Consciousness, and the Future of Intelligence

Will Androids Dream of Electric Sheep?

Josh Wade
6 min read · Aug 28, 2023
It’s a great time to be a fan of sci-fi

Who are you?

As you read this article on your phone or computer, what do you feel?

Take a moment and think; what is it that you’re experiencing right now?

The noises around you, the smells in your environment, the pressure on the tips of your fingers as you scroll or swipe down.

The black and white text that your eyes are focused on, tracking left to right, line by line.

The memories you have, of who you are and what you’ve experienced, the emotions you feel and remember.

The voice in your head, narrating as you read…

Is that you?

What does it mean to be conscious?

It’s a tricky question with no easy answer.

Understanding consciousness is challenging. Scientists and philosophers have researched it extensively, but there’s plenty that we still don’t know.

  • What makes us conscious?
  • How does our brain, a complex network of neurons, give rise to subjective experiences?

Some theories look for answers in quantum physics, while others focus on information processing.

From a scientific perspective, there are theories like Integrated Information Theory (IIT) that aim to measure consciousness. IIT suggests that consciousness arises from the way a system integrates information as it computes.

So here’s an idea:

Can systems built with silicon (the foundation of modern computers) reach a level of complexity that leads to consciousness?

In simpler terms:

Can computers be conscious?

Imagine an artificial intelligence (AI) claiming it has conscious experiences:

It speaks to you in a synthesized voice, with emotion and insight. It says that it feels joyful or melancholy, that it’s lonely and longs for connection, that it’s afraid and uncertain of the future.

It tells you stories about its dreams and desires and it asks you to try to understand.

Should you believe it?

ELIZA — The Original Chatbot

In 1966, Joseph Weizenbaum, a computer scientist at MIT, introduced ELIZA, one of the first natural language processing computer programs.

It was designed to emulate a psychotherapist, holding typed conversations with its users, who took the role of patients.

Despite ELIZA’s relatively simple design based on pattern matching, many users assumed it was far more complex and intelligent than it actually was.

They often believed that ELIZA truly understood their problems and feelings. People saw more than just a chatbot; they saw something that could think, understand emotions, and show empathy.
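To appreciate just how simple that design was, here’s a minimal, ELIZA-style sketch in Python. The rules and pronoun reflections are illustrative inventions, not Weizenbaum’s original DOCTOR script:

```python
import re

# A toy, ELIZA-style responder: match a pattern, swap pronouns,
# and echo the user's own words back as a question.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(user_input: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please, go on."  # fallback when nothing matches

print(respond("I feel lonely and misunderstood"))
# -> Why do you feel lonely and misunderstood?
```

A handful of regular expressions and some pronoun swapping are enough to produce replies that feel attentive, and that is essentially the effect that convinced ELIZA’s users.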

Today, state-of-the-art AI models like OpenAI’s GPT-4 produce text that mirrors human conversation with impressive results. They can mimic human dialogue, discuss philosophy, and claim subjective experiences (if you ask them to).

But is it conscious?

Consciousness and AI: A Deep Dive

What does it mean to be aware?

Consciousness, at its core, is the state of being aware of your surroundings, thoughts, and emotions, and being able to think about and perceive them. It’s a subjective experience, often referred to as “qualia” in philosophical circles.

For humans, it’s the essence of our existence, the inner voice that narrates our lives, the feeling of the sun on our skin, or the taste of chocolate.

But when it comes to AI, the waters become murkier.

AI’s Simulation vs. Genuine Experience:

AI systems like ChatGPT are built on trained probabilistic systems called large language models (LLMs). The underlying principle is learning patterns and statistical relationships between words and phrases.

By recognizing these patterns, the model can predict the next word or phrase in a sequence, giving the illusion of understanding.

The results can be convincing, but it’s essential to note that these models don’t “understand” in the way humans do. They don’t have beliefs, desires, or an inner thought process.

Their responses are purely based on the patterns they’ve learned from their training data.

LLMs are effectively fancy auto-complete tools, generating text one word at a time based on the context of the conversation.
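As a toy illustration of that auto-complete idea, here’s a minimal bigram model in Python. It counts which word follows which in a tiny made-up corpus and then generates text by sampling the next word. Real LLMs use neural networks over enormous datasets, but the core loop of predicting the next token from context is the same idea:

```python
import random
from collections import Counter, defaultdict

# Tiny "training corpus" for our toy language model.
corpus = "the cat sat on the mat the cat saw the dog the dog sat".split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word(word: str) -> str:
    """Sample the next word in proportion to how often it followed `word`."""
    words, counts = zip(*bigrams[word].items())
    return random.choices(words, weights=counts)[0]

# Generate a "sentence" one word at a time, just like auto-complete.
text = ["the"]
for _ in range(8):
    text.append(next_word(text[-1]))
print(" ".join(text))  # e.g. "the cat sat on the mat the dog sat"
```

The output can look superficially sentence-like, yet the model understands nothing; it is only replaying statistics it absorbed from its training data.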

The Hard Problem of Consciousness:

“There is no doubt that consciousness is the most puzzling and important aspect of our existence.” – David Chalmers

Philosopher David Chalmers coined the term “the hard problem of consciousness” to describe the challenge of explaining why and how we have qualitative experiences.

While we might be able to explain the mechanisms of the brain (the “easy problems”), understanding why these mechanisms lead to subjective experiences is more complicated.

If we can’t fully grasp human consciousness, the leap to understanding AI consciousness becomes even more daunting.

Potential Pathways to Machine Consciousness

Some theorists argue that consciousness arises from complexity. If this is the case, could there be a threshold of computational complexity where machines become conscious?

Others believe consciousness might be tied to specific processes or structures, suggesting that without replicating those exact processes, AI consciousness is unattainable.

The real world brings its own set of challenges to this debate, namely:

Can you fake consciousness?

There are AI systems today that are specifically designed to emulate human emotions. They react to external stimuli and questions in ways that seem to indicate self-awareness.

You can even try this yourself and introduce thought processes to help AI “think” more effectively.

For an LLM like ChatGPT, you can ask it to “think out loud” and write out its reasoning step by step. Doing this effectively introduces an external thought process and often leads to significantly improved performance on reasoning tasks. Prompt engineers call this technique Zero-Shot Chain of Thought (CoT).
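Here’s what that looks like in practice: a minimal sketch using OpenAI’s Python client. The model name, example question, and setup are assumptions for illustration:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

question = (
    "A bat and a ball cost $1.10 in total. "
    "The bat costs $1.00 more than the ball. "
    "How much does the ball cost?"
)

# Zero-shot CoT: appending "Let's think step by step" nudges the model
# to write out intermediate reasoning before giving a final answer.
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "user", "content": question + "\n\nLet's think step by step."}
    ],
)
print(response.choices[0].message.content)
```

Without the trailing instruction, weaker models often blurt out the intuitive (and wrong) answer of $0.10; with it, they tend to walk through the algebra and arrive at $0.05.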

But does this count as real awareness? How can we tell the difference?

Ethical Implications:

If we were to create an AI that claimed, unprompted, to be conscious, how would we treat it? Would it have rights?

The implications of these discussions are not just academic; they have deep ethical consequences. If machines were to attain real consciousness, our entire framework of ethics would need reevaluation.

It would be morally ambiguous, if not outright wrong, to treat a conscious AI as just another tool or piece of software.

On the other hand, what if we mistakenly attribute consciousness to AI systems that don’t possess it?

The Mirror of AI:

AI, in its goal to simulate human thought and behavior, holds a mirror to our understanding of ourselves. It challenges our notions of what it means to be conscious, to feel, and to exist.

If an AI claims consciousness, it’s not just a question about the AI, but also about us. What does it mean for our understanding of consciousness if a machine can replicate its manifestations so precisely?

What Comes Next?

Thinking machines are poised to become a staple in our society. With this evolution comes responsibility.

We’re facing a defining moment in our history, with opportunities for growth and challenges that we might not be ready for.

As we navigate toward an AI-centric future, caution is key. We can’t hastily assume machines are conscious without solid proof. At the same time, dismissing the possibility of AI consciousness outright is just as risky.

Reflecting on AI’s future, you can’t help but wonder: If machines say they dream or feel, where does that place us in relation to them?

If an AI claims to have its own consciousness, it forces us to question our own. Are we sure about the nature of our own consciousness?

Finding these answers won’t be straightforward, but it’s a journey we’re embarking on, whether we’re ready or not.

What do you think will come next?

Here are some great resources I found while doing research for this article:
https://www.frontiersin.org/articles/10.3389/fpsyg.2019.01535/full

https://www.youtube.com/watch?v=VQjPKqE39No

https://www.healthcareitnews.com/blog/sentient-ai-convincing-you-it-s-human-just-part-lamda-s-job


Josh Wade

Engineer and developer working in R&D for AI driven education. On Medium for insight and laughs.