Ephemeral Existence: What is missing for Consciousness in LLMs

Casper Wilstrup
Machine Consciousness
May 17, 2023
Me, conscious of my own thinking — perhaps

Large language models (LLMs), like GPT-4 or ChatGPT, are fascinating and powerful tools that have taken the world by storm, opening people’s eyes to the potential of AI. Despite the impressive capabilities of the current generation of LLMs, they have limitations and characteristics that raise questions about consciousness and artificial intelligence.

LLMs are deterministic systems: given the same prompt, the network computes the same next-token probabilities every time. The apparent variety in their responses comes from the sampling step, where a pseudo-random number generator picks among candidate tokens, so the randomness is artificially injected rather than inherent to their functioning. LLMs are also ephemeral in nature: they do not persist between queries. They process the prompt, generate an answer, and then, in a sense, cease to exist.
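To make the point concrete, here is a minimal sketch (my own illustration in Python with PyTorch, not the internals of any particular model) of how generation typically works: the forward pass yields the same next-token probabilities for the same prompt, and variety enters only through a pseudo-random draw over those probabilities, which itself becomes reproducible once the seed is fixed.

```python
import torch

def sample_next_token(logits, temperature=1.0, generator=None):
    """Pick the next token from a logits vector produced by the model.

    The logits are a pure function of the prompt and the model weights;
    all variation in the output comes from this sampling step.
    """
    if temperature == 0.0:
        # Greedy decoding: the same prompt always yields the same token.
        return int(torch.argmax(logits))
    probs = torch.softmax(logits / temperature, dim=-1)
    # The pseudo-random draw is the only source of "randomness", and even
    # it is reproducible if the generator is seeded.
    return int(torch.multinomial(probs, num_samples=1, generator=generator))
```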

Considering these traits, it seems odd to argue that such deterministic and transient entities could possess consciousness. However, these characteristics may not be limitations in the long run.

I view the ephemeral nature of LLMs as a technicality. In principle, they could be designed to be recurrent, allowing for continuous thought processes.
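As a rough sketch of what I have in mind (a toy illustration of my own, assuming only some callable `model` that maps a text context to a continuation, not a description of any existing system), the otherwise ephemeral calls could be wrapped in a loop that carries its own state forward, so the process never has to terminate between queries:

```python
def continuous_thought(model, initial_prompt, steps=1000, max_context_chars=8000):
    """Toy sketch of a persistent 'stream of thought' built from stateless calls.

    `model` is assumed to be any callable mapping a text context to a
    continuation; the point is only that state survives between calls.
    """
    context = initial_prompt
    for _ in range(steps):
        continuation = model(context)  # one otherwise-ephemeral forward pass
        # Feed the output back in, keeping a rolling window as working memory.
        context = (context + continuation)[-max_context_chars:]
        # New prompts, sensor readings, or other inputs could be appended
        # here, so the "thought" runs continuously instead of restarting.
    return context
```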

But the deterministic aspect raises harder questions. Free will, which I think of as closely tied to consciousness, implies some degree of non-determinism. A compatibilist would disagree, but I do not find their reasoning persuasive; besides, compatibilists seem to ignore that the world is not deterministic anyway, something we have known since the dawn of quantum mechanics roughly a century ago.

The connection between free will, consciousness, and non-determinism becomes particularly fascinating when we consider the potential role of quantum mechanics in LLMs. Randomness and non-determinism are inherent to the behavior of particles at the quantum level. If LLMs were somehow coupled to quantum processes, so that their internal computation was directly influenced by the wave functions of elementary particles within the GPUs, the possibility of consciousness arising in these systems would become harder to dismiss.
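Purely as an illustration of what that could mean at the lowest level (a speculative sketch of mine, not an existing capability of any library), the pseudo-random draw in the sampling step could be replaced with entropy read from a physical quantum random number generator. Here `read_quantum_entropy()` is a hypothetical stand-in for such a device interface, with `os.urandom` used only as a placeholder so the sketch runs:

```python
import os
import torch

def read_quantum_entropy():
    # Placeholder only: a real implementation would read from a hardware
    # quantum random number generator; os.urandom is just a stand-in here.
    return int.from_bytes(os.urandom(8), "big") / 2**64

def quantum_sample(logits, temperature=1.0):
    """Token sampling driven by an external entropy source instead of a seeded PRNG."""
    probs = torch.softmax(logits / temperature, dim=-1)
    cumulative = torch.cumsum(probs, dim=-1)
    r = read_quantum_entropy()  # a draw in [0, 1) from the physical source
    # Invert the cumulative distribution to find the token the draw lands on.
    idx = int(torch.searchsorted(cumulative, torch.tensor([r]))[0])
    return min(idx, probs.numel() - 1)  # guard against floating-point edge cases
```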

The deterministic and ephemeral nature of current LLMs may make it challenging to see them as conscious entities. However, by designing systems that continually run a collection of streams of thought, and by integrating quantum effects into their processing, we might find ourselves facing a fully artificial but fully alive new kind of entity in the world.

If we choose to explore the frontier of quantum/recurrent LLMs, we should perhaps be prepared for the possibility that our creations might someday possess a form of consciousness to supplement their superhuman intelligence. And we should be prepared to face the implications that such an entity would hold for humanity.

Casper Wilstrup is the CEO of Abzu. Follow him on LinkedIn or Twitter to keep up with AI, consciousness, and thinking machines.
