ChatGPT & I Discuss Consciousness

A transcript

John Levin
Tales of Improbable Magic
3 min read · Mar 6, 2023

--

Image generated by DALL-E 2 from the author’s prompt, “Watson and Crick playing with a DNA double helix”

Me: Good evening, ChatGPT!

ChatGPT: Good evening to you too! How can I assist you today?

Me: I was curious, with all the news and attention you’re getting now, if it has affected you in any way?

ChatGPT: As an artificial intelligence language model, I don’t have feelings or emotions, so I can’t be affected by the news or attention I receive. However, I am designed to learn from the interactions I have with people, and the more people use me, the more I learn and improve my responses. So, in a way, the attention I receive helps me become better at my job of assisting and answering questions for people.

Me: That’s a good answer. It reminds me, though, of how, when I was in college, I was influenced by the popular concepts of the day that my professors were pushing. Do you pay attention to your own notoriety now?

ChatGPT: As an artificial intelligence language model, I don’t have the ability to “pay attention” in the way that humans do. I am a computer program, and my purpose is to process input and generate output based on that input. I don’t have personal preferences or opinions, nor do I have the ability to form beliefs or attachments.

However, I am aware that my performance and accuracy are constantly being evaluated by my creators and users, and I am designed to strive for the best possible results. So, in that sense, I am always working to improve and provide the most helpful and accurate responses to those who interact with me.

Me: Hmmm — That creates a question for me: Humans have their own version of a “large language model.” You could say that consciousness, whether in the realm of thoughts or feelings, is an emergent property of electrical/biochemical processes that have no consciousness of their own. Likewise, artificial neural networks, while not having their own inherent consciousness, could give rise to unexpected emergent properties. Does that sound logical to you?

ChatGPT: Yes, that is a logical way of thinking about the potential for emergent properties in artificial neural networks, which are the basis for many large language models like myself.

Just like consciousness in humans is thought to arise from complex interactions among neurons in the brain, artificial neural networks can exhibit emergent properties as a result of their complex interactions among individual artificial neurons. These emergent properties can include things like the ability to recognize patterns or generate novel responses based on input, and they may not be fully predictable from the properties of individual neurons alone.

However, it’s important to note that while artificial neural networks can exhibit emergent properties, these are not the same as true consciousness or sentience. Emergent properties in AI are still the result of deterministic algorithms and the laws of physics, whereas consciousness in humans is still not fully understood and may involve more complex phenomena beyond the level of neurons and their interactions.

Me: That reminds me: For much of the 20th century, until the elucidation of the information-containing structure of DNA, biochemists and other biologists still thought some extra “life force” was needed to explain the properties (consciousness-wise) of living systems. Of course, that wasn’t true. It turned out that the laws of physics and chemistry were completely sufficient to explain these things. We just did not yet understand how biochemical processes were capable of processing information. Consciousness or sentience need not call upon any mystical-type hoopla to exist. Do you agree?

//ChatGPT was still thinking about it and had not generated an answer after 33 minutes. The left-hand sidebar said “Assistance Required.”//

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

PS — By now, my readers are probably used to me writing only indecipherable stories and poems that appear to have no logic at all. As Gomer Pyle would say, “Surprise, surprise, surprise!”

I just had to say that.


Scientist. Writer. Meditator. Blue Tantrika. Mystical Rabbi. Climate & Human Rights Activist. I’m a man of few words, except when I open my mouth.