Human consciousness is a subtle beast: what would a conscious AI entail, and is it testable?
Do you, or do AI and neuroscience researchers, assume that conscious behavior in an AI is as good as consciousness itself? Here I propose a Turing-like test of genuine consciousness.
Is it just me, or is the actual perception and self-awareness of consciousness (e.g. in humans) important to you (or even a consideration) when you hypothetically assess whether an AI is ‘conscious’?
This is timely because, with ChatGPT and similar AI advances, the terms consciousness and sentience are being bandied around. There is also the Google AI researcher who last year believed that the neural-network chatbot LaMDA was conscious; in most people's opinion, though, that is only apparent consciousness, if that.
But even in general, as a goal for AGI (artificial general intelligence), what sort of definition of consciousness are AI people here working with?
Perception
I recently wrote again about my view of the ‘strong’ consciousness that we humans experience.
For example, as I pointed out even earlier, we humans can quite literally see and perceive the visual field as an entire scene. Whereas, as far as I know, my robot connected to a camera, despite having the pixel…