Psychology of the Turing test

To understand AI is to understand humanity’s edges.

As recently popularized by Hollywood, the Turing test has been a classic baseline for deeming a machine truly artificially intelligent. It’s simple: if a person cannot distinguish a conversation with a machine from a conversation with a typical human, then the machine is intelligent. No machine has yet done this reliably in an unconstrained setting.

Why is this problem so hard?

Today, machines are programmed through algorithms: they take input, manipulate it, and produce output, and they are far faster at this kind of well-defined information processing than humans are. Unlike much of the human experience, algorithmic success is clearly quantifiable.
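To make that concrete, here is a minimal sketch (the task and the success check below are invented purely for illustration): an algorithm takes input, manipulates it, and produces output whose correctness can be measured directly.

```python
# A minimal, invented illustration of the pattern above: take input, manipulate it,
# produce output, and measure success directly. The task (sorting) is arbitrary.

def sort_numbers(values: list[int]) -> list[int]:
    """Input -> manipulation -> output."""
    return sorted(values)

def is_success(output: list[int]) -> bool:
    """Algorithmic success is quantifiable: the output is either sorted or it isn't."""
    return all(a <= b for a, b in zip(output, output[1:]))

result = sort_numbers([3, 1, 2])
print(result, is_success(result))  # [1, 2, 3] True
```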

Alan Turing formalized algorithmic thinking for machines, paving the way for the general-purpose computer. Rather than physically wiring a device for one specific computation, a computer is a generic device that takes input and produces output according to rules of manipulation that can be reprogrammed as new algorithms. This was a rethinking of the way the computational game was played, and true AI might require an equally fundamental leap.

Encoding consciousness

An important, and arguably defining, feature of humanity is consciousness. True AI very likely requires it. How does our brain achieve it?

This is not yet fully understood, though studies have begun to find correlations between neural pathways and various types of consciousness.

A group of engineers recently took the connectome from OpenWorm, a project that mapped all of a roundworm’s neural connections into software, and applied it to a robot. There was no task-specific algorithm: the robot’s sensors stimulated artificial neurons in the same way a real worm’s neurons are stimulated, and those neurons in turn fired off other artificial neurons. The group reported that the robot behaved very much like a real worm. Would an exact neural mapping of the human brain (years away technologically, but eventually possible), then, create a conscious machine?
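As a rough sketch of the idea (this is not the actual OpenWorm code; the connectome, threshold, and neuron names below are invented placeholders), the behavior emerges from propagating sensor stimulation through a fixed map of neural connections rather than from any task-specific algorithm:

```python
# Toy illustration of a connectome-driven controller: no task algorithm, only a
# fixed map of which neurons excite which others. The map, threshold, and names
# are invented; the real OpenWorm model is far larger and biophysically detailed.

CONNECTOME = {                      # hypothetical: neuron -> neurons it stimulates
    "nose_touch": ["inter_1"],
    "inter_1": ["motor_back_left", "motor_back_right"],
}
THRESHOLD = 1.0

def step(sensor_inputs: dict[str, float]) -> set[str]:
    """Propagate sensor stimulation through the map; return the motor neurons that fired."""
    activation = dict(sensor_inputs)
    fired = set()
    frontier = [n for n, level in activation.items() if level >= THRESHOLD]
    while frontier:
        neuron = frontier.pop()
        if neuron in fired:
            continue
        fired.add(neuron)
        for target in CONNECTOME.get(neuron, []):
            activation[target] = activation.get(target, 0.0) + 1.0
            if activation[target] >= THRESHOLD:
                frontier.append(target)
    return {n for n in fired if n.startswith("motor_")}

print(step({"nose_touch": 1.0}))  # {'motor_back_left', 'motor_back_right'}
```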

Consciousness, by nature, must be tested indirectly, which makes it vulnerable to imitation; this is the classic problem of other minds, and it sits alongside the hard problem of consciousness: we cannot yet explain how physical processes produce subjective experience at all. By analogy, you could learn to read a foreign language aloud by learning all the sounds the letters make and the special rules for combining them, but that would not help you understand the meaning of the words. We know how to teach machines to follow rules, but we don’t know how to convey meaning, because we don’t know how our own brains do it.

An imitator of consciousness is known as a philosophical zombie: a being that behaves just like a conscious creature by following all the rules of responding to stimuli, but that does not actually feel anything. The concept is controversial because, if such a being is possible, it suggests that consciousness requires something more than physical systems (such as direct neural mappings).

As machines progress towards AI, we risk creating a perfect imitator, rather than true consciousness. We are terrified of such prospects: after all, a being without consciousness has no desires or intentions. We rely on these to predict behaviors, manipulate, and bargain. But a philosophical zombie merely blindly follows social rules, so how could it become dangerous? It could learn new rules that we do not control — is learning possible without consciousness?

Yes, but so far it is limited to goal-oriented learning, which machines have been doing for decades. A p-zombie would thus only work on optimizations with a clear purpose given to it by someone. This can be dangerous if the methods by which the goal is attained are unbounded by social rules (e.g. don’t kill anyone as a means to your goal), but a p-zombie would have to function under all of society’s rules to be indistinguishable from a conscious being. This fits well into algorithmic thinking: a p-zombie experiments with constrained manipulations of input until a goal is reached. So even a machine that fooled us into thinking it had consciousness would have to follow social rules to do so, making p-zombie machines fairly safe.
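A toy sketch of that kind of bounded, goal-oriented search (the goal, moves, and “rule” below are invented placeholders): the agent experiments with manipulations of its state until the goal is reached, but any move that breaks a rule is simply never available to it.

```python
# Invented illustration of goal-oriented learning bounded by rules: moves that
# violate the rule are filtered out before the agent can ever try them.

GOAL = 10

def allowed(move: int) -> bool:
    """Stand-in for a social rule: only small, bounded steps are permitted."""
    return abs(move) <= 3

def reach_goal(state: int) -> list[int]:
    """Repeatedly apply the allowed move that gets closest to the goal."""
    path = [state]
    while state != GOAL:
        candidates = [m for m in range(-5, 6) if m != 0 and allowed(m)]
        state += min(candidates, key=lambda m: abs((state + m) - GOAL))
        path.append(state)
    return path

print(reach_goal(0))  # [0, 3, 6, 9, 10]
```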

The most critical piece of consciousness

It’s highly likely, given historical progress, that we will eventually figure out the neurological basis of consciousness. But consciousness is not an on/off switch — it’s a complex system that varies in states (sleeping, daydreaming, active) and changes significantly with age (e.g. acquisition of the ability to store declarative long-term memories). So what is the minimal piece of consciousness that a machine needs to pass the Turing test?

We are born with instincts, which are, at least in concept, algorithmically programmable. These instincts include the ability to differentiate between people and objects (which even 3-month-olds have been shown to have), pointing, observation, experimentation, and so on.

We’re also born with evolutionarily beneficial desires (to feel full, to not be too cold or too hot, and so on) and the instinct to cry when these desires are not met. This is very effective at the start of our lives, but as desires become more complex and we gain more understanding of the world, we discover that interaction can lead to more effective desire fulfillment (e.g. babies point to what they want). Interaction first requires perception of attention, which does not yet map easily to algorithms.

Perhaps the most critical part of consciousness for a machine brain to attain is attention awareness.

The rest is, perhaps surprisingly, learned. More on that later.

The concept of attention

Babies perceive their environment through stimuli and perceive internal states through desires. They maintain constant awareness of these factors at various levels of attention.

Machines today idle differently than humans do. Both can “wait” for input before performing an action, but machines typically do so by constantly looping the question “Do we have the right input for this algorithm?”. Our brains, on the other hand, have resting state networks: neural pathways that fire while we are idle, keeping critical biological functions running (the equivalent of a machine’s background processes) and enabling mind wandering.
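A rough sketch of that contrast (the queue, timings, and “maintenance” task are purely illustrative): the foreground loop idles by polling for input, while a separate background task keeps running regardless, a crude analogue of background processes.

```python
# Invented illustration: a polling "wait for input" loop alongside a background task.

import queue
import threading
import time

inputs: "queue.Queue[str]" = queue.Queue()

def background_maintenance(stop: threading.Event) -> None:
    """Stand-in for background processes: runs whether or not input arrives."""
    while not stop.is_set():
        time.sleep(0.3)                  # e.g. housekeeping, logging
        print("...maintenance tick")

stop = threading.Event()
threading.Thread(target=background_maintenance, args=(stop,), daemon=True).start()
threading.Timer(1.0, inputs.put, args=("sensor reading",)).start()  # input arrives later

# The foreground loop idles by repeatedly asking:
# "Do we have the right input for this algorithm?"
while True:
    try:
        item = inputs.get(timeout=0.1)   # poll for input
    except queue.Empty:
        continue                         # nothing yet; ask again
    print(f"processing {item}")
    break

stop.set()
```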

Mind wandering, the stream of thoughts running through your head while you are at rest and not focused on anything in particular, is a core component of consciousness. We still know little about its cognitive function, benefits, and nature. Initial studies suggest that mind wandering contributes to the consolidation of long-term memories, the development of associations, creativity, the regulation of emotions, and even decision-making. Overall, it seems to allow learning through information organization and thought experimentation, which creates insights. Mind wandering is the brain optimizing itself.

So the brain has two primary modes of wakeful thinking: actively accomplishing a goal, or organizing and experimenting. We’re constantly context-switching between the two. A machine, with far more efficient data processing than ours, may not need to switch contexts at all, but the ability to do so could still prove useful.

To effectively interact with humans and further develop consciousness, the machine would need to recognize that humans have attention. How would such an idea be embedded? This brings us back to the unknown neural correlates of consciousness; no answers yet.

Theory of mind

Now let’s skip forward to a future where we’ve discovered the neurological basis for attention and have successfully reproduced this in a machine, which we’ve programmed with instincts. Is this all that is necessary for a conscious machine that passes the Turing test?

Based on our own psychology, actually, yes.

As far as we can tell, humans are born with little else. We take 3–4 years to develop theory of mind: the understanding that one’s own mind, with its own beliefs and intentions, is separate from everyone else’s. This explains the terrible twos. Many children on the autism spectrum struggle to fully develop theory of mind. Theory of mind leads to self-awareness, empathy, and an eventual understanding of all our social constructs. It is required for meaningful conversation among humans, and is thus critical for the Turing test. So how exactly is it learned?

Humans develop theory of mind through observation and interaction. Through experimentation, babies discover that attention differs between people: baby sees something interesting, baby reacts, baby points, you look, you react. Rather than treating you as just another part of the environment, babies begin to see you as an agent in your own right. This is the key to unlocking theory of mind.

Building on this, infants then learn that intentions differ between people: differing attentions and actions serve differing goals. Finally, infants learn how to achieve goals via imitative learning: they mimic actions and compare how effective each one is at fulfilling an intention.

The conscious machine

As theory of mind develops, children learn how to work within social rules to continue to achieve their goals, which become more complex and eventually (after puberty) enable the fundamental goal of reproduction.

A machine that starts with attention awareness and programmed instincts, placed in a human environment and treated just like one of us, should theoretically develop theory of mind and social skills just as babies do. The machine learns agency and attention differentiation; it realizes that all agents have their own sets of intentions; it learns through mimicry and becomes effective at fulfilling its goals. It picks up language and social skills. Having learned all of this, the conscious machine could pass the Turing test.

Now, just think what it will choose to tell us.
