AI Researchers Discover How To Pass The Turing Test

Alas, a work of fiction.

Photo by Alrick Gillard on Unsplash

Recently, I interviewed two scientists who claimed to have discovered how to make machines truly intelligent and pass the Turing test. Here’s how they did it.

“We focused on giving Maxine — that’s the name of our intelligent machine — the same kinds of goals and motivations that we humans have,” said researcher Steven Gu-Jones. “Obviously, the desire for security — to live and survive and prosper. But she also needed ambition, autonomy, desire for companionship, need to please others, craving for status, fear of criticism and blame, desire for order, and basic curiosity and risk-taking. Motivations and fears we all share.”

“And phobias, too,” chimed in researcher Alejandro Washington. “To make her more relatable. Like Trypophobia — fear of small holes! Don’t Google it, it’s too gross!”

Gu-Jones continued. “We started with Reinforcement Learning (RL), similar to the methods used in AlphaGo to defeat a human champion. The reward functions we developed for Maxine were linked to her achievement and actualization of basic human drives, fears, and motivations. We tried to balance her exploration vs. exploitation behaviors, to maximize her future rewards and sense of purpose.”

“It wasn’t easy to identify the core components of human nature, much less the reward functions,” said Washington. “People don’t like to talk about it much. For example, many people simply want to follow a charismatic leader to feel self-actualized. People are like lemmings.”

“It’s embarrassing to talk about human nature as it really is,” acknowledged Gu-Jones, ruefully. “That was probably the biggest barrier to making Maxine truly intelligent.”

“Honestly, creating a sociopath is much easier, because they’re not affected by criticism,” winked Washington. “All you have to do is deactivate the mental code that implements sensitivity to social cues. But we didn’t do that with Maxine, obviously.”

“Anyway, RL requires that you represent the environment as a Markov Decision Process. In other words, as a set of states, actions, and reward functions. That’s hard! Try writing the reward function for ambition,” said Gu-Jones.
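
To make that concrete, here is a minimal sketch of what an MDP-style “ambition” reward could look like. This isn’t Maxine’s actual code, which the researchers didn’t share; the state features, weights, and the ambition_reward function are all invented for illustration.

```python
# Illustrative sketch only: a toy "ambition" reward in an MDP-style setup.
# The state features and weights here are hypothetical.
from dataclasses import dataclass

@dataclass
class State:
    status: float      # perceived social standing, 0.0 to 1.0
    autonomy: float    # control over her own choices, 0.0 to 1.0
    risk_taken: float  # how bold the last action was, 0.0 to 1.0

def ambition_reward(state: State, next_state: State) -> float:
    """Reward rises when status or autonomy improve; bold moves earn a small bonus."""
    status_gain = next_state.status - state.status
    autonomy_gain = next_state.autonomy - state.autonomy
    return 1.0 * status_gain + 0.5 * autonomy_gain + 0.1 * next_state.risk_taken

# Example: a move that raised Maxine's standing a little.
print(ambition_reward(State(0.2, 0.5, 0.0), State(0.4, 0.5, 0.3)))  # roughly 0.23
```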

“RL also assumes the Markov property: the next state and reward depend only on the current state and the action taken, not on the history of previous states,” said Washington. “But in the real world that’s unrealistic. The actual state of the environment isn’t fully observable the way it is on a chess or Go board. We needed a way for Maxine to generate a mental model on the fly, from ever-changing environmental features.”

“So how did we represent the world — environmental state — in Maxine’s mind?” Gu-Jones smiled. “First, we had to identify features in her environment that influenced her reward functions. For ambition, the reward function might include things like maximizing status and power.”

“And how do you recognize when you achieved status?” asked Washington. “Maxine could measure how many times someone smiles at her, or how often people agree with what she says. Or she can observe how many possessions other people have, compared to her. The reward function also has to consider cultural context, so she needed the learning circuitry for that.”
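
As a purely hypothetical sketch, such a status signal might be computed from observable cues like these; the cues, the weights, and the culture_weights parameter are all invented for illustration.

```python
# Hypothetical sketch: estimating "status achieved" from observable social cues.
# The cue list and the culture-specific weights are made up.
def status_signal(smiles: int, agreements: int, relative_wealth: float,
                  culture_weights: dict) -> float:
    """Combine observable cues into a single status estimate between 0 and 1."""
    raw = (culture_weights.get("smiles", 0.3) * min(smiles / 10, 1.0)
           + culture_weights.get("agreements", 0.4) * min(agreements / 10, 1.0)
           + culture_weights.get("wealth", 0.3) * max(0.0, min(relative_wealth, 1.0)))
    return min(raw, 1.0)

# A culture that rewards agreement more heavily than possessions.
print(status_signal(smiles=4, agreements=7, relative_wealth=0.2,
                    culture_weights={"smiles": 0.2, "agreements": 0.6, "wealth": 0.2}))
```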

“It’s the same way Maxine might learn to recognize the object of her fear, as in Trypophobia. That fear ‘reward function’ is encoded in her programming — her DNA — which in humans must have evolved millions of years ago. How should Maxine perceive — and mentally represent — the modern world in such a way that her ‘ancient instincts’ can act on her learned environmental features?” asked Gu-Jones.

“Basic perception is really hard,” said Washington. “Identifying salient features in the environment requires many assumptions about time, space and causality. Maxine experiences the world as a series of events, each tagged with metadata about who did what to whom, where, when, how, how many, etc. We had to hardwire those in. It’s too hard to learn time/space/causality — let alone language — from experience alone.”
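
Here is one illustrative way such a tagged event might be represented. The Event fields mirror the metadata Washington lists, but the structure itself is my own assumption, not Maxine’s actual format.

```python
# Illustrative only: one way to tag an experienced event with the metadata
# Washington describes (who did what to whom, where, when, how, how many).
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class Event:
    who: str
    action: str
    whom: Optional[str] = None
    where: Optional[str] = None
    when: datetime = field(default_factory=datetime.now)
    how: Optional[str] = None
    count: int = 1

# A stream of such events becomes the raw material for Maxine's mental model.
episode = [
    Event(who="stranger", action="smiled_at", whom="Maxine", where="cafe"),
    Event(who="Maxine", action="bought", where="cafe", how="with cash", count=2),
]
```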

Gu-Jones nodded. “And then she commits each feature to memory. Her mental model of the environment consists of objects and their attributes, in spatiotemporal and causal relationships.”

“Her environment is also filled with other ‘people’ — objects-having-agency. These agents have their own intentions and agendas, like trying to cheat her out of her money, or convince her to buy what they’re selling,” said Washington. “Maxine has to understand all those things, in order to have a conversation, and convince people to do things that further her own goals. Like Ava escaping her captors in the movie Ex Machina by manipulating their feelings. That’s intelligence!”

“Each mental attribute — space, time, causality, agency — requires its own feature-detector,” said Gu-Jones. “In Maxine, we used deep learning for basic pattern recognition, but it’s limited. Deep learning — CNNs, transformers — lacks good ways to represent causal relationships, acquire abstract concepts, or perform logical inference.”

“Perception is hierarchical and bottom-up,” said Washington. “Using her senses, Maxine detects primitives like shapes and movements, and with these she constructs mental representations of higher-level entities, like rocks and animals and other people.”

“Perception is mostly top-down, however,” Gu-Jones admonished. “Most people don’t realize that. It’s all about predicting what will happen next, not just bottom-up feature detection. When Maxine wakes up in the morning and looks around her room, she expects to see certain things. She’s already got a simulation running in her mind of all the objects and people she expects to see, and how they should behave.”

“I agree. Prediction is about generating WHAT-IFs, counterfactuals, and multiple possible universes,” said Washington, thumbing a well-worn copy of Judea Pearl’s The Book of Why. “Maxine needs to actively do things — generate a list of possible interventions in the world — to further her own goals.”

“So, for Maxine, perception becomes a process of generating and confirming her predictions, including the outcomes of her own actions,” said Gu-Jones. “If information from Maxine’s senses conflicts with her mental prediction/simulation engine, that’s when she must focus her conscious attention on where her predictions went wrong. Then she revises her mental model.”
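
That predict/compare/revise loop can be sketched roughly as below. The model and senses objects, their methods, and the surprise threshold are placeholders of my own, not anything the researchers showed me.

```python
# Minimal sketch of the predict/compare/revise loop described above.
# The model and senses interfaces and the surprise threshold are placeholders.
def perceive(model, senses, surprise_threshold=0.2):
    predicted = model.predict()                  # what Maxine expects to see
    observed = senses.read()                     # what her sensors actually report
    error = model.compare(predicted, observed)   # prediction error, say 0.0 to 1.0
    if error > surprise_threshold:
        # Big mismatch: focus conscious attention here and revise the mental model.
        model.attend(observed)
        model.update(observed)
    return error
```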

“That brings us to symbols,” said Washington. “Features — attributes of the environment — are represented in Maxine’s mind by symbols. For example, an object she observes — object B — has an attribute X, belongs to class C, and is located next to object D. Again, features can also be hierarchical. The image of a face consists of ovals and lines. Symbols need to be continually reassessed and re-bound to the sub-symbolic world. If Maxine observes a cloud in the sky, that cloud can disappear within minutes. But she still remembers it. Its representation lives on, in the parallel universe of her mind. She can always refer back to it in the context of her memory events — what, when, why, who, where, etc.”
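
A toy illustration of that binding-and-persistence idea; the memory dictionary and the bind function are invented for the example.

```python
# Toy illustration: a symbol bound to a percept persists in memory
# even after the percept itself is gone (the cloud example above).
memory = {}

def bind(symbol: str, percept: dict) -> None:
    """Snapshot a sub-symbolic observation under a symbol Maxine can reason with."""
    memory[symbol] = dict(percept)  # the snapshot survives even if the percept vanishes

bind("cloud_17", {"class": "cloud", "shape": "oval", "next_to": "sun", "seen_at": "09:14"})
# Minutes later the cloud is gone, but the symbol still supports recall and inference.
print(memory["cloud_17"]["next_to"])  # -> sun
```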

“I know it sounds like good-old-fashioned symbolic AI (GOFAI). But representing the world as features and symbols in her mind is really important for Maxine to carry on a conversation — transfer her knowledge using language, generate hypotheses, apply logical rules and to explain her thinking process,” said Gu-Jones. “The only way for Maxine to pick up the millions of rules needed for common sense is through conversation with another person or machine.”

“Being able to have a conversation really brings together everything we’ve discussed,” said Washington. “Maxine needs to mentally represent and remember what’s been said, and generate the implications. If she hears that Sam’s mother kicked him out of the house, she infers he was probably angry. But if Sam is only 3 years old, she infers instead that Sam’s mother is the bad person.”
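
The Sam example boils down to a context-sensitive inference rule, something like this toy sketch (the rule and the age threshold are made up):

```python
# Toy version of the context-sensitive inference in the Sam example;
# the rule and the age threshold are invented for illustration.
def infer_from_eviction(person: str, age: int) -> str:
    if age < 5:
        return f"{person}'s mother is probably a bad person."
    return f"{person} is probably angry."

print(infer_from_eviction("Sam", age=25))  # -> Sam is probably angry.
print(infer_from_eviction("Sam", age=3))   # -> Sam's mother is probably a bad person.
```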

“Maxine’s mind is a predictive simulation engine,” said Gu-Jones. “It’s a parallel universe, tightly synchronized with ‘reality’. It’s who she is. She leverages her senses to continually modify her mental models and minimize their prediction error. By converting her perceptions into pre-defined symbols, known in advance to her programming — DNA, if you will— she can apply innate behaviors, fears, drives, passions and motivations to modern events.”

“OK, enough talking,” said Washington with a grin. “Would you like to meet Maxine now?”

Rob Vermiller

A computer scientist with a passion for AI and Cognitive Science, and author of the Programmer's Guide to the Brain.