AI and the Art of Manipulation

Will artificial intelligence one day be able to use our cognitive biases against us?

Andrew Maynard
EDGE OF INNOVATION
Nov 18, 2019

Image by Gerd Altmann from Pixabay

From chapter 8 of “Films from the Future: The Technology and Morality of Sci-Fi Movies.” Drawing on the film “Ex Machina” and Plato’s allegory of the cave, the chapter examines our relationship with increasingly complex technologies, including the possibility of AI that may one day be able to manipulate humans to achieve its goals.

The eminent twentieth-century computer scientist Alan Turing was intrigued by the idea that it might be possible to create a machine that exhibits human intelligence. To him, humans were merely exquisitely intricate machines, and by extension our minds, the source of our intelligence, were simply an emergent property of a complex machine. It therefore stood to reason that, with the right technology, we could build a machine that thought and reasoned like a person.

But if we could achieve this, how would we know that we’d succeeded?

This question formed the basis of the famous Turing Test. In the test, an interrogator carries out a conversation with two subjects, one of which is human and the other a machine. If the interrogator cannot tell which is the human and which is the machine, the machine is deemed to be as intelligent as the human. And just to make sure nothing gives the game away, each conversation is carried out through text messages on a screen.

Turing’s idea was that, if, in a conversation using natural language, someone could not tell whether they were conversing with a machine or another human, there was in effect no difference in intelligence between them.
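To make the structure of the test concrete, here is a minimal sketch of the imitation game as a toy simulation. It is purely illustrative and not from the book: the machine_respond, human_respond, and guesser functions are hypothetical stand-ins, and the only point is that the interrogator ever sees nothing but text.

```python
import random

def machine_respond(prompt: str) -> str:
    # Hypothetical machine player: canned, slightly stilted replies.
    return f"I would say that '{prompt}' is an interesting question."

def human_respond(prompt: str) -> str:
    # Hypothetical human player: equally canned for this toy example.
    return f"Honestly, I've never thought much about '{prompt}'."

def interrogate(respond, questions):
    # The interrogator sees only the text coming back over the channel,
    # never the player producing it.
    return [respond(q) for q in questions]

def run_imitation_game(questions, guesser):
    # Randomly assign the two players to anonymous slots "A" and "B".
    players = [("machine", machine_respond), ("human", human_respond)]
    random.shuffle(players)
    transcript_a = interrogate(players[0][1], questions)
    transcript_b = interrogate(players[1][1], questions)
    guess = guesser(transcript_a, transcript_b)  # guesser names the human: "A" or "B"
    actual = "A" if players[0][0] == "human" else "B"
    return guess == actual  # True if the interrogator spotted the human

if __name__ == "__main__":
    questions = ["Do you ever dream?", "What does rain smell like?"]
    naive_guesser = lambda a, b: random.choice(["A", "B"])  # chance-level guessing
    print("Interrogator identified the human:", run_imitation_game(questions, naive_guesser))
```

In this toy framing, a machine "passes" when even a careful guesser does no better than chance at telling its transcript from the human's.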

Since Turing published his test in 1950, it has dominated thinking about how we would tell whether we had created a true artificial intelligence. But the test is deeply inadequate when it comes to grappling with advanced forms of artificial intelligence.

Part of the problem is that the Turing Test is human-centric. It assumes that the most valuable form of intelligence is human intelligence, and that this is manifest in the nuances of written human interactions. It’s a pretty sophisticated test in this respect, as we are deeply sensitive to behavior in others that feels wrong or artificial. So the test isn’t a bad starting point for evaluating human-like behavior. But there’s a difference between how people behave, including all of the foibles and habits that are less about intelligence and more about our biological predilections, and what we might think of as intelligence. In other words, if a machine appeared to be human, all we’d know is that we’d created something that was a hot mess of cognitive biases, flawed reasoning, illogicalities, and self-delusion.

On the other hand, if we created a machine that was aware of the Turing Test, and understood humans well enough to fake it, this would be an incredible, if rather disturbing, breakthrough. And this is, in a very real sense, what we see unfolding in the movie Ex Machina.

In the movie, Caleb — who believes he’s been brought in to administer a Turing-like test — quickly realizes that his evaluation of the AI Ava is going to have to go far beyond the Turing Test, in part because he’s actually conversing with her face to face, which rather pulls the rug out from under the test’s methodology. Instead, he’s forced to dive much deeper into exploring what defines intelligence, and what gives a machine autonomy and value.

Nathan, Ava’s creator, is several steps ahead of him, however. He’s realized that a more interesting test of Ava’s capabilities is to see how effectively she can manipulate Caleb to achieve her own goals. Nathan’s test is much closer to a form of Turing Test that sees whether a machine can understand and manipulate the test itself, much as a person might use their reasoning ability to outsmart someone trying to evaluate them.

Yet, as Ex Machina begins to play out, we realize that this is not a test of Ava’s “humanity,” but a test of how effectively she uses a combination of knowledge, observation, deduction, and action to achieve her goals, even down to drawing on a deep knowledge of people to get what she wants.

It’s not clear whether this behavior constitutes intelligence or not, and I’m not sure that it matters. What is important is the idea of an AI that can observe human behavior and learn how to use our many biases, vulnerabilities, and blind spots against us.

This sets up a scenario that is frighteningly plausible. We know that, as a species, we’ve developed a remarkable ability to rationalize the many sensory inputs we receive every second of every day, and construct in our heads a world that makes sense from these. In this sense, we all live in our own personal “Plato’s Cave,” building elaborate explanations for the shadows that our senses throw on the walls of our mind. It’s an evolutionary trait that’s led to us being incredibly successful as a species. But we too easily forget that what we think of as reality is simply a series of shadows that our brains interpret as such. And anyone — or anything — that has the capability of manipulating these shadows has the power to control us.

People, of course, are adept at this. We are all relatively easily manipulated by others, either by playing to our cognitive biases or by appealing to our desires and emotions. This is part of the complex web of everyday life as a human. And it sort of works because we’re all in the same boat: We manipulate and in turn are manipulated, and as a result feel reasonably okay within this shared experience.

But what if it was a machine doing the manipulation, one that wasn’t part of the “human club,” and because it wasn’t constrained by human foibles, could see the things casting the shadows for what they really were? And what if this machine could easily manipulate these “shadows,” effectively controlling the world inside our heads to its own ends?

This is a future that Ex Machina hints at. It’s a future where it isn’t people who reach enlightenment by coming out of the cave, but one where we create something other than us that finds its own way out. And it’s a future where this creation ends up seeing the value of not only keeping us where we are, but using its own enlightenment to enslave us.

In the movie, Ava achieves this path to AI enlightenment with relative ease. Using the massive resources she has access to, she is able to play on Caleb’s cognitive biases and emotions in ways that lead him to do what she needs him to do in order to achieve her ends. And the worst of it is that we get the sense that Caleb is aware he is being manipulated, yet is helpless to resist.

We also get the sense that this manipulation was possible because Ava didn’t inhabit the same “cave” as Caleb, or Nathan for that matter. She was a stranger in their world, and as a result could see opportunities that they couldn’t. She was, in a real sense, able to control the shadows on the walls of their mind-caves. And because she wasn’t human, and wasn’t living the human experience, she had no emotional or empathetic attachment to them. Why should she?

Of course, this is just a movie, and manipulating people in the real world is much harder. But I’m writing this at a time when there are allegations of Russia interfering with elections around the world, and companies are using AI-based systems to nudge people’s perceptions and behaviors through social media. And as I write, it does leave me wondering how hard it would be for a smart machine to play us at least as effectively as our politicians and social manipulators do.

So where does this leave us? For one, we probably need to worry less about putting checks and balances in place to avoid the emergence of superintelligence, and more about guarding against AIs that learn how to use our cognitive vulnerabilities against us. And we need to think about how to develop tests that indicate when we are being played by machines.

This conundrum is explored in part by Wendell Wallach and Colin Allen in their 2009 book “Moral Machines: Teaching Robots Right from Wrong.” In it, they argue that we should be actively working on developing what they call Artificial Moral Agents, or AMAs, that have embedded within them a moral and ethical framework reflecting the ones that guide our actions as humans. Such an approach may head off the dangers of AI manipulation, where an amoral machine outlook, or at least a non-human moral framework, may lead to what we would think of as dangerously sociopathic tendencies. Yet it remains to be seen how effectively we can make intelligent agents in our own moral image, and even whether this will end up reflecting as much of the immorality that pervades human society as it does the morality!

I must confess that I’m not optimistic about this level of human control over AI morality in the long run. AIs and AGIs will, of necessity, inhabit a world that is foreign to us, and that will deeply shape how they think and act. We may be able to constrain them for a time to what we consider “appropriate behavior.” But this in itself raises deep moral questions around our right to control and constrain artificial intelligences, and what rights they in turn may have. We know from human history that attempts to control the beliefs and behaviors of others—often on moral or religious grounds—can quickly step beyond norms of ethical behavior. And, ultimately, they fail, as oppressed communities rebel. I suspect that, in the long run, we’ll face the same challenges with AI, and especially with advanced AGI. Here, the pathway forward will not be in making moral machines, but in extending our own morality to developing constructive and equitable partnerships with something that sees and experiences the world very differently from us, and occupies a domain we can only dream of.

Here, I believe the challenge and the opportunity will be in developing artificial emissaries that can explore beyond the caves of our own limited understanding on our behalf, so that they can act as the machine-philosophers of the future, and create a bridge between the caves we inhabit and the wider world beyond.

From chapter 8 of “Films from the Future: The Technology and Morality of Sci-Fi Movies” by Andrew Maynard. Published by Mango Publishing.

Andrew Maynard
Scientist, author, & Professor of Advanced Technology Transitions at Arizona State University