Nerfed Intelligence and the Uncanny Valley of AI

Mark Palatucci
Jun 3, 2019


Why our robotic future will look more like Star Wars than Star Trek

Two of the world’s most beloved sci-fi franchises paint wildly different pictures of what our robotic future will look like. Star Trek asks us to imagine a universe with androids that are highly capable of complex physical and mental tasks but struggle to understand, let alone demonstrate, even basic human emotions.

In contrast, the droids of Star Wars tend to be emotive and highly specialized, but are considered somewhat disposable, never attaining the rank and privilege that androids enjoy in Starfleet.

So which depiction is more likely in the future?

While we may dream of one day creating robots that truly act, think, and feel like humans, the future we are headed toward is inevitably one where the robots are the emotive, personable, but ultimately limited droids of Star Wars.

The path to this robotic future isn’t going to be simple or easy. There are two main challenges that need to be addressed.

Should robots have emotions?

The first question we need to ask when imagining how the robots of the future will be developed is what role emotion or emotional intelligence (i.e. emotional quotient or EQ) plays in human-robot interaction.

I’ll share a secret most robot makers already know about the importance of giving machines human-like emotions: users report higher satisfaction, returns are lower, and people are more tolerant of mistakes and problems that occur. Emotion creates empathy and a bond: if a unit has an issue, customers will demand that their unit be fixed, not swapped for an identical new replacement. Ultimately, emotion improves usability and accessibility, which leads to easier adoption and a better business return.

The business factor is why we are already seeing some developers prioritize emotional elements.

The Google Duplex demo shows that computers can now not only replicate human speech with high accuracy, but also understand subtle emotional nuances and cues, making it virtually impossible to tell whether you’re talking to a human or a machine. We also saw this play out firsthand at my former company, Anki. The robots we made, Cozmo and Vector, were designed by roboticists and AI specialists working side by side with former animators from Pixar and DreamWorks. These robot characters can display thousands of unique animations, perceive human emotions, and react accordingly.

Today’s emotive machines are already beyond what was imagined in the Star Trek universe with fan-favorite android Commander Data. The emotional droids have arrived.

What about Artificial General Intelligence (AGI)?

The second question we need to consider is the role of IQ, or generalized intelligence. There is no shortage of hype about AI, and much of it is deserved. In the last five years, hard problems in computer science like voice recognition, computer vision, and language translation, problems that had stumped researchers for decades, have seen enormous progress thanks to a sub-field of AI known as deep learning. Deep learning takes an important step toward handling invariances, a technical term for the ability to recognize patterns at a more abstract level: a dog is still a dog regardless of the angle you look at it from, the color of its fur, or the lighting conditions of the room it sits in.
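To make “invariance” concrete, here is a minimal sketch, assuming a PyTorch/torchvision setup, of how practitioners commonly encourage it: feed a classifier randomly perturbed copies of each image so that the label stays the same regardless of angle, color, or lighting. The transforms, sizes, and placeholder image are illustrative choices of mine, not anything described in this article.

```python
import torch
from torchvision import transforms
from PIL import Image

# Random perturbations that change the pixels but not the label "dog".
augment = transforms.Compose([
    transforms.RandomRotation(degrees=30),                # viewing angle
    transforms.ColorJitter(brightness=0.5, hue=0.1),      # fur color / lighting
    transforms.RandomResizedCrop(224, scale=(0.7, 1.0)),  # framing / distance
    transforms.ToTensor(),
])

img = Image.new("RGB", (256, 256))  # stand-in for a real photo of a dog
views = torch.stack([augment(img) for _ in range(8)])  # 8 perturbed views
# A network trained on batches like this is pushed toward predicting the
# same class for every view, which is the invariance described above.
print(views.shape)  # torch.Size([8, 3, 224, 224])
```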

But deep learning is still remarkably brittle. It’s very hard for deep neural nets to generalize beyond the data in their training set, even though they can capture a remarkable amount of complexity in what they are shown. As a result, in the era of big data, these nets have proved the key to solving some of the biggest open questions in computer science. But it takes a ton of data, and these nets are still poor at transferring that knowledge to even slightly different tasks without re-training, fine-tuning, and extra data. In short, deep neural networks have improved our ability to abstract, but only a little. And that’s why other hard problems, like autonomous driving, remain elusive despite billions of dollars spent and decades of work.
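As an illustration of the re-training and fine-tuning step this paragraph refers to, here is a minimal sketch, again assuming PyTorch/torchvision: a network pre-trained on one task (ImageNet classification) only adapts to a slightly different task after we hand it new labeled data and train a new output head. The model choice, the five-class target task, and the random tensors standing in for data are all placeholders of mine.

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")  # knowledge from task A (downloads weights)

# Freeze the learned features: without an explicit fine-tuning step the
# network has no way to simply "apply" its knowledge to a new label set.
for p in model.parameters():
    p.requires_grad = False

# Replace the classification head for a hypothetical new 5-class task B.
model.fc = nn.Linear(model.fc.in_features, 5)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative update on fake data standing in for the new task's dataset.
x = torch.randn(4, 3, 224, 224)
y = torch.randint(0, 5, (4,))
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
print(float(loss))
```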

One of the great debates within the AI community is whether deep networks are enough to get us all the way to artificial general intelligence (AGI). One camp thinks we have all we need and therefore must start preparing as soon as possible for the societal impacts on jobs, safety, and so on. It doesn’t help when prominent individuals like Elon Musk tweet doomsday scenarios even though virtually no credible researcher thinks there is a clear and present danger. The other camp, which includes most of the field’s top scientists, thinks this is all very premature. Robot science remains harder than rocket science.

But regardless of where that debate lands, we might still want to consider what our future could look like as we progress toward AGI. Whatever the time frame, we are no longer constrained by hardware. Compared to the human brain in terms of total neurons and synapses, any large enterprise can certainly build raw computation and storage on par with our own biology. Generalized intelligence, therefore, is really a question of software: we don’t yet understand the math or the algorithms. But if we did, we could build it.
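A rough back-of-envelope calculation makes the hardware point concrete. The neuron and synapse counts below are commonly cited estimates, and the bytes-per-synapse figure is an assumption of mine, not anything from this article; the takeaway is only the order of magnitude.

```python
# Commonly cited estimates; bytes_per_synapse is an assumed illustrative figure.
neurons = 8.6e10           # ~86 billion neurons
synapses = 1.0e14          # ~100 trillion synapses
bytes_per_synapse = 4      # assume a 32-bit weight per connection
storage_tb = synapses * bytes_per_synapse / 1e12
print(f"Rough synaptic 'storage': {storage_tb:,.0f} TB")  # ~400 TB, well within a data center
```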

One thing that is clear, however, even if it’s debatable how much time we should spend on it now, is that as machines progress toward AGI, we will be confronted by an increasingly complex set of ethical challenges. Perhaps these should just be left to the philosophers, and getting governments or the mainstream robotics/AI community to debate them now is not worthwhile. Or perhaps we’re just a few years away from a robotics equivalent of the Asilomar conference, the famous 1975 biology meeting where the research community, recognizing an immediate danger, agreed to place strict limits on recombinant DNA experiments that were technically possible but too risky to pursue unchecked.

When we can create more abstract goals for machines, and AI can transfer knowledge between tasks in a manner more similar to humans, we open ourselves up to unintended consequences and emergent behavior. So the key will be putting constraints on the objectives of our systems. As our objectives get more abstract, so must our constraints.
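As a toy sketch of what “constraining the objective” can mean, and not a method described in this article, imagine scoring candidate actions by their raw goal value minus a penalty for violating a stated constraint. The function, names, and numbers below are made up for illustration; the point is that as goals get more abstract, the constraint term has to carry more of the weight.

```python
def constrained_objective(goal_value, constraint_violation, penalty_weight=10.0):
    """Score an action: progress toward the goal, minus a penalty for breaking rules."""
    return goal_value - penalty_weight * max(0.0, constraint_violation)

# An action that scores well on the raw goal but violates a safety constraint
# becomes less attractive than a slightly worse but compliant action.
risky = constrained_objective(goal_value=1.0, constraint_violation=0.3)  # 1.0 - 3.0 = -2.0
safe = constrained_objective(goal_value=0.8, constraint_violation=0.0)   # 0.8
print(risky, safe)
```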

Without constraints, we’ll be left with systems that are difficult to interpret, and it will become harder to predict their range of behavior. This doesn’t necessarily mean anything nefarious will happen; there may still be physical constraints on these systems, or constraints on their ability to adapt their models or change their software. This is where most of the misinformation and fear in mainstream media comes from. There are many fears of spontaneous intelligence, that our systems will somehow magically rise up, revolt against us, and replicate uncontrollably. But these ideas, like the idea of spontaneous generation of life that persisted for millennia, are not based on scientific evidence or on the experience of the experts and engineers in the field.

The Uncanny Valley of AI

AGI will ultimately make machines more robust and practical, giving them the ability to transfer knowledge easily between tasks with less training. Emotion will be required to make these systems more accessible and user-friendly. But then there will be a tipping point, such that:

  1. Robots/AI will be able to display displeasure, or even pain, when they are kept from their goal. This will create an uncomfortable situation in which humans feel guilt, shame, and other negative emotions as a result.
  2. If goals are very abstract, and the behaviors to achieve those goals are unique to each robot, we will get into questions of individual robot rights, and will again be left in an uncomfortable state when a robot can articulate, quite clearly, why its rights should be protected and why it shouldn’t just be considered a machine subject to human will. Yet it is still a machine, with all the bits and bytes and electrons and a power button that can, and often should, be turned off. The legal consequences might take decades to play out in the courts.
  3. Further, the shrinking gap between human intelligence and emotion and what machines can display will cause us to re-evaluate the value and uniqueness of the human species. What if we’re not that profound after all? What if the same underlying laws of physics, information theory, and mathematics that govern machine learning also govern human learning? What if we develop the mathematics and abstraction theory that can predict human behavior and describe the mechanics of our own biological machinery? What happens when ideas that have lasted for millennia about god, the soul, and our place in the universe no longer hold up under the scrutiny of scientific fact? Again, this leaves us in a very uncomfortable place as a species.

Of course, all of this can be avoided if we follow a simple design principle: we deliberately “nerf” the intelligence and emotion of our machines. Borrowing the term from video games, we make them less capable than they could be. We make them more narrow and constrained. We use limited emotion to improve the interface and accessibility, and we stop before it becomes creepy.

This is similar in many ways to what animators have learned about the Uncanny Valley, an aesthetic phenomenon in which we like things more as they become more human-like, up to a point at which we reject them as inauthentic and creepy (the valley).

We can use similar ideas to limit intelligence and constrain these systems to perform well on a small, relevant subset of tasks. Following the same principles animators follow to avoid the uncanny valley in media, we’ll avoid this new uncanny valley of AI. Computer graphics can render totally photo-realistic humans, but we rarely use that power in our animated films. Similarly, we won’t use all our scientific knowledge of artificial intelligence and emotion in the machines we build. We’ll deliberately use less.

This is what we’ll need in order to feel comfortable relegating robots to the tedious or dangerous tasks we ultimately hope our robotics work will take on. We’ll need agreements in the scientific community. And we’ll need treaties between governments, like the nuclear test ban. We should take comfort in the fact that if humanity has avoided large-scale nuclear war, we’ll be able to deal with this too. But no one can say when we really need to start worrying.

In this context, it’s easy to see why Elon Musk tweets what he does about AI, tweets that leave most scientists scratching their heads and muttering about how someone so smart could cause such a stir. Musk thinks in terms of possibilities, not constraints. That’s why we have electric cars, rocket ships, and hyperloop tunnels in LA. His refusal to see constraints is what makes him a great entrepreneur and innovator. It’s therefore easy to see where his view of AI comes from: ignore all the constraints and you’re undoubtedly left with some very uncomfortable questions. But those of us left to do the work, to push through the constraints to achieve progress, see the comments as woefully uninformed. So perhaps this is all premature. I think the moment we really need to worry is when the field’s top scientists see a path, much as the biologists at Asilomar did, and as the nuclear physicists of 1939 did when they warned of a path to the bomb.

So our robots will likely end up much like the characters in our animated movies: anthropomorphized, but definitely not human. In other words, essentially the highly specialized droids we’ve come to know and love in Star Wars, droids that are useful and can create bonds, but that we ultimately won’t feel too guilty about turning off or sending back to the shop.


Mark Palatucci

Roboticist, Co-Founder of Anki. Purveyor of AI and Machine Learning.