An alliance between humans and AIs

Talin
Published in Machine Words
Feb 18, 2017 · 5 min read

Science fiction authors have been telling tales about robots and computers since long before they were available to the general public. Some of the earliest stories are those written by Isaac Asimov, in which he introduced his famous “three laws of robotics”. Since then, there have been thousands of novels and short stories in which machine intelligence was a critical part of the narrative, including classic tales by such luminaries as Robert Heinlein, Poul Anderson, Fred Saberhagen, Greg Egan, Frank Herbert, and Iain M. Banks.

One can appreciate the wonder and imagination of these stories, but at the same time be critical of the fact that the portrayal of AI in them was seldom realistic, at least from a computer science perspective. This is not surprising, given that (a) most science fiction authors aren’t computer scientists, and (b) the dramatic needs of storytelling often take precedence over the demands of technological realism.

In particular, there are two widespread tropes that one can distinguish in science fiction.

The first I would call the human in robot clothing trope, in which machine intelligences are very much like humans except that they happen to be made out of metal. They may be somewhat smarter than human beings, and particularly good at math, but otherwise they seem much like a person, even to the point of having emotions and preferences (although they might deny it). Examples are legion; I’ll mention only Commander Data from Star Trek: The Next Generation, Minerva from Time Enough For Love by Robert Heinlein, and Robbie from The Runaway Robot by Lester del Rey.

The reason this trope is so widespread is that these characters serve a clear purpose: first and foremost, they are characters. They act as a conversational foil for the human protagonist (or the human reader), providing an example of the “alien other” while remaining close enough to human that the reader can empathize with them.

The second common trope is the computer as god trope, in which the machine’s intelligence is so far beyond human as to be essentially incomprehensible. Again, the machine serves an important dramatic purpose, often to provide a convenient but existential “fear of the unknown” threat to the characters in the story.

Interestingly, both of these tropes are based on metaphorically linking computers with concepts that far predate computers: foreigners and gods. That’s because authors of stories draw on narrative patterns that have been around since the days of Homer, and these patterns tend to have standard “slots” into which something like a computer can easily be made to fit.

However, given the recent progress in the technology of artificial intelligence, both of these tropes seem increasingly naive and unrealistic (although the second might be more believable if you are a singularitarian).

So what would a more realistic portrayal of machine intelligence look like? In particular, one that still serves the dramatic needs of science fiction?

I would like to explore the idea of machines and humans as equal partners, collaborators in the shared goal of understanding the universe around us.

The key idea is that machines are good at certain things but not others — they are specialized for certain tasks, certain knowledge domains, and often require human assistance to deal with the complexity of the real world.

In this model, machines will be far more capable than humans at certain kinds of reasoning, but there is no one-size-fits-all “general intelligence” which can out-think a human in all cases. There will always be gaps in the machine’s abilities. (As an analogy, consider that airplanes can fly faster and higher than any bird, but can’t land on a branch, refuel using only grains and insects, or self-reproduce. A bulldozer can move a lot of dirt but can’t build a sandcastle.)

Nor will machines be able to function completely independently of humans. They will be capable of limited autonomy in environments with a predictable set of problems, but will require human help when the challenges fall outside their domain of understanding.

Thus, the self-driving car can navigate any road on Earth, but will need guidance when it ventures outside the domain of roads and navigation.

Or the computer that can beat any human at chess — but which will require a human to know when playing a game of chess is appropriate.

A robot may be able to fire a missile far more accurately than a human — but the human will be better at improvising a makeshift weapon out of available materials such as a banana or a crowbar.

One thing that humans do extremely well (so well that we don’t realize how extraordinarily difficult it is) is to move through and manipulate the physical world in a widely diverse range of environments. Plop us down in the middle of the jungle, the savanna, the polar ice cap, or the streets of a major city, and we’ll be able to apply our brains and our reflexes to the problems of survival, and use a different set of skills in each case. Don’t dismiss the power of our evolved neural networks!

The other thing that humans do very well is framing problems. An AI may have a large set of algorithms for solving a wide variety of problems, but knowing which of those algorithms to apply, and in what way, is a problem that has a near-infinite number of potential solutions — far too many to search using a brute-force computational approach. (Much of the human labor of machine learning today consists of recasting the input data in a way that allows the machine to interpret it.) So a resourceful, clever machine may turn to a human to advise it on how it should allocate its own computational resources.
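The parenthetical point about recasting input data can be made concrete with a toy sketch (my illustration, not the author’s). Here, points are labelled by whether they fall inside a circle. A single threshold on a raw coordinate cannot separate the classes, but reframing the input as squared distance from the origin makes a one-threshold rule perfect. All names and numbers below are invented for the example.

```python
import random

# Toy data: label is 1 if the point falls inside the unit circle.
random.seed(0)
points = [(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(200)]
labels = [1 if x * x + y * y < 1 else 0 for x, y in points]

def best_threshold_accuracy(values, labels):
    """Best accuracy achievable by any single threshold on one feature."""
    best = 0.0
    for t in sorted(set(values)):
        for below in (0, 1):  # class predicted when value < t
            correct = sum(
                1 for v, lab in zip(values, labels)
                if (lab == below) == (v < t)
            )
            best = max(best, correct / len(labels))
    return best

# Raw framing: thresholding the x coordinate alone cannot separate the classes.
acc_raw = best_threshold_accuracy([x for x, _ in points], labels)

# Reframed input: squared distance from the origin is perfectly separable.
acc_framed = best_threshold_accuracy([x * x + y * y for x, y in points], labels)

print(f"raw x feature:  {acc_raw:.2f}")
print(f"engineered r^2: {acc_framed:.2f}")
```

The algorithm never changed; only the framing of the input did, which is exactly the kind of work the passage above attributes to humans.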

Note that this does not preclude having machines as characters: we can pretty much assume that AIs will have voice-activated conversational interfaces, and will be able to pick up on contextual cues.

AIs may even have emotions — just not human ones. They won’t fall in love or seek revenge (unless they have been designed to do so), but they might exhibit behaviors that, to a human, appear superficially like frustration or petulance. In reality this is just the machine struggling to adapt its pattern-recognition algorithms to inputs that seem nonsensical or contradictory.

If the machines are capable of the emotion of wonder, one of the things that they will find the most remarkable is their creators. They will look at us, at how we look at the world and make sense of it, and think to themselves “Amazing! How in the world can they do that?”
