Is it just me, or does the question of whether Artificial Intelligence is comparable to human intelligence come up more and more often, almost daily? I’m even encountering questions of whether computers have souls. I believe this (badly misguided) notion stems from two things: first, a basic misunderstanding caused by a poor choice of words; second, our tendency to mysticize things we don’t understand. I will try to show that in this case these are actually the same thing.

The history of AI goes back a very, very long way. The Greek god Hephaestus, god of blacksmiths, built not just weapons with godly powers and intelligence, but automatons that did work, guarded palaces and defended against attackers. Pygmalion sculpted a statue that Aphrodite brought to life as Galatea. Our stories have since then, and quite likely long before, contained all kinds of artificial creations breathed to life by gods, lightning, human ingenuity or fairy dust. It’s really no wonder that computers were seen as the next frontier for Intelligence from their very early days. Unfortunately, Intelligence means so many things — almost none of them achievable by a computer.

That is, Intelligence is a property. Or, as Rodney Brooks, one of the pioneers of AI, explains here, it is in fact a large collection of more-or-less related properties, almost never used to describe the same thing twice. Properties lend themselves well to mysticism, because most of them are intrinsic, extremely hard to measure, and even harder to objectively evaluate.

A very common comparison for AI is to say that humans learned to fly not by emulating birds exactly, but by observing them and inventing a way to achieve something similar. This argument is intended to show that though artificial flight isn’t the same behavior as natural flight, nor does it have the same objectives or performance envelope, it nonetheless takes objects as well as people airborne and lets them travel like birds. The fallacy of this comparison is that flight is an observable behavior, whereas intelligence is not.

I have previously suggested that we should instead call AI by a name more related to its observable actions: Automated Decisions. Indeed. But what of the mysticism? Do the Decision Automata have souls? Will they strip humans of all work? Will they cause a utopian nirvana, or a dystopian desert of lost meaning? What will become of us, when Automata are Intelligent?

These questions come not from a lack of understanding of AI, but from a lack of understanding of Human Intelligence. What’s going on is not that computers are becoming intelligent, but that many tasks we previously thought could not be successfully executed without human intelligence turn out to be fairly easy to automate. Very few of these newly automatable tasks actually mean jobs will be lost; on the contrary, the automation makes the job more effective, less expensive, and thus more desirable. None of them do anything beyond automating a routine task. This does affect people: it changes job descriptions and concentrates human attention on the parts which have proven either harder or more expensive to automate.

It does not take Human Intelligence to classify loans as risky or safe fairly successfully. In fact, an Automated multi-variate Support Vector Machine can do that work much faster than people, and, with carefully curated training data, with far more consistent reliability and a lower chance of biased decisions.
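To make that concrete, here is a minimal sketch of such an Automaton, not any real credit model: a small Support Vector Machine that labels loans as risky or safe. Every feature name, the labeling rule, and the data itself are synthetic, invented here purely for illustration.

```python
# A toy loan classifier using scikit-learn's SVM implementation.
# All features and the "ground truth" rule below are hypothetical.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(seed=0)
n = 1000
income = rng.uniform(20, 150, n)        # hypothetical: income, k$/year
dti = rng.uniform(0.0, 0.8, n)          # hypothetical: debt-to-income ratio
years_employed = rng.uniform(0, 30, n)  # hypothetical: years in current job
X = np.column_stack([income, dti, years_employed])

# Hypothetical rule: low income combined with a high debt load is risky (1).
y = ((dti > 0.45) & (income < 60)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Scale the features, then fit a multi-variate SVM with an RBF kernel.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X_train, y_train)
accuracy = model.score(X_test, y_test)
```

Note that the model learns only the pattern present in its training data; this is exactly the "carefully curated training data" caveat above, since a biased dataset would be automated just as faithfully as a fair one.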

It does not take Human Intelligence to detect which photos contain blue sky, or cats, or cars, though sometimes Automata can be fooled into labeling digital white noise as an armadillo. Unlike an average-intelligence human, that Automaton can not simultaneously describe the behavior of cats or explain why skies can be both blue and white. That takes other kinds of Automata.

It does not take Human Intelligence to keep a vehicle inside its lane and out of collisions with other vehicles while transporting people. At least, not in bright daylight, or as long as the other vehicles are not bicycles. Okay, sorry, those were snipes. Automated vehicles are making exciting progress, and may very well be 99% ready for wide deployment — but the last one (or few) percent may be extremely difficult to clear.

What so far does take Human Intelligence is dealing with exceptional situations. Automata will be able to master those which are merely rare, because unlike human intelligence, Automata can share data effortlessly across all instances, making the same training available globally. Our world, however, is filled to the brim not with merely rare situations, but with truly one-of-a-kind ones. Because the Automata are merely that, automating what has already been observed and decided upon before, they are weak at the human ability to transfer experience from one situation and apply it in a completely different context. That is, at Understanding.

Whether understanding can be achieved at all without simultaneously achieving those far more elusive properties like emotion or empathy is an open question. Count me among the dubious.

Meanwhile, let’s continue to Automate what can be automated, removing the burden of routine from the daily tasklist of Human Intelligence.