The Natural Intelligences (NIs) behind AIs

[Image credit: AD Teasdale / Creative Commons CC BY 2.0]

Why not call a spade a spade? The reason I object to calling (this crop of) AIs “AIs” is that the term hides the actual legal persons whose goals and interests and beliefs they are pushing. Instead, we should see them as the digital dummies of newfangled ventriloquists.

We should not miss the misdirection in this trick. Sleight of hand worthy of Vegas. Embracing these machine-human interactions as an innovative technology (which they certainly are) blinds us to their much larger impact as a permissive transfer of wealth and grant of ownership (for which they are the automated conduits).

Our smart response is not blind fear. Nor blind faith.

The asymmetry of these transactions is huge. Small doses of ephemeral convenience and necessity (non-capital value for us) are paid for with enormous, permanent grants of data and information (capital goods for them). And that’s on top of whatever money changes hands for the goods and services being mediated.

When you converse with most “AIs” (simple, small, large or complex) you are not having a one-to-one interaction, much less a one-on-one conversation. (That is what is being simulated with increasing fidelity. But chatbots do not really chat; they simply bot. Remember that! And they bot on behalf of someone else, not you.) You are having a one-to-many recorded interview with a large number of people working on behalf of a handful of actual owners and controllers.

Consider more than the narrow technological aspect of these digital encounters. And compare the situation with more familiar ones. How would you feel if, every time you entered your favorite store or supermarket, a voice announced: “Welcome! This visit may be recorded for quality purposes”? And then whispered in your ear, in the equivalent of the small print: “As well as analyzed and sold so that we and others may better target you in the future.” If you think that’s creepy in a public environment, how about in your home, your office, your car, etc.?

Would you be my friend if I always recorded our conversations? But that is exactly what is happening. You are simply not being told.

These super chatbots do not really chat; they simply bot. And they bot on behalf of someone else, not you.

—

In this “Turing Test” we are not only mistaking a computer for a human. We are, in an amazing reversal, mistaking ‘a massive group of commercial human agents working for an actual human owner’ for a single-point interaction with a simulation of an intelligent, independent moral agent. There is a flesh-and-blood (and cash) reality behind the digital artifice. Any serious Darwinian or military analysis of this confusion and the behaviors it promotes would reveal them as maladaptive. And counterproductive for us.

There is a flesh-and-blood (and cash) reality behind the digital artifice.

There is no need to ascribe evil intent, but the results may be, well, yet another road paved with good intentions. Frequented by all sorts. I’m not suggesting a conspiracy. However, the enormous scale of the asymmetry and the automated, opaque nature of these mechanisms inherently put us at a disadvantage. We are made vulnerable.

Counter-intuitive as it seems, the Public (that means you) trusts AIs in ways it would not trust real people. You’d think the term “AI” would inspire suspicion. And when you ask the question, as I have done informally, the answer tends to be…

“No, I wouldn’t trust a robotic intelligence with my ______________.”

But what we say when asked and what we do as consumers of goods, services, information or ideas seem quite contrary. As much as everyone seems to abhor the terms ‘consumer’ or ‘user,’ substituting the oh-so warm-and-fuzzy ‘human’ for them only shows a low survival IQ and a lack of street-smarts. As designers, design-thinkers, policy-makers or journalists, if we are truly working on your behalf, we should never be so naive.

And neither should you. Confusing the roles we play with who we are, with our personal identity, is not a productive or creative strategy. I am, of course, first a human being and then my self, a specific individual. But when I step into a commercial environment, refusing the label of my role as a customer and consumer is not smart. If I work in a factory I am a worker. And I should insist on being treated as a human being. But forgetting my real commercial relationship with the factory owners and managers will only weaken my case for humane treatment and human rights.

As David Quammen wrote in his wonderful book Monster of God: “Among the earliest forms of human self-awareness was the awareness of being meat.” Our distaste for the label should not blind us to its accuracy when we are around carnivorous agents in any environment, whether the jungle, the shopping mall, the job or stock market, the elections or the internet.

As much as everyone seems to abhor the terms ‘consumer’ or ‘user,’ substituting the warm-and-fuzzy ‘human’ for them only shows a low survival IQ and lack of street-smarts.

The journalist and writer Steven Levy does us all a great service by covering the many sides of this question. (See his pieces about AI efforts at Amazon and Apple.) Does being (commercially) smart preclude “not being evil”? Do good guys finish last, after all? Or are all bots not created equal?

Does being (commercially) smart preclude “not being evil”?

I don’t believe so. But something needs to change soon to support that optimism. How can any design that lowers our intelligence or street-smarts be considered human-centric? I posted about this in “AI is A but not I (yet)” and “The Right to Privacy,” and I include below a graphic of the Four Connected Design Standards…

But design principles alone are not enough. How do we help everybody notice, identify and remember what kind of transactions they are really involving themselves (or their children) in when conversing with “AIs”?

It’s much more complex than the Wizard of Oz. There are so many people behind bots, machine-learning systems and AIs. Noah Bierman writes revealingly in the Los Angeles Times about the many “theys” involved:

Rachelle Watson, a 31-year-old schoolteacher sipping a whiskey with a friend at the adjacent bar, did not notice what was going on a few feet away, despite banners billing the epic “Brains vs. Artificial Intelligence” showdown.

Watson had just lost $60 playing blackjack and slots, so the prospect of losing to a machine did not impress her. “They beat us every day,” she said.

But Ms. Watson, no relation to IBM’s Watson I presume, still knows very well who “They” are. History repeats itself. A community of AI researchers is doing basic science while others apply that research with very different motives. Good, bad and indifferent.

Even the experts say things that reveal the degree of everybody’s current ignorance about what is being done versus what is happening. This equivocal way of talking is truly breathtaking: “It has a very sophisticated model,” said Sandholm, the lead developer. “It just doesn’t know that it’s bluffing because it doesn’t know the word ‘bluff.’” (Quoted in Mr. Bierman’s L.A. Times article.) If that is an attempt at humor, it’s very witty. But perhaps not what one would hope from academics working on potentially dangerous technology.

The “it” above refers to Claudico, a poker-playing “AI” that was botting (verb, to bot: to work on someone else’s behalf) for researchers in Vegas on its way to applications elsewhere. The “cute” Latinate nickname Claudico (Latin for “I limp”) betrays either a naïveté about autonomous automation’s potential for mayhem or a dark, knowing humor. Emperor Claudius may have been a laughingstock because of a limp and a speech impediment. But once he gained power he proved ruthless enough. He was also the first emperor who gained power not from the Senate but by paying off the Praetorian Guard. Hardly the most humanist or humane nomenclature for a benign automated agent.

Daniel Dennett has astutely written: “The real danger, then, is not machines that are more intelligent than we are usurping our role as captains of our destinies. The real danger is basically clueless machines being ceded authority far beyond their competence.” But I would place the real danger well before any pre-Singularity chaos. The danger is now. Humans are still the most dangerous animals around. And we prey on ourselves. Long before the owners of these bots lose control over them, it is we who are losing control over our economic relationship with those all-too-human owners.

How can any design or application of technology that lowers our intelligence or street-smarts be considered human-centric?

[ To be continued… ]

•••

NOTE 1: Again, this is not about paranoia, conspiracy theory or being a Luddite. The point is informed consent and symmetry. Being a Luddite or a conspiracy theorist will not help; it never has. Paranoid responses actually distract us from the real agents in the environment that may help or harm us.

NOTE 2: A good read on the AI field is “A primer on Artificial Intelligence (AI)” by David Kelnar on Medium. Note especially his distinction between AI in general and Machine Learning in particular. A very readable summary of the issues, techniques and applications of “AI.”