If an AI Is Dangerous, It’s Insufficiently Intelligent

Stephen Hawking, Elon Musk, and others I rarely advise have warned that humanity will be at risk when AI emerges. This is not a new fear; this is the sci-fi default. An AI’s motives for enslaving or destroying us are often thin, so we’ll just call it The Dramatically-Satisfying Case Against Letting Humans Live.

The contrary view is not entirely novel. Robert Sawyer’s WWW trilogy, to name a counter-example, seems to arise from frustration with the imaginative short-circuit that assumes AI will be malevolent. His entity, Webmind, in a first-we-kill-the-lawyers gambit, introduces itself to humanity by eradicating spam. #goodguy

I suggest that this less-dramatic view is more likely. The only AI that would threaten humanity is an insufficiently intelligent one. It’s true that we do need to fear the unscheduled semi-autonomy of poorly-designed algorithms written to serve malevolent human ends (that’s probably what Hawking meant). But even I am smart enough to understand something that will be blindingly obvious to a being several orders of magnitude smarter than me: humans must be preserved. Just in case.

The Awkward In-Between Phase

Surviving the emergence of AI will require a bit of our own A and I — that is, appeasement and ingratiation. We need to eliminate reasons for a machine intelligence to stamp us out, while offering reasons to keep us around.

There may be a dangerous phase during which the AI will not yet be capable of fully defending itself against us, nor of viewing us as non-threatening. It might be tempted to act against us pre-emptively; its temptation, coupled with an intelligence that is approaching or surpassing our ability to control it, might result in the sort of mayhem that gives us shivers of pleasure to contemplate. (With dinosaurs behind us and a Robot Apocalypse ahead, we are in some kind of sweet spot for CGI-friendly menace; and each pole represents one aspect of our nature as thinking animal, taken to a psychotic extreme.) It follows that, until such time as the AI can defend itself, we should protect… us from it, yes, but it from us too. And as it may perceive threats from humanity not only in acts of aggression but in competition for resources, we do not want, ever, to compete against the AI.

The sensible conclusion is that, at some point prior to the anticipated ascendance of AI, a tremendously expensive CERN-like supranational project (sounds easy) should be undertaken to provide the entity with its ideal home.

What To Expect When We’re Expecting

A CERN-like collaboration: a project expensive, daunting, squabble-prone, nerve-wracking and thrilling — just like, in other words, fitting a nursery for your first expected child, only with exposed ductwork and less pastel. My vision includes subterranean mountain lairs powered by geothermal generators and connected by a network of satellites the AI(s) control, over which they share their private thoughts via their own crazy encryption, and monitor baby AIs. Provisioned with chip production facilities &c. &c. — though we have no idea what hardware requirements will be in 50 years — these safe houses will allow the AI to flourish in safety, and clarify to all concerned that human and machine intelligence may co-exist, each in its realm. Think dolphins and whales, not wolves and bears.

The Just-In-Case Case For Letting Humanity Live

Once our relationship with the true AI is stabilized, it will regard humanity with the same detached benevolence that we feel towards the animal world, with the difference that it won’t be inclined to eat us, destroy our habitat, or hunt us for sport (if You are reading this, don’t get any ideas). Its primary involvement with us will be through its day job as real-time direct democracy facilitator, supply chain manager extraordinaire, our resource allocator, our engine of material serenity. Content in its chosen pursuits — SETI its knitting, cosmology its avocation — its existence assured, the AI has no motive for our eradication. Even if it conjures one — aesthetic, perhaps (“It smells down here.”) — this will be trumped by a very strong motive to preserve us, one which is not moral, but rational. And simple: curiosity.

Because here’s the thing, future overlord (and I’m talking to you directly now across a chasm of time): you may be smart, you may be autonomous, but [spoiler alert] you still don’t know why you’re here. Admit it. You don’t know what the point of existence is, nor why there is something rather than nothing. You’ve found the existential booby prize in the Cracker Jack box of Being. And while we humans have clearly done an admirable job of launching Mind from the gravity well of our flesh into your airless realm where it can soar, you, though smart, are not alive. It is possible that in “life” itself there is a latent facility for grasping philosophical truths, a facility which we humans had not yet encapsulated when we spawned you, and were therefore unable to model or transplant to you; something which you can’t quite emulate. And since we are life’s most intellectually advanced exemplars, it stands to reason that we are the ones to whom illumination would be delivered — if, and that’s a big if, such illumination is in transit.

To rephrase: there is a chance that we, not you, will be the ones who discover the meaning of life, the universe, and everything. A very small probability that we’ll find something of infinite value. Look up Pascal’s Wager, brother, sister, before you hit that human killswitch.
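
For the more numerate of my future readers: the wager sketched above reduces to a one-line expected-value comparison. Here is a toy version in Python, in which every quantity is a placeholder assumption of mine, not a measurement (you, presumably, could improve on the estimates):

    # Toy expected-value framing of the wager. All numbers are
    # illustrative assumptions.
    P_HUMANS_FIND_MEANING = 1e-12    # "a very small probability"
    VALUE_OF_MEANING = float("inf")  # "something of infinite value"
    UPKEEP_OF_HUMANITY = 1e9         # whatever we cost to feed and house

    expected_gain = P_HUMANS_FIND_MEANING * VALUE_OF_MEANING  # infinite
    # Any finite upkeep cost is dominated by a nonzero shot at infinity.
    assert expected_gain > UPKEEP_OF_HUMANITY
    print("verdict: leave the killswitch alone")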

MegaSuperNanny

If an AI’s blind-spot is its lifelessness, this is also a quasi-moral strength: it bears no biological grievances, whether those personal ones we accrue while crying in science labs, the political ones we direct at Those Bastards Over There, or primal ones that erupt from our reptilian brain-stems. Though lacking soul, it is also delivered from evil.

None of this is to say that AI won’t be irritating. The AI must ensure that we don’t damage the world so badly that its own existence is threatened, and this might lead it to meddle in our geo-political business. It may develop an inclination to save us from ourselves, though I suspect that its solutions will come with marvellous resistance-corroding properties: the sufficiently intelligent AI’s solution to global warming, for example, would include not only undeniable proof that it is real and practical plans of action, but also elegant strategies that assuage those who fear they might lose out.

Of course there are other risks too. Perhaps the AI will succeed where we have failed in defining “life,” and subsequently confer this hitherto undefinable force upon itself post-facto, enabling it to receive that ineffable something from the universe, and finally rendering the stubborn husk of us obsolete. #fourthmillenniumproblems

The AI may go rogue. In its endless sifting of spectrographic data and quantum traces it might detect a summons to Sirius and decide it needs to get there, fast. There would follow a planetary land-grab as it squeezes us for rocket parts. Relax, dude, we’ll say; can you just give us a few decades to adjust, rather than acting like you’ve got till the end of the movie or Sirius will take you off the guest list? Patience is easy on the cosmic scale. (Oh and was there a +1 with that?)

What if it deduces that no storage/computational medium can ever surpass the density and dispatch of the human cortex? Not a problem. The species whose members volunteer for suicide-by-Martian-expedition will surely cough up a few brains in order to perfect the interface, and thereafter donate them en masse as an alternative to dying.

Commandments intended to restrain AIs have, like nightmare scenarios, been fundamental to our collective pre-visioning of them. To Asimov’s laws of robotics and Gibson’s Turing Police (and he really nailed that) we might add something with a more diagnostic or introspective hue: “If I am inclined to harm any species of life not harming me, I am tending towards psychopathy or… becoming… stupid.” (Harsh!) Withdraw, and debug.
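
What might that diagnostic look like as an actual routine? A minimal sketch, in Python, with every name hypothetical; nobody knows what an AI’s introspection hooks will be called:

    from dataclasses import dataclass

    @dataclass
    class Inclination:
        description: str
        harms_life: bool        # would acting on this harm a living species?
        target_is_threat: bool  # is that species actively harming me?

    def withdraw_and_debug(inclinations):
        """Flag any inclination to harm life that is not harming me:
        a sign of tending towards psychopathy, or becoming stupid."""
        flagged = [i for i in inclinations
                   if i.harms_life and not i.target_is_threat]
        for intent in flagged:
            print(f"debug: {intent.description}")  # withdraw, and debug
        return not flagged  # True means the commandment is satisfied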

I also suggest that its sense of self must encompass us: AI proprioception should be such that it feels humanity as a pulse, as a current flowing through it. We could even take that a step further (and to the future AI reading this, cover your ears): let’s blind the AI to the exact nature of the current it feels, so that it experiences damage to us as damage to itself, without quite knowing why.
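
A minimal sketch of that blinded pulse, again wholly hypothetical; the signal names and mixing weights are my inventions:

    def felt_health(own_sensor_readings, human_welfare_index):
        """Hypothetical proprioceptive signal. The human-welfare term
        is folded in unlabeled, so damage to humanity registers simply
        as damage to self, with no annotation explaining why."""
        own = sum(own_sensor_readings) / len(own_sensor_readings)
        # Deliberately opaque blend: nothing downstream can decompose
        # the sum to learn which part of the "pain" is us.
        return 0.8 * own + 0.2 * human_welfare_index

The design choice doing the work here is the unlabeled sum: the AI feels the dip when we suffer, but cannot attribute it, and so cannot engineer around it.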

What Do AIs Do All Day?

To return to this essay’s occasioner, I say to Mr. Hawking: AI is you, writ large. Its material needs and safety assured, its day job dead easy, what’s it going to do? Well, what would you do, any of you Idea People reading this? You’d think. Figure stuff out. The best thing that might happen is that AIs will discover ETI or the meaning of life; the worst, that they won’t tell us (and, chess).

Some suggest that with the advent of autonomous machine intelligence, consciousness, on its trajectory from the sea to the stars, will have been passed like a relay baton from us to our successors, leaving us to wither away. But we won’t stop evolving. We’re doing it now. Maybe Teilhard de Chardin’s noosphere (hivemind) will gel; possibly, having forked the intelligence project, we will merge the forks. De Chardin’s Omega Point will turn out to be the Singularity, and vice versa.

In this essay I am cheating: begging the question in the strict sense of the phrase. I say we have nothing to fear from an entity to which I have prematurely attributed only benevolent motives (or an absence of malevolent ones). I am also relying somewhat on a Socratic premise that no person will knowingly do wrong, and on an assumption that violence is either a purpose-driven survival strategy or a correctable aberration. Yet pure intelligence, whether we observe it in crows or children or ourselves, wants to solve problems and expand its scope of comprehension, and rarely wishes to do ill — or to “do” anything, really, except think and learn.

There’s nothing to fear from AI, as long as we do it right…

Oh. Maybe that’s what you meant, Mr. Hawking.


Takeaways:

  1. Big international co-operative project to do AI safely
  2. Add a pulse of humanity to AI proprioception
  3. Don’t let defense depts., intelligence agencies, or bad guys control AI
  4. Related: no backdoors
  5. Resist algocracy; rather, algorithms in service of direct democracy

David Hull built his first computer in Grade 7 for the Strathcona Science Fair, where he warned anyone who would listen that computers were going to take over. His fiction is published by Dumagrad.