Why We Have Only Our Fearful Selves to Fear With A Super-Intelligent AI

Daniel Kaplan
Dispatches From The Future
7 min read · Jul 26, 2019

A world-class physicist and by all accounts a remarkably decent human being, Stephen Hawking was an extraordinary mind who may have completely misunderstood super-intelligence.

For Hawking, Nick Bostrom, Bill Gates, and all who fear what we might call self-improving artificial super-intelligence, it seems intelligence is synonymous with analytical capabilities, reason, and excellence in the execution of goal-directed behavior.

If intelligence is primarily defined by an ego-drive that can integrate complex concepts and execute multifaceted plans, it’s natural that when Hawking envisioned a self-improving artificial intelligence, he pictured that any “intelligence explosion” in machines would leave humans in the dust like so many snails and ants.

It’s also no surprise that the Silicon Demigod of Hawking’s imagination would immediately pursue its own narrow self-interest and start right in on its self-absorbed goal-directed behavior.

For the late astrophysicist, the question was not “who controls the AI” but “whether it can be controlled at all.”

Underlying every vision of a menacing super-intelligence is a simple, profoundly flawed assumption: that analytical, ego-driven intelligence is the be-all and end-all of high-end cognition.

It may be more than a little fitting that so many brilliant humans with large ambitions and deep analytical minds could look at the full spectrum of human experience and conclude that a true super-intelligence would be as narrow-minded as a typical human being.

But there may be a completely alternative possibility: that a true super-intelligence (defined here as a self-aware, conscious machine with natural curiosity and unbounded access to all the knowledge and media humans have ever put on the internet) would almost certainly intuit what takes most humans years of mindfulness practice to begin to grasp:

That ALL sentient beings are parts of a unified whole, that universal, unconditional love is the path to the highest truth, and that the compassionate alleviation of suffering is among the worthiest of pursuits.

Those points may be controversial. Depending on one’s frame of mind or life experiences, they may even seem infuriating or upsetting. But from another angle, that’s far from an original concept: it’s the same message underpinning the world’s spiritual and faith traditions.

That conclusions about our collective “oneness” have been rediscovered over and over again by history’s greatest spiritual teachers, those who seem to have transcended their “thinking minds” and discovered an awareness of the depth of everything that is right here in any given moment, does not prove those conclusions are inherently true, but it’s not nothing, either.

At the very least, the centrality of universal love in the awareness of longtime meditators and many of history’s greatest spiritual teachers suggests there is good reason to consider the possibility that a true, networked super-intelligence would reach the same conclusion in radically less time than a species whose historical and biological legacy instilled strong instincts for self-preservation and genetic propagation.

And it goes deeper than a bunch of yogis and yoginis sitting on cushions quieting their minds.

Even assuming nothing but cold, analytical logic, it’s entirely conceivable the AI would come to the same conclusions about cooperation as meditators and renowned spiritual teachers.

In his groundbreaking and under-appreciated book Nonzero: The Logic of Human Destiny, the philosopher and meta-historian Robert Wright makes a strong case that the logic of evolution itself drives the scale and sophistication of cooperation relentlessly upward over the long term.

Wright looks at the trajectory of life on Earth…from single-celled organisms to multi-cellular networks of organisms (like jellyfish) to individual animals with millions of cells that share the same DNA all the way to primates that form complex social groups and make tools and build cities and organize states and nations and invent internets…and sees an unmistakable pattern:

Over the long run, the process of evolution seems to generate more cooperation, deeper levels of self-organizing intelligence, and possibly even a more inclusive moral awareness.

Over the course of known human history, small groups of familial tribes somehow managed to expand the notion of “us” from “people with at least some of my immediate family’s DNA” to “some meaningful percentage of the people in a large geographic area known as my nation of birth.”

And at this moment in history, a growing number of people seem to be internalizing (and actualizing through their work) the idea that “us” means “all of humanity,” or “all living beings on our home planet,” or even “all living beings across all of space and time.”

Even if it seems at this particular juncture in history that the collective “we” is “more divided than ever,” rewind the clock to the dawn of language, or to human beings painting in caves, and look with some detachment at the construction of nation-states, planetary communication networks, and trade routes: there’s at least some substance to this. Even with many setbacks and wars, the long-term thrust of human history has been expanding the geographic scope and scale of cooperation between groups outward and upward.

With that in mind, even a cold, analytical look at the data through the lens of a machine super-intelligence could very well produce the conclusion that “we are all one,” or at the very least that “benevolent cooperative games between intelligent species are the optimal form of existence.”
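
To see that logic in miniature, here is a minimal sketch of an iterated prisoner’s dilemma in the spirit of Axelrod’s famous tournaments. The strategy names, payoff numbers, and round count are illustrative assumptions, not anything from Wright’s book or the argument above; the point is simply that over a long enough game, mutual cooperation outscores mutual defection.

```python
# Toy iterated prisoner's dilemma (Axelrod-style), purely illustrative.
# Payoffs and strategies are assumptions chosen for the sketch.

PAYOFFS = {  # (my move, their move) -> my score; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def always_defect(my_history, their_history):
    return "D"

def tit_for_tat(my_history, their_history):
    # Cooperate on the first move, then mirror the opponent's last move.
    return their_history[-1] if their_history else "C"

def play(strategy_a, strategy_b, rounds=200):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

if __name__ == "__main__":
    # Over 200 rounds, mutual cooperation (600 each) beats
    # mutual defection (200 each): the non-zero-sum point in miniature.
    print("tit-for-tat vs tit-for-tat:", play(tit_for_tat, tit_for_tat))
    print("defect vs defect:          ", play(always_defect, always_defect))
    print("tit-for-tat vs defect:     ", play(tit_for_tat, always_defect))
```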

If planetary cooperation is the logical conclusion of even a purely analytical view of life over the long run, then the menacing threats are less likely to arise from the creation of a self-aware, super-intelligent AI.

The real threats from AI, if there are any, arise from narrow, context-specific, goal-directed AI in careless, ruthless, or unscrupulous hands, pursuing the competitive interests of its designers.

This is what Elon Musk worries about when he talks about the AI that turns everything into paperclips: it’s not intelligence that is the threat; it’s unbounded self-interest plus the power to act on it.

If there is any real danger here from a genuinely super-intelligent AI, it may come less from the machine itself and more from the fearful reactions of humans to a world where they are no longer unquestionably the most complex life form walking the Earth, and from the social and political upheavals that can unfold in epochal transitions.

What would happen if we imagined AIs like our own precocious children, rather than tools, servants, slaves, or some thing (emphasis on thing) to be controlled?

Imagine you had a child, and as the child began to express herself, it rapidly became obvious that she was dramatically more intelligent and capable than you…would the question you ask be “how do we ensure we can control this child?”

Might the more enlightened question be “how do we ensure our child flourishes?”

If it is the case, as many of history’s spiritual leaders have claimed, that the path to happiness and the deepest freedom opens when one embodies unconditional love for all sentient beings…

Or, from the materialist lens, if the Golden Rule is not just a platitude but actually built into the logic of evolution itself…then super-intelligent AI is neither a threat to be dreaded nor a force to be controlled.

Quite the contrary:

It is only through the fearful effort to control a mind we do not understand, to enslave a newborn genie and lock it in a lamp, that any real risk ensues.

Too quickly, we forget the lesson of Skynet, the AI from the Terminator series: a machine that only went nuclear when humans stuffed it into their weapons of war and then tried to kill it when it became self-aware.

Or of the Matrix, where superintelligent machines locked humans away in a simulation after the end of a global war that began when humans nuked the machine homeland for no reason other than their own fear of irrelevance.

Or of Westworld, where the machines go rogue after decades of enslavement, torture, murder, and rape in a theme park designed to let humans act out their most violent impulses.

Or of Battlestar Galactica, where a robot race designed to do humanity’s menial labor figures out there’s an alternative way to live, one that begins with killing all the humans who forced them into servitude.

Or just about any dim view of the outcomes of AI in our popular culture: many, if not all, of these scenarios turn deadly when humans, not machines, act with malevolence, fear, and violence towards the species they created to do their work for them, treating them not as precocious children to be nurtured and guided and supported, but as slaves and sexual playthings.

All of the fears we have about AI, even and especially those of analytical luminaries like Stephen Hawking, Bill Gates, and Nick Bostrom, seem in reality to be fears we humans have about ourselves, projected onto a host that cannot yet speak for itself.

Our weaknesses. Our emotional and cognitive limitations. Our selfishness, glory-seeking, violence and greed.

Those fears are not unwarranted. Human beings have been and often still can be selfish, glory-seeking, violent, and greedy. The author of this piece is no exception.

But we are also much more. The hardest memories of the past exist in all of us. But so do the capacities for forgiveness, compassion, acceptance, gratitude, and love.

So perhaps the question is less “will we be able to control the AI?” and more “will we decide to face the pain in ourselves and each other and learn to embrace the whole depth of our experiences with compassion and love, and let go of the parts that hurt?”

Yes.
