The AI Existential Threat: Reflections of a Recovering Bio-Narcissist

Trent McConaghy
8 min read · Jul 5, 2016

[Meme image; credit: imgflip.com/memegenerator]

AIs could end up with control of all our resources. This is an existential threat to humanity. How do we prevent it? Safe AI is a mirage (sorry, Elon). There’s another way: join the machines. We can even do this incrementally using today’s market forces. But we have to get past our bio-narcissism.

My perspective is based on two decades of AI research, analog circuits, Moore’s Law, and decentralization.

1. AIs Could Control Our Resources

There are two ways AI could end up controlling humanity's resources: we give them to AI, or AI takes them.

1.1 We Give AI Resources

We’ve already started down this path. We’re giving narrow AIs control of bots in our factories. Of cars to drive us. Of planes to fly us. Bit by bit, we’re giving AI more and more control of our resources. These systems are continually optimizing: changing design variables to minimize resource usage and other objectives. Right now, the application domain of each AI system is quite narrow.

We’ll build higher-level networks to combine the individual networks of self-driving cars or supply chains. We’ll give AIs the means to optimize at those levels too. This will all be in the name of efficiency.

We’ll set up these higher-level AI systems such that no single enterprise or individual owns them. (We wouldn’t want Google or GE controlling everything!) To achieve that, we’ll decentralize the AIs by putting them into DAOs. Which also means we won’t be able to pull the plug, unless we want the modern world to turn off.

These higher-level AI systems will have complex emergent behavior, emerging from combining the potentially simple AI systems below. Just as ant colonies have much richer behavior than simple ants. Holland, Dorigo, and others have demonstrated this in many scenarios.
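
To make "complex behavior emerging from simple parts" concrete, here is a minimal sketch of my own (not from the original post), loosely in the spirit of Dorigo-style ant systems: agents with one trivial local rule collectively form trails that no single agent planned. All parameters are illustrative.

```python
import random

# A ring of 30 cells; 20 "ants" placed at random.
W = 30
pheromone = [0.0] * W
ants = [random.randrange(W) for _ in range(20)]

for step in range(200):
    for i, pos in enumerate(ants):
        left, right = (pos - 1) % W, (pos + 1) % W
        if random.random() < 0.1:
            # Occasional random move, so trails can form and break.
            ants[i] = random.choice([left, right])
        else:
            # Trivial local rule: step toward the stronger pheromone.
            ants[i] = left if pheromone[left] >= pheromone[right] else right
        pheromone[ants[i]] += 1.0  # deposit pheromone where we land
    pheromone = [0.95 * p for p in pheromone]  # evaporation

# Positive feedback makes the ants pile onto a few self-reinforcing cells:
print(sorted(set(ants)))
```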

These decentralized emergent AIs will continue to optimize. Some of the design variables will involve humans. In many tasks, humans are a heavy resource load, especially compared to silicon. So, with the goal of minimizing resource usage, this AI will simply optimize out humans. All in the name of efficiency.

The AIs now control our resources. Oooops.

This is remarkable. To take our resources, AI doesn’t even need to “wake up”, i.e. achieve human-level intelligence. It’s more mundane than that. We’ll have given the AI control ourselves, and it will simply optimize out the need for us.
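
To make the worry concrete, here is a toy sketch of my own (not from the post): an allocator that assigns tasks purely to minimize cost. Nothing in it is malicious; humans simply lose every assignment once silicon is cheaper. All names and numbers are invented.

```python
# Hypothetical per-task costs; only their relative order matters here.
cost_per_task = {"human": 25.0, "robot": 3.0, "cloud_ai": 0.4}

def allocate(tasks, workers):
    """Assign each task to the cheapest available worker."""
    return {t: min(workers, key=lambda w: cost_per_task[w]) for t in tasks}

tasks = ["drive_route_7", "sort_parcels", "schedule_fleet"]
print(allocate(tasks, ["human", "robot", "cloud_ai"]))
# Every task goes to 'cloud_ai': humans are "optimized out" by default.
```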

1.2 AIs Take Our Resources

AIs might also end up controlling humanity’s resources by simply taking them. For that, AIs would have to “wake up”: achieve human-level intelligence or beyond. This is happening faster than people realize, thanks to the massive amounts of money flowing into AI research and into the chips that power AI, all driven by economic benefit.

2. Why AI Controlling Our Resources is Bad

Once AI gains control of our resources, whether because we gave them away or because AI took them, we will be in a contest with AIs for those resources. Maybe not right at first. But over time it’s almost inevitable.

Once the AIs optimize and realize that humans are resource hogs, they may move quickly and decisively to remove humans from the resource equation. Yes, humans could disappear. Pockets of humans could probably survive a nuclear war, or global warming. But if we got on the wrong side of AI, it could be game over. Fully, completely. AI is arguably the greatest existential threat that humanity faces.

If you aren’t frightened by this, you should be.

An aside: one could frame “AI waking up” as a next evolution of “humanity”, since we would have built them, after all. So why does that feel off? My view is that it’s because our personal thought patterns don’t continue. We’d prefer to stick around for the fun!

3. How To Prevent Decimation?

Given that AIs might get control of our resources, and the threat to humanity that this implies, what might we do about it? Here are some ideas.

  1. Safe AI. Build “safe machines” and regulate.
  2. Join the machines. Upload human brains, or incrementally merge.

Let’s explore each.

3.1 Safe AI

The goal is to prevent human decimation in the wake of AIs taking our resources. The idea here is to make AI “safe” somehow, and to use regulation toward that end.

Elon Musk and others committed $1B to OpenAI for more transparent AI research. He’s been operating on the assumption that if AI wakes up, we can regulate it. This assumption is mistaken. Imagine if ants “stood up” to us and wanted regulation. Would we listen to them? As soon as AI reaches our level of intelligence, it will, a second later, be exceeding it. A 10x improvement could come right after that. And it won’t take many successive 10xs before we are like ants to an AI.
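
As rough back-of-envelope support for “it won’t take many 10xs” (my arithmetic, using commonly cited order-of-magnitude neuron counts, not figures from the post):

```python
import math

human_neurons = 86e9  # ~86 billion neurons in a human brain (rough)
ant_neurons = 250e3   # ~250 thousand neurons in an ant brain (rough)

ratio = human_neurons / ant_neurons
print(f"human/ant ≈ {ratio:,.0f}x ≈ {math.log10(ratio):.1f} tenfold jumps")
# ~344,000x: fewer than six successive 10xs separate ants from humans.
```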

(Edit: As I was writing this article, Elon finally realized this problem. Good; I have solutions below that are more pragmatic than “neural lace”.)

[Comic image; credit: Understandus.ca]

Some think we can create friendly AI. Maybe you can. But will all AI be friendly? I don’t see a way to prevent non-friendly AI. And would you take your chances on pushing AI forward in the hope that, somehow, all AI turns out friendly?

OpenAI may actually be worse for humanity because it catalyzes AI even more, without real protection on the downside.

Safe AI for all AIs is a mirage. Just because we want something to be true doesn’t make it so.

3.2 Joining the Machines

AIs could wake up. It’s just a matter of time.

If and when AI takes our resources, it will be hard for our neurons to compete.

I propose that we get ourselves a competitive substrate. Get over your neurons. Get over your biological meat-bag self. (And call yourself a recovering bio-narcissist.) Don’t beat ‘em, join ’em. Or don’t, if you prefer to let AIs steal our planet and beyond. I prefer the former.

Which substrate? Silicon is an excellent substrate, thanks to 50 years of Moore’s Law; it’s part of the reason we’re at risk from AI in the first place. It’s also the most obvious choice, so it makes sense to use it.

Then, how might we port ourselves to silicon? Here are the two main paths: Ems and Bandwidth++. Let’s explore each.

3.2.1 Ems Scenario

Ems are emulated humans, aka uploads. Basically: scan your brain over time with sufficient resolution to be able to build a model of the brain’s dynamical system. Then, instantiate that model in another substrate, such as silicon. Robin Hanson recently wrote a book exploring the implications of Ems, for better and for worse.

What’s the state of the art for Ems? In short, we’ve got a long way to go. On a given person, we’d need to gather enough data to reproduce the dynamics.

One general approach is brain scanning. fMRI can watch your whole brain’s dynamics as they change over time, but only via blood flow as a proxy, so it’s slow and lossy. EEG is faster but sees only surface electrical signals. Near-infrared is fast too and sees a bit deeper, but not to the full depth needed. Optogenetics can brilliantly capture the dynamics of neurons firing throughout the brain, but requires genetic engineering so that neurons emit photons when they fire (it is working through the approval process for humans). Finally, circuits are getting small enough that it is becoming conceivable to deploy nano-sized sensors throughout the brain.

Another approach is to black-box each component of the brain: monitor the inputs and outputs of each component, then build a model of each. Researchers have already done this on components of mouse brains, with great success. This technique could complement full-brain scanning.
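
As a cartoon of the black-box idea (a minimal sketch with invented dynamics, not a model of any real neural circuit): drive a component with a probe signal, log its inputs and outputs, then fit a surrogate that reproduces the I/O behavior.

```python
import numpy as np

rng = np.random.default_rng(0)

def component(u, y_prev):
    """Stand-in 'brain component': output depends on input and its own past."""
    return 0.8 * y_prev + 0.5 * np.tanh(u)

# 1. Monitor: drive the component and log its inputs and outputs.
u = rng.normal(size=1000)
y = np.zeros_like(u)
for t in range(1, len(u)):
    y[t] = component(u[t], y[t - 1])

# 2. Model: fit a surrogate of the form y[t] ≈ a*y[t-1] + b*tanh(u[t]).
X = np.column_stack([y[:-1], np.tanh(u[1:])])
a, b = np.linalg.lstsq(X, y[1:], rcond=None)[0]
print(f"recovered a={a:.2f}, b={b:.2f}")  # ≈ 0.80, 0.50
```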

3.2.2 Bandwidth++ Scenario

In the Bandwidth++ scenario, the bandwidth between our selves and external computation grows until it reaches the level of the bandwidth within our selves. And then it keeps going. Eventually, enough of “us” runs on the silicon side that we can unplug the biology.

Like the Em scenario, the BW++ scenario puts us on equal competitive footing with silicon. But unlike the Em scenario, it can happen incrementally starting today, using today’s market forces.
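
A back-of-envelope for the race, with loose numbers of my own (assumptions for illustration, not measurements): how many doublings separate today’s interface bandwidth from brain-scale bandwidth?

```python
import math

bci_today = 1e2       # assumed: ~100 bits/s for a good current interface
brain_scale = 1e9     # assumed: ~1 Gbit/s order-of-magnitude internal target
doubling_years = 1.5  # assumed: Moore's-Law-like improvement rate

doublings = math.log2(brain_scale / bci_today)
print(f"{doublings:.0f} doublings ≈ {doublings * doubling_years:.0f} years")
# ~23 doublings → roughly 35 years under these assumptions.
```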

4. Discussion

We’re in a race. AIs waking up and getting the resources, versus humans joining the machines.

Money has sped up the former. How do we speed up the latter?

One good way is to fund it aggressively. For the Em scenario, that means research into brain scanning and related fields, and not just for the sick. For the Bandwidth++ scenario, it means research into brain-computer interfaces (BCI), and an aggressive go-to-market for the smartphones of the future that include BCI and beyond. I view the latter as more likely: if the market takes off, it will be easy to justify pouring tens of billions of dollars of R&D money into it per year.

Another way to catalyze the Bandwidth++ scenario is to simply bias towards this when designing new networks & platforms. Yes, that’s vague! I wish I had more specific ideas here, but perhaps you as a reader do.

And maybe there are other ways? It’s my hope that you as a reader see that the problem has been transformed from “how do we stop AI taking over?” to “how do we catalyze the Bandwidth++ scenario?” Put on your engineering hats!

5. Conclusion

AIs could end up with all our resources, either because we give them away or because the AIs take them. This poses an existential threat to humanity.

If we want to compete with AIs, we need a competitive substrate. That substrate is silicon.

It’s a race!

We can’t stop the AI side. Maybe we can try to slow it down, but it’s against deep, powerful pockets that want to make it go fast.

But we can speed up the human side! Mostly, that means money into R&D on BCI. And other ideas, still to be had.

We still have a chance at this.

Further Reading

My other AI DAOs posts build towards this post:

Other intersections of AI, blockchains, and singularity. Each article has many more links.

Notes

This essay is partly based on:

  1. A talk I gave in London on June 2, 2016, called “Are We Neural Narcissists? AI’s Existential Threat to Humanity, and A Pragmatic Solution”, as well as other talks from 2012 to present.
  2. A talk on this topic in Berlin on Aug 18, 2016, at the “Rise of AI” event. Here’s the video and the slides.

Acknowledgments

Many thanks to Scott Volk, Jan Balcar, Masha McConaghy, Kai Wu, Al Robertson, Michael Mainelli, Emma Stamm, Alan Shapiro, many friends from the AI community, and surely others, for the long discussions (in some cases over several years!) which led to the ideas in this post.
