All AGI efforts so far are fundamentally unsafe. OpenCog included.

Steve Hazel
3 min read · Feb 23, 2020


It’s unfortunate that we don’t know how HAL-9000 came to be.

What Google is doing with DeepMind and Brain is unsafe. Whatever OpenAI is doing is unsafe. Same with OpenCog, Cyc, and all the rest you’ve never held in your hand. Put simply, every AGI effort you’ve heard of is fundamentally unsafe. Same with every AGI effort you haven’t heard of.

Why? Because everyone knows how safe systems happen, and none of the above are doing it the safe way.

Air travel is safe. Food from the grocery store is safe. Driving, in North America at least, is safe. Pets are safe. Electricity is safe. Even pharmaceuticals are safe. None of these are perfect, but we trust them because we’ve gradually integrated them into our lives.

So we know what is safe, and we know how to create safe systems. Safe systems grow, in public and in the open, one small step at a time.

With air travel, food, driving, pets, electricity, and pharmaceuticals, we made many early mistakes and caused the occasional disaster. But because we grew those systems slowly and transparently, the impact of each mistake was contained and constrained. From the beginning, individuals continually re-evaluated the impact on their lives in a wide variety of real-world environments. When adjustments were needed, people yelled until they were made.

Nobody is developing AGI this way. Or if they are, they need a bigger marketing budget.

The noises that AGI developers make about safety (the notorious Open Letter, OpenAI’s charter, DeepMind’s Big Red Button, Stuart Russell’s three principles) are all lipstick on a pig. While there’s value in thinking about safety, it’s somewhere between disingenuous and dangerous to slap a “safe” sticker on what we can all see is a fundamentally unsafe approach.

Besides, if an AGI doesn’t walk like a safe system or quack like a safe system, we won’t trust it. Those who insist on an unsafe approach are shooting themselves in the foot, and we shouldn’t let them shoot the rest of us in the foot too.

One of the keys to smoothly integrating one complex system (like AGI) into another (like modern society) is to begin small, simple, accessible, and available. The more regular people who can be involved, the better. If you’re working on an AGI and the total installed base is somewhere between zero and ten, you’re creating a problem and not a solution. However, if your installed base is in the millions, you have a chance to avoid fundamentally unsafe AGI.

By installed base, I mean that real people have your AGI running on hardware they control, using data they possess. They’re able to peer in and see what’s going on, and they can experience the impact first-hand. If you explain to them how it works, they’ll grasp it, at least vaguely, because they’re intelligent beings.

Sadly, all AGI efforts these days are impossible for most people to understand. Sure, there’s the occasional math-heavy research paper to read, and perhaps an undocumented GitHub repository, but most people are not willing to put in that much effort. Even when a person has the necessary aptitude, education, and experience, they probably won’t have the compute resources or the dataset to accomplish anything useful.

It’s also impossible for most people to download a budding AGI and see how well it works for them. Even installing OpenCog, due to its complexity, is beyond all but a select few highly motivated, extremely technical individuals.

Put simply, overly complex systems will not be broadly tested, and so they cannot be made safe.

Safe and trusted AGI is possible only by beginning with a simple system and allowing it to grow in the hands of millions of regular people, so that we can see what we’re getting ourselves into. If we allow anything else to happen, we have only ourselves to blame for the inevitable fast-moving disaster.
