The Robot Non-Pocalypse

So a lot of rich and/or smart folks think the Robot Apocalypse is upon us. Bill Gates, Elon Musk, Stephen Hawking, Steve Wozniak, and Ray "You too will be assimilated" Kurzweil.
And I think they are wrong. As arrogant and foolish as this may seem, I have an argument. This week on the Seanachai, I'm going to explain to you why I think Skynet is the last thing you should be afraid of.

In a nutshell, I have two points. Everyone who thinks that the machines will rise up and displace us is

  1. Confused about economics and
  2. Anthropomorphizing technology. Not just ascribing agency where none currently exists, but assuming that all forms of consciousness will be flawed in the same ways that we are. And there's no evidence for that.

It’s also interesting to note that all these people live, work, and breathe technology. But they are afraid that what they do could be the end of the world. You can easily make the case that Bill Gates is one of the most powerful and important men in the world, but the idea that his field is the difference between life and death for the entire species? That’s a little too convenient and arrogant, even for a billionaire. “I’m the most important man in the world” is a sentence riddled with hubris.

Since it is logically impossible to prove a negative (a principle that all apocalypse hoaxes rely on), we have to ask ourselves: is it likely that machines will become self-aware, rise up, and wipe us out?

Let’s look at this from the standpoint of the sun rising, or not rising, tomorrow. Just because it has always happened doesn’t mean that it always will. I can’t prove the sun won’t rise tomorrow, but if we know something about celestial mechanics, if we can agree that the sun is constant and only appears to rise and set because the world is round and spinning, we go a long way towards dispelling mysticism.

Now we can ask a whole bunch of other useful questions. Can the earth stop spinning in an evening? Could the sun just wink out? Could it explode? And based on our understanding of physics and observations made in the real world, we could develop a sense of how likely those things are. And hopefully sleep better at night as a result.

As a starting point, let’s take a recent statement of Steve Wozniak’s.

“If we build these devices to take care of everything for us, eventually they’ll think faster than us and they’ll get rid of the slow humans to run companies more efficiently.”

Is this likely? Sure, why not? But is it, as stated, a problem?

The answer, I think, is no. And to understand why, we need to understand the fundamental wellspring of human economic activity. Why do companies exist and what do they do? A company produces a product or service for people. If it is a good company, it produces a good product or service that people want at a price they are willing to pay. If it is a bad company, it either goes out of business or exploits regulatory protection to the detriment of consumers. For example, the cable company.

Defer, for a moment, the idea of companies of robots producing goods and services for robot consumers. I promise, I will dispatch that fallacy in due course, but to stake that vampire we need to look a little deeper into how an economy works.

People have wants and needs. The world is so complicated that they are pretty much indistinguishable from each other. Things that we see as basic needs in the United States don’t exist much in sub-Saharan Africa. And a lot of them probably didn’t exist before 1850 or so. But the thing with people is, no matter how much we have, we always, always want more. Sure, one car is good, but why not two? A convertible to drive on sunny days. If it were cheap enough, or you were rich enough, you’d buy one. Or any one of a thousand other damnfool things that your grandparents were perfectly happy doing without.

No matter how much we have, a little bit more will always make us happier. You could argue that a person only needs one pair of shoes. But the availability of shoes, coupled with the inherent greed and vanity of humanity, is how we wind up with Air Jordans and Manolo Blahniks, et al.
Now you might be more enlightened than this. I certainly hope that Saints and Bodhisattvas are among my listening audience, but odds are, you are not. And the entire history of mankind is a pretty damning chronicle of appetite and avarice. Deep inside everyone there is a voice that screams for MOAR!

I believe that coming to grips with this inner appetite is key to true happiness. And I am in good company on this. Lao Tzu wrote, “He who knows when he has enough is truly rich.”

So, the most fundamental problem with the robot apocalypse scenario is that economic production is arranged for the satisfaction of people. We already live in a world of abundant capital. And our problem is not having enough stuff to make something; it’s knowing which things to make (and how to distribute and market them). Robots taking over all production is just MORE abundance.

And that is unequivocally good. People don’t spin yarn into thread by hand anymore. Machines do it. And there are more people working in the textile industry now than before machines took it over. They’re just not doing the boring parts.

And, as we can already see, in a world of abundant production the scarce resource is ideas. Specifically, ideas about what people want and how to make people’s lives better.

I don’t mean this in a Mother Teresa way. I mean it in a practical way. In a wheels-on-luggage kind of way.

Wheels on Luggage

Depending on how you want to pick the start, civilization is some six to nine thousand years old. The use of the wheel for transportation purposes is dated to around 3500 BC, so we’ve had the wheel for 5,500 years. And it wasn’t until 1972 that anybody really added wheels to luggage.

There’s not an algorithm to come up with insights like that, or the insights that create profound art. It takes a deeper insight into what it means to be human than most humans have. Shakespeare (pen name or not) had a profound understanding of what it meant to be human. In fact, Harold Bloom argues that Shakespeare invented modern human consciousness. It’s a little crazy, but after you read his book Shakespeare: The Invention of the Human, it’s not as crazy as it first sounds.

And that is more disturbing to me than the Robot Apocalypse.

But let us say, for a moment, that robots learn how to put wheels on luggage. All they are is better at satisfying our wants and needs.

For me, the problem with that is pretty obvious. It’s the WALL-E problem. We become a race of fat, useless, stupid, incapable slobs. In animation this is cute. In the real world, this is very sad and ugly.

Isaac Asimov wrote about this very problem, with great eloquence and sensitivity, in his robot novels and the Foundation series. Humans who colonized the first planets outside Earth relied on robots for everything and became not only incapable, but estranged from each other. And they stopped reproducing.

This is, I think, a real and terrifying threat of ascendant technology. But that’s not a robot apocalypse; it’s something we would choose to do to ourselves. Like drug addiction.

But what about greedy, self-aware machines?

Accept, for a moment, that machines will become self-aware. But as you do, recognize that this is a gigantic leap. We literally have no idea what consciousness is. Or, for that matter, what dark matter is. And dark matter makes up some 84.5% of the matter in the universe, so we literally don’t know what the vast majority of reality is composed of, either physically or phenomenologically.

But cast the glaring defects in our knowledge aside and say that machines develop consciousness (consciousness being, again, a thing we can’t define). Why would they be greedy? And why would they be greedy for the things we need?

My argument is two-pronged here:

  1. We are assuming that all consciousness is or would be like our consciousness.
  2. And that the flaws in our makeup are somehow inherent in all consciousness. It’s base anthropomorphism. And I don’t think there’s any reason to believe that’s the case. Dolphins are conscious. They are very smart. Are they greedy? Are they even, in consciousness, inferior to us? Why? Because we make computers and scare ourselves with them?

The consciousness of machines may well be far higher than ours. Which would be humbling, wouldn’t it?

And we already have a glimpse of something like this in an answer to the Fermi Paradox.

The Fermi Paradox

The Fermi paradox goes like this:

The apparent size and age of the universe suggest that many technologically advanced extraterrestrial civilizations ought to exist. However, this hypothesis seems inconsistent with the lack of observational evidence to support it.

In other words, where the hell is everybody?

One of the answers offered to this paradox is that civilizations grow to a point and then annihilate themselves with their technology. Nuclear war, biological warfare, nanotech, robot apocalypse — all the fun stuff.

But that also leaves the possibility that some civilizations would evolve past this point of self-destruction. Developing a higher consciousness, if you will. And this is the kicker: developing to that state of higher consciousness means that they don’t need or want to go anywhere or conquer anybody.

What about immortality? The reason people have kids is so that something of themselves and their DNA goes on. What if you knew you weren’t going to die? What if we evolved to the point where we never faced death?

For a human this is probably an impossible-to-fathom question, but for a consciousness that lives on a chip, what does death look like, if it looks like anything at all? Why reproduce? Just because we have a very strong drive to do so doesn’t mean that a robot does or will.

And for those who say that we will create our digital children in our own image, I say: you fool yourself about both children and consciousness. If a thing has free will, it will make choices you don’t like, want, or expect. Such is parenthood.

I think part of the reason people are afraid of robots is that in nature, a fitter species will displace an inferior species at an exponential rate. But this statement is incomplete. This will happen only if the species are competing for the same resource base. And would we be, really? Maybe electricity? The faster and more powerful computers become, the smaller they get. Might the most powerful computer ever built also be the smallest and most efficient?

And we desire computers to be faster for our purposes: crunching large amounts of data, rendering 3D graphics. But what would be the purposes of self-aware computers?

We don’t know — and not knowing is scary. And scary is great for the purposes of writing thrillers and controlling people who are easily frightened and not so swift in the critical thinking department.

What are my qualifications to talk about any of this? None, really. I have an understanding of economics, but the people I mentioned at the beginning of this essay are all waaaaay smarter than me. But ultimately, the veracity of an idea isn’t about qualifications. Nobody cared about Einstein’s qualifications; they cared about the quality of his thought.

I’m no Einstein, but I think I just knocked a couple of holes in the theory of the Robot Apocalypse. And until somebody fills them in, I’m going to go back to worrying about the Zombie Apocalypse. It’s probably just as unlikely, but it’s waaaay more entertaining.