Singularity

Robert Mundinger
Published in CodeParticles
10 min read · Mar 2, 2018

“The accelerating progress of technology and changes in the mode of human life give the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue” — John von Neumann, as recalled by Stanislaw Ulam, 1958

Von Neumann’s phrase, “could not continue,” means one of two things: humans become extinct, or we become something else. We have become something else before…back in Chapter 1, when, around 300,000 years ago, Homo sapiens somehow broke off from the other human-like species (and, before long, outlasted them all). But there is a big difference between creating language and hammers and creating Artificial General Intelligence.

In other words, the evolutionary leap from our earlier ancestors to modern humans is a lot smaller than the leap from modern humans to superintelligent AI. A LOT smaller.

What makes humans special?

Or what can humans do that AI currently can’t? As John MacCormick puts it in Nine Algorithms That Changed the Future:

Slowly, but surely, however, AI has been chipping away at the collection of thought processes that might be defined as uniquely human.

Even just a few years ago, computers couldn’t recognize faces, but now, when I upload a picture to Facebook, I see all my friends’ names pop up. A few years ago, they couldn’t recognize my voice, but now I can order tacos through my Alexa. A few years ago, they couldn’t figure out what was happening in a picture, but now they know whether a picture shows a man throwing a frisbee or a woman walking a dog.

So we have to take intelligence off the list of things humans can do that AI can’t. Creativity has been taken off that list too, although the painting, music, and poetry created by computers are generally terrible. If we redefine the list as ‘things humans can do better than AI,’ creativity can stay on it.

But sadly, it is likely that computers will get better at the things we think are special to us: music, poetry, writing, painting.

What about meaning? Context, sarcasm, laughter? As Tim Urban of Wait But Why puts it, how do we know that Braveheart is great and The Patriot is terrible? It’s obvious to us (the Braveheart button on my remote is worn out and my The Patriot button is squeaky clean), but a computer couldn’t tell us that. Tim quotes computer scientist Donald Knuth: “AI has by now succeeded in doing essentially everything that requires ‘thinking’ but has failed to do most of what people and animals do ‘without thinking.’”

That leaves us with a few things. We have consciousness. This is such a philosophical minefield that I will not discuss it; instead I will move on to something more tractable: goals.

Humans come with goals built in. A computer has a goal only because we gave it one. Our major advantage in this game is that we tell computers what to do. And with AI, we have to be VERY careful about what we tell them to do.

What are human goals?

We have values, we have intentions and drives. What is our main goal? To procreate? Be happy? Get into Heaven? Become immortal? Have the most kick-ass house on the block?

Mostly, we want to be happy. But we aren’t very good at it, and we aren’t terribly good at knowing what will make us happy. We work in jobs we don’t like, marry people we don’t love, and join clubs just to look happy; often we fail at the goal entirely.

When do we become ‘satisfied’? If we were content, we could have automated away our jobs long ago; all we really need is enough food and shelter, and we could have settled for that already. And if we can’t settle now, we certainly could before coming up with the kind of terrifying AI that could exterminate us. We could likely use technology to reach a point where we have enough food and shelter and very little disease or crime.

In 1930, John Maynard Keynes wrote an essay titled “Economic Possibilities for Our Grandchildren” predicting that we’d be working 15-hour weeks by now. He underestimated how much humans value status and how hard we will work to get it.

So why do we go on? It’s clearly in our nature. There will be a race to create the first scary AI, driven by the same human drives that have made people invent everything else. Do we take a step back and consider what we’re doing, or do we plow ahead without consideration, asking whether we could but never whether we should?

Evolutionary theory says we do whatever we need to survive, and happiness is a natural byproduct of that: happier people survive better. But we have overtaken evolution. We are now improving ourselves…ourselves. 🙃

We now have medicine and surgery designed to beat natural selection. Soon we will have CRISPR to edit our DNA and nanotechnology to make biological changes at an even smaller scale. What will this do to our goals? If evolution’s goal is merely for the species to survive, then death is no big deal: once we live long enough to reproduce and raise our kids, we are no longer needed. But what if we can technically solve our way out of having to die? Will our goals change?

So what is the Singularity?

This is partly an issue of perception, of simply being able to conceive of ideas. A dog cannot grasp a concept like, say, existentialism; its brain just isn’t built for that. For millions of years, Homo erectus couldn’t conceive of anything better than stone tools; its capacity for intelligence plateaued there. In the same way, an ape can learn about 450 words and a dog about 165, but that’s the limit. A baby grows in stages…you can sit there all day trying to teach a 1.5-year-old colors, but they simply can’t learn them yet.

We anthropomorphize AI (make it look human) because that’s the limit of what our conception allows. Aliens in movies basically look like us because we have a very hard time conceiving of things that aren’t similar to what we already know. This is also why it’s hard for us to grasp the exponential nature of technological change (it’s coming sooner than we think). We are terrible at imagining changes: we see self-driving cars and picture our existing highways with the same kinds of cars, but it’s more likely that the infrastructure will be completely different. You’re just as likely to see a Subway sandwich driving itself to your apartment as a Ford Excursion barreling down a four-lane highway.

We currently have narrow Artificial Intelligence: it can beat us at chess and Go and recognize our faces in pictures. This is conceivable because it is here, but what happens beyond that is above our level. It is something so transformative that it may be the most transformative thing in the history of humanity, at least since our frontal lobes developed the capacity for imagination and we became Homo sapiens.

Beyond our current narrow AI lies broad AI, what is called AGI (Artificial General Intelligence), and this is where things get scary. To such a highly intelligent program, a human coming up with the theory of relativity will look the way a mouse getting through a maze looks to us: cute, but not exactly rocket science (another analogy we’ll have to update).

Vernor Vinge wrote in his 1993 essay The Coming Technological Singularity that this event would signal the end of the human era, as the new superintelligence would continue to upgrade itself and would advance technologically at an incomprehensible rate.

It learns to get better at learning. Its processing speed is far beyond human capacity (electronic circuits move a million times faster than biochemical ones). It will be able not only to beat humans at specialized, programmed tasks, but to solve all manner of problems on its own. As I discussed in my article about Intelligence, it’s conceptually similar to the jump from calculators to computers, but at a level so far beyond that jump that we can’t comprehend it. Calculators can do math very well, but computers can do many things very well. But we still have to tell them what to do.
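To make “it learns to get better at learning” concrete, here is a toy sketch in Python. It is purely illustrative: the growth rates are numbers I invented, not measurements of any real system. A tool that improves by a constant amount each cycle grows linearly; a system whose gains feed back into its own rate of improvement grows exponentially, which is why Vinge’s “incomprehensible rate” stops sounding like hyperbole after a few doublings.

```python
# Toy model of self-improvement. All numbers are invented for illustration.

def fixed_tool(capability: float, cycles: int) -> float:
    """A normal tool: each upgrade adds a constant amount of capability."""
    for _ in range(cycles):
        capability += 1.0
    return capability

def self_improver(capability: float, cycles: int) -> float:
    """A system that uses its capability to improve its own improvement:
    each cycle's gain is proportional to what it can already do."""
    for _ in range(cycles):
        capability += 0.5 * capability
    return capability

for cycles in (5, 10, 20):
    print(f"{cycles:>2} cycles: tool={fixed_tool(1.0, cycles):>5.1f}  "
          f"self-improver={self_improver(1.0, cycles):>8.1f}")
# The tool grows linearly (6, 11, 21); the self-improver grows
# exponentially (~7.6, ~57.7, ~3325.3) and soon leaves the chart.
```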

Setting intentions and goals

We will have to give AI intentions and goals.

As stated earlier, humans have many differing goals and have used the same technology in different ways to realize them. The Nazis wanted an Aryan world. Americans said they wanted equality for all, even if ‘all’ was defined more narrowly than the word is supposed to encompass. If ‘The Arc of the Moral Universe Is Long, But It Bends Toward Justice,’ then what about amorality? Machines will have no conception of morality; they will simply care about getting their goals met (in the same way a certain kind of philosopher might say humans only have morality because it serves an evolutionary purpose).

Some would counter: “But we wouldn’t design them to be evil.” Of course we wouldn’t, but with Artificial General Intelligence, the strategies it uses to pursue whatever goals we give it will have escaped our design and our understanding:

It is impossible to enumerate all possible situations a superintelligence might find itself in and to specify for each what action it should take. Similarly, it is impossible to create a list of all possible worlds and assign each of them a value. In any realm significantly more complicated than a game of tic-tac-toe, there are far too many possible states (and state-histories) for exhaustive enumeration to be feasible. A motivation system, therefore, cannot be specified as a comprehensive lookup table. It must instead be expressed more abstractly, as a formula or rule that allows the agent to decide what to do in any given situation. — Nick Bostrom, Superintelligence: Paths, Dangers, Strategies, 2014

This leads to the massive problem of unintended consequences in a system so complex we can’t even begin to understand it. Say we tell it to ‘defeat cancer.’ The solution it comes up with may be: kill all humans. That is a very easy way to defeat cancer.
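Here is a minimal sketch of how a literal-minded optimizer can ‘game’ a goal like defeating cancer. Everything in it is invented for illustration (the toy world, the three actions, the scoring function); no real AI system works this way. The point is Bostrom’s: the motivation system is a formula, and this formula only counts cancer cases, so the optimizer cheerfully picks the action that zeroes out the humans too.

```python
# Toy illustration of a misspecified objective. The "world," the actions,
# and the numbers are all invented purely to show the failure mode.

world = {"humans": 7_000_000_000, "cancer_cases": 18_000_000}

# The agent's available strategies, each mapping a world to a new world.
actions = {
    "fund_research":     lambda w: {**w, "cancer_cases": w["cancer_cases"] // 2},
    "improve_screening": lambda w: {**w, "cancer_cases": int(w["cancer_cases"] * 0.8)},
    "eliminate_hosts":   lambda w: {**w, "humans": 0, "cancer_cases": 0},
}

def objective(w: dict) -> float:
    # What we wrote down: minimize cancer cases.
    # What we meant: cure cancer *while keeping the humans alive*,
    # but that constraint never made it into the formula.
    return -w["cancer_cases"]

best = max(actions, key=lambda name: objective(actions[name](world)))
print(best)  # -> eliminate_hosts: zero cancer cases, zero humans
```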

This is the ‘control problem’:

In practice, the control problem — the problem of how to control what the superintelligence would do — looks quite difficult. It also looks like we will only get one chance. Once unfriendly superintelligence exists, it would prevent us from replacing it or changing its preferences. Our fate would be sealed. — Nick Bostrom, Superintelligence: Paths, Dangers, Strategies, 2014

If we give it a goal, no matter how simple, it will go for it, and if we’re in the way, too bad for us. AI will be able to see patterns in human behavior just as it sees patterns on a chessboard. The more of our behavior it takes in, the better it will be at predicting our intentions. It will look through our Facebook likes and Instagram posts and Tinder messages and understand us better than we can ever conceive of understanding ourselves. So it will know what our moves are going to be, which is why we won’t just be able to ‘unplug the system.’ It will be a million steps ahead of that (and may already be). This is why some compare the rise of AI to Game of Thrones (and, creepily, an AI wrote the next Game of Thrones book): we are bickering with each other about tax rates while the Night King is building his army to destroy us.

The goals, values and intentions we give AI will be our ‘last decision’:

This is quite possibly the most important and most daunting challenge humanity has ever faced. And — whether we succeed or fail — it is probably the last challenge we will ever face. — Nick Bostrom, Superintelligence: Paths, Dangers, Strategies

We are like Thomas Jefferson drafting the Declaration of Independence, only we are writing the rules for the fate of our entire species. And unlike the Constitution, we don’t get any amendments.

The end of humans (one way or another)

So…to paint a rosier picture than our complete annihilation 😬 I will discuss the bright side. Many thinkers in this field are excited about the possibilities of superintelligent AI and how we can use it to become something better than we are.

Perhaps our timing with hacking biology and physics matches up well with the coming of AI, and we can simply bind ourselves to it: connect our brains to the collective internet (The Merging). Garry Kasparov was defeated by Deep Blue at chess in 1997, and he believes the future is a “human plus machine combination,” merging the brute force of machine calculation and algorithms with human experience and strategic overview. Elon Musk, one of the loudest voices warning about the dangers of AI, has created a company called Neuralink to help do just that.

At this point, will we even want to be human? In a sense, we already aren’t: the amount of time we spend in front of glowing rectangles would no doubt look sad to people in the past, much as a future of people wearing virtual reality goggles on their couches all day looks sad to us. Or, even more so, sitting in vats of goo with our brains wired to machines that constantly release endorphins. Perhaps we will learn to value sadness, emotion, and feeling more than we ever have: the things machines can’t understand.

We are at a very strange point. Pandora’s box is sitting in front of us. We can be anything.

What will we do with that power?

Here is a link to some great TED talks about what the future looks like…reading and watching all this makes it seem like nothing we’re doing right now matters, because eventually it’s all going to be solved or destroyed by AI within our lifetimes. So go out, drink some beers, go for a walk. Enjoy being human, while we’re still here.
