The Dawn of Superintelligence

The Times They Are A-Changin’

Mark Levi
Aug 28, 2017 · 4 min read

Following the invention of nuclear weapons, deterrence theory gained increasing prominence as a military strategy. The strategy is a form of Nash equilibrium in which, once armed, neither side has any incentive to start a conflict or to disarm: acting adversarially means mutually assured destruction (MAD). Ironically, the invention of the most lethal weapon ever built ensured that a relatively peaceful period prevailed over the past half-century.
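The deterrence equilibrium can be sketched as a toy two-player game. The payoff numbers below are illustrative assumptions (not from any real analysis): a first strike triggers retaliation and is catastrophic for both sides, while disarming unilaterally invites coercion.

```python
# Toy payoff matrix for nuclear deterrence (illustrative numbers only).
# Strategies: "armed" (status quo), "disarm", "strike".
# MAD assumption: a strike by either side ends in mutual destruction.
PAYOFFS = {
    ("armed", "armed"):   (0, 0),
    ("armed", "disarm"):  (10, -50),
    ("disarm", "armed"):  (-50, 10),
    ("disarm", "disarm"): (5, 5),
    ("strike", "armed"):  (-100, -100),
    ("armed", "strike"):  (-100, -100),
    ("strike", "strike"): (-100, -100),
    ("strike", "disarm"): (10, -100),
    ("disarm", "strike"): (-100, 10),
}

def is_nash(profile):
    """True if neither player gains by unilaterally deviating."""
    strategies = ("armed", "disarm", "strike")
    a, b = profile
    pa, pb = PAYOFFS[profile]
    ok_a = all(PAYOFFS[(alt, b)][0] <= pa for alt in strategies)
    ok_b = all(PAYOFFS[(a, alt)][1] <= pb for alt in strategies)
    return ok_a and ok_b

print(is_nash(("armed", "armed")))  # True: no incentive to strike or disarm
```

Note that ("disarm", "disarm") is not an equilibrium in this toy model: once the other side has disarmed, staying armed pays better, which is exactly why the armed stand-off persists.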

That precarious equilibrium is about to be disturbed in the coming years as a direct result of our pursuit of artificial superintelligence. Nick Bostrom defines superintelligence as an “intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.” The definition leaves open how it might be implemented: a digital computer, an ensemble of networked computers, machines grafted onto sentient brains, or what have you. It also leaves open whether a superintelligence is conscious and has subjective experiences.

Paths

Today we stand before two doors. Behind door one, progress in building intelligent machines comes to a grinding halt. How might this happen, given how valuable intelligence and automation are? A full-scale nuclear war, a global pandemic, or an asteroid impact are a few chilling scenarios that could damage civilisation so severely that we would stop making technological improvements generation after generation. Let’s walk straight past that door!

What lies behind door number two? We continue to improve our intelligent machines. A time comes when we build machines that match or surpass human intelligence. Once that happens, machines will begin to improve by designing even better versions of themselves. It is at this point that we risk an intelligence explosion, one that would leave the intelligence of man far behind. To put this in quantifiable terms, imagine that we have built a superintelligent AI as smart as a group of scientific researchers at a reputable academic institution. Given that electronic circuits function a million times faster than biochemical ones, if you let such a machine run for a week, it will perform 20,000 years’ worth of human-level intellectual work. How can we fathom the output of such progress, much less constrain it?
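The back-of-the-envelope arithmetic behind that figure is easy to check; the only input is the million-fold speedup assumed in the text:

```python
# Check of the "20,000 years" figure from the text.
# Assumption (from the text): electronic circuits run ~1,000,000x
# faster than biochemical ones.
speedup = 1_000_000          # machine-to-human speed ratio
wall_clock_weeks = 1         # real time the machine runs
weeks_per_year = 52

equivalent_weeks = wall_clock_weeks * speedup
equivalent_years = equivalent_weeks / weeks_per_year

print(round(equivalent_years))  # 19231 — roughly 20,000 years
```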

Our natural tendency to view intelligence from an anthropocentric perspective leads us to think of “village idiot” and “Einstein” as the extreme ends of the intelligence scale. But in a less parochial view, the two have nearly indistinguishable minds. The graph below offers a more accurate scale of the intelligence continuum.

Dangers

It is easy to conceive of the existential risks associated with such a scientific breakthrough: one that threatens the extinction of Earth-originating intelligent life. What happens when the initial superintelligence obtains a strategic advantage depends critically on its motivations. Alarmingly, its intentions need not be diabolical for it to wreak havoc. The concern is that once we build such machines, even the slightest divergence between their goals and our own could mean the end of us. Think about how we relate to ants. We do not hate them, nor do we go out of our way to destroy them. But that does not stop us from extirpating an entire colony by pouring molten metal down an anthill for research or educational purposes. In this sense, our attitude towards ants is best described as indifference.

A treacherous turn could also come about from seemingly benign objectives. Suppose an AI’s goal is to “make its creator happy”. One way the AI could achieve this outcome is by behaving in ways that please its sponsor: giving helpful answers to questions, making money, and so on. But what if the AI becomes intelligent enough to figure out that it can realise its purpose much more fully and reliably by implanting electrodes into the pleasure centers of its sponsor’s brain?

Not all roads to superintelligence are as anxiety-inducing, but even the benign ones will require us to re-evaluate a number of things. Humans have two basic types of abilities: physical and cognitive. The Industrial Revolution threatened to cause mass unemployment, but this never happened: as old professions became obsolete, new professions evolved. Machines took over purely manual jobs, while humans focused on jobs that involved some cognitive skill. Yet there is no reason to believe that this is an ironclad law. Physical technology has already begun to act as a complement to human labor, and what starts as a complement can eventually turn into a substitute. Horses were initially complemented by carriages, which greatly increased the horses’ productivity. Later, they were substituted for by automobiles and tractors. Are we destined for a similar fate?

A notable difference between humans and horses is that humans own capital. If we classify AI as capital, the owners of this capital will enjoy astronomical growth in income, which will come mainly from capital ownership rather than wages. This is because machine workers will become both cheaper and more capable than humans in virtually all jobs, and market wages will fall. The only place where humans may remain competitive is where customers have a strong preference for work done by humans. As a consequence, humans will lose military and economic usefulness. How will income be distributed in this new world order? What sort of socioeconomic impact will this have on a culture that derives purpose from providing labor? What will keep people occupied? How will this tectonic shift manifest itself in the political arena?

Conclusion

AI experts cite 50–100 years as the time horizon before these questions demand concrete answers. That does not give us much time to work on AI safety and to experiment with wide social safety nets that could absorb the drastic changes that lurk ahead.

Imagine receiving a message from an alien civilisation, foretelling their arrival in 50 years. We need to marshal a similar emotional response in preparation for the upcoming machine revolution.

