Who’s Right?

Two Major Groups of AI Scientists

Pawel Sysiak
Published in AI Revolution
5 min read · Apr 14, 2016


Note 1: You can read an updated version of AI Revolution 101. I edited it slightly and combined each piece into one longer article.
Note 2: This is the 7th part of a short essay series aiming to condense knowledge on the
Artificial Intelligence Revolution. Feel free to start reading here or navigate to Part 1, ← prev|next → essay or table of contents. The project is based on the two-part essay AI Revolution by Tim Urban of Wait But Why. I recreated all images, shortened it x3 and tweaked it a bit. Read more on why/how I wrote it here.

The Confident Corner

Most of what we have discussed so far represents a surprisingly large group of scientists who share extremely optimistic views on the outcome of AI development. “Where their confidence comes from is up for debate. Critics believe it comes from an excitement so blinding that they simply ignore or deny potential negative outcomes. But the believers say it’s naive to conjure up doomsday scenarios when on balance, technology has and will likely end up continuing to help us a lot more than it hurts us.”⁹⁴ Peter Diamandis, Ben Goertzel, and Ray Kurzweil are some of the major figures of this group, who have built a vast, dedicated following and regard themselves as Singularitarians.

CC photo by J.D. Lasica

Let’s talk about Ray Kurzweil, who is probably one of the most impressive and polarizing AI theoreticians out there. He attracts both “godlike worship … and eye-rolling contempt … He came up with several breakthrough inventions, including the first flatbed scanner, the first scanner that converted text to speech (allowing the blind to read standard texts), the well-known Kurzweil music synthesizer (the first true electric piano), and the first commercially marketed large-vocabulary speech recognition. He’s well-known for his bold predictions,”⁹⁵ including envisioning that a computer like Deep Blue would be capable of beating a chess grandmaster by 1998. He also anticipated “in the late ’80s, a time when the internet was an obscure thing, that by the early 2000s it would become a global phenomenon.”⁹⁶ Out “of the 147 predictions that Kurzweil has made since the 1990s, fully 115 of them have turned out to be correct, and another 12 have turned out to be ‘essentially correct’ (off by a year or two), giving his predictions a stunning 86% accuracy rate.”⁹⁷ “He’s the author of five national bestselling books … In 2012, Google co-founder Larry Page approached Kurzweil and asked him to be Google’s Director of Engineering. In 2011, he co-founded Singularity University, which is hosted by NASA and sponsored partially by Google. Not bad for one life.”⁹⁸

His biography is important, because if you don’t have this context, he sounds like somebody who’s completely lost his senses. “Kurzweil believes computers will reach AGI [AI that reaches human-level intelligence] by 2029 and that by 2045 we’ll have not only ASI [AI that achieves a level of intelligence smarter than all of humanity combined], but a full-blown new world — a time he calls the singularity. His AI-related timeline used to be seen as outrageously overzealous, and it still is by many, but in the last 15 years, the rapid advances of ANI [current and narrowly specialized AI, present in everyday technologies (more on ANI, AGI, ASI distinction in Part 2)] systems have brought the larger world of AI experts much closer to Kurzweil’s timeline. His predictions are still a bit more ambitious than the median respondent on Müller and Bostrom’s survey (AGI by 2040, ASI by 2060), but not by that much.”⁹⁹

CC photo by Future of Humanity Institute

The Anxious Corner

“You will not be surprised to learn that Kurzweil’s ideas have attracted significant criticism … For every expert who fervently believes Kurzweil is right on, there are probably three who think he’s way off … [The surprising fact] is that most of the experts who disagree with him don’t really disagree that everything he’s saying is possible.”¹⁰⁰ Nick Bostrom, philosopher and Director of the Future of Humanity Institute at Oxford, who criticizes Kurzweil for a variety of reasons and calls for greater caution in thinking about potential outcomes of AI, admits that:

“Disease, poverty, environmental destruction, unnecessary suffering of all kinds: these are things that a superintelligence equipped with advanced nanotechnology would be capable of eliminating. Additionally, a superintelligence could give us indefinite lifespan, either by stopping and reversing the aging process through the use of nanomedicine, or by offering us the option to upload ourselves.”¹⁰¹

“Yes, all of that can happen if we safely transition to ASI—but that’s the hard part.”¹⁰² Thinkers from the Anxious Corner point out that Kurzweil’s “famous book The Singularity is Near is over 700 pages long and he dedicates around 20 of those pages to potential dangers.”¹⁰³ Kurzweil sums up a power as colossal as ASI in a single sentence: “[ASI] is emerging from many diverse efforts and will be deeply integrated into our civilization’s infrastructure. Indeed, it will be intimately embedded in our bodies and brains. As such, it will reflect our values because it will be us …”¹⁰⁴

“But if that’s the answer, why are so many of the world’s smartest people so worried right now? Why does Stephen Hawking say the development of ASI ‘could spell the end of the human race,’ and Bill Gates says he doesn’t ‘understand why some people are not concerned’ and Elon Musk fears that we’re ‘summoning the demon?’ And why do so many experts on the topic call ASI the biggest threat to humanity?”¹⁰⁵

Read Part 8: “The Last Invention We Will Ever Make”—Existential Dangers Connected to AI Developments. You can also see ← prev|next → essay, Part 1 or table of contents.

This series was inspired by and based on an article from one of the best blogs in our galaxy. Wait But Why posts regularly. They send each post out by email to over 295,000 people — enter your email here and they’ll put you on the list (they only send a few emails each month). If you like this, check out The Fermi Paradox, How (and Why) SpaceX Will Colonize Mars, or Why Procrastinators Procrastinate. You can also follow Wait But Why on Facebook and Twitter.