When Will The First Machine Become Superintelligent?

Predictions from Top AI Experts

Pawel Sysiak
AI Revolution

--

Note 1: You can read an updated version of AI Revolution 101. I edited it slightly and combined each piece into one longer article.
Note 2: This is the 6th part of a short essay series aiming to condense knowledge on the Artificial Intelligence Revolution. Feel free to start reading here or navigate to Part 1, the previous or next essay, or the table of contents. The project is based on the two-part essay AI Revolution by Tim Urban of Wait But Why. I recreated all images, condensed the text to roughly a third of its original length, and tweaked it a bit. Read more on why/how I wrote it here.

“How long until the first machine reaches superintelligence? Not shockingly, opinions vary wildly, and this is a heated debate among scientists and thinkers. Many, like professor Vernor Vinge, scientist Ben Goertzel, Sun Microsystems co-founder Bill Joy, or, most famously, inventor and futurist Ray Kurzweil, agree with machine learning expert Jeremy Howard when he puts up this graph during a TED Talk:

Graph by Jeremy Howard from his TED talk “The wonderful and terrifying implications of computers that can learn.”

“Those people subscribe to the belief that this is happening soon — that exponential growth is at work and machine learning, though only slowly creeping up on us now, will blow right past us within the next few decades.

“Others, like Microsoft co-founder Paul Allen, research psychologist Gary Marcus, NYU computer scientist Ernest Davis, and tech entrepreneur Mitch Kapor, believe that thinkers like Kurzweil are vastly underestimating the magnitude of the challenge [and the transition will actually take much more time] ...

“The Kurzweil camp would counter that the only underestimating that’s happening is the underappreciation of exponential growth, and they’d compare the doubters to those who looked at the slow-growing seedling of the internet in 1985 and argued that there was no way it would amount to anything impactful in the near future.

“The doubters might argue back that the progress needed to make advancements in intelligence also grows exponentially harder with each subsequent step, which will cancel out the typical exponential nature of technological progress. And so on.

“A third camp, which includes Nick Bostrom, believes neither group has any ground to feel certain about the timeline and acknowledges both A) that this could absolutely happen in the near future and B) that there’s no guarantee about that; it could also take a much longer time.

“Still others, like philosopher Hubert Dreyfus, believe all three of these groups are naive for believing there is any potential for ASI [Artificial Super Intelligence — AI that achieves a level of intelligence smarter than all of humanity combined], arguing that it’s more likely ASI will never actually be achieved.

“So what do you get when you put all of these opinions together?”⁸⁶

Timeline for Artificial General Intelligence

(AGI — AI that reaches human-level intelligence)

“In 2013, Vincent C. Müller and Nick Bostrom conducted a survey that asked hundreds of AI experts … the following:”⁸⁷

“For the purposes of this question, assume that human scientific activity continues without major negative disruption. By what year would you see a (10% / 50% / 90%) probability for such Human-Level Machine Intelligence [or what we call AGI] to exist?”⁸⁸

The survey “asked them to name an optimistic year (one in which they believe there’s a 10% chance we’ll have AGI), a realistic guess (a year they believe there’s a 50% chance of AGI — i.e. after that year they think it’s more likely than not that we’ll have AGI), and a safe guess (the earliest year by which they can say with 90% certainty we’ll have AGI). Gathered together as one data set, here were the results:

Median optimistic year (10% likelihood) → 2022
Median realistic year (50% likelihood) → 2040
Median pessimistic year (90% likelihood) → 2075

“So the median participant thinks it’s more likely than not that we’ll have AGI 25 years from now. The 90% median answer of 2075 means that if you’re a teenager right now, the median respondent, along with over half of the group of AI experts, is almost certain AGI will happen within your lifetime.

“A separate study, conducted recently by author James Barrat at Ben Goertzel’s annual AGI Conference, did away with percentages and simply asked when participants thought AGI would be achieved — by 2030, by 2050, by 2100, after 2100, or never. The results:⁸⁹

42% of respondents → By 2030
25% of respondents → By 2050
20% of respondents → By 2100
10% of respondents → After 2100
2% of respondents → Never

“Pretty similar to Müller and Bostrom’s outcomes. In Barrat’s survey, over two thirds of participants believe AGI will be here by 2050 and a little less than half predict AGI within the next 15 years. Also striking is that only 2% of those surveyed don’t think AGI is part of our future.”⁹⁰

Timeline for Artificial Super Intelligence

(ASI — AI that achieves a level of intelligence smarter than all of humanity combined)

“Müller and Bostrom also asked the experts how likely they think it is that we’ll reach ASI A) within two years of reaching AGI (i.e. an almost-immediate intelligence explosion), and B) within 30 years.”⁹¹ Respondents were asked to choose a probability for each option. Here are the results:⁹²

AGI–ASI transition in 2 years → 10% likelihood
AGI–ASI transition in 30 years → 75% likelihood

“The median answer put a rapid (2-year) AGI–ASI transition at only a 10% likelihood, but a longer transition of 30 years or less at a 75% likelihood. We don’t know from this data what transition length [AGI–ASI] the median participant would have put at a 50% likelihood, but for ballpark purposes, based on the two answers above, let’s estimate that they’d have said 20 years.

“So the median opinion — the one right in the center of the world of AI experts — believes the most realistic guess for when we’ll hit ASI … is [the 2040 prediction for AGI + our estimated prediction of a 20-year transition from AGI to ASI] = 2060.

“Of course, all of the above statistics are speculative, and they’re only representative of the median opinion of the AI expert community, but they tell us that a large portion of the people who know the most about this topic would agree that 2060 is a very reasonable estimate for the arrival of potentially world-altering ASI. Only 45 years from now.”⁹³
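One way to sanity-check the 20-year ballpark used above is to interpolate linearly between the two figures the survey actually reports: a 10% likelihood at 2 years and a 75% likelihood at 30 years. The short Python sketch below is only an illustration of that arithmetic, not a calculation from the survey itself; it assumes a straight-line rise in probability between those two points (which real expert opinion need not follow), and the function name is purely illustrative.

    # Back-of-the-envelope check of the ~20-year AGI-ASI transition ballpark.
    # Assumption (not from the survey): the probability of the transition having
    # happened rises linearly between 10% at 2 years and 75% at 30 years.

    def years_at_probability(p_target, p1=0.10, y1=2.0, p2=0.75, y2=30.0):
        """Linearly interpolate the transition length (in years) at which the
        cumulative probability reaches p_target, given two known points."""
        return y1 + (p_target - p1) * (y2 - y1) / (p2 - p1)

    if __name__ == "__main__":
        estimate = years_at_probability(0.50)
        print(f"Estimated 50%-likelihood transition length: {estimate:.1f} years")
        # Prints roughly 19.2 years, consistent with the ~20-year ballpark,
        # and hence with 2040 (AGI) + ~20 years = ~2060 (ASI).

Read this as nothing more than a consistency check on the arithmetic in the quoted passage; it adds no information beyond the two survey figures.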

Read Part 7: “Who’s right?”—Two Major Groups of AI Scientists. You can also see the previous or next essay, Part 1, or the table of contents. Subscribe below.

This series was inspired by and based on an article from one of the best blogs in our galaxy. Wait But Why posts regularly. They send each post out by email to over 295,000 people — enter your email here and they’ll put you on the list (they only send a few emails each month). If you like this, check out The Fermi Paradox, How (and Why) SpaceX Will Colonize Mars, or Why Procrastinators Procrastinate. You can also follow Wait But Why on Facebook and Twitter.
