OpenAI Founder: Short-Term AGI Is a Serious Possibility

Synced | SyncedReview | Nov 13, 2018

Artificial general intelligence (AGI) is the long-range, human-intelligence-level target of contemporary AI technology. It’s believed AGI has the potential to meet basic human needs globally, end poverty, cure diseases, extend life, and even mitigate climate change. In short, AGI is the tech that could not only save the world, but build a utopia.

While many AI experts believe AGI is still a far-fetched fantasy unachievable with existing tech, Ilya Sutskever, co-founder and research director of OpenAI, has a decidedly different point of view. In his keynote speech last Friday at the AI Frontiers Conference, Sutskever said “We (OpenAI) have reviewed progress in the field over the past six years. Our conclusion is near term AGI should be taken as a serious possibility.”

Moreover, Sutskever encouraged the AI community to begin taking measures to identify and prevent potential AGI-related risks, such as misspecified goals, malicious use of AI, or booming economies that don’t improve human lives.

Sutskever told the audience that one of the main reasons short-term AGI is possible is deep learning, the powerful AI technique that has “repeatedly and rapidly broken through ‘insurmountable’ barriers.” He presented the following deep learning achievements:

  • The benchmark ImageNet classification error rate fell from 26 percent in 2011 to 3.1 percent in 2016.
  • The BLEU score (a mainstream machine translation performance metric) for English-to-French translation on the WMT dataset increased from 37 in 2014 to 45 in 2018 (a short illustration of how BLEU is computed appears after the list).
  • Generative Adversarial Networks (GANs) are now capable of generating impressive images with high fidelity and diversity.
  • Reinforcement learning-enabled machines can now beat professional human players in games such as Go, Atari video games, and Dota 2.
Images generated by the BigGAN model at 512×512 resolution.
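For readers unfamiliar with the BLEU metric mentioned above, below is a minimal sketch of how a BLEU score is computed for a single sentence pair using NLTK's reference implementation. The sentences and the smoothing choice are illustrative assumptions, not taken from the WMT evaluation itself.

```python
# Minimal BLEU illustration (made-up sentences, not WMT data).
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["the", "cat", "sat", "on", "the", "mat"]]   # human reference translation(s)
candidate = ["the", "cat", "is", "on", "the", "mat"]      # machine translation output

# BLEU averages 1- to 4-gram precision and applies a brevity penalty;
# smoothing avoids a zero score when a higher-order n-gram never matches.
score = sentence_bleu(reference, candidate,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {100 * score:.1f}")  # commonly reported on a 0-100 scale, as in the WMT figures above
```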

A large part of AI’s progress is attributed to increasing compute power, which is used to accelerate the training of deep learning models. Over the past six years, the amount of CPU, GPU, and TPU compute used to train the largest neural networks has increased by over 300,000 times. Below is a visualization of the compute boom.
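As a rough sanity check on that figure (a back-of-the-envelope sketch assuming steady exponential growth, not a calculation from the talk), a 300,000-fold increase over roughly six years implies a doubling time of about four months:

```python
# Implied doubling time of training compute, assuming steady exponential growth.
import math

growth_factor = 300_000       # reported overall increase in training compute
years = 6                     # approximate time span cited in the article

doublings = math.log2(growth_factor)            # ~18.2 doublings
doubling_time_months = years * 12 / doublings   # ~4 months per doubling

print(f"{doublings:.1f} doublings, roughly one every {doubling_time_months:.1f} months")
```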

A visual from Sutskever’s presentation compares compute power used for different AI models from 2009 to 2018.

While AI researchers are still working on challenges in unsupervised learning, robustness, reasoning, abstraction, and so on, Sutskever argues that the technology is nonetheless becoming more capable at an unstoppable pace. While offering no guarantee, he suggests this may soon result in the emergence of an AGI.

However, not everyone believes that deep learning will achieve AGI. New York University Professor Gary Marcus has repeatedly argued in his papers and Medium blog posts that deep learning is unlikely to produce an AGI on its own. He suggests deep learning should be viewed “not as a universal solvent, but simply as one tool among many.” Unlike Sutskever, who has a mathematics and computer science background, Marcus, a professor of psychology and neural science, believes that unlocking the mysteries of the human mind is the path that will lead to a true AGI.

Sutskever’s position is fully aligned with the OpenAI mission. He co-founded the San Francisco-based non-profit research company with Greg Brockman and others in 2015 with the aim of ensuring an AGI will benefit all of humanity. OpenAI has raised US$1 billion from high-profile contributors such as Tesla CEO Elon Musk, Y Combinator President Sam Altman, and LinkedIn co-founder Reid Hoffman.

OpenAI has been preparing for the arrival of an AGI by publishing research papers focused on AI safety and has recently proposed a number of new techniques. Approaches such as AI safety via debate and iterated amplification are designed to keep models on track and working safely toward their goals even when a task’s complexity exceeds humans’ capacity to understand and judge it, for example designing a comprehensive public transit system.

OpenAI also joined Google Brain and UC Berkeley on a collaborative research effort designed to ensure that modern machine learning systems operate as intended and stay within bounds.

Swedish philosopher Nick Bostrom, founder of Oxford University’s Future of Humanity Institute, sparked passionate discussions with his 2014 book Superintelligence: Paths, Dangers, Strategies, which posits that an AGI might deliberately or accidentally destroy humankind. Vincent C. Müller, President of the European Association for Cognitive Systems, says “if the stakes [of a malicious AGI] are so high, even a fairly small possibility (say, 3%) is entirely sufficient to motivate the research.”

The machine learning academic community is calling for more research into broad-based considerations on an AGI’s potential impact on the world, as well as which safety research directions should be explored.

Let’s take a poll: Do you believe AGI will come soon?

Journalist: Tony Peng | Editor: Michael Sarazen

Follow us on Twitter @Synced_Global for daily AI news!

We know you don’t want to miss any stories. Subscribe to our popular Synced Global AI Weekly to get weekly AI updates.
