The Messy, Secretive Reality Behind OpenAI’s Bid to Save the World

The AI moonshot was founded in the spirit of transparency. This is the inside story of how competitive pressure eroded that idealism.

From left to right: Greg Brockman, co-founder and CTO; Ilya Sutskever, co-founder and chief scientist; and Dario Amodei, research director. Photo: Christie Hemm Klok

By Karen Hao

Every year, OpenAI’s employees vote on when they believe artificial general intelligence, or AGI, will finally arrive. It’s mostly seen as a fun way to bond, and their estimates differ widely. But in a field that still debates whether human-like autonomous systems are even possible, half the lab bets it is likely to happen within 15 years.

In the four short years of its existence, OpenAI has become one of the leading AI research labs in the world. It has made a name for itself producing consistently headline-grabbing research, alongside other AI heavyweights like Alphabet’s DeepMind. It is also a darling in Silicon Valley, counting Elon Musk and legendary investor Sam Altman among its founders.

Above all, it is lionized for its mission. Its goal is to be the first to create AGI — a machine with the learning and reasoning powers of a human mind. The purpose is not world domination; rather, the lab wants to ensure that the technology is developed safely and its benefits distributed evenly to the world.
