Sharpening The AI Problem

Artificial general intelligence will be humanity’s greatest achievement. But researchers must first agree on the problem they’re solving.

Peter Sweeney
inventing.ai

--


In 2017, the cognitive scientist and entrepreneur Gary Marcus argued that AGI needs a moonshot. In an interview with Alice Lloyd George, he said, “Let’s have an international consortium kind of like we had for CERN, the large hadron collider. That’s seven billion dollars. What if you had $7 billion dollars that was carefully orchestrated towards a common goal.”

Marcus felt that the political climate of the time made such a collective effort unlikely. But the moonshot analogy for AGI has taken hold in the private sector and captured the public imagination. In a 2017 talk, the CEO and co-founder of DeepMind, Demis Hassabis, invoked the moonshot analogy to describe his company as “a kind of Apollo program effort for artificial intelligence.” Hassabis unpacks his vision with pitch-deck efficiency: first they’ll understand human intelligence, then they’ll recreate it artificially. AGI will thereafter solve everything else.

A similar moonshot vision was expressed in the recent $1-billion partnership between OpenAI and Microsoft, a competitive response to Google and Amazon. As reported by Cade Metz, “Eventually, [Sam] Altman and…

--


Entrepreneur and inventor | 4 startups, 80+ patents | Writes on the science and philosophy of problem solving. Peter@ExplainableStartup.com | @petersweeney