- Hawking has expressed his opinions about topics that range from extraterrestrial life to artificial intelligence (AI), and of the latter, he has serious misgivings.
- Like many other scientists and thinkers today, Hawking is concerned that the rise of AI brings serious negative side effects.
- He also worries that AI may one day take over the world or, worse yet, end it.
- Our best bet against this AI uprising, he now tells The Times, is the creation of “some form of world government” that could control the technology.
- Institutions such as the Partnership on AI and the Ethics and Governance of Artificial Intelligence Fund (or AI Fund) have already begun drafting guidelines and frameworks for building AI more conscientiously.