Former OpenAI Chief Scientist Ilya Sutskever Launches New AI Firm to Enhance AI Safety

Ibrahim Murtaza
TechCraft Chronicles
3 min read · Jun 27, 2024

Ilya Sutskever, the former chief scientist and co-founder of OpenAI, has launched a new artificial intelligence company named Safe Superintelligence Inc. (SSI). This move comes just a month after Sutskever’s departure from OpenAI and marks a significant step towards addressing the critical challenges of AI safety and security.

Formation and Mission

Sutskever founded SSI in collaboration with Daniel Gross, a former Y Combinator partner and Apple AI lead, and Daniel Levy, an ex-OpenAI engineer. The trio is united by a single goal: building safe superintelligence. SSI’s business model centers on safety, security, and long-term progress, a clear departure from the profit-oriented strategies that have become common in the AI sector.

“Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures,” stated Sutskever in a recent post on X, formerly known as Twitter.

Research and Development Focus

SSI is dedicated to research-based initiatives to bridge gaps in the AI sector. Unlike OpenAI, which has shifted towards profit-driven goals, SSI will prioritize research over profit. The company plans to enhance AI capabilities beyond large language models (LLMs) and develop safe superintelligence tools to improve people’s quality of life.

Guided by core values such as liberty and democracy, SSI aims to hire a forward-thinking team to drive its mission forward. The company is currently recruiting technical talent for its offices in Palo Alto and Tel Aviv.

A New Chapter for AI Safety

Sutskever’s departure from OpenAI was reportedly driven in part by the organization’s shift away from its original nonprofit mission. He played a pivotal role in OpenAI’s superalignment team, working alongside Jan Leike to strengthen AI safety tools. Both Sutskever and Leike left OpenAI after disagreements with leadership over the company’s approach to AI safety. Leike now leads a team at rival AI firm Anthropic.

In a 2023 blog post co-authored with Leike, Sutskever predicted that AI surpassing human intelligence could arrive within the decade. He stressed the importance of researching ways to steer and control such systems to ensure they remain aligned with human interests.

Future Outlook

While SSI’s funding situation and valuation remain undisclosed, the company’s credentials and the growing interest in AI suggest it will attract significant capital. Daniel Gross expressed confidence in their ability to raise funds, stating, “Out of all the problems we face, raising capital is not going to be one of them.”

As SSI moves forward, it aims to revolutionize the AI sector by maintaining a balance between advancing capabilities and ensuring safety. Sutskever’s new venture promises to bring transformative changes, addressing the critical technical challenges and regulatory concerns that continue to shape the future of artificial intelligence.

I hope you found this helpful. Stay tuned for more.

Follow me for more: https://medium.com/@maxerom

Reach out to me on LinkedIn: https://www.linkedin.com/in/ibrahim-murtaza-5013/

Note: If you spot any issues with this post, kindly reach out to me on LinkedIn.

Let me know if there is anything I missed or could improve.