The Real Reason OpenAI Canned Its Safety Team

It’s the end of superalignment as we know it (and I feel fine)

Thomas Smith
The Generator

--

Illustration by the author via DALL-E

If you believe the hype, this month OpenAI embarked on an unchecked, world-threatening rampage of AI development.

Sam Altman is apparently running amok, building powerful general intelligence that will crater democracies, outsmart humans, and ultimately put our very existence at risk.

Why all the sudden existential concern? Last week, Ilya Sutskever, a long-serving OpenAI leader and the head of the company’s internal safety team, departed abruptly under reportedly mysterious circumstances.

Shortly thereafter, Sam Altman dissolved OpenAI’s entire safety team, reassigning its members to other roles. The company later replaced the team with a much smaller, weaker safety committee.

A company that’s developing the world’s most advanced artificial intelligence suddenly canning its safety team isn’t a good look.

But does it really indicate that Altman and his crew are hell-bent on unfettered world domination?

What Ilya Saw

Ilya’s departure is certainly a big shake-up at OpenAI. Rumors suggest that he left over concerns about the company’s lack of internal safety policies.
