The Paradox of OpenAI’s New “Reasoning” Model, “Strawberry”

Eric Sentell
Published in Age of Awareness
5 min read · Sep 27, 2024


Photo by Zak G on Unsplash

OpenAI recently released a preview version of a new AI model officially called “OpenAI o1” and unofficially known as Strawberry, its secret codename within the company. Allegedly, Strawberry can “reason” circles around previous AIs and most humans.

ChatGPT flunked the American Invitational Mathematics Examination (AIME), but Strawberry scored a solid B (83%). Glancing at an example question, I am confident that even ChatGPT would score better than I would.

I’m an English professor, so maybe that’s not saying much. Yet according to OpenAI, Strawberry can perform similarly to PhD students in chemistry, physics, and biology on reasoning tasks within those fields. The model also reached the 89th percentile in coding competitions on Codeforces.

Robots automated much of manufacturing. ChatGPT came for writing. Now, Strawberry guns for math and coding. Is any human endeavor safe from humans’ efforts to replace humans with machines?

That’s a paradox, but it’s not the paradox of Strawberry or any other “reasoning” AI.

The paradox is that we create tools to give us answers we already know, hoping they will eventually give us answers we don’t know, while ignoring the reality that they can’t solve what they don’t know.

