o1: OpenAI’s New Model Series That ‘Thinks’ Before Answering

Himanshi
Analytics Vidhya
5 min read · Sep 13, 2024


Have you heard the big news? OpenAI just rolled out a preview of a new series of AI models: OpenAI o1 (also known as Project Strawberry/Q*). These models are special because they spend more time “thinking” before they give you an answer, which makes them better at tackling really tough problems in areas like science, coding, and math compared to earlier models.

OpenAI is taking the motto “Think Before You Speak” to heart with the o1 series!

What’s the Big Deal?

The o1-preview models are trained to take a step back and really think things through, much like a human would when faced with a tough problem. They consider different approaches, refine their thoughts, and even catch their own mistakes along the way. This deeper level of thinking allows them to solve problems that older models couldn’t handle.

Use Cases of OpenAI o1

The launch demos showed o1 handling tasks like:

  • Coding with OpenAI o1
  • Writing Puzzles with OpenAI o1
  • HTML Snake with OpenAI o1

Impressive Test Results

To see how much better o1 is compared to the earlier GPT-4o model, OpenAI put them through a series of tough tests, including human exams and machine learning benchmarks. And guess what? o1 outperformed GPT-4o on most of these reasoning-heavy tasks!

Let’s break down some of the results:

Advanced Math Competitions

They tested the models on the AIME (American Invitational Mathematics Examination), a notoriously difficult math exam given to top high school students in the U.S.

  • GPT-4o: Solved about 12% of the problems (roughly 1.8 out of 15 questions).
  • o1: Solved 74% with just one attempt per problem (around 11.1 out of 15). With consensus voting across 64 sampled answers per problem, it scored 83%. Re-ranking 1,000 samples with a learned scoring function pushed it to 93%, solving about 13.9 out of 15 problems!

To put that into perspective, a score of 13.9 would place o1 among the top 500 students nationally and above the cutoff for the USA Mathematical Olympiad. That’s some serious brainpower!
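The question counts above follow directly from the reported percentages. A quick sketch to check the arithmetic, assuming the standard 15-question AIME format:

```python
# AIME has 15 questions; convert each reported solve rate into an
# approximate number of questions solved.
AIME_QUESTIONS = 15

reported_rates = {
    "GPT-4o (single attempt)": 0.12,
    "o1 (single attempt)": 0.74,
    "o1 (consensus answer)": 0.83,
    "o1 (re-ranked samples)": 0.93,
}

for setting, rate in reported_rates.items():
    solved = rate * AIME_QUESTIONS
    print(f"{setting}: ~{solved:.1f} / {AIME_QUESTIONS} questions")
```

Rounding to one decimal place reproduces the figures quoted above: 1.8, 11.1, and roughly 13.9 questions.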

Science Expertise

They also evaluated o1 on GPQA-diamond, a tough benchmark that tests knowledge in chemistry, physics, and biology. OpenAI even brought in experts with PhDs to answer these questions.

  • Result: o1 outperformed these human experts, becoming the first AI model to do so on this benchmark! This shows that o1 can solve complex scientific problems at a very high level.

Coding

In coding competitions like Codeforces, the new models reached the 89th percentile, showing they can generate and debug complex code with ease.

Other Benchmarks and Visual Understanding

But that’s not all! The o1 model also showed significant improvements in other areas:

Understanding Visual Information (Vision Perception)

The o1 model can now interpret and understand images, a capability known as vision perception. This means it can analyze visual data and answer questions about it, which is a big step forward for AI.

Multimodal Reasoning Test (MMMU Benchmark)

OpenAI also tested o1 on a challenging benchmark called MMMU (Massive Multi-discipline Multimodal Understanding). This test pairs images with college-level questions across dozens of subjects, from art and engineering to medicine, to evaluate how well an AI can reason about what it sees.

Result: o1 scored 78.2% on this test, making it the first AI model to be competitive with human experts on the benchmark. This is huge because these questions demand both careful visual interpretation and real domain knowledge.

Wide Range of Knowledge (MMLU Benchmark)

The o1 model was also tested on the MMLU (Massive Multitask Language Understanding) benchmark, which covers 57 different subjects ranging from history and literature to mathematics and computer science.

Result: o1 outperformed GPT-4o in 54 out of 57 subjects! This shows that o1 isn’t just specialized in one area; it’s demonstrating improved understanding across a broad spectrum of topics.

In simpler terms, o1’s ability to understand both text and images means it’s becoming more versatile and capable. Whether it’s analyzing complex medical images, solving advanced math problems, or answering questions across various subjects, o1 is setting new standards for what AI can do.

Meet o1-mini

OpenAI has also introduced o1-mini, a smaller, faster, and more affordable version of the o1-preview model that’s especially good at coding tasks. It’s 80% cheaper, making it a great option for developers who need powerful reasoning abilities without breaking the bank.

Who can use ChatGPT o1-preview?

These new models are a game-changer for anyone dealing with complex problems:

  • Researchers and Scientists: They can help annotate cell sequencing data or generate complex formulas needed in fields like quantum physics.
  • Developers: Building and executing multi-step workflows becomes easier and more efficient.
  • Students and Educators: They offer a new way to explore challenging concepts in math and science.

How to access ChatGPT o1-Preview?

  • ChatGPT Plus and Team Users: You can access the o1-preview and o1-mini models in ChatGPT starting today. Just select them from the model picker. There are weekly message limits for now (30 messages for o1-preview and 50 for o1-mini), but OpenAI is working to increase these limits soon.

  • ChatGPT Enterprise and Edu Users: You’ll get access to both models starting next week.
  • Developers: If you’re in API usage tier 5, you can start experimenting with these models through the API today. Some features like function calling and streaming aren’t available yet, but they’re on the way.
  • ChatGPT Free Users: Great news! OpenAI plans to make o1-mini available to all free users soon.
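For developers with API access, a request is just a standard chat-completions call with the new model name. Below is a minimal sketch using only the Python standard library (endpoint and model names as announced; note that at launch the o1 models don’t support system prompts, streaming, or function calling, so everything goes in a single user message):

```python
import json
import os
import urllib.request

def build_o1_request(prompt: str, model: str = "o1-preview") -> dict:
    # o1 does its chain-of-thought internally, so the whole task goes
    # into one user message; "o1-mini" works the same way.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask_o1(prompt: str) -> str:
    # Requires OPENAI_API_KEY in the environment and tier-5 API access.
    payload = build_o1_request(prompt)
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

The official `openai` SDK wraps the same endpoint, so once streaming and function calling arrive, switching over should only mean changing the transport, not the payload shape.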

Safety Also Matters

OpenAI has also stepped up the safety features with these models. They’ve been trained to better understand and follow safety guidelines by reasoning about the rules during conversations. This means they’re less likely to be tricked into doing something they shouldn’t (you might have heard of “jailbreaking” AI models).

In tough safety tests, the o1-preview model scored 84 out of 100, compared to GPT-4o’s score of 22. That’s a significant improvement, showing they’re much better at staying within safe and appropriate boundaries.

OpenAI is working closely with safety organizations in the U.S. and U.K. They’ve even given these institutes early access to the models to help with research and ensure everything is up to par.

Final Thoughts

The launch of the o1-preview and o1-mini models is a big deal in the AI world. They represent a significant step forward in how AI can reason through complex problems. With better performance and enhanced safety measures, these models are set to be game-changers for many people working on challenging tasks.

Stay tuned to the Analytics Vidhya blog to learn more about the uses of o1 and o1-mini!

Originally published at https://www.analyticsvidhya.com on September 13, 2024.
