ChatGPT Turbo vs ChatGPT Plus: Testing the First Priority Feature

Kevin Menear
3 min read · Feb 13


If you read my recent review of ChatGPT Plus, you’ll know I was pleasantly surprised by the new Turbo option. This is the first priority feature released to Plus subscribers (priority access to new features being one of the three perks OpenAI promises with the subscription).

This is how the Turbo option shows up when starting a new chat.

The Turbo model is different from the regular ChatGPT model. I haven’t found an official release from OpenAI about it, but it is labeled as an alpha release of a model optimized for speed, which I think says enough.

ChatGPT Turbo in action.

Subjectively, the model feels faster, but I wanted to put it to the test with the same prompts I used when comparing ChatGPT Plus with ChatGPT Free (can I call it that??). In those time tests, ChatGPT Plus responded an average of 2.56 times faster than ChatGPT Free.

I finally had time today to run these same tests with ChatGPT Turbo and ChatGPT Plus. For a detailed discussion of the methods, check the original time-test article linked above.

Assuming you’ve seen those methods & prompts, here are the results:

Figure 1: ChatGPT Turbo responded faster than ChatGPT Plus for all three trials with all five prompts.

Here we see a clear advantage with ChatGPT Turbo (in purple) over ChatGPT Plus (green). Turbo responded faster than Plus in all fifteen trials.

Figure 2: Average speedup factor for all five prompts. This was calculated by taking the average time for each prompt with ChatGPT Plus and dividing it by the average time for the same topic with ChatGPT Turbo.

This figure shows the average speedup (the dashed purple line) is 1.44, which means ChatGPT Turbo is faster than ChatGPT Plus by a factor of 1.44, on average.

Multiplying this 1.44 speedup by the 2.56 speedup of Plus over Free gives a combined speedup factor of 3.69 for Turbo over Free. To put that into perspective, a response that takes ChatGPT Free 1 minute would take the Turbo model about 16 seconds. That’s a major difference.
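The speedup arithmetic above can be sketched in a few lines of Python. The timing values here are hypothetical placeholders, not my actual trial data; only the 2.56 Plus-over-Free factor comes from the earlier tests:

```python
# Hypothetical per-prompt average response times in seconds (placeholder data).
plus_times = [30.0, 45.0, 20.0, 60.0, 50.0]   # ChatGPT Plus
turbo_times = [21.0, 31.0, 14.0, 42.0, 35.0]  # ChatGPT Turbo

# Per-prompt speedup: Plus average time divided by Turbo average time.
speedups = [p / t for p, t in zip(plus_times, turbo_times)]
avg_speedup = sum(speedups) / len(speedups)

# Chain with the earlier Plus-over-Free factor to estimate Turbo over Free.
plus_over_free = 2.56
turbo_over_free = avg_speedup * plus_over_free

# A 60-second Free response would take roughly this long with Turbo.
turbo_seconds = 60 / turbo_over_free

print(f"avg speedup (Turbo vs Plus): {avg_speedup:.2f}")
print(f"combined speedup (Turbo vs Free): {turbo_over_free:.2f}")
print(f"60s Free response takes about {turbo_seconds:.1f}s with Turbo")
```

With the article's measured 1.44 average in place of the placeholder times, the same chain yields the 3.69 combined factor and the roughly 16-second figure quoted above.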

I have yet to run a quality test with these two models. My hunch is that optimizing for speed comes at the cost of response quality. No free lunch, right?

I think this new model warrants such analysis. When I come up with a good test and run it, I’ll update this article.

UPDATE: So much for Turbo! I just got a notification that Turbo is the new default, and the old Plus model is now labeled legacy.

UPDATE 3/14: I just tested GPT-4. See the results here.

Oh Turbo, we hardly knew ye.



I write about AI, Mathematics, and Education, and explore Generative AI by using it out in the open.