How CoT works (from Wei et al., 2022)

Telling ChatGPT to “think step by step” doesn’t (actually) help much

Chain-of-thought prompting isn’t really that useful

Mike Young
10 min read · Jun 4, 2024


Chain-of-thought prompting (often reduced to telling an LLM to “think step by step”) has been hailed as a powerful technique for eliciting complex reasoning from ChatGPT and other LLMs.

The idea is simple: provide step-by-step examples of how to solve a problem, and the model will learn to apply that reasoning to new problems.
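To make that concrete, here’s a minimal sketch of what a few-shot chain-of-thought prompt looks like in Python. The worked example is adapted from Wei et al. (2022); the `query_llm` helper is a hypothetical placeholder for whatever LLM client you actually use, not an API from the paper.

```python
# A minimal few-shot chain-of-thought prompt, adapted from the worked
# examples in Wei et al. (2022). The idea: show the model a solved problem
# with its reasoning written out, then pose a new problem in the same format.

COT_PROMPT = """\
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls.
Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is
6 tennis balls. 5 + 6 = 11. The answer is 11.

Q: The cafeteria had 23 apples. If they used 20 to make lunch and
bought 6 more, how many apples do they have?
A:"""

def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call; wire in your own client."""
    raise NotImplementedError

# Given COT_PROMPT, the model is expected to imitate the exemplar and emit
# its own reasoning chain before the final answer, e.g.:
#   "They started with 23 apples. They used 20, leaving 3.
#    They bought 6 more, so 3 + 6 = 9. The answer is 9."
print(COT_PROMPT)
```

The zero-shot variant skips the worked example entirely and simply appends “Let’s think step by step.” to the question. The paper discussed below probes how well the few-shot version actually generalizes.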

But a new study says otherwise, finding that chain of thought’s successes are far more limited and brittle than widely believed.

This study is making waves on AImodels.fyi and especially on Twitter. Remember: thousands of AI papers, models, and tools are released every day, and only a few will be revolutionary. We scan repos, journals, and social media to bring them to you in bite-sized recaps, and this is one paper that’s broken through.

If you want someone to monitor and summarize these breakthroughs for you, become a subscriber. Read on to learn why CoT might be a waste of tokens!

Subscribe or follow me on Twitter for more content like this!

Overview

The paper “Chain of Thoughtlessness: An Analysis of CoT in Planning” presents a rigorous case…


Mike Young

Writing in-depth beginner tutorials on AI, software development, and startups. Follow me on Twitter @mikeyoung44!