Apple Says LLMs Are Really Not That Smart

Vishal Rajput
Published in AIGuys
10 min read · Oct 28, 2024


A superb new article on LLMs from six AI researchers at Apple who were brave enough to challenge the dominant paradigm has just come out.

Everyone actively working with AI should read it, or at least this terrific X thread by senior author Mehrdad Farajtabar that summarizes what they observed. One key passage:

“we found no evidence of formal reasoning in language models …. Their behavior is better explained by sophisticated pattern matching — so fragile, in fact, that changing names can alter results by ~10%!”

Topics Covered

  • Understanding Reasoning And Planning Is Really Hard
  • Formal logic
  • The Source Of Confusion About LLMs' Capabilities
  • Failure Examples
  • Evaluating Benchmarks
  • Conclusion

Understanding Reasoning And Planning Is Really Hard

Just by looking at a system's outputs, it is very hard to tell whether the system is actually reasoning or merely doing complex pattern matching. Many people have only a limited understanding of what reasoning and planning actually mean.
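One way to probe the difference, in the spirit of the name-change fragility the Apple researchers report, is to hold a problem's logic fixed while varying irrelevant surface details such as proper nouns. The sketch below is a hypothetical illustration of that idea, not code from the paper; the template, names, and helper functions are all invented for this example. A system that genuinely reasons should answer every variant identically.

```python
# Hypothetical sketch of a name-perturbation probe: the arithmetic never
# changes, only the proper noun does, so a reasoning system should be
# invariant across all variants.

TEMPLATE = ("{name} picks {n} apples on each of {d} days. "
            "How many apples does {name} pick in total?")

NAMES = ["Sophie", "Liam", "Maya", "Omar"]

def make_variants(n=4, d=3):
    """Build the same word problem under different names.

    Returns (prompt, gold_answer) pairs; the gold answer is identical
    for every variant because only surface details differ.
    """
    return [(TEMPLATE.format(name=name, n=n, d=d), n * d) for name in NAMES]

def consistency(model_answers, gold):
    """Fraction of variants answered correctly.

    A genuine reasoner scores 1.0 regardless of names; a brittle
    pattern matcher may drop when names change.
    """
    return sum(a == gold for a in model_answers) / len(model_answers)
```

Scoring a model this way turns "does it reason?" into a measurable invariance check: if accuracy shifts by ~10% just because "Sophie" became "Omar", pattern matching is the better explanation.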
