Language != Thinking

Syntax/semantics vs reasoning

Sharad Joshi
6 min read · Feb 12, 2023


Photo by Milad Fakurian on Unsplash

The recent progress and popularity of Large Language Models (models trained autoregressively to predict the next word given the preceding text, or via masked language modelling; the former is sketched in code below) have divided the AI community in two —

Group A : We’re at the cusp of an AI revolution. AGI is within reach in the next few years.

Group B : LLMs are nothing special, just stochastic parrots.

Both groups misjudge the current state of AI: one overhypes it, the other undersells it.
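To make the parenthetical above concrete, here is a minimal sketch of what autoregressive next-word prediction looks like in practice. GPT-2 and the prompt are my own illustrative choices, not something the post specifies; it assumes the Hugging Face transformers and torch packages are installed.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# GPT-2 chosen purely for illustration; any autoregressive LM works the same way.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # logits has shape (batch, seq_len, vocab_size)
    logits = model(**inputs).logits

# The distribution over the *next* token, given everything seen so far.
next_token_probs = logits[0, -1].softmax(dim=-1)
top5 = torch.topk(next_token_probs, k=5)
for prob, idx in zip(top5.values, top5.indices):
    print(f"{tokenizer.decode(idx.item())!r}: {prob:.3f}")
```

Training simply pushes this distribution toward the word that actually came next, over billions of sentences. That is the entire objective; everything else the debate is about emerges (or doesn't) from it.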

A new paper puts forth a cognitively inspired framework for thinking about LLMs.

In this paper, the authors argue that two kinds of fallacies are at play when evaluating the performance of LLMs —

Fallacy A : If a model is good at language, it must be good at thinking as well.

Fallacy B : If a model is bad at thinking, it must be bad at language as well.

Group A commits fallacy A, i.e. being able to generate long, coherent, grammatically correct…
