
LLMs DO reason: old logic + new data = new inferences

The thought that LLMs can’t ‘reason’ — create new inferences — is completely bonkers.

2 min read · Nov 3, 2024

LLMs have been reasoning since GPT-2 (if you define reasoning as creating correct new inferences from your data).

Of course they do!

If you provide new data with implications a human could derive, a premium LLM these days will generate most of those implications just as well.

On logical-inference benchmarks, LLMs score in the 85th–98th percentile of humans. That is, LLMs are better at logic than most humans.

Don’t get confused by LLM hallucination, which occurs because of imperfect recall of the billions of factoids in the training set.

But the rules of logic themselves are each repeated millions of times within the training set.

The LLM does not hallucinate about what ‘because’ or ‘if’ or ‘despite’ implies.

LLMs, as early as GPT-2, have been highly reliable at turning new data injected into the prompt into correct new inferences.
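
Here’s a minimal sketch of that idea in Python, assuming the OpenAI chat client; the premises and model name are illustrative assumptions, and any recent chat-capable model would do.

```python
# Sketch: inject brand-new premises into the prompt and ask the model
# to derive what follows. The premises are invented for illustration.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# New "data" the model has never seen in training.
premises = (
    "1. Every device in the Kestrel lab is on the isolated VLAN.\n"
    "2. Devices on the isolated VLAN cannot reach the public internet.\n"
    "3. The spectrometer PC is a device in the Kestrel lab."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any recent chat model works here
    messages=[
        {"role": "system",
         "content": "State only conclusions that follow logically from the premises."},
        {"role": "user",
         "content": f"Premises:\n{premises}\n\nWhat new facts follow?"},
    ],
)

# Expected style of answer: "The spectrometer PC cannot reach the public internet."
print(response.choices[0].message.content)
```

The concluding fact appears nowhere in the premises verbatim; it has to be derived from them, which is exactly the kind of new inference being discussed.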

Thus LLMs have all the intelligence required to build up new knowledge that is not necessarily stated explicitly in a current data feed but is implied by it.

AGI needs this unit of intelligence that LLMs represent. It also needs more, including memory and real-world data access, but those are not intelligence per se.


Written by Paul Pallaghy, PhD

PhD Physicist / AI engineer / Biophysicist / Futurist into sustainable global prosperity thru green tech & AI. Archeology nut.
