Debunking ‘LLMs like GPT can’t learn logic’

Paul Pallaghy, PhD
3 min read · Jan 16, 2024

Here we’ll look at how LLMs are in fact primarily doing generalization, not regurgitation. Plus I comment on parsimony and decipherment.

Both brains and LLMs are built from neurons that, at a low level, are undoubtedly making correlation-only inferences.

It’s the system that learns (causal) logic through examples.
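To make that point concrete, here is a minimal sketch of my own (not from any LLM codebase, and under obvious simplifying assumptions): every unit below only computes a weighted sum plus a squashing function, a purely correlational operation, yet the network as a whole learns the XOR truth table, a logical function no single such unit can represent, purely from labelled examples.

```python
# Minimal sketch: correlation-only units, system-level logic learned from examples.
import numpy as np

rng = np.random.default_rng(0)

# XOR truth table: inputs and targets
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two-layer network: 2 inputs -> 8 hidden units -> 1 output, sigmoid activations
W1 = rng.normal(size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: each unit is just a weighted sum followed by squashing
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass for squared-error loss
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

# Outputs should approach [[0], [1], [1], [0]]: XOR learned from examples alone
print(np.round(out, 2))
```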

Here’s my full debunk.

1. Many of us have logic-tested LLMs on complex tests not in the training data.

Including nested logic.

Peer-review-style tests: pure logic, abductive (probabilistic) reasoning, and multi-part tasks.

Premium LLMs like GPT-4 are near-perfect on them.

And the tests are novel input, not in the training set.

Plus, of course, we’ve all personally tested GPT-4 and other premium LLMs in everyday use. LLMs can’t possibly have seen all our unique contexts, domains and logic combinations!
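As an illustration of what such an ad-hoc test can look like (a sketch only, not the peer-review-style protocols mentioned above), here is a freshly invented nested-logic puzzle posed to a model via the OpenAI Python client. The puzzle, its wording and the model name are my placeholders; the intended unique answer is Ava–goat, Ben–fox, Cal–hen.

```python
# Hedged example of logic-testing an LLM on a novel puzzle it cannot have memorized.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

puzzle = (
    "Ava, Ben and Cal each own exactly one pet: a fox, a goat or a hen. "
    "If Ava owns the fox, then Ben owns the hen. "
    "Cal does not own the goat. "
    "Ben does not own the hen. "
    "Ava does not own the hen. "
    "Who owns which pet? Explain your reasoning step by step."
)

response = client.chat.completions.create(
    model="gpt-4",  # placeholder for whichever premium model is being tested
    messages=[{"role": "user", "content": puzzle}],
)

# Check the model's chain of reasoning against the unique solution by hand.
print(response.choices[0].message.content)
```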

2. Claims that current LLMs have ‘insufficient data to understand logic’ are baseless

This seems like a cop-out: a distraction argument with no evidence behind it.

