Do you guys get my point re LLMs like ChatGPT having absorbed logic?

It’s impossible for them not to have.

Paul Pallaghy, PhD
1 min read · Dec 27, 2023

Are people thinking LLMs don’t understand the logical use of ‘if’ and ‘but’ and ‘because’? And 100 other conjunctions and prepositions?

Really?

That’s deluded thinking.

Of course they get those logical usages right 99.5% of the time. In novel and NESTED combinations.
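Here’s the kind of probe I mean (a minimal sketch, not my actual test harness): give the model nested conditionals built from nonsense predicates, so the right answer can’t be retrieved from training data. It assumes the OpenAI Python client; the model name, prompt templates and yes/no scoring rule are illustrative placeholders.

```python
# Minimal sketch: probe an LLM on novel, nested conditionals with known
# ground truth. Assumes the OpenAI Python client (pip install openai)
# and an OPENAI_API_KEY in the environment; model name, prompts, and
# the yes/no scoring rule are illustrative placeholders.
import itertools

from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model will do
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip().lower()

correct = 0
cases = list(itertools.product([True, False], repeat=2))
for p, q in cases:
    # Nonsense predicates ('glorpy wugs') make each probe novel, so a
    # right answer reflects the logic of if/and/otherwise, not recall.
    prompt = (
        f"Wugs are {'always' if p else 'never'} glorpy. "
        f"Daxes are {'always' if q else 'never'} glorpy. "
        "If a wug is glorpy and a dax is glorpy, then the fendle opens; "
        "otherwise it stays shut. Does the fendle open? Answer yes or no."
    )
    truth = p and q  # ground truth for this truth assignment
    answer = ask(prompt)
    correct += ("yes" in answer) == truth

print(f"{correct}/{len(cases)} nested-conditional probes correct")
```

Scale this up to thousands of randomized templates and connectives and you have the kind of novel-metric testing I refer to below.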

This is foundational to common-sense logic.

LLM pattern recognition allows them to learn the rules of logic. Once learned, LLMs are *almost* perfect at using these rules.

Neural nets are known to be highly parsimonious at extracting the underlying ‘cause’ of their data. LLMs have seen so many examples of logic words in use that they get them as well as, or better than, we do.

Because LLMs aren’t perfect, many nitpickers out there insist LLMs only regurgitate.

But that flies in the face of tests using numerous novel metrics of understanding, judged from the responses themselves. In my hands and in others’.

Within the prompt context, LLMs are highly logical. A few hallucinations don’t negate this general statement.

Given the amount of novel stuff they get right, you simply can’t fake solid NLU at that scale of testing.

Check out my previous post on this.
