ChatGPT’s word statistics learn logic so well that, for novel input, it’s almost as good as intelligence

Paul Pallaghy, PhD
Jan 14, 2023 · 5 min read
IMAGE CREDIT | Warner Bros

Over the last few months I’ve been defending ChatGPT & GPT-3 as a usable major breakthrough in NLU (natural language understanding). I come at this as a fan and practitioner of both large language models (LLMs) and symbolic AI.

I do get the GPT skepticism.

But today I’ll try one more angle: GPT’s word statistics are so decently robust at logic, especially if you keep to a single-paragraph prompt, that outside of truly mission-critical applications it usually just works.

Whatever the case, it’s an incredible end-user tool for research of all sorts and a harnessable NLU technology for use inside apps.

With care, intelligent assessments and decisions can be extracted from GPT.

So for anything that can tolerate the odd muck up — e.g. end-user research — GPT is the way to go.
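As a minimal sketch of what “harnessing” GPT inside an app might look like (the API call shown in comments assumes the OpenAI Python client and model names circa early 2023, and the prompt is purely illustrative), the key is to map GPT’s free-text answer onto a well-defined value so the odd muck-up can’t silently corrupt your app’s logic:

```python
# Hypothetical sketch: turning a GPT completion into a yes/no decision.
# The parsing step is what makes GPT's output safe to act on in an app:
# anything that isn't clearly yes or no falls through to None.

def parse_decision(completion_text: str):
    """Map a free-text GPT answer onto True / False / None (unclear)."""
    text = completion_text.strip().lower()
    if text.startswith("yes"):
        return True
    if text.startswith("no"):
        return False
    return None  # unclear -> caller should retry, rephrase, or fall back

# Illustrative usage (requires the `openai` package and an API key;
# model name is an assumption from the GPT-3 era):
#
# import openai
# resp = openai.Completion.create(
#     model="text-davinci-003",
#     prompt="Answer yes or no: is a whale a mammal?\nAnswer:",
#     max_tokens=3,
#     temperature=0,
# )
# decision = parse_decision(resp["choices"][0]["text"])

print(parse_decision("Yes, whales are mammals."))  # True
print(parse_decision("Hmm, it depends."))          # None
```

The point of the wrapper is tolerance: when the model does muck up, your app sees `None` and can retry rather than acting on garbage.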

And it’s only gonna keep getting better every six months.

Quick re-cap

I’ve approached the GPT debate from up to 10 points of view (LOL) to try and present a balanced view of LLMs: that, despite their limitations and the skeptics, their merits far outshine anything symbolic AI has produced, and that understanding and…


Paul Pallaghy, PhD

PhD Physicist / AI engineer / Biophysicist / Futurist into global good, AI, startups, EVs, green tech, space, biomed | Founder Pretzel Technologies Melbourne AU