Enough already! Here’s a balanced view on human-like AGI via LLMs like GPT-3

Paul Pallaghy, PhD
7 min read · Oct 25, 2022

There’s currently a vigorous debate underway between Symbolists and Connectionists regarding whether neural network-based Large Language Models (LLMs) like GPT-3 are exhibiting (or will, or even can ever, exhibit) capabilities expected of Artificial General Intelligence (AGI), or at least basic reasoning, common sense, and general natural language understanding (NLU).

Even elder-statesman luminaries like the celebrated linguist Noam Chomsky are weighing in (on the Symbolist side). On the other side of the debate you have the slightly crazy people who think GPT-3 is sentient (it’s not), and more grounded Connectionists like OpenAI’s Sam Altman who simply think deep learning can get us to AGI.

And then you've got people like me. Reasonable. Sane. Balanced. LOL. :)

But truly, it’s pretty clear that hybrid approaches, dominated by neural networks, will be the way ahead.

(BTW: here’s a recent update post of mine on this issue).

Large Language Models (e.g. GPT-3)

All this despite the fact that the likes of OpenAI’s GPT-3 and Google’s LaMDA are achieving truly wonderful outcomes, perhaps a decade ahead of anyone’s expectations, in human-like text generation and at least…
