Remind Me Again Why Large Language Models Can’t Think

Microprediction
The Modern Scientist
39 min read · Apr 6, 2023


To avoid looking like an unsophisticated fool, it is safest to avow that LLMs cannot reason, think, or truly perceive the world. Sometimes people who express even the possibility of an alternative are pointed in the direction of “ML for Dummies” and the implicit reductionist case.

Yet as Sister Aloysius said, “I have doubts. I have such doubts!”

This screed is too long. I apologize. But I’ve been tapping away here and there since this alien called ChatGPT arrived, and I was further inspired by a wonderful debate hosted at NYU recently. Unlike many of you who work at firms that have banned its use, I am not only free to spend countless hours interrogating the Dirk Gently machines in various ways, but feel compelled to do so, given some experiments (that I cannot talk about) which establish its commercial utility beyond doubt.

It is in the process of this journey into the interconnectedness of all things that I have begun to lose my faith in reductionism as it applies to LLMs, and started to demand evidence. I’ve also started to recognize some patterns in my own thought that are a bit more machine-like than I previously assumed. So let us consider the public trial of a machine, one that has been charged with possession of superficial intelligence.

Mysterious representations are revealed, albeit indirectly, and in the “wrong” format.

It wants a fair trial
