Artificial Intelligence and Natural Voodoo

Walid Saba, PhD
Published in ONTOLOGIK
May 27, 2024

For several years now, artificial intelligence (AI) has been witnessing an unprecedented amount of misguided hype, led, unfortunately, by big tech and, worse yet, by some “prominent” AI researchers. Even reputable journals that we used to look to in order to keep abreast of the latest advances in AI have been swept up by the hype. Some have published papers whose main claim was scientifically refuted decades ago (e.g., “reward is enough”, which is essentially a revival of the discredited “behaviorism” of B. F. Skinner, even though Skinner has long since been skinned, as the late Dan Dennett once remarked!)

Scientifically false claims have also been made in thousands of peer-reviewed papers on other topics. Thousands of papers on “fine-tuning” were published in the past several years, yet we now know that fine-tuning is not only a futile attempt at making a ‘model’ more aware of the knowledge of some domain, but that it might actually cause a drop in performance. This should not have been surprising had we not been swept up by the hype, since slightly tweaking the weights of a massive neural model only adds to its unpredictability. But we had to publish thousands of (now) irrelevant papers anyway, papers with long lists of authors and nicely formatted tables that always showed (of course!) great numbers.

Numerous papers also keep appearing that claim some progress in explainability, yet it is a scientific fact that neural networks (NNs) are not explainable: computations in neural networks are not invertible, so the outputs cannot be traced back to the original input components (constituents). In NNs these components are lost once they are sub-symbolically distributed. It is that simple, yet numerous papers making such false claims continue to be published.
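To make the non-invertibility point concrete, here is a minimal sketch (my own illustration, not the author's): even a single linear layer maps many distinct inputs onto the same output, so nothing downstream of it can recover the input constituents.

```python
import numpy as np

# Illustrative only: a single linear layer y = W @ x that maps 8 input
# dimensions down to 4. Any vector in the null space of W can be added
# to the input without changing the output, so the mapping is many-to-one
# and the original input constituents cannot be recovered from y.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))

_, _, Vt = np.linalg.svd(W)
null_vec = Vt[-1]            # a direction that W maps to (numerically) zero

x1 = rng.normal(size=8)      # one input
x2 = x1 + 3.0 * null_vec     # a clearly different input

y1, y2 = W @ x1, W @ x2
print(np.allclose(y1, y2))   # True: identical outputs from distinct inputs
```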

Numerous papers are also still being published claiming that fully automatic programming by AI is near, although the theory of computation tells us that this is impossible to achieve in general: deciding whether an arbitrary program satisfies any non-trivial semantic specification is undecidable (Rice’s theorem), a result that ultimately rests on the undecidability of the halting problem, as the sketch below illustrates.
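The following is a purely illustrative sketch of my own (the names halts and paradox are hypothetical, not from the article); it is the classic diagonalization showing why no always-correct, always-terminating halting decider can exist, which is the bedrock of these impossibility results.

```python
def halts(func, arg) -> bool:
    """Pretend this is a perfect halting decider; no such total decider can exist."""
    raise NotImplementedError("a general halting decider cannot be implemented")

def paradox(f):
    # If halts() claims f(f) halts, loop forever; otherwise halt immediately.
    if halts(f, f):
        while True:
            pass
    return "halted"

# paradox(paradox) would halt if and only if halts() says it does not halt,
# a contradiction: hence no correct halts() can exist for all programs.
```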

There’s more. Thousands of papers have also been published to ‘measure’ the capabilities of large language models (LLMs) in reasoning, mostly in planning and problem solving. Some reflection on the nature of the underlying architecture of LLMs (namely, neural networks) should have made it obvious that LLMs cannot handle the kind of reasoning required in planning, since that reasoning involves the storage of, and reference to, symbolic variables, neither of which exists in NNs (if you want to be amused, just ask your favorite LLM to show the plan for going from the initial state to the goal state of the image below).

This is not alchemy: making a plan to get from the initial state to the goal state, even in this very simple example, requires the manipulation of symbolic variables of different types, something that is not available in purely sub-symbolic, distributed neural networks (I thought Jerry Fodor and others proved this decades ago, so save some PDF trees!). A small sketch of the symbolic machinery involved follows below.
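Since the original image is not reproduced here, the sketch below uses a hypothetical two-block example of my own. The point is only to show that even a trivial planner must create, store, and refer back to named symbolic variables (block names, ‘on’ atoms, partial plans) at every step.

```python
from collections import deque

def successors(state):
    """Yield (action, next_state) pairs for a toy two-block world."""
    blocks = {"A", "B"}
    for b in blocks:
        for dest in (blocks - {b}) | {"table"}:
            clear_b = not any(("on", x, b) in state for x in blocks)
            clear_dest = dest == "table" or not any(("on", x, dest) in state for x in blocks)
            if clear_b and clear_dest and ("on", b, dest) not in state:
                src = next(s for (_, x, s) in state if x == b)   # refer back to where b sits
                new_state = (state - {("on", b, src)}) | {("on", b, dest)}
                yield (f"move {b} from {src} to {dest}", frozenset(new_state))

def plan(initial, goal):
    """Breadth-first search from the initial state to any state satisfying the goal."""
    frontier, seen = deque([(initial, [])]), {initial}
    while frontier:
        state, actions = frontier.popleft()
        if goal <= state:                 # all goal atoms hold in this state
            return actions
        for action, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, actions + [action]))

# Hypothetical instance: both blocks on the table; goal is A stacked on B.
initial = frozenset({("on", "A", "table"), ("on", "B", "table")})
goal = frozenset({("on", "A", "B")})
print(plan(initial, goal))   # ['move A from table to B']
```

Every step of the search refers back to symbols created earlier: block names, ‘on’ atoms, and partially built plans. That bookkeeping over discrete, re-identifiable variables is exactly what a purely sub-symbolic, distributed representation does not provide.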

It is one thing for someone to say AGI is near, or that they feel they are witnessing “sentience” in some new AI model (you are free to feel anything you like), but scientific venues should not indulge “feelings” that might simply be the byproduct of a strange brew!

Perhaps all of this started when it became popular to offer an AI certificate, consisting of a few courses, to anyone who paid the fees. You would think that reputable academic institutions knew that one cannot become an “expert” with a couple of courses in AI. They certainly knew that cognition and the human mind, subjects that have occupied the most penetrating minds in human history for millennia, cannot be mastered in a few months (Plato, Leibniz, Russell, Frege, Immanuel Kant, Quine, Noam Chomsky, and others were hardly so mentally challenged that they had to spend their lifetimes studying the mind without ever claiming to have reached conclusive results).

We have been warning for a long time that this gold rush is getting out of hand. Even worse than an AI Winter, reaching a point where we cannot trust peer-reviewed research is very damaging in the long term.

Isn’t it time to bring ‘science’ back into ‘science and technology’?
