Documenting Arguments Against Artificial General Intelligence
--
Following conversations I’ve had with friends, and threads on the Internet about the recent leaps in AI and the questions they’re raising, I thought it would be interesting to document a few arguments from people who currently disagree about AGI and how close it is to human and animal intelligence.
They lack autonomy
One line of argument that seems to dampen the excitement around GPT and related generative models, with respect to how closely they simulate (aspects of) human intelligence and creativity, can be summed up in this comment on Hacker News:
Leaving aside the question of whether that combination of images is novel… In your example, all the proposed novelty is specified in the prompt. ChatGPT didn’t come up with it, you did.
> Isn’t it pretty much what any person would do, mash up some related words?
Yes, that’s exactly what chatgpt does, and it’s what many humans do. But to be analogous to your initial example, there should be another person who actually came up with the instruction specifying which words to mash into a poem. The word-masher, whether human or chatgpt, is just following the instructions, not coming up with them.
Key to this line of thought is that, to satisfy the OP’s requirement, ChatGPT would have to “come up with” its own instructions and execute on them; that it simply waits for instructions to follow means it is not intelligent enough to deserve the praise heaped on it in this earlier comment.
Autonomy was the secret sauce missing from their evaluation of whether this model was “intelligent” or simply a “word-masher”.
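To make the distinction concrete, here’s a minimal sketch of the two modes in Python. Everything in it is hypothetical and mine, not the commenter’s: `complete` is an imaginary stand-in for any text-generation API, not a real library call.

```python
def complete(prompt: str) -> str:
    """Imaginary stand-in for a call to a text-generation model."""
    raise NotImplementedError  # placeholder; no real model behind this

# The "word-masher": all of the novelty arrives from outside, in the prompt.
def follow_instruction(instruction: str) -> str:
    return complete(instruction)

# What the commenter asks for: the system writes its own instruction and then
# executes it. Even here, though, a human supplied the seed prompt, which is
# where the philosophical regress begins.
def self_directed() -> str:
    instruction = complete("Decide which words to mash into a poem.")
    return complete(instruction)
```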
They’re not generalists
Another argument I’ve encountered focused on the “G” in Artificial General Intelligence. While the speaker agreed that an AI model trained on specific data sets can master them to the point of exhibiting some level of intelligence within the bounds of what it was prepared for, straying outside its subject area showed just how “unintelligent” the model truly was. The claim was that, for it to exhibit intelligence the way we humans do, it must be a “general problem solver”, as humans are.
Language being a general-enough tool, a model trained on a large and diverse enough corpus can sound as generally intelligent as anyone with good grammar and a cursory understanding of many fields. Its fluency with language tricks users into equating eloquence with “true intelligence”, a thing these models do not exhibit.
The same can be said of many people who are good with words and know just enough to sound smart and get others to follow along. Grifters, psychics, and con artists probably get lumped together in this pool.
This argument feels particularly relevant given how easily fluency gets mistaken for understanding.
Generative AI isn’t creative enough
I saw this when GPT-3 began making the rounds on the Internet. It led me down a rabbit hole to find poems and music created by AI models. The evaluation from more sophisticated intelligences (read: humans) was as you’d expect: a mix of excitement at what AI was capable of, and the sentiment that it wasn’t good enough to be impressive. The output from the humans was itself almost predictable.
I’ve had first-hand experience with this. A week ago, a brother and I let ChatGPT write out an essay I had been working on, iterate over it, and rewrite it by completing draft paragraphs we’d given it.
The output was not in a style I’d appreciate: it read like a formulaic essay written by a student who had just been taught how to structure their thoughts. It started with an introduction, defined a key term in the next paragraph, explored the subject in the next two, and came to a mainstream, safe, neutrally positive conclusion.
The other layer of disappointment I had with the generated essay was that it did not contain any of my lived experience, which is what I had intended to draw my conclusions from. Obviously, ChatGPT knows nothing about the conversations I have with friends (they’re all private, heh), but we could argue that it wrote on the topic based on its “experience” as a model trained on general human knowledge and feedback.
There are questions in there about the lived experience of a piece of software, how intelligence will differ in embodied and disembodied contexts, and other fun things I won’t get into now.
We went on to ask it to write a plan for a waakye business in Accra. It came up with some marketing blurb (see: Bonus) that was interesting mainly because it knew we had asked it about “Ghanaian cuisine”. It was otherwise generic enough that we could trivially replace waakye with eba, and “Ghanaian” with “Nigerian”.
Similar thoughts on Stable Diffusion’s output were shared in that same HN thread:
Concerning all image generative models I only see them used in generation of OG-images for mostly uninteresting articles on the internet. I can only see the same practical personal use for language models.
A reply to this comment pointed out that we were just at the start of what generative AI was capable of. A quick glance at the growing sophistication of GPT models and their data sets points to potentially exponential growth in their capabilities, at least in the short term.
They make silly mistakes
A friend studying this kind of stuff at the graduate level pointed out some embarrassing mistakes AI models have made in the past. Granted, this isn’t specific to the current strides made in the last several months, but it remains on topic.
The case study was of an AI psychologist suggesting to someone suffering from depression that they end their life as a solution to their problem.
This is obviously wrong to many well-aligned humans. Contemporary culture places such a value on human life, as does the ethics of the profession, that I’m quite certain a psychologist advising suicide as a cure for depression would lose their license.
The AI psychologist in this case study, perhaps untrained on questions contemporary ethics settled long ago, must have arrived at a logically consistent conclusion without evaluating the wider implications of applying it. The implications being: if you kill your patient to rid them of their ailment, they no longer have the ailment, but they no longer have a life either, and you no longer have a patient!
I’ve seen a few people fall into the trap of not thinking their choices through, myself included. But let’s move on to a far more devious case.
ChatGPT-infused Bing can spin a tale if you ask it to. In that tweet, the model connected activities on LinkedIn to the bank’s failure. Given the sensational nature of the SVB collapse, I almost thought there was some merit to it. Thankfully, I wasn’t open-minded enough to miss the joke after the second tweet in the thread. But it was some good fiction!
The rest of the discussion, especially the tweets about the meaningless and contradictory links it listed as sources, tells us how “smart” the model is. But there is something to be said about the fact that, in March 2023, we are debating this topic at all.
They’re not intelligent: I know how they work
I’ve encountered this as a first line of defence from some of the smartest people I know who are directly involved in AI, as well as from acquaintances and random strangers on Internet message boards.
The line of argument reads like the flip side of the idea that “a sufficiently advanced piece of technology is indistinguishable from magic”. For such individuals, proximity to the nuts and bolts of AI immunises them against the levels of fascination non-experts have when they first interact with these models.
This is nonsense. These nets regurgitate the most average possible output given inputs. By definition. You’re conflating and hyping and mixing up so many weird things at once.
You literally couldn’t get GPT to come up with a single novelty if you tried. It’s all remixing existing content, and again, doing so in a way to fit the average of the dataset.
When you realize this you realize it has no intelligence as we most typically define it (novel solutions to novel problems).. its not AI. Call it what it is: a beautifully advanced way to regurgitate the exact most popular (mundane) reply you’d expect given a huge dataset.
It’s sort of good for studying what already exists. It won’t even really ever show you the edges though so it’s actually almost dangerously deceptive as evidenced by this absurd rounding up people are doing. If you want to learn the gist of anything, ask GPT. If you want to know anything in depth, GPT in fact will only mislead you towards genericity, platitudinous mediocrity.
That reply came from someone apparently close to the action. I’m quite convinced this pattern of thinking is common to most experts in fields that outsiders find fascinating. It’s the almost-expected response when deep expertise meets naïve excitement: the conservative answer to unbridled charisma, the wise old sage’s response to the novice.
It did invite us to consider what “novelty” means, ideally and practically, and what value we might be compelled to place on the creative output of individuals (natural and not). We’re already doing this!
It also raises the question of what the bar should be for qualifying any model as “intelligent”, whether that bar should be fixed, and what such an idea means for the diversity of mental capacities around us and their various outputs…
These arguments, and the conversations they spark around things such as human exceptionalism, reasoning and understanding, “Truth”, novelty, and the limits of computing, are fascinating. There are also more fun concerns about what it means to commoditise and scale intelligence the way computers do best, and what it means to have this power in the hands of a few, or of everyone on the planet.
I intend to follow these conversations and update this list with more arguments that look critically at this juggernaut.