Doomerism Is The Threat, Not AI


Not so long ago, AI was a mysterious, murky practice that seldom troubled the minds of the average citizen. Recent hype about its capabilities, however, has pushed the AI debate to center stage and sparked diverse reactions. Some praise AI as the next technological revolution, while others regard it as a harbinger of doom and human extinction. In America, round after round of public hysteria over AI’s alleged existential threat has emerged. The purveyors of this hysteria, while well-meaning, often fail to support their claims effectively, fueling widespread fear and impeding progress. There is no question that AI poses risks, but the real danger lies far from the speculative, Armageddon-like prophecies of AI Doomerism, a movement that distracts from the real issues at hand.

A Closer Look at AI Doomerism

“AI Doomerism” first rose to popularity within the fields of artificial intelligence and technology to describe the belief that “artificial intelligence will lead to the end of humanity or at least to some significant catastrophic event” (Forbes). Major publications such as Time postulate that “improving AI would create a positive feedback loop with no scientifically established limits,” resulting in a “godlike AI.” Many voices, like those at Safe.AI, say that mitigating the risk of extinction from AI should be a global priority. Though I respect these concerns, I disagree.

Clumsy Terminology and the Complexity of Intelligence

Terms like “human-level” and “general intelligence” tend to be misused when applied to AI. This misuse can lead to broad implications that are not entirely accurate but nevertheless provoke panic. For example, the Time article argues that extinction from AI would come, among other things, from “human-level AI, defined as an AI that can perform a broad range of cognitive tasks at least as well as we can,” and cites a Cornell study on OpenAI’s GPT-4. Essentially, the article equates successful AI performance with human intelligence and, by extension, with being an entity at the “human level.” However, the study itself concedes that our formal definitions of intelligence, artificial intelligence, and artificial general intelligence are in their infancy and struggle to comprehensively capture and quantify true human intelligence. If the scientific world has yet to determine how to evaluate human intelligence, let alone machine learning models, then these terms should not be thrown around carelessly to describe AI, even when it excels at its intended tasks. Though we have yet to find an adequate, formal definition of intelligence, we do know it consists of more than simply achieving goals or appearing to comprehend.

Indeed, GPT is very good at what it was trained to do. But while AI can excel at image classification, data interpretation, or answering questions, it falls short in other areas that make human intelligence unique. In fact, the very same Cornell study states that long-term memory, continual learning, conceptual leaps, and consistency are areas in which GPT falls short, all of which are integral parts of human intelligence. So although GPT does exceptionally well at its designated tasks, defining intelligence solely on the grounds that “it does what it’s supposed to!” is a one-dimensional and flawed approach. It requires ignoring GPT’s suboptimal performance in other qualities that are, inconveniently, part of what makes human intelligence so unique.

The Problem of Consciousness and Sentience

Describing current AI as “human-level” or “intelligent” perpetuates a misleading, gross oversimplification of the truth. Every word carries hidden meanings and implications, and therefore we should choose our words carefully and consider their connotations. The word “intelligence” comes with connotations of its own, implying that an intelligent thing is also a motivated, belief-holding, conscious thing. Describing AI as “intelligent” therefore implies a level of sentience and motivation that may not exist. As Stanford Professor Andrew Ng writes in his newsletter The Batch, “There’s no widely agreed-upon, scientific test for whether a system really understands — as opposed to appearing to understand — just as no such tests exist for consciousness or sentience.” We simply don’t have all the answers yet, but implying consciousness by drawing an equivalence between AI and humans, or by carelessly classifying AI as “intelligent,” spawns and proliferates an unsupported narrative. Until tests for consciousness exist, or concrete evidence of sentience emerges, we must be careful not to imply that AI systems are conscious.

The End of the World As We Know It

The notion of “self-improving AI” leading to a “no limit godlike AI” that ultimately brings about “world extinction” is a slippery slope fallacy currently lacking substantial support. What little research exists in this area remains sensational and speculative, and I have yet to see a study outlining precisely how a generally intelligent AI could theoretically go rogue. In fact, the studies and theories we do have suggest there are limits to AI performance. One such limit is the Bayes error: the lowest error rate any model can achieve on a given problem, an irreducible floor set by noise and ambiguity in the data itself. A model’s performance typically improves quickly at first, then levels off as it approaches this floor, which indicates that there is a limit to how well models can perform, even if that limit is sometimes below human error.
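To make the idea of an irreducible floor concrete, here is a minimal sketch in Python. The two overlapping Gaussian classes are my own illustrative assumption, not a setup from the Time article or the Cornell study; the point is only that when the data itself is ambiguous, even the optimal decision rule keeps making mistakes.

```python
import numpy as np

# Two overlapping one-dimensional Gaussian classes: because the classes
# overlap, even the Bayes-optimal classifier cannot reach zero error.
rng = np.random.default_rng(0)
n = 100_000
labels = rng.integers(0, 2, size=n)           # true class: 0 or 1, equally likely
x = rng.normal(loc=2.0 * labels, scale=1.0)   # class 0 ~ N(0, 1), class 1 ~ N(2, 1)

# For equal priors and equal variances, the Bayes-optimal rule is a
# threshold at the midpoint between the two class means (x > 1).
predictions = (x > 1.0).astype(int)
estimated_bayes_error = np.mean(predictions != labels)

print(f"Estimated Bayes error: {estimated_bayes_error:.3f}")
# Roughly 0.16: no model, however large or well trained, can do better
# than this on data drawn from these two distributions.
```

Making a model bigger or training it longer moves it closer to that floor; it does not remove the floor.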

Additionally, self-improvement in AI already exists to some extent; the gradient descent algorithm, for example, optimizes a neural network’s weights for better performance. And yet we still have not seen AI take over the world. Perhaps by “self-improving AI” one means an AI taking the place of a human in adjusting hyperparameters or collecting training data. However, entirely replacing human intervention in the development of AI is so computationally expensive that it is impractical and unrealistic.
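For readers unfamiliar with gradient descent, here is a minimal sketch in Python of this everyday kind of “self-improvement.” The toy regression task, learning rate, and step count are illustrative assumptions of mine; the model repeatedly adjusts its own weight to reduce its error, and the improvement plateaus once it fits the data.

```python
import numpy as np

# Plain gradient descent on a one-parameter linear model: the model
# repeatedly adjusts its own weight to reduce mean squared error.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 3.0 * x + rng.normal(scale=0.1, size=200)  # underlying relationship: y ≈ 3x

w = 0.0               # the model's single weight, initialized arbitrarily
learning_rate = 0.1

for step in range(100):
    predictions = w * x
    grad = 2.0 * np.mean((predictions - y) * x)  # d(MSE)/dw
    w -= learning_rate * grad                    # the "self-improvement" step

print(f"Learned weight: {w:.3f}")  # converges near 3.0, then stops improving
```

Notice that the loop improves the model only within the narrow objective it was given; nothing in it decides what new data to gather or what new goals to pursue.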

And even if the computational barrier could be overcome, is there proof that an infinite feedback loop producing godlike intelligence would ensue? How do we know the algorithm would know exactly what kinds of data it needs to improve, where to find that data, and how to use it optimally? How would it avoid losing accuracy on one metric while training itself to improve on another? We lack the evidence needed to answer these questions definitively, and the problem with AI Doomerism is that it assumes we have it.

Practical Risks That Deserve Attention

Focusing on unlikely, speculative risks such as AI-driven extinction diverts attention and resources from the pressing risks associated with AI today. It is well documented that AI systems in production suffer from discrimination and bias, problems that are already affecting our communities. We should be focusing our energy on mitigating these harms. Instead, the rise of AI doomerism led to a proposed 6-month pause on AI development, which would have stalled research on mitigating bias in machine learning systems without meaningfully slowing progress toward AGI. While the pause never took effect, it demonstrates how AI doomerism distracts from efforts to address the more realistic, immediate problems associated with AI. Every ounce of energy devoted to hysteria about unrealistic AI extinction is energy not spent fixing the real problems at hand.

AI As A Tool, Not A Threat

If there’s one thing I can agree on with AI Doomerism, it’s that AI is incredibly powerful. And anything powerful is inherently dangerous, not because of the thing itself, but because its wielders may be irresponsible or careless in how they use it. Knives, for instance, are useful because their sharp edge lets us cut vegetables in the kitchen. Even used in its proper place with the best of intentions, the edge that makes a knife useful can accidentally cut a finger instead of the tomato. Someone with malicious intentions may take the very same knife we use to slice onions and harm another human being. In none of these cases would we prosecute the knife as the guilty party; we would rightfully blame the person who misused it. AI has the potential to revolutionize our lives for the better, in the fashion of electricity and the automobile, and holding people accountable for responsible AI deployment, rather than vilifying AI as the source of all potential harm, is the key to ushering in a bright, AI-powered future.
