Artificial Intelligence: Has everyone lost their mind?

Fabrice Popineau
6 min read · May 31, 2023


https://www.francetvinfo.fr/sciences/high-tech/intelligence-artificielle-des-experts-alertent-sur-les-menaces-d-extinction-pour-l-humanite_5856683.html

No: ChatGPT does not spell the extinction of humanity! There seems to be a lot of confusion involved in lumping together technologies that have nothing to do with each other under the same umbrella of artificial intelligence. It’s as if all you had to do was put a number of programs into a computer, and by some miracle a ‘mind’ or ‘genius’ would be born and come to life inside.

ChatGPT isn’t intelligent: it’s a computer program that computes the most likely sequence of tokens (roughly, words or word fragments) that continues the text you give it.

What’s more, the transformer on which it is based is not directly designed to answer questions: it is designed to complete sequences such as “the highest mountain in the world is …”. As Yann LeCun put it so well, ChatGPT is “good engineering, but not revolutionary” (the transformer model on which it is based is already a few years old).
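
To make this concrete, here is a minimal sketch of sequence completion, using the small, openly available GPT-2 model through the Hugging Face transformers library (GPT-2 is a stand-in chosen purely for illustration; ChatGPT’s own weights are not public):

```python
# Minimal sketch: a transformer completing a sequence, nothing more.
# GPT-2 stands in for ChatGPT, whose weights are not publicly available.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("The highest mountain in the world is", max_new_tokens=8)
print(result[0]["generated_text"])
# The model merely extends the prompt with statistically likely tokens;
# nothing in it "knows" anything about mountains.
```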

I signed the petition launched by the Future of Life Institute (https://futureoflife.org/open-letter/pause-giant-ai-experiments/). If I had to do it again today, I wouldn’t sign it.

ChatGPT has taken (almost) everyone by surprise: its conversational performance is amazing, and the effect has been staggering. The argument about the possible delegation of human cognitive faculties to machines is understandable. That said, when electronic calculators appeared, we stopped putting so much energy into learning to calculate by hand and put it into other activities. One thing is constant about the human brain: it doesn’t like doing nothing. If ChatGPT provides writing assistance and more, we will use our cognitive faculties differently. A six-month moratorium is no way to judge the consequences of the large-scale deployment of such Large Language Models (LLMs).

There was one reason for calling for a six-month moratorium (and it’s the one that prevailed for me): putting ChatGPT (and its competitors) online will lead to large-scale pollution of the digital space with content generated by LLMs (and by diffusion models such as Midjourney and DALL-E). ChatGPT’s first victim could well be itself: the performance of these LLMs will decline if they are trained on artificial texts. Even if we can identify with high probability that a text was produced by an LLM, the proliferation of these models, not to mention the open-source ones that are bound to appear, will make telling the artificial from the human origin of a text very costly. A six-month moratorium could ideally have been used to establish a common rule before polluting the digital space. But the economic stakes are such that obtaining this moratorium was merely an illusion.

Risks. There is no such thing as AGI (Artificial General Intelligence) and ChatGPT is certainly not an AGI.

As some colleagues have pointed out, ChatGPT follows an old scientific tradition: when we have data, we aim to construct a model to understand it and potentially predict outcomes. This is essentially what has been accomplished with LLMs: models have been built to encompass all digitally available texts written by humans. We can even look at the compression factor achieved by these LLMs: the number of tokens in the training set compared with the number of model parameters. This ratio was about 6.66 for GPT-2 and around 2.85 for GPT-3; I don’t think the figure is available for GPT-4. ChatGPT can be compared to a marvelous questioning system that grants users access to a vast amount of human knowledge through natural language conversation, making it a user-friendly interface for exploring and interacting with a wealth of information.
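
As a back-of-the-envelope check, these ratios can be reproduced from commonly cited figures (the token counts below are my assumptions; the exact training-set sizes were never fully published):

```python
# Rough compression ratios: training tokens per model parameter.
# Token counts are assumed, commonly cited estimates, not official figures.
models = {
    "GPT-2": {"training_tokens": 10e9,  "parameters": 1.5e9},
    "GPT-3": {"training_tokens": 500e9, "parameters": 175e9},
}

for name, m in models.items():
    ratio = m["training_tokens"] / m["parameters"]
    print(f"{name}: {ratio:.2f} tokens per parameter")
# GPT-2: 6.67 tokens per parameter
# GPT-3: 2.86 tokens per parameter
```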

Therein lies the trap: anthropomorphism. It is all the rage at the moment: some talk about the emergence of consciousness in these LLMs, others think that ChatGPT is going to hack computer systems. Geoffrey Hinton himself seems to have fallen into this trap.

I think it’s a fundamental mistake to attribute human qualities to machines, and that we should ban expressions of this type. It’s easy to forget that machines only run programs that perform calculations. It’s fashionable to speak of “hallucinations” for sentences produced by LLMs that turn out to be pure inventions. It’s also probably convenient and a little thrilling: if the machine is hallucinating, surely its consciousness has awakened? So we’re playing God!

Let’s get our feet back on the ground: if the machine makes statements that don’t correspond to any reality, it’s because these models are not perfect and probably never will be. None of the programs we’re developing today are conscious!

To potentially create an AGI (Artificial General Intelligence), an AI program would need to engage with our physical environment or a (sufficiently precise) digital representation of it. Such an endeavor would require immense computational resources, even by today’s standards. A prime example of the complexity involved is the formidable challenge of fully autonomous driving: despite notable advances, reaching level 5 autonomy remains extremely arduous. Moreover, due to theoretical considerations regarding the computation model (i.e., the Turing machine), it is uncertain whether our current computers are even capable of hosting an AGI.

The systems we know how to build today are a long way from a possible AGI. This is not to say that AI technologies cannot serve harmful purposes. The Future of Life Institute has already warned about slaughterbots, a technology that could see the light of day (if it doesn’t already exist in certain labs). There’s no AGI in that: it would just be a mechanical device pursuing a definite objective, which it could achieve with a high level of performance.

The dangers of using digital systems on a large scale do exist. The filter bubble produced by content recommendation systems was identified long ago. A recent study shows some of the effects of the Twitter recommendation algorithm.

But is this AI? Recommendation systems simply optimize an objective function; they are relatively simple systems that handle very large quantities of data. In this respect, we could say with Luc Julia that “AI does not exist”.
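
To show how unspectacular the machinery is, here is a purely illustrative sketch of such a system (not any platform’s actual algorithm): items are scored by predicted engagement, and the highest-scoring ones are served.

```python
import numpy as np

# Illustrative recommender sketch: score items by predicted engagement
# (a dot product of learned embeddings) and serve the top-k.
# This is pure optimization of an objective function; no "mind" anywhere.
rng = np.random.default_rng(seed=0)
user_embedding = rng.normal(size=16)           # hypothetical learned user vector
item_embeddings = rng.normal(size=(1000, 16))  # hypothetical learned item vectors

scores = item_embeddings @ user_embedding      # predicted engagement per item
top_10 = np.argsort(scores)[::-1][:10]         # the 10 best-scoring items
print(top_10)
```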

Generative AI models compound the democratic problem posed by the mass dissemination of unverified information on social networks. We will have to adapt and educate. Commercial AI services will be subject to regulation; the EU is working on this, as are the standardization bodies. The fact remains that the technology will make it possible for malicious agents to influence the course of events on a large scale, such as elections or other aspects of our lives. Since the worst is never certain, we can also assume that the fake-news effect will dissipate: too much falsified data will end up turning users away from the networks that propagate it.

Regulation. Beyond regulation, it is obvious that neither France nor the EU has digital sovereignty. Our investments are orders of magnitude lower than those of the GAFAMs: the total budget of the CNRS is €3.5 billion, whereas Amazon’s R&D budget is $45 billion. Even if the CNRS does not have the same purpose as Amazon’s R&D, we see no budgets of the same order of magnitude targeted at technological advances outside the GAFAMs.

This poses a profound problem: behind the commercial dependence on companies like OpenAI lies the fact that we, as customers, are also under cultural domination, by the very nature of these tools. OpenAI has built in rules of its own choosing to guarantee a certain ethic in its dialogues. That ethic is not necessarily the one the EU would like to see. The problem posed by the spread of false information via social networks will be even more insidious with generative AI models.

Establishing the necessary regulations is a difficult balancing act. While addressing the concerns surrounding the “AI threat”, it is crucial to avoid enacting laws that may later be regretted because they curtail freedom of expression or impede the development and use of productivity tools. Users have adopted ChatGPT remarkably quickly, which has also raised fears of substantial job losses. It’s worth noting that every technological advance has historically caused disruptions; the advent of printing, for instance, led to the decline of the copyists’ profession. If LLMs enable faster, higher-quality work, users will naturally embrace them. Conversely, if LLMs produce subpar outputs, users will quickly turn away from them. Using these tools without checking the quality of their outputs would be irresponsible, even stupid (like the American lawyer who filed court cases cited by ChatGPT without checking them; they turned out to be pure inventions!).

The current period is still one of transition. Let’s see what the future brings us.
