Is artificial general intelligence already here?

Enrique Dans
2 min read · Jun 27, 2024

IMAGE: A comic-style illustration of a robot looking disdainfully at a crowd of people, with the robot’s expression and posture conveying a sense of disdain or superiority

Here’s a thought-provoking question: has artificial general intelligence (AGI), sometimes called strong artificial intelligence and until now considered purely hypothetical, already reached or surpassed average human intelligence?

It’s a controversial concept, and there is no shortage of attempts to redefine it in a more systematic or formal way, but intuitively it points to the moment when an algorithm is seen as more intelligent than most of the people reading this. Algorithms such as ChatGPT and the like were long ago considered capable of passing the imitation game, or Turing Test; that is, able to convince a person that they are not talking to a machine. Just rewatch the GPT-4o presentation from a couple of weeks ago, which showed that a large language model can easily beat humans at a wide range of tasks across a wide range of areas, and do so through a completely credible human interface.

We’re talking about exam answers generated by algorithms like ChatGPT that consistently outperform the vast majority of university students. If we compare the writing capabilities of ChatGPT and similar algorithms across a wide range of topics with those of an undoubtedly very large percentage of the population, we will find that these algorithms write better: their grammar is faultless, they express themselves clearly, and…


Enrique Dans

Professor of Innovation at IE Business School and blogger (in English here and in Spanish at enriquedans.com)