The AGI Significance Paradox

Carlos E. Perez
Published in Intuition Machine · Jan 9, 2021 · 4 min read



As progress accelerates towards AGI, the number of people who realize the significance of each new breakthrough decreases. This is the AGI Significance Paradox.

There is an old metaphor: you can boil a frog alive without it jumping out if you raise the water temperature gradually. The fable goes that the frog lacks the internal models to recognize that the water temperature is changing. A cold-blooded creature like the frog is thought to have its temperature regulated only by the external environment.

To recognize change, an agent must have an internal model of reality that is able to register that change. Unfortunately, the majority of the population does not have good models of human general intelligence. In fact, even the simplistic dual-process model of System 1 and System 2 is not widely known. It took years for researchers to start describing Deep Learning as a System 1 (i.e., intuitive) process. The reason you see Daniel Kahneman on so many AI panels today is because of this recognition.

Recent big developments were MuZero, AlphaFold2, GPT-3, and DALL-E. GPT-3 did receive a lot of attention, but the other three largely have not. Understanding MuZero and AlphaFold2 requires a high level of expertise. DALL-E is similar to GPT-3, but its significance is harder to grasp.

We are going to continue to get these incremental developments for several years, but the audience that recognizes their importance will continue to shrink. Then suddenly, boom… we arrive at AGI and most people will be in shock. Shocked because they thought no progress was being made.

Quantum (punctuated) leaps in evolution are a consequence of many incremental developments that accrue. It is only when the final piece of the jigsaw puzzle is found that the revolution is expressed. But there are still many pieces that need to be filled in, as this chart indicates:

Today’s state-of-the-art AI is only just inside the purple region (i.e., autonomous self) depicted above. In fact, the common honey bee has greater autonomous intelligence than any synthetic intelligence ever invented. At best, we have a simulation of an autonomous self, a mere facade of the real thing that we find in biological organisms.

But it takes unusual expertise to recognize that we are accelerating towards AGI. The problem is that it is not obvious how human intelligence actually works. We simply do not know what it means to ‘understand’. Ask most AGI researchers, philosophers, or psychologists what it means to ‘understand’, and they will be stumped to give you a good answer.

John Krakauer, professor of neurology at Johns Hopkins, is one of the few people I am aware of who can express well the extent of our ignorance. Expressing the extent of one's ignorance is a feat in itself. A majority of AGI researchers cannot even identify the known unknowns. This is perhaps what puts Gary Marcus in hot water: he knows what is unknown. I, however, disagree with his research doctrine.

Research doctrine is a matter of personality. There are many hypotheses at the computational level of explanation (see Marr's three levels). Which explanation one prefers is a matter of training and personal inclination. As Krakauer has so deftly explained, academic researchers instinctively defend the mound they have grown accustomed to pitching from. Krakauer prefers a pluralistic approach, perhaps because committing to any single hypothesis is likely premature optimization (my words).

So if we do not know the answer, how can we recognize that the water’s temperature is gradually rising? The number of people who might know continues to diminish, which implies that, collectively, we know less and less. When AGI happens, it will come as a shock, as if nobody had anticipated it.
