AI to AGI: reflections on a Darwinian-like evolution for digital survival

Krishnan T.
The Quantastic Journal
12 min read · Jul 7, 2024
There is a lot of debate today surrounding how far we are from Artificial General Intelligence (AGI) — the time when we will have an AI technology that matches or surpasses human capabilities across a wide range of cognitive tasks. But how do we know when it will achieve this feat? (Picture credit: Freepik).

Artificial Intelligence (AI) and Artificial General Intelligence (AGI) are all the rage today. Wherever you turn — job market qualifications, start-up pitches, introduction paragraphs of scientific articles or marketing gimmicks — these terms appear to be the new clickbait. Much like the phase that quantum passed through in its own Gartner hype cycle trajectory — quantum technologies in air conditioners, quantum ions in water bottles or quantum healing — AI is finding applications in “surprising” places. While some are genuine explorations, a fair share of quack platforms are riding the wave as well. But this is likely where the similarity ends. What is drawing additional interest in AI is the AGI concern.

What is this concern about? I briefly highlight it here, but there are many other sources of discussion on the topic. There is no doubt that the trickle of AI into our everyday lives has started, and that it is becoming adept enough to quietly exist in the back end like something of a general-purpose technology. As Mustafa Suleyman says in his recent book [A]:

“…the irony of general-purpose technologies is that, before long, they become invisible and we take them for granted. Language, agriculture, writing — each was a general-purpose technology at the center of an early wave. These three waves formed the foundation of civilization as we know it. Now we take them for granted…within the next couple of years, whatever your job, you will be able to consult an on-demand expert, ask it about your latest ad campaign or product design, quiz it on the specifics of a legal dilemma, isolate the most effective elements of a pitch, solve a thorny logistical question, get a second opinion on a diagnosis, keep probing and testing, getting ever more detailed answers grounded in the very cutting edge of knowledge, delivered with exceptional nuance. All of the world’s knowledge, best practices, precedent, and computational power will be available, tailored to you, to your specific needs and circumstances, instantaneously and effortlessly. It is a leap in cognitive potential at least as great as the introduction of the internet…”

AI is the new normal. The late David Foster Wallace once opened a commencement speech with a parable that well illustrates the trouble with normality. The story concerns two young fish crossing aquatic paths with an elder of their species, who greets them jovially, “Morning, boys. How’s the water?” The two young fish swim on for a bit, until eventually one of them looks over at the other and says, “What the hell is water?” The point Wallace wanted to leave his audience pondering was that the most obvious, ubiquitous, important realities are often the ones hardest to see and talk about. AI is on that trajectory.

As the cognitive potential of AI increases, the concern that several parties express is that we will reach a point when we have an AI technology that matches or surpasses human capabilities across a wide range of cognitive tasks. This is what is considered AGI. A method for classifying AGI into levels was proposed in 2023 by DeepMind researchers. They define five levels of AGI: emerging, competent, expert, virtuoso, and superhuman. As an example, a competent AGI is defined as an AI that outperforms at least 50% of skilled adults on a range of non-physical tasks, while a superhuman AGI is defined with a threshold of 100%. ChatGPT and LLaMA 2 are considered examples of emerging AGI.
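
To make those tiers concrete, here is a minimal sketch in Python of how such a classification could be expressed as a simple lookup. The 50% and 100% cut-offs are the ones mentioned above; the expert (90%) and virtuoso (99%) thresholds are my reading of the DeepMind paper, and the function and level names are purely illustrative, not an official taxonomy or API.

```python
# Illustrative sketch of the DeepMind "Levels of AGI" performance tiers.
# The 50% and 100% cut-offs are from the text above; the expert/virtuoso
# thresholds reflect my reading of the paper, and the function and names
# are my own shorthand, not an official taxonomy or API.

AGI_LEVELS = [
    ("emerging",    0),   # roughly on par with or better than an unskilled human
    ("competent",  50),   # outperforms at least 50% of skilled adults
    ("expert",     90),   # outperforms at least 90% of skilled adults
    ("virtuoso",   99),   # outperforms at least 99% of skilled adults
    ("superhuman", 100),  # outperforms all skilled adults
]

def classify(percentile_outperformed: float) -> str:
    """Map the share of skilled adults a system outperforms to a level name."""
    level = "narrow / below emerging"
    for name, threshold in AGI_LEVELS:
        if percentile_outperformed >= threshold:
            level = name
    return level

print(classify(55))   # -> competent
print(classify(100))  # -> superhuman
```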

The question one is then forced to ask is: how do we know if a platform surpasses human capabilities? I am not talking about the Turing test, which probes intelligence. Rather, I am asking a more fundamental question. Given that we modern humans are Homo sapiens, the wise human, is it sufficient to conclude that a platform has surpassed humans if its intellectual capabilities in problem-solving are greater than ours? Or are we using the wrong metric for comparison? We are clearly not Homo intelligens (not that such a classification exists). But we consider ourselves wise, conscious, sentient and aware. Maybe that is what we need to compare against as we probe our concern levels. How do we do that? What makes a system conscious, sentient or aware? And if such a system were to arise, how would we be able to verify that it is so?

Non-human agents of consciousness, sentience, and awareness

To tackle that question, we need to look at non-human agents that potentially have (or already do have) consciousness, sentience, and awareness. For us, sensations have much in common with our other conscious states and play an important role in our self-narrative. When we are talking about consciousness as an umbrella term for introspectively accessible mental states, and as a mediator of cognitive operations, we can call it cognitive consciousness (you may simply call this consciousness). The evidence for this in many animals is easy to see: an octopus cracks a combination lock to escape from a box, a crow plans ahead to make sure it has something for breakfast, a chimpanzee outperforms humans on a memory task. But when we are talking about access to sensations that have phenomenal, subjective sensory qualities, we can call it phenomenal consciousness (you may call this sentience) [B]. Intellectual feats like the ones these animals perform have no direct bearing on phenomenal consciousness. As the philosopher Jeremy Bentham put it: the question is not, can they reason? nor, can they talk? but, can they suffer? In other words, the question is not whether they have a global workspace or a self-narrative, but whether they are sentient. If signs of cognitive consciousness are not sufficient to provide the answer, what could be?

A human can be asked, through self-introspection, whether they experience phenomenal consciousness. However, the very nature of qualia, which makes it impossible for me to experience precisely what you are experiencing and forces me to take your report at face value, also makes it difficult for me to judge whether any other organism (or AI system) experiences phenomenal consciousness. We define the idea of sentience by pointing to our own private experience. But given that at some point in history sentience came into the world as a remarkable internal feature of the minds of the animals from whom humans and other sentient species are descended, this private experience cannot be how natural selection recognizes the fact of sentience. You and your new trait don’t get to survive better if all you can do is point to it in private! You must have something to show for it on the outside that natural selection can latch on to, something that affects biological survival. In other words, your private experience must have closely coupled public consequences that natural selection can see. Dan Dennett, the cognitive philosopher, held that the subjective qualities of sensations must in the final analysis be cashable in terms of behavioural dispositions: not just one behaviour, but the integral of all the things the sensation motivates the subject to think, do, and say. And if natural selection can see these consequences, presumably they must be seeable by other kinds of outside observers, by scientists, philosophers and poets, if only they knew what to look for.

So how would evolution have seen the advantage of consciousness as a phenomenal effect in helping biological survival? Had there been no such advantage, we would not likely have retained this ability from our early synapsid ancestors, the lineage from which mammals descend. If we try to answer that question, perhaps we can better understand what may induce AGI to evolve from AI, given that AI may soon have cognitive capabilities equal to or surpassing ours.

Evolution of consciousness in living entities

The theoretical psychologist Nicholas Humphrey has, over the years, posited a very interesting account of how consciousness may have arisen via evolution [B]. He divides it into four broad stages: sentition and sensation, privatization, the thick moment, and the ipsundrum.

Sentition (stage 1a): Imagine a primitive amoeba-like animal floating in the ancient primordial soup. External events take place, and in some way the animal interacts with an external stimulus: a ray of light falls on it, something bumps into it, or a chemical reacts with it. The animal, if it is to survive, must have evolved the ability to sort the good from the bad and respond appropriately, say with a wriggle of acceptance or rejection. The wriggles would be honed by taking into account the quality, intensity, and distribution of the stimuli on the body surface, and the implications they have for the well-being of the animal. To begin with, the responses are organized locally at the body surface. But before long, to allow coordination, sensory information gets sent to a central ganglion or proto-brain, where a reflex response is initiated. A more accurate term for this response would be sentition, something between sensation and action. Sentition enacts what the stimulation means to the animal, so an external observer can tell, from what the animal is doing, just how it feels about what is happening.

Sensation (stage 1b): Through evolution, however, there comes a time when reflex behavior is not enough. If animals are to behave more flexibly, they need to be able to store information about themselves and their environs in a form they can refer to offline. In particular, they need a way of representing and holding ‘in mind’ information about events occurring at their body surface. Here is how evolution may have done it: in the same way that an external observer can tell how the animal is feeling by seeing what it is doing, so can an internal observer! In other words, the animal can discover for itself what the stimulation means to it by monitoring its own response. All it needs to do is keep an efference copy of the outgoing signal whenever its central command issues the motor command that creates the external response. This copy can be read in reverse to yield a representation of how the animal is responding, and so of how it feels. Once the animal begins to represent its situation this way, it is arguably on the way to having a self that is the subject of sensations. Yet, at this stage, sensations don’t have any of the remarkable phenomenal feel that they have in sentient animals such as us.
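
As a purely illustrative analogy (my own toy sketch, not a model from Humphrey’s book), here is how a tiny software agent might “learn how it feels” by reading back an efference copy of its own motor command, rather than by inspecting the stimulus directly:

```python
# Toy analogy (not a biological model): an agent that represents "how it felt"
# about a stimulus by reading back an efference copy of its own motor command.

def motor_command(stimulus: float) -> str:
    """Reflex policy: wriggle toward good stimuli, away from bad ones."""
    return "accept-wriggle" if stimulus > 0 else "reject-wriggle"

class ProtoSubject:
    def __init__(self) -> None:
        self.efference_copy = None   # copy of the last outgoing command

    def respond(self, stimulus: float) -> str:
        command = motor_command(stimulus)
        self.efference_copy = command   # keep a copy of what was sent out
        return command                  # the overt bodily response

    def how_did_that_feel(self) -> str:
        # The subject learns about the stimulus by monitoring its own
        # response, i.e. by reading the efference copy "in reverse".
        if self.efference_copy is None:
            return "no sensation yet"
        return "felt good" if self.efference_copy == "accept-wriggle" else "felt bad"

subject = ProtoSubject()
subject.respond(+0.7)
print(subject.how_did_that_feel())   # -> felt good
```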

Privatization (stage 2): As these animals develop more sophisticated ways of interacting with the environment, there is bound to come a point when the original bodily responses are no longer appropriate. But by this time, the responses have already acquired their useful role as the vehicle for representing what the stimulation means. At this point, the outgoing commands, rather than causing an actual bodily response to the stimulus where it is occurring, begin to target the internal body map where the sense organs first project to the brain. Thus, they are still about responding to what’s happening to me with this part of my body. There is still an efference copy, forming a representation the subject can take information from, but now the commands issue in a virtual, as-if, expressive response that no longer shows on the surface. So, now there is a potential for feedback since the motor signals formerly sent out for a reaction on a body surface have been redirected to the place in the brain where the sensory signals from this locus come in. Thus, a self-entangling loop can be created to sustain recursive activity, flowing round and round — almost catching its own tail.

Thick moment (stage 3): Once such a loop exists, sentition can be drawn out in time, so that the subject, monitoring the outgoing signals, gets the impression that each moment of sensation lasts longer than it really does. Sensations are, as it were, being thickened up.

Ipsundrum (stage 4): Once the loop is established, it can be channelled and stabilized; it can settle into an attractor state, in which a complex pattern of activity repeats itself over and over again. Such an attractor, while being a mathematical object, can cause sensations to be experienced as inalienably private, suffused with distinctive modality-specific qualities, rooted in the thick time of the subjective present, made of immaterial mind stuff: in short, phenomenal.
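
To give a feel for what “settling into an attractor” means, here is a minimal Python illustration (again my own toy example, not Humphrey’s model): a simple recurrent update whose activity, once the transients die away, repeats the same pattern indefinitely no matter where it started.

```python
# Minimal illustration (my own, not Humphrey's model) of loop activity
# settling into an attractor: the logistic map with r = 3.2 converges to
# a repeating two-state cycle regardless of the starting value.

def step(x: float, r: float = 3.2) -> float:
    """One pass around the loop: the next state depends only on the current one."""
    return r * x * (1 - x)

x = 0.123                    # arbitrary starting activity
for _ in range(200):         # let the initial transients die away
    x = step(x)

cycle = []
for _ in range(4):           # the activity now repeats every two steps
    x = step(x)
    cycle.append(round(x, 4))

print(cycle)                 # e.g. [0.7995, 0.513, 0.7995, 0.513]
```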

From here on, it is likely that the complex social interactions we enjoy as a species today have been enabled by this ipsundrum, since our ancestors could understand what it feels like to be another creature like themselves, what it feels like to be themselves, and how they are perceived by others. In short, it enabled a relational construct, something essential for social creatures. Consciousness may thus have been an evolutionary win that aided biological survival. So, if there is a concern about AGI, how would it come about through a similar analogue, starting from a cognitively competent AI?

AI to AGI: Darwinian digital evolution

The philosopher David Hume said,

‘For my part, when I enter most intimately into what I call myself, I always stumble on some particular perception or other, of heat or cold, light or shade, love or hatred, pain or pleasure. I never can catch myself at any time without a perception, and never can observe any thing but the perception. When my perceptions are removed for any time, as by sound sleep, so long am I insensible of myself, and may truly be said not to exist.’

Sensations, which far back in history started out as a way of tracking your interaction with the physical world, still connect you to the physical environment, yet they also serve to distance you from it. They give you the feeling that there is an essential non-physical dimension to your life. Sensations have become a kind of artwork that captures the paradoxical nature of what it is to be you. Friedrich Nietzsche wrote, ‘Art is not merely an imitation of the reality of nature, but a metaphysical supplement to the reality of nature… we have art in order not to die of the truth.’ For us humans, perhaps we have a phenomenal self in order not to die of materialism.

Perhaps, then, we should entertain the following points as we ponder what the future of the AI-to-AGI digital evolution looks like, inspired by Darwinian theory:

  1. Does consciousness exist without external stimulus? As Hume’s observation suggests, it is an interesting question whether consciousness would have evolved without external stimuli to drive the stages discussed above. Likely, the more kinds of external data an AI is fed, the more nuanced its proto-brain can become (as in stage 1a). If there is no interface with humans, would AI “evolve” further? If not, perhaps a cap on the kinds and amount of data fed into an AI will self-limit the progress towards AGI.
  2. What is the critical threshold of complexity of interactions for consciousness to come about? As mentioned earlier, our complex social interactions are where consciousness gives us an advantage. Similar patterns have been seen in other primates such as gorillas and chimpanzees. So, maybe in isolation, individual AI systems would not need to evolve anything like an efference copy of themselves and of one another. But if we start linking many such systems together, the need for a self-aware AI may become more evident. This assumes that the AI is driven by a need for survival in the digital world, analogous to Darwinian survival in the biological world. How many such AI systems would need to be plugged together to give rise to such an emergent need for consciousness?
  3. Can unpredictable digital demands accelerate the emergence of AGI? Nerve conduction speeds peak at around 37 °C, allowing warm-blooded animals to potentially process more information, faster. Mammals thereby have a greater capacity for complex computation, making them likelier candidates for the evolution of consciousness. This warm-bloodedness, in turn, arose in an evolutionary niche enabled by rising global temperatures that allowed those animals to spread geographically. Will there be an analogous ‘climate change’ in the digital world, enabling the emergence of AGI as a niche digital species?
  4. Will AGI enable co-existence of biological and digital species? When humans evolved, other species did not instantaneously perish. Certainly, many unintelligent acts of humans permanently wiped some species off the planet. But we also ended up domesticating some, eliminating others, and, thanks to the passion of a select few for saving wildlife, preserving many for posterity without effacing their presence. Maybe the future of AGI, if it comes about, will follow a similar pattern. Given that AGI may be heavily cyberphysical, the intermingling of the digital and biological worlds may bring about an interspecies domestication of sorts. Will there be semi-peaceful co-existence? Will there be subjugation or harvesting of other species? Only time will tell.

On a closing note, perhaps with the creation of generative AI tools we are now witnessing stage 1b of the Darwinian digital evolution towards a supposed AGI to come. Whatever happens, we may, for a brief flash of time, come to understand the emergence of consciousness, having created one ourselves.

(Disclaimer: all views expressed in this are my own and do not represent those of any organization that I may be affiliated with)

Bibliography

A. ‘The Coming Wave: Technology, Power, and the Twenty-first Century’s Greatest Dilemma’, Mustafa Suleyman and Michael Bhaskar, Crown, 2023

B. ‘Sentience: The Invention of Consciousness’, Nicholas Humphrey, The MIT Press, 2023

C. ‘The Experience Machine: How Our Minds Predict and Shape Reality’, Andy Clark, Pantheon Books, 2023
