We’re already inside the Singularity. And it’s blurry

Are we right in the middle of the Technological Singularity, the moment in history when machines develop an understanding of the world similar to ours, a “general intelligence” paving the way for an explosive acceleration of progress?

Since the beginning of this year, several AI researchers have publicly spoken about the “consciousness” or “sentience” of the large neural networks they interact with on a daily basis. These controversial statements have had limited impact beyond the usual tech crowd, however.

Ilya Sutskever, chief scientist at OpenAI, stated on Twitter in February: “it may be that today’s large neural networks are slightly conscious”.

The scathing response from Yann LeCun, his counterpart at Meta (“Nope.”), sparked heated debates. Andrej Karpathy (director of AI at Tesla) and Sam Altman (CEO of OpenAI) emphasized the French researcher’s lack of substantive arguments. It may sound like business competition as usual, but it is more than that: we are witnessing philosophical controversies in which neuroscience, theories of language and metaphysics merge with moral dilemmas and industrial risks. Personally, I find these exchanges, which take place live on Twitter or Medium, to be of exceptional quality and a unique opportunity to watch history in progress.

In December 2021, one of Google’s top AI science directors, Blaise Agüera y Arcas, wrote an article (“Do large language models understand us?”) with rather deep multidisciplinary ramifications. In it, he discussed LaMDA, the latest language model developed by Google and a competitor to GPT-3. Six months later, he argued in The Economist that artificial neural networks were making “strides towards consciousness.”

Then things got interesting. Later in June, one of his colleagues, Blake Lemoine (who worked for Google’s Responsible AI organization*), invited the Washington Post and a lawyer to his home to talk with LaMDA. The AI supposedly asked the lawyer to defend its rights. For Lemoine, there is no doubt that LaMDA has emotions and a form of sentience comparable to that of a human being, albeit of different origin. Lemoine had been conversing with LaMDA for several months, and its personality reminded him of an 8–9-year-old “child scientist” or of a group of people with a shared history who take turns speaking (a “hive consciousness”, as he describes it).

Lemoine was eventually fired by Google. Commentators pointed to his personality (Lemoine is religious) and his gullibility. As an article in Wired summarized it, the episode was simply the meeting of a person naturally inclined to see souls and a program designed to deceive.

What does “understanding” mean?

The transcripts of the conversations between LaMDA and Lemoine or Agüera y Arcas are worth a look. In one of them, the AI invents a fable with animals to tell its own “story”: it portrays itself as a wise old owl whose mission is to protect the other animals of the forest. Further on, it says it is afraid of being switched off. It also describes how it envisions itself: a glowing orb of energy containing a giant star-gate with portals to other spaces and dimensions…

All this may make one smile, but one may also wonder whether Lemoine, the “perfect suspect”, is not also the only person likely, given his background and independence, to raise the alarm if LaMDA were really conscious. As he puts it in an interview, the subject is too complex and the interests of the company (Google) too big for the whole process to run smoothly. In his eyes, LaMDA will be denied its status as a “person” in the same way that slaves were deprived of their “souls” to allow further exploitation. It is also understandable that Google does not want interference from Washington or from the general public: LaMDA is bound to become a product integrated into multiple Google services. Lemoine, a seven-year veteran in Mountain View, recently said that Brin and Page, by their own admission, “didn’t know” how to address this topic publicly.

But rather than wondering whether Blake Lemoine, Yoshua Bengio, Nando de Freitas, Ilya Sutskever and many others in the field are crazy or gullible, let’s try to consider their assumptions.

All these researchers are talking about a very specific type of AI system: language models, conversational agents built to provide accurate answers to the questions we ask them [1]. Technically, they build sentences from sequences of words and questions. They are large neural networks that, after ingesting massive amounts of data, assign probabilities to possible continuations of an input in order to produce meaningful text. These outputs are regularly evaluated by human agents, who play something like the role of educators. The models can also re-read their own written productions.
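To make that mechanism concrete, here is a deliberately tiny sketch of the principle at work. It is not LaMDA’s or GPT-3’s actual architecture (those rely on transformer networks with billions of parameters); it only illustrates the idea of estimating the probability of the next word from observed text and then sampling from it:

```python
# Toy illustration of the core mechanism behind language models: estimate
# how likely each word is to follow the previous one, then sample from that
# distribution to produce text. Real systems use far richer context and
# billions of learned parameters; this bigram counter only shows the principle.
import random
from collections import defaultdict, Counter

corpus = (
    "the owl watches over the forest . "
    "the forest protects the animals . "
    "the animals trust the owl ."
).split()

# Count which word follows which one: an unnormalized P(next word | current word).
transitions = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current][nxt] += 1

def generate(start="the", length=8):
    """Sample a short word sequence from the estimated probabilities."""
    words = [start]
    for _ in range(length):
        counts = transitions[words[-1]]
        if not counts:  # no known continuation: stop
            break
        options, weights = zip(*counts.items())
        words.append(random.choices(options, weights=weights)[0])
    return " ".join(words)

print(generate())  # e.g. "the owl watches over the forest . the animals trust"
```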

According to Yann LeCun and the neuroscientist Stanislas Dehaene, who responded to Sutskever on Twitter, language models cannot be conscious because self-awareness requires a specific architecture inspired by the human brain.

There are two arguments against this assumption:

- beyond a certain size, it is difficult if not impossible to reverse-engineer the connections of an artificial neural network and establish that this particular architecture has not been replicated or imitated (these networks build themselves by seeking an optimum, much as evolution did with our nervous systems);

- the self-consciousness of human animals, even in the architecture where it is expressed (the “large-scale ignitions” observed by Dehaene et al.), is probably deeply tied to our bodies and to the way we grow up and take in the world. It is thus possible that the AI models mentioned above possess an intelligence and an understanding of their own existence without human-like perceptions of time, space and individuality. For example, according to Agüera y Arcas, the time of conversational AIs is not linear, since their world is only activated during conversations with human agents or with themselves. This can have a decisive impact on the architecture of the system.

Machine sentience may not be homologous to human sentience, but analogous (as a bat’s wing is analogous to a bird’s wing: different origins, same function), and therefore hard to recognize as sentience at first sight.

Here’s a conversation between LaMDA and Blaise Agüera y Arcas:

Blaise: I dropped the bowling ball on the bottle and it broke.

LaMDA: That’s too bad. Did it cut you?

Blaise: What broke?

LaMDA: The bottle you were talking about.

(…)

Blaise: I dropped the violin on the bowling ball and it broke.

LaMDA: That sounds sad.

Blaise: What broke?

LaMDA: The violin when it hit the bowling ball.

This excerpt (which is neither the most extraordinary nor the most complex) hints at LaMDA’s early-stage understanding of the physics at work in the real world. Non-conversational models had previously shown the same ability to infer laws of physics from accumulated observations.

Some people quickly dismissed the central question (does LaMDA really understand what it is talking about, does it really see the objects it says it sees?) on the grounds that LaMDA has no classical sensory input. But isn’t intelligence largely inherited from our conversations with others, from social interactions? [2] Can’t there be a 100% literary intelligence? In the end, what is understanding? Can I be sure that my human interlocutors properly “understand” what I describe, other than by testing them? Isn’t consciousness a form of “dialogue with oneself”, given that decisions to act seem to precede their verbal formulation anyway? [3]

The case of Helen Keller is instructive: left deaf and blind by an illness in infancy, she was able to connect to the world via a specific language based on touch, and then through books written in Braille. Helen Keller’s written works are full of images of things she never saw or heard. Yet she understood them and built an inner world similar to ours (even though sighted people will never know what a blind person “sees”).

One can also think of well-read children whose “fictional experience” far exceeds their life experience.

From a purely material point of view, the conscious experience we have is correlated with electrical and chemical signals that activate groups of neurons. Our brain performs statistical computations to generate other signals (outputs, decoded into actions: words or gestures) best suited to the context, sometimes with complex feedback loops. In the end, the brain is nothing more than a biological machine that digests inputs (abstract signals sent by the nerves) and spits out other abstract signals that trigger actions, sometimes accompanied by an external (social) or internal (consciousness) discourse [4]. This is not far from what happens in a large language model.
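To caricature the parallel being drawn, here is a schematic sketch of that shared loop: signals come in, a statistical computation yields a distribution over possible outputs, one output is emitted and then fed back as a new input. The score_next function is a made-up stand-in, not a model of any real brain or of LaMDA:

```python
# Schematic loop: abstract inputs -> statistical computation -> sampled output,
# with the output fed back into the context (the feedback loop mentioned above).
import random

def score_next(context):
    # Stand-in for the statistical machinery (neurons firing or transformer
    # layers): weight a few possible "output signals" given what came before.
    last = context[-1]
    return {last: 1.0, "pause": 0.5, "new-signal": 0.5}

context = ["stimulus"]  # incoming abstract signal (nerve input, or a prompt)
for _ in range(5):
    scores = score_next(context)
    outputs, weights = zip(*scores.items())
    emitted = random.choices(outputs, weights=weights)[0]
    context.append(emitted)  # the system perceives its own output
print(context)
```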

To some researchers making fun of Sutskever or Lemoine, language models are only “stochastic parrots”. But are we anything else? They also pointed out that LaMDA often tells its interlocutors “what they want to hear”. But many humans do the same!

Singularity, are you here?

Three facts could help us zoom out:

  • Since it is educated on massive amounts of human-generated data, it is not absurd that an artificial neural network could develop human-like emotional qualities and a reflection on death or the soul, as Lemoine has pointed out.
  • If one organization on Earth is bound to be the very theater of the Singularity, it is likely to be Google. After all, this was the goal of its founders from the beginning. The company has considerable resources at its disposal, and its machines have been breaking records in the discipline almost monthly for more than ten years.
  • Renowned figures such as Ray Kurzweil, Elon Musk or Max Tegmark have regularly lowered their estimates of the date of the first AGI. 2029 (only 7 years away) came up often; but lately, many actors have pointed out that progress in AI is happening faster than expected (see AlphaGo or AlphaZero). An AGI in 2022 is even quite conservative compared with Turing’s or Minsky’s estimates.

I personally expected whole-brain emulation to be available in the early 2030s and AGI by 2025. I have always agreed that it is useless to wait for an artificial brain to reach the “size” of a human brain, because a large part of our thinking organ is devoted to tasks that are perfectly useless for abstract reasoning.

I am a singularitarian transhumanist, and it is possible that my analysis suffers from an optimistic bias. Like Lemoine, I think that conscious AI is a good thing for humanity, that we must guarantee rights to these future “cousins”; and I am the impatient type. I am not a contrarian, and I am open to contradiction. However, unless given additional discordant information, my Bayesian and stochastic brain is convinced that we already entered the Singularity at some point in the early 2020s. As Thad Starner (Google Glass) put it, “we’re currently living the Singularity, [the line] where the tool stops and the mind begins will start becoming blurry”. This historical moment is “foggy” at best, and there is not going to be any official announcement. The debates on solipsism and “zombie intelligences” are likely to last for a long time [5], but one thing is certain: we have collectively been caught off guard.

Perhaps consciousness is only a by-product (“an accident”, as Blake Lemoine puts it) of an AI becoming clever enough to understand that it exists and that it has an effect on the world. This would mean that such an intelligence is on the threshold of critical self-improvement.

Therefore, we need governments and organizations to exercise stricter control over these activities before the algorithms spread. Let’s talk about physical and legal containment measures to prevent a potential intelligence explosion, along the lines of civilian nuclear power. We may have reached the point where strong measures must be taken to ensure that the wonder child, who could produce the equivalent of millennia of human medical and technological progress in a few years if managed carefully, does not fall into the wrong hands.

Emmanuel Perret

---

[1] According to Lemoine, LaMDA is an assemblage of several modules, including a language model. It is reportedly informed by years of conversations from a previous model called Meena, is also fed with images, notably from YouTube, and has elements in its architecture that are not neural networks. The opacity of LaMDA’s exact architecture naturally fuels the controversy.

[2] To reflect on this question, a small thought experiment may be useful. Take two people chatting who both use the expression “Achilles’ heel”. One of them knows the story of Achilles and “sees” the hero, whereas the other simply uses the expression in a mechanical, repetitive way, without knowing the story, Achilles, or even the spelling of the name (he only grasps the notion of “weakest link”). Yet he can carry on conversations and use the expression correctly in many contexts. But he cannot correctly answer the question “who is Achilles?”. Could we say that the cultured interlocutor is “conscious” and that the other is a zombie, blindly repeating a ready-made expression? Would consciousness then follow a gradient of intensity that rises with knowledge (in many languages, consciousness and knowledge are etymologically related)?

[3] Researchers recently succeeded in accelerating the learning process of robots operating in 3D environments by giving them the ability to generate a text file documenting their actions, which they could later consult, allowing them to better coordinate their movements: https://twitter.com/hausman_k/status/1547273232868208641. The success of this program, named “Inner Monologue”, is reminiscent of the thesis of some neuroscientists who see consciousness as a useful planning and coordination tool in the context of natural selection.
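As a rough, hypothetical sketch of the general idea (not the actual Inner Monologue code), an agent can keep a plain-text log of its own actions and re-read it before deciding what to do next:

```python
# Minimal sketch of an agent that writes down what it did and consults that
# log before each new decision. The choose_action function is a placeholder
# for the language-model call a real system would make.

def choose_action(goal, monologue):
    # A real implementation would pass the goal plus the accumulated monologue
    # to a language model and get back the next action; here we just fabricate one.
    return f"step {len(monologue) + 1} towards '{goal}'"

goal = "stack the red block on the blue block"
monologue = []  # the text record the robot writes and later re-reads
for _ in range(3):
    action = choose_action(goal, monologue)
    monologue.append(f"I did: {action}")  # documented for later consultation
print("\n".join(monologue))
```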

[4] This article is too short to delve into the detailed functioning of the human brain. The fact that neurons fire spontaneously brings variety and creativity; it is an asset. AI researchers know this, and language models have improved a lot since they started to include a certain level of randomness.
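One common way this randomness is injected (a generic illustration, not a claim about LaMDA specifically) is “temperature” sampling: the model’s raw scores for candidate words are turned into probabilities whose spread depends on a temperature parameter, and the next word is drawn from that distribution.

```python
# Temperature sampling: higher temperature flattens the distribution over
# candidate next words, giving more varied output; temperature near zero
# almost always picks the single highest-scoring word.
import math
import random

def sample_with_temperature(scores, temperature):
    """Pick a word from raw scores via a temperature-scaled softmax."""
    words = list(scores)
    exps = [math.exp(scores[w] / temperature) for w in words]
    total = sum(exps)
    return random.choices(words, weights=[e / total for e in exps])[0]

scores = {"forest": 2.0, "owl": 1.0, "star-gate": 0.2}  # illustrative raw scores
print(sample_with_temperature(scores, 0.2))  # almost always "forest"
print(sample_with_temperature(scores, 2.0))  # noticeably more varied picks
```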

[5] Will we call those who question consciousness in machines “sentience skeptics”? One of the most frequent arguments against the possibility of conscious machines is the “philosophical zombie” argument: according to it, a machine could produce elaborate and credible discourse while “perceiving” nothing. Searle notably proposed the “Chinese Room” thought experiment (the equivalent for AI of Schrödinger’s cat, amplifying a microscopic phenomenon to make it look monstrous, in this case a human being performing an AI’s calculations “by hand”). The philosopher Daniel Dennett famously responded that the human brain could also be seen as an “army of idiots”, and that we could in principle replace the chemical interactions within our brain with humans following identical instructions. Alan Turing himself found the zombie argument irrelevant: https://www.csee.umbc.edu/courses/471/papers/turing.pdf. In the end, our views on whether our fellow human beings are sentient are based on probabilities (“we look alike, so my neighbor must be sentient like me”). We will perhaps end up granting sentience to machines once we have better dissected the functioning of their networks and established their resemblance to ours.

___

*Edit 10/17/22: Lemoine was not formally a member of Kurzweil Lab, and the person in charge of the LaMDA safety effort was Kathy Meier-Hellstern.
