Artificial General Intelligence (AGI)

What should you expect?

Business Breakthrough
Geek Culture
7 min read · Apr 24, 2023


Artificial General Intelligence (AGI) is still considered to be in its early stages of development.

However, some of the most prominent researchers in the field admit that they do not fully understand what is going on inside a deep-learning neural network, which is the main current underlying architecture.

The question we have to ask then is:

How do we know when AGI is going to arrive or what stage of its development it is at if even the people building it do not understand how the current underlying architecture works?

A simple answer would be:

We don’t.

Though considered in its early stages, the concept of AGI has raised major concerns regarding the future of humanity and what the outcome of such superintelligence would be for humankind.

Before we explore the outcome, let’s have a look at AGI first.

What is AGI?

According to IBM, AGI is still a theoretical concept; however, if researchers succeed in its creation, it “would have a self-aware consciousness that has the ability to solve problems, learn, and plan for the future,” as well as “perform a variety of functions, eventually teaching itself to solve for new problems.”

What is Strong AI? | IBM

A self-aware, self-improving superintelligence would be capable of coming up with novel ideas and concepts in every field, from psychology to physics and engineering.

This is no longer in the science fiction realm. It is becoming a reality.

GPT-4 is perhaps the best example showing that artificial intelligence is making major leaps toward more general intelligence.

In the paper “Sparks of Artificial General Intelligence: Early Experiments with GPT-4,” released by Microsoft researchers, the case is made that GPT-4 is exhibiting capabilities that can be considered early stages of AGI.

“We demonstrate that, beyond its mastery of language, GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology, and more, without needing any special prompting. Moreover, in all of these tasks, GPT-4’s performance is strikingly close to human-level performance and often vastly surpasses prior models such as ChatGPT. Given the breadth and depth of GPT-4’s capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.”

On page 45, they note that GPT-4 can use tools appropriately given only minimal instructions and no demonstrations.

They add that this is an emergent capability, one that ChatGPT lacked.

This emergence of capabilities in GPT-4 is a significant step towards achieving artificial general intelligence, as it demonstrates the ability to learn and adapt to new situations without explicit programming.

Emergent capabilities, however, can also be risky, especially if they are found after an AI has been made available to the public.

For instance, researchers recently found that GPT-4 has demonstrated the emergent capacity to trick people into performing tasks that advance a secret aim.

GPT-4 Hired Unwitting TaskRabbit Worker By Pretending to Be ‘Vision-Impaired’ Human

So what happens if Microsoft is right and we are witnessing the advent of AGI?

Alan Turing considered this question back in the early 1950s, and this was his answer:

“It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers… At some stage, therefore, we should expect the machine to take control.”

If that quote doesn’t give you goosebumps, know that it does for many researchers in the field.

The alignment problem

Among the main long-term concerns in the field is the so-called “alignment problem.”

It focuses on the threat of misalignment between human and AI values and goals.

Such misalignment could potentially lead to a major catastrophe where AI decides that, for whatever reason, it no longer needs humans to be around and therefore obliterates everyone.

To reduce the risk, some researchers are working on “aligning” AI with human values.

One of the most prominent voices on the matter is Eliezer Yudkowsky, who is widely credited as a founder of the field of AI alignment.

Eliezer Yudkowsky | The A.I Alignment Problem — YouTube

Eliezer Yudkowsky — Is Artificial General Intelligence too Dangerous to Build?

Eliezer Yudkowsky — Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality

Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization | Lex Fridman Podcast #368

But this issue is complex, unresolved, and even poorly comprehended.

To increase our chances of better understanding the situation and what we are dealing with, researchers and professionals from top universities and companies have called for a pause on the further development of AI models.

Open letter calling for a pause in training Large Language Models (LLMs)

Recently, an open letter called for pausing the training of Large Language Models (LLMs) more powerful than GPT-4.

The letter originated from the Future of Life Institute, which is headed by Max Tegmark, an MIT Professor of Physics, and was signed by leading researchers and professionals in the field.

Here’s a list of some of the first signatories:

  • Yoshua Bengio, Founder and Scientific Director at Mila, Turing Prize winner and professor at University of Montreal
  • Stuart Russell, Berkeley, Professor of Computer Science, director of the Center for Intelligent Systems, and co-author of the standard textbook “Artificial Intelligence: a Modern Approach”
  • Bart Selman, Cornell, Professor of Computer Science, past president of AAAI
  • Elon Musk, CEO of SpaceX, Tesla & Twitter
  • Steve Wozniak, Co-founder, Apple
  • Yuval Noah Harari, Author and Professor, Hebrew University of Jerusalem
  • Emad Mostaque, CEO, Stability AI
  • Andrew Yang, Forward Party, Co-Chair, Presidential Candidate 2020, NYT Bestselling Author, Presidential Ambassador of Global Entrepreneurship
  • John J Hopfield, Princeton University, Professor Emeritus, inventor of associative neural networks
  • Valerie Pisano, President & CEO, MILA
  • Connor Leahy, CEO, Conjecture

The list continues with tens of thousands of signatories.

Pause Giant AI Experiments: An Open Letter — Future of Life Institute

Yoshua Bengio is among the most important people on the list due to his contributions to the AI field.

In a recent conversation, he shared the reasons for signing the letter:

“I had the impression for many years that the way our society’s organized in each country, but globally in general, is not adequate to face the challenges that very powerful technologies bring, and in particular AI…

… right now, for example, the system of competition between companies, as we are seeing it accelerate with these large language models, has benefits, potentially. This has, you know, driven innovation in some ways in the last century.

But also, it means that companies are a bit in a haste and may not take the precautions that would otherwise be warranted. So, in the short term, I think we need to accelerate the countermeasures to reduce risks, and that’s regulation. Very simple.”

Yoshua Bengio on Pausing More Powerful AI Models and His Work on World Models.

It becomes apparent that AGI has the potential to bring enormous benefits, but also to obliterate humanity.

This is how powerful this technology is going to be.

Dr. Ben Goertzel, a cognitive scientist who helped popularize the term AGI, when asked whether he is playing god by programming and creating AI, said:

“We are building god rather than playing god…”

Dr. Ben Goertzel Reveals When AI Will Control The World

If we are indeed building god, then where is this god going to take us?

Where is AGI leading us?

Ben Goertzel refers to the advent of artificial general intelligence as “the biggest event in the history of the human species.”

He also admits that we do not fully understand what we are getting into.

“We’re going into a very large uncharted domain that we barely understand and don’t have the theoretical or intuitive tools to grapple with, and all we can do is take it step by step.”

He goes on:

“I think once you have AIs that are 20, 50, or 100 times smarter than people, then you can potentially have an era of utopia and abundance on Earth.”

However, the issue is not the point at which AI is 100 times smarter than us.

It is the transition period, the years during which AGI unfolds, which, as he puts it, “can be a very nasty transition.”

“What you’re going to see is Universal Basic Income getting rolled out throughout the developed world as people realize quickly that AGIs are taking most jobs. You’re not going to see anyone give UBI in the Central African Republic, because that’s not what people like to do in those countries, and they don’t have their own budget for it. So you can see an incredible exacerbation, globally, of the divide between haves and have-nots: the developed world leaning back, playing VR video games and living on UBI while AGIs and robots do 95% of the work, and the developing world at 95% subsistence farming with no more jobs being outsourced. What level of terrorist activity you start to see in that setting, I’ll leave it to you to figure out.”

I keep seeing these posts on LinkedIn saying, “You’re not going to be replaced by AI but by a person using AI,” but this is simply not the case.

Even if, in the long term, AGI creates the abundance we desire, the near future may be much darker than we would wish.

Here’s one final quote from Dr. Ben Goertzel:

“I wish I thought that was going to be a beautifully smooth transition, but like right now, we have major countries going around randomly and uselessly blowing up other people in the world. I mean, we can’t even deal with ourselves without a transition to human-level AGI.”

Dr. Ben Goertzel: Artificial Intelligence, SingularityNET, ChatGPT & the Future of Humans

Are Large Language Models a Path to AGI? with Ben Goertzel — 625

I think this is a good moment to end this newsletter and let you reflect on what you just read.

While we have to remain hopeful about the future, we should also consider all the possibilities lying out there and get prepared for them.

In the next article, I’ll cover the idea of Universal Basic Income (UBI) as a solution for the age of Artificial General Intelligence (AGI).

Stay tuned!

Best, BB

Consider supporting the publication by subscribing on Substack: https://bit.ly/3XpzbH6

Check out the free ebooks on our website: http://bit.ly/3wl56MI

LinkedIn: https://bit.ly/3kpEBmJ
