The Strong and Weak Artificial Intelligence Debate

Md Ashikquer Rahman
Oct 5, 2020 · 8 min read

Recently, I had a debate with one of my favorite fellow thinkers about strong and weak AI, which reminded me of something I wrote more than a year ago, so I decided to dust off those ideas. There is so much technical hype around artificial intelligence that it is sometimes worth returning to its philosophical roots, and among all the philosophical debates surrounding AI, the most important is the one about strong and weak AI.

From a technical standpoint, I agree with the idea that we are one or two breakthroughs away from realizing some form of strong or general AI. From a philosophical standpoint, however, there are still challenges that need to be reconciled. Many of them can be traced back to an obscure theorem proved by an Austro-Hungarian-born mathematician in the last century, and to one of the leading fields of neuroscience research.

In AI theory, weak AI systems merely appear intelligent, while strong AI machines are actually capable of thinking. Thinking here means genuine thinking, not simulated thinking. This dilemma is often referred to as the “strong AI hypothesis.”

In a world where digital assistants are everywhere and algorithms defeat Go champions and professional Dota 2 teams, the question of whether a machine can act intelligently may seem foolish. In constrained domains such as medical research, Go, or travel, we have already built a large number of AI systems that act intelligently. Most experts agree that weak AI is definitely possible, but there are still serious doubts about strong AI.

Can machines think?

Since Alan Turing published his famous 1950 paper “Computing Machinery and Intelligence,” these questions have plagued computer scientists and philosophers. When most scientists cannot even agree on a formal definition of thinking, it seems unfair to expect a definitive answer to the question.

To illustrate the confusion surrounding the strong AI hypothesis, we can borrow a humorous remark from the famous computer scientist Edsger Dijkstra, who in a 1984 paper compared the question of whether machines can think to questions such as “Can a submarine swim?” or “Can a plane fly?”

Although these questions seem similar, most English speakers would agree that airplanes can indeed fly, while submarines cannot swim. Why is that? The point of the comparison is that, without a unified definition of thinking, obsessing over whether machines can think hardly seems to matter.

One of the main objections to strong AI is that it is essentially impossible to determine whether a machine can really think. The argument stems from one of the most famous mathematical theorems in history.

Gödel’s incompleteness theorem

Among the mathematical results that have had the broadest impact on the way we think, Gödel’s incompleteness theorem deserves a prominent place.

In 1931, the mathematician Kurt Gödel demonstrated the limits of the deductive method by proving his famous incompleteness theorem. Gödel’s theorem states that any consistent formal theory powerful enough to express arithmetic contains true statements that cannot be proved within the theory.
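For reference, a rough formal rendering of the result (in the strengthened Gödel–Rosser form, glossing over the technical conditions on the theory) looks like this:

$$\text{If } T \text{ is a consistent, recursively axiomatizable theory that interprets basic arithmetic, then there is a sentence } G_T \text{ with } T \nvdash G_T \text{ and } T \nvdash \neg G_T.$$

The sentence $G_T$ is true in the standard model of arithmetic yet unprovable in $T$, and it is exactly this gap between truth and provability that the argument against strong AI tries to exploit.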

For a long time, the incompleteness theorem has been used as an objection to strong artificial intelligence. Proponents of this objection argue that strong AI agents will never be able to really think because they are bound by the incompleteness theorem, whereas human thinking apparently is not.

This argument remains highly controversial, and many strong AI practitioners refuse to accept it. The most common counterargument from the strong AI camp is that it is impossible to determine whether human thinking is subject to Gödel’s theorem, because doing so would require formalizing all human knowledge, which we know to be impossible.

Consciousness argument

In the heated debate about strong AI, my favorite argument is the one about consciousness. Can machines really think, or can they only simulate thinking? If machines are one day to think, they will need to be conscious (that is, aware of their own states and actions), because consciousness is the cornerstone of human thinking.

Skepticism about strong AI has produced arguments ranging from classic mathematical results (such as Gödel’s incompleteness theorem) to the purely technical limitations of today’s AI platforms. However, the most contested ground lies at the intersection of biology, neuroscience, and philosophy, and it concerns the consciousness of artificial intelligence systems.

What is consciousness?

There are enough definitions of, and controversies about, consciousness to dissuade most people from pursuing the question of its role in AI systems. Most definitions of consciousness involve self-awareness, or an entity’s ability to understand its own mental states. When it comes to AI, however, self-awareness and mental state are not clearly defined either, so we quickly fall down a rabbit hole and end up lost in confusion.

To be applicable to AI, a theory of consciousness needs to be more pragmatic and technical, and less philosophical. My favorite definition of consciousness along these lines comes from the theoretical physicist Michio Kaku, a professor of theoretical physics at the City University of New York and a co-founder of string field theory.

A few years ago, Dr. Kaku proposed what he calls the “space-time theory of consciousness,” which brings together definitions of consciousness from biology and neuroscience. In that theory, Dr. Kaku defines consciousness as follows: “Consciousness is the process of creating a model of the world using multiple feedback loops in various parameters (for example, temperature, space, time, and relation to others) in order to accomplish a goal (for example, finding mates, food, or shelter).”

The space-time definition of consciousness is directly applicable to AI because it is based on the brain’s ability to create models of the world, not only in space (as animals do) but also in relation to time, both backward and forward. From this perspective, Dr. Kaku defines human consciousness as “a form of consciousness that creates a model of the world and then simulates it in time, by evaluating the past to simulate the future.” In other words, human consciousness is directly tied to our ability to plan for the future.

In addition to its core definition, the space-time theory of consciousness distinguishes several levels of consciousness (a toy code sketch of the idea follows the list):

  • Level 0: organisms with limited mobility, such as plants, which use a handful of parameters (such as temperature) to create a model of their place in space.
  • Level 1: creatures such as reptiles, which are mobile and have a nervous system, and which use many more parameters to form their spatial models.
  • Level 2: creatures such as mammals, which create models of the world based not only on space but also on their relationships with others.
  • Level 3: humans, who understand their relationship to time and have the unique ability to imagine the future.
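To make the “feedback loops over parameters” idea concrete, here is a purely illustrative toy sketch in Python. The classes, parameter names, and the naive future extrapolation are all invented for illustration; they are not taken from Dr. Kaku’s writing or from any real AI system.

```python
# Toy illustration of "a world model built from feedback loops over parameters"
# and, at level 3, a crude simulation of the future. Entirely illustrative.
from dataclasses import dataclass, field


@dataclass
class WorldModel:
    """A world model accumulated from feedback loops over named parameters."""
    parameters: dict = field(default_factory=dict)  # e.g. {"temperature": 21.0}

    def feedback(self, name: str, reading: float) -> None:
        """One feedback loop: sense a parameter and fold it into the model."""
        self.parameters[name] = reading


class Level0Organism:
    """Level 0: a few parameters (e.g. temperature), no mobility."""
    def __init__(self) -> None:
        self.model = WorldModel()

    def sense(self, **readings: float) -> None:
        for name, value in readings.items():
            self.model.feedback(name, value)


class Level3Human(Level0Organism):
    """Level 3: also models time, extrapolating the future from the past."""
    def __init__(self) -> None:
        super().__init__()
        self.history = []  # past snapshots of the world model

    def sense(self, **readings: float) -> None:
        super().sense(**readings)
        self.history.append(dict(self.model.parameters))

    def simulate_future(self) -> dict:
        """Naively extrapolate the last two snapshots one step forward."""
        if len(self.history) < 2:
            return dict(self.model.parameters)
        prev, last = self.history[-2], self.history[-1]
        return {k: last[k] + (last[k] - prev.get(k, last[k])) for k in last}


human = Level3Human()
human.sense(temperature=20.0, distance_to_food=5.0)
human.sense(temperature=21.0, distance_to_food=4.0)
print(human.simulate_future())  # {'temperature': 22.0, 'distance_to_food': 3.0}
```

The only point of the sketch is the structural difference between the levels: higher levels fold more feedback loops into the model, and level 3 adds a rough simulation of the future on top of the record of the past.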

Is the AI system conscious?

Consciousness is one of the hottest debate topics in the AI community. AI consciousness here refers to an AI agent’s ability to be aware of its own “mental state.” The previous section introduced a framework, first proposed by the famous physicist Dr. Michio Kaku, for evaluating consciousness at four different levels.

In Dr. Kaku’s theory, level 0 consciousness describes organisms such as plants, which assess their reality based on a few parameters such as temperature. Reptiles and insects show level 1 consciousness when they create models of the world using new parameters, including space. Level 2 consciousness involves creating models of the world based on emotions and relationships with other species; mammals are the main group associated with it. Finally, human consciousness corresponds to level 3, based on models of the world that include simulations of the future.

Based on Dr. Kaku’s framework, we can assess the consciousness level of the current generation of AI technologies. Most experts agree that today’s AI agents sit at level 1 or at an early stage of level 2. Many factors place AI agents at level 1, including mobility.

Today, many AI agents have achieved mobility and can develop models of their environment based on the space around them. However, most AI agents still struggle to operate outside of constrained environments.

Spatial modeling is not the only factor that places AI agents at level 1 consciousness. The number of feedback loops used to create the model is another critically important factor to consider.

Take image analysis as an example. Even the most advanced vision algorithms use a relatively small number of feedback loops to recognize objects. If we compare these models with the cognitive abilities of insects and reptiles, they do not look particularly sophisticated. So yes, current artificial intelligence technology is at roughly the consciousness level of an insect.

Entering level 2 consciousness

Some AI technologies have begun to display characteristics of level 2 consciousness, and several factors have contributed to this development. AI systems are getting better at understanding and simulating emotions, as well as perceiving the emotional reactions of those around them.

Beyond emotion-aware AI, agents are increasingly operating in group environments in which they must cooperate or compete with one another to survive. In some cases, that teamwork even produces new cognitive skills. For recent examples of AI agents showing level 2 characteristics, we can look at the work of companies such as DeepMind and OpenAI.

Recently, DeepMind ran experiments in which AI agents had to live in an environment with limited resources. The agents behaved differently when resources were abundant than when they were scarce, and because the agents had to interact with one another, their behavior changed.
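The flavor of that dynamic can be captured in a tiny toy simulation. The rules, the scarcity threshold, and the hand-coded “compete” heuristic below are invented for illustration; DeepMind’s actual experiments used deep reinforcement learning agents in a much richer environment.

```python
# Toy sketch: agents cooperate (gather) when resources are plentiful and
# increasingly compete (block each other) when resources become scarce.
# All rules and numbers are invented for illustration.
import random


class Agent:
    def __init__(self, name: str) -> None:
        self.name = name
        self.collected = 0

    def act(self, resources: int, scarcity_threshold: int = 20) -> str:
        """Gather when resources are plentiful; often compete when scarce."""
        if resources > scarcity_threshold:
            return "gather"
        return "compete" if random.random() < 0.7 else "gather"


def run_episode(initial_resources: int, steps: int = 50) -> None:
    agents = [Agent("A"), Agent("B")]
    resources = initial_resources
    compete_actions = 0
    for _ in range(steps):
        for agent in agents:
            if resources <= 0:
                break
            if agent.act(resources) == "gather":
                agent.collected += 1
                resources -= 1
            else:
                compete_actions += 1  # blocking spends the turn, not a resource
    print(f"start={initial_resources:3d}  "
          f"gathered={[a.collected for a in agents]}  "
          f"compete_actions={compete_actions}")


run_episode(initial_resources=200)  # abundant: almost all gathering
run_episode(initial_resources=30)   # scarce: noticeably more competition
```

Even in this crude form, the agents’ behavior shifts as a function of scarcity, which is the qualitative pattern described above.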

Another interesting example can be found in a recent OpenAI simulation in which AI agents were able to use a small set of symbols to create their own language in order to coexist better in their environment.

Mainstream AI solutions are still at an early stage, but raising the consciousness level of AI agents is one of the most important goals for the current AI technology stack, and level 2 consciousness is the next research frontier.

Level 3 consciousness

At present, level 3 consciousness in artificial intelligence systems remains an active topic of debate. However, recent systems (such as OpenAI Five or DeepMind’s Quake III agents) have clearly demonstrated AI agents’ capacity for long-term planning and collaboration, so we may reach this goal sooner than expected.

Is the AI system conscious? My answer is yes. Applying Dr. Kaku’s space-time theory of consciousness to AI systems, it becomes clear that AI agents can exhibit some basic forms of consciousness. Given the capabilities of current AI technology, I would place the consciousness of AI agents at a basic level 1 (reptiles) or level 2.

As for level 3, although it is still far away, I don’t think it is a fantasy.
