Consciousness: The Greatest Mystery of Artificial Intelligence

Mars Xiang
4 min read · Aug 29, 2020


Helix Nebula, which looks like an eye

In the future, if you owned a robot that could talk to you, would you consider it conscious? As it displayed more genuine feelings and emotions and understood our world more, where would you draw the line between consciousness and unconsciousness?

Consciousness is one of the most intangible ideas that exist, and philosophers, neuroscientists, and machine learning engineers have all come up with different definitions, with requirements like self-awareness and intelligence. The broadest definition of consciousness is simply experience.

In our current age, questions like “Why is anything conscious at all?” and “What exactly makes something conscious?” cannot be answered, but we do know that certain parts of the brain are responsible for consciousness while others are not.

Consciousness Gives Meaning

“Beauty is in the eye of the beholder, not in the laws of physics, so before our Universe awoke, there was no beauty.” — Max Tegmark, Life 3.0

What good would a piece of art be if there was nobody to look at it and appreciate it? In the same way, what good purpose would our universe serve if there were no conscious beings to observe it?

If our universe were someday dominated by intelligent machines that had no consciousness, nothing would ever be appreciated again, and until the universe reached a state like heat death, it would just be a massive waste of space and time.

In this sense, when people ask, “What is the meaning of our universe?”, they are getting it backwards — the universe does not give meaning to conscious beings, but rather, conscious beings give meaning to the universe. This is why consciousness is one of the most important topics of discussion about artificial intelligence in the future.

Intelligence

Most people would agree that our brains are intelligent. Some say that machines can become intelligent, and others say they can’t. However, what is intelligence? Both our brains and machines are made up of neurons (although machines are made of artificial neurons instead of cellular ones), so can one be intelligent while the other cannot?

Being intelligent can be split up into three different subtasks: storing memory, computing, and learning.

  • A physical system that can store memory needs to be able to be stable in many positions. A ball at the bottom of one of two valleys can encode a single piece of information — true or false.
  • A physical system that can compute must give some representation of an output when affected by a physical input. For example, an electrical circuit produces different outputs when different inputs (represented by voltages) are provided.
  • A physical system that can learn is the hardest to create. It must constantly improve itself in some task given a physical input. An example of this is a neural network, which uses gradient descent to constantly tweak its parameters.
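The three subtasks above can be sketched in a few lines of code. This is a toy illustration, not any particular library's API: `store` stands in for a two-state memory, `nand` for a universal computing element, and `learn` for gradient descent tweaking a single parameter, all names chosen here for illustration.

```python
def store(bit):
    """Memory: a system stable in two positions encodes one bit,
    like a ball resting in one of two valleys."""
    return bool(bit)

def nand(a, b):
    """Computation: map inputs to an output. NAND alone is universal,
    so any computation can be built from it."""
    return not (a and b)

def learn(xs, ys, steps=200, lr=0.05):
    """Learning: fit y ~ w * x by gradient descent, repeatedly
    tweaking the parameter w to reduce squared error."""
    w = 0.0
    for _ in range(steps):
        # gradient of mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w

w = learn([1, 2, 3], [2, 4, 6])  # true relation: y = 2x
print(round(w, 2))  # converges to 2.0
```

The point of the sketch is that none of the three functions cares what physical substrate runs it, which is exactly the substrate independence discussed next.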

In any physical system, information can take on a life of its own. As long as there is matter to support memory, computation, and learning, it does not matter what kind of matter it is, just as physicists can study waves without knowing what the wave is made of. This property is called substrate independence — information becomes independent from its physical system, and it means many different physical systems can become intelligent.

Emergence

A single human cell is made up of many different and dead organelles, yet cells are living. This is an example of a fundamental and mysterious property of our universe, emergence, where the whole is greater than the sum of its parts.

Often, in emergent structures, there are no well-defined leaders, and sometimes, no leaders at all. In the immune system, soldiers, messengers, and antibody factories are scattered throughout the body with no control center. The parts that make up the immune system each have their own roles and behaviors, yet they manage to fight infectious disease together better than they could on their own.

Similar to how intelligence is an emergent property of its substrate, consciousness could also be an emergent property of intelligence. If this is true, consciousness is two levels above physical matter.

Denying Consciousness

Humans have a history of denying rights to lesser consciousnesses. Many people find it acceptable to kill and eat animals, while they find doing the same to humans unacceptable. In the past, humans often argued that it was acceptable to enslave or kill other people because those people were more “animal” and less “human”.

Many people in this world have adopted anthropocentrism, the belief that humans are the only important beings in the universe, since humans are the most intelligent beings we know of. This causes us to become greedy and apathetic toward other beings — some of whom are other humans.

In the light of possible human-level or superhuman-level artificial intelligence, this raises two concerns:

  • If artificial intelligences are conscious, will humans disregard their needs and desires to fulfill ours? In the past, we have done it with human slaves, and this time, the possible economic gain is massive.
  • If artificial intelligences become more powerful than humans, will they disregard our needs and desires? If they are not conscious, and decide to dispose of humans, our universe would become meaningless.

Conclusion

Consciousness is one of the hardest ideas to understand, but it is crucial that we do understand it. While many physical systems can become intelligent, consciousness might be a level above intelligence. Humans have a history of exploiting conscious beings for material gain, so before human-level AI exists, we must learn what it means to be conscious.
