The case for Artificial Consciousness

Amol Kelkar
Published in The quest for AGI
5 min read · Oct 9, 2017

Artificial whaaa!?

The word “consciousness” is often infused and confused with concepts that are metaphysical, supernatural, mysterious, related to religion or the soul, and so on. So I want to first clarify what it means. The word has a precise definition based on the computational theory of mind, which posits that mental processes are the result of computations performed by the brain. The definition most commonly accepted by cognitive scientists is -

Consciousness is the ability of an agent to have qualia, i.e. subjective experiences.

Now, the ultimate goal of Artificial Intelligence is to create human or super-human intelligence. The field of AI has focused mostly on the “easy” problems of AI — logic, learning, perception, memory, planning and language. On the other hand, the “hard” problems of AI — awareness, emotions and motivation — have been largely ignored by the AI research community for various reasons.

The “hard” problems of AI map well to the problem of consciousness because awareness, emotions and motivation are required for subjective experiences. Indeed, we would not ascribe human-level intelligence to any agent that does not demonstrate consciousness. For example, such an agent would not be able to pass intelligence tests such as the Turing test without having subjective experiences and the ability to communicate about them.

To reach the goal of human-level artificial intelligence, we need to solve artificial consciousness.

Solving the “easy” problems of AI is “narrow AI”.
Solving both “easy” and “hard” problems is “general AI”.

Is Artificial Consciousness possible?

Yes.

Given the exponentially growing computing capabilities, artificial agents would eventually be able to act like they are conscious. At that point, humans would have an overwhelming impulse to attribute consciousness to them and the impulse itself is the only evidence needed to say that the agents are indeed conscious. That is the same test humans use to ascribe consciousness to fellow humans.

Relationship with narrow AI research

There have been fantastic advances in narrow AI, especially during the last decade using deep neural network techniques. I consider much of this to be side branches or tangents off the path that leads to general AI. In other words, these advances are not likely to get us closer to general AI. Why?

Substrate —

Consciousness is possible when sensory, memory, attention, emotion and other subsystems are able to communicate and deeply influence each other, so any system that demonstrates consciousness must have a high degree of integration among its modules. Such integration is possible only when the subsystems are built on the basis (substrate) of similar algorithmic and representational mechanisms. Algorithms and models designed specifically to solve one type of problem (narrow AI) are not likely to be suitable for building conscious agents. We need to find a common architecture for solving all the problems of AI.
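
To make the substrate idea concrete, here is a minimal sketch of deeply integrated modules, under illustrative assumptions of my own (the module names, vector size and update rule are made up for the example, not a proposed design): every subsystem reads and writes the same kind of representation, so each one can influence all the others.

```python
import numpy as np

# A minimal sketch of a "common substrate": every subsystem exchanges the
# same kind of representation (here, a fixed-size vector), so modules can
# read from and write to a shared workspace and influence one another.
# Module names, sizes and the blending rule are illustrative assumptions.

DIM = 64  # shared representation size

class Module:
    def __init__(self, name, rng):
        self.name = name
        self.w = rng.standard_normal((DIM, DIM)) * 0.1  # toy transformation

    def step(self, workspace):
        # Read the shared state, produce a contribution in the same format.
        return np.tanh(self.w @ workspace)

rng = np.random.default_rng(0)
modules = [Module(n, rng) for n in ("sensory", "memory", "attention", "emotion")]
workspace = rng.standard_normal(DIM)

for t in range(10):
    contributions = [m.step(workspace) for m in modules]
    # Deep integration: the next shared state blends every module's output.
    workspace = np.mean(contributions, axis=0)
```

Because every module speaks the same representational language, adding or swapping a subsystem does not require a new interface; that is the property a narrow, problem-specific model typically lacks.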

Once we have a substrate that enables us to tackle the “hard” problems, we will need to solve/implement the “easy” problems on top of that substrate.

We will be able to take hints from narrow AI solutions, but won’t be able to plug-and-play those solutions.

Capabilities —

A general AI system will be less efficient and less capable at solving narrow AI problems than a corresponding narrow AI solution. Narrow AI has already delivered super-human solutions to cognitively challenging problems (think natural language translation, image style transfer, board games, and many others), and it will continue to produce impressive results.

A general AI system will likely use narrow AI techniques as tools instead of as modules.
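
As a hedged illustration of that tools-versus-modules distinction (the tool names and interface here are hypothetical, not anyone's actual API): a narrow model sits behind an opaque interface that the general agent invokes on demand, rather than being wired into the agent's internal representations.

```python
from typing import Callable, Dict

# Sketch of the "tools, not modules" idea: narrow AI systems sit behind a
# black-box interface the agent can call when needed, instead of being wired
# into the agent's internal representations. Tool names are hypothetical.

def translate(text: str) -> str:      # stand-in for a narrow translation model
    return f"<translated> {text}"

def play_move(board: str) -> str:     # stand-in for a narrow game-playing model
    return "e2e4"

TOOLS: Dict[str, Callable[[str], str]] = {"translate": translate, "play_move": play_move}

def agent_step(goal: str, payload: str) -> str:
    # The general agent decides *whether* and *which* tool to invoke;
    # the tool's internals stay opaque to the agent.
    tool = TOOLS.get(goal)
    return tool(payload) if tool else "no suitable tool"

print(agent_step("translate", "bonjour"))
```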

Also, general AI systems will take at least a decade of algorithmic and computing advances to become capable of doing anything interesting.

What will it take to create Artificial Consciousness?

To create Artificial Consciousness followed by Artificial General Intelligence, we will need to bring together insights from several disciplines. Here is what I am looking at -

  • Neuroscience: Learn from the only known implementation of general intelligence; feed-forward learning
  • (Deep) neural networks: Various techniques such as convolutions; scalable / GPU / compute-graph implementations; the open source software ecosystem
  • Spiking neural networks: What do spikes represent? Cooperation between spike-frequency-based (Hebbian) and spike-timing-based (STDP) learning; delay learning (polychronous) systems (see the STDP sketch after this list)
  • Hierarchical pattern memory: Plausible functional model of cognition; forward and backward predictions; pattern creation and refinement; temporal-difference-style reinforcement learning
  • Dynamical systems: Functional behavior and control of a collection of interconnected units; emergent behavior; equilibrium points and attractors
  • Homeostatic equilibrium: Homeostatic equilibrium as a fundamental goal; use of emotions as levers to achieve that goal; eschew most hyper-parameters in favor of energy, time and input constraints (a minimal control-loop sketch also follows this list)
  • Emotions: Neurobiological basis and dynamics of emotions; building up a full assortment of emotions from a basic set of environment- and genetics-driven feedback signals plus the goal of homeostatic equilibrium
  • Self-organization: Building models from the ground up, within given constraints, to match the complexity of the environment
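
To give a flavor of the spiking-network item above, here is a minimal sketch of pair-based spike-timing-dependent plasticity (STDP); the amplitudes, time constants and spike trains are illustrative assumptions, not tuned values.

```python
import numpy as np

# Minimal pair-based STDP sketch: a synapse is strengthened when the
# presynaptic spike precedes the postsynaptic spike, and weakened when
# the order is reversed. Constants below are illustrative, not tuned.

A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # time constants (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre before post -> potentiation
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    elif dt < 0:  # post before pre -> depression
        return -A_MINUS * np.exp(dt / TAU_MINUS)
    return 0.0

# Example: apply the rule to a toy spike train and accumulate the weight.
w = 0.5
pre_spikes = [10.0, 40.0, 70.0]
post_spikes = [12.0, 38.0, 90.0]
for t_pre, t_post in zip(pre_spikes, post_spikes):
    w = np.clip(w + stdp_dw(t_pre, t_post), 0.0, 1.0)
print(f"final weight: {w:.3f}")
```

And here is a minimal sketch of homeostatic equilibrium as a driving goal, again under my own illustrative assumptions (the internal variables, set points and action effects are made up for the example): deviation from the set points acts as an emotion-like drive, and the agent picks whichever action best reduces it.

```python
# Minimal homeostasis sketch: internal variables have set points; the gap
# between current values and set points acts as an "emotion-like" drive
# that selects whichever action best restores equilibrium. Variable names,
# set points and action effects are illustrative assumptions.

SET_POINTS = {"energy": 1.0, "temperature": 0.5}
state = {"energy": 0.4, "temperature": 0.9}

# Each action nudges the internal variables by a fixed amount.
ACTIONS = {
    "eat":  {"energy": +0.2, "temperature": +0.05},
    "rest": {"energy": +0.05, "temperature": -0.1},
    "idle": {"energy": -0.02, "temperature": -0.02},
}

def drive(s):
    """Total squared deviation from the set points (lower is better)."""
    return sum((s[k] - SET_POINTS[k]) ** 2 for k in SET_POINTS)

for step in range(20):
    # Pick the action whose predicted outcome minimizes the drive.
    best = min(
        ACTIONS,
        key=lambda a: drive({k: state[k] + ACTIONS[a].get(k, 0.0) for k in state}),
    )
    for k, delta in ACTIONS[best].items():
        state[k] += delta
    print(step, best, {k: round(v, 2) for k, v in state.items()})
```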
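
Notice that neither sketch has the usual pile of training hyper-parameters; the "knobs" are the physiology-like constants and set points themselves, which is the spirit of the homeostatic-equilibrium item above.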

My near-term goal is to put together a step-by-step recipe for creating artificial consciousness, with full conceptual clarity along with a basic software implementation of each step.

But won’t the intelligent machines kill us all?

Intelligent machines with no consciousness are simply tools. Humans can use them as weapons with no opposition coming from the machines. This scenario is incredibly dangerous — imagine nuclear-weapon-level destructive capabilities in everyone’s pocket.

On the other hand, if the machines are conscious and are trained (raised?) in a (nurturing?) environment with appropriate feedback (parenting?), they will overwhelmingly turn out to be moral citizens. Yes, AI babysitting is likely to be a high-skill job of tomorrow. Such machines would find it offensive to harm humans and would make positive contributions to society. Note that it will still be possible to create machine versions of psychopaths and killers.

Our best hope is to have powerful conscious machines on our side.

Hack-proofing AI systems

Imagine terrorists hacking the centralized control system and crashing 10,000 trucks. — a user comment from an online post discussing potential issues with a fleet of fully automated trucks

“Radicalized” or “brainwashed” humans are known to have driven vehicles into crowds with the intent to kill. That is in essence the same as being hacked. The difference seems to be the effort required: radicalizing takes far more effort than hacking, because the subject offers substantial resistance in the first case and none in the second. The underlying difference is consciousness, or the lack thereof. Conscious agents would act to restore their state of homeostatic equilibrium, and for a well-adjusted agent that means behaving as a moral citizen.

We need future AI systems to have consciousness to make it harder to misuse them.

This is why creating artificial consciousness is going to be critical to humanity’s future in the coming decades.
