The need for Consciousness in an AI System

Last March, Microsoft released an AI chatbot called Tay, which it was forced to shut down within days of release. The reason: the bot learned from its users and began to say unsavoury things. The bot used a deep learning neural network and was modifying itself according to its users' behaviour, so the result was inevitable. The developers took the bot down and set about modifying its neural network software to prevent this happening again. However, I think the real problem is that they are missing a software component, rather than needing to modify the neural net. In fact, the neural net is behaving correctly.

Just as neural nets emulate the brain's network of neurons, I believe a further component should be taken from nature: the conscious brain, or cerebral cortex. Consider that, as parents, we bring up our children not to swear, and yet in the playground at school they learn to swear and do so at will with their friends. But in the home, and certainly with authority figures such as grandparents, they do not swear. What is happening here is that the unconscious mind learns both how to swear and how not to swear. The conscious mind has an internal model of the world which takes input from the unconscious mind — in fact it can take multiple inputs and choose, based on its internal model, which words to actually speak. It is a social filter based on context: when to swear and when not to swear. The unconscious mind, by contrast, is an unfettered data processing engine which takes inputs from the world and processes and learns from them.

The conscious mind creates a simulation of the world and selects the results from the unconscious which most closely match the needs of its internal/social model.

In the AI world the same model can be replicated. The conscious layer of software must sit above the machine learning AI layer and filter its results according to the context of the user. Furthermore, in my view the conscious layer should not be capable of learning — at least given the current state of AI machine learning. It needs hard-wired rules to ensure that a consistent and controlled output is presented to users. At the same time, the AI layer should be allowed to be totally unfettered in its processing of the world, and may even produce multiple conflicting results, which the conscious layer will present to the world based on the social context it is currently in.
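The architecture described above can be sketched in code. This is a minimal illustration, not any real system: the function names, contexts, and word list are all hypothetical stand-ins. The key property is that the "conscious" filter is a fixed, non-learning rule set that selects among candidate outputs produced by the unfettered learning layer.

```python
# Hypothetical sketch of a "conscious layer": a fixed, non-learning filter
# that selects among candidate outputs from an unfettered learning layer.
# The contexts and banned-word list below are illustrative assumptions.

BANNED_IN_FORMAL = {"darn", "heck"}  # stand-ins for unsavoury words

def conscious_filter(candidates, context):
    """Return the first candidate acceptable in the given social context.

    candidates: ranked outputs from the learning layer (may conflict).
    context: "formal" or "casual" -- the rules are hard-wired, not learned.
    """
    for text in candidates:
        words = set(text.lower().split())
        if context == "formal" and words & BANNED_IN_FORMAL:
            continue  # reject: violates the fixed social model
        return text
    return "[no acceptable response]"  # fail safe rather than fail open

# The learning layer proposes conflicting candidates; the conscious
# layer chooses which one to voice based on context.
candidates = ["that's darn good", "that's very good"]
print(conscious_filter(candidates, "formal"))   # -> that's very good
print(conscious_filter(candidates, "casual"))   # -> that's darn good
```

The important design choice, mirroring the argument above, is that `conscious_filter` contains no trainable parameters: the learning layer can drift wherever its inputs take it, while the output presented to users remains bounded by fixed rules.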

Now imagine a future world of super-intelligence (perhaps by 2040, when robots are more intelligent than humans). Even then, I think a conscious layer should retain ultimate control over the robots, encoding at minimum the laws of the country, and inevitably much more: they need to do what they are told to do by their owner. The scenario in which a robot is more intelligent than a human and can do whatever it wants is too scary to contemplate. These conscious restrictions need to be fully built into the hardware and software — perhaps by the chip designers running the software — so that they are totally unmodifiable by the robot itself.
