Are we speeding towards AI consciousness?
By Jeffrey Ng, Chief Scientist, Founders Factory (www.foundersfactory.com)
(This article previously appeared in Computing.)
What makes humans unique is our consciousness: our ability to think and reason. It emerged from evolutionary pressure to build internal models of our environment and of ourselves. Computer scientists have long aspired to build a machine that can replicate such neural models. In recent months, major strides have been made towards this goal, but we are not yet at human level.
Today’s machine learning capabilities are super-human (i.e. better than human) at visual recognition tasks, and for navigation they are creeping close to the level of other members of the animal kingdom, such as mice or cats. However, to bridge the gap to human-level thinking, more needs to be done to overcome the technical hurdles in today’s state-of-the-art deep neural networks: better neural building blocks, more flexible architectures and faster training algorithms.
In this piece, I explain recent developments paving the way to AI consciousness and why it is important to our world’s future.
The consciousness debate
There are two philosophies of machine learning. Do we keep building “narrow” AI, enslaved in mundane ‘paper pushing’ jobs and trained on massive datasets of simple labels such as image and object tags? Or do we push the limits of computing to the point where it can replicate human reasoning? I believe the latter is where the true value of AI will be realised for businesses and consumers, chiefly through the economics of building AI systems.
In its learning ability, today’s AI can be compared to a curious toddler. It requires a great deal of human effort to create, not least in data labelling and in experimenting with new architectures. Yet even super-human AI lacks the toddler’s basic abilities: to self-correct, and to build a minimal but sufficient model of the world and of its own processes in order to master increasingly complex skills.
The trajectory to ‘thinking’ machines
The hurdle we face is that the current generation of machines essentially learns to “cheat”, that is, to take the fastest route from data input to data output. Our big-data approaches pushed AI to super-human levels by memorising every input-output pair by rote. Compare this with the human brain, which instinctively works through a sequence of connected steps to reach the desired outcome. Even with huge datasets, our deep-neural-network machines cannot yet match this human-like approach.
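The contrast between rote memorisation and capturing an underlying rule can be made concrete with a toy sketch (this is an illustration of the general point, not an example from any of the systems discussed in this article):

```python
# Toy illustration: a "rote" learner that memorises input-output pairs,
# versus a learner that has captured the underlying rule.

# Training data for a simple task: map x to 2*x + 1.
train = {0: 1, 1: 3, 2: 5, 3: 7}

def rote_model(x):
    # Memorises every pair it has seen; knows nothing about unseen inputs.
    return train.get(x)  # None for anything outside the training set

def rule_model(x):
    # Has captured the generating rule, so it generalises.
    return 2 * x + 1

print(rote_model(2), rule_model(2))    # both succeed on seen data
print(rote_model(10), rule_model(10))  # only the rule generalises
```

Both models look equally capable when tested on the data they were built from; only off the training set does the difference show, which is exactly the weakness of a system that has memorised rather than reasoned.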
At this moment, no computer has shown true human-level artificial intelligence. However, the evolution from the early artificial intelligence of the late 1950s to today’s more complex deep learning technology has been fast, and has led to the current boom in AI. The likes of DeepMind, IBM Watson and OpenAI, as well as exciting startups such as Iris.AI, Bloomsbury.AI, Wayve.AI and Evolution.AI, are changing the industry so fast, and making the architectures so much slicker, that the future may not be too far away.
Recent exciting projects have shown significant developments in mechanistic reasoning, such as:
- In July, DeepMind revealed an AI bot playing a stripped-down version of ‘Capture the Flag’, in which the bot not only learned how to grab the flag but also developed strategies to protect itself. This ability to come up with increasingly complex strategies is reminiscent of mammalian intelligence.
- At Stanford University, Drew Hudson and Professor Christopher Manning found that compositional attention networks can ‘facilitate explicit and expressive reasoning’. This is a significant advance: such networks need less training data and less hand-crafting of neural architectures for reasoning tasks, while also supporting explainable and structured learning. The machine’s ability to choose its own reasoning steps, the number of connected steps, and the information it stores between steps is a big step forward. Compositional attention networks reduced the error rate on a question-answering task from 2.4% to 1.1%, a compelling case for applying logical reasoning at a broader scale.
- At June’s Founders Forum discussion on AI, Michael Graziano’s attention schema theory was highlighted as a good working theory of consciousness. It proposes that attributing awareness to other humans is a mechanism for modelling what other humans’ brains are focusing on. This is a necessary step towards AI that can interact and perform with other intelligent agents in the loop, including us.
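The “attention” idea underlying the second bullet can be sketched in a few lines. The following is a generic soft-attention read-out over a small memory, in the spirit of attention networks generally; it is not the actual MAC architecture from Hudson and Manning, and the numbers are made up for illustration:

```python
import math

def softmax(scores):
    # Turn raw similarity scores into weights that sum to 1.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_read(query, keys, values):
    # Score each memory slot by its similarity to the query (dot product),
    # convert the scores to weights, then read out a weighted blend of values.
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]

# Three memory slots; the query is most similar to the second key,
# so the read-out is dominated by the second value.
keys = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
values = [[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]]
query = [0.0, 2.0]
print(attention_read(query, keys, values))
```

Chaining several such reads, each one’s output shaping the next query, is what gives these architectures their sequence of connected reasoning steps, with the attention weights at each step offering a window into what the machine was “looking at”.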
We have come a long way in developing deep learning capabilities. But as technologists, our work is never done. As we continue to investigate and experiment with new approaches, the cost of giving machines new skills will come down for businesses of any size. What then follows is a greater use of smart AI to improve relationships between businesses, employees, consumers and the world around them.