Artificial general intelligence: is machine consciousness possible?

We have seen huge achievements in artificial intelligence (deep learning methods) and GPU computing speed. However, machine intelligence will never reach the human level, now or in the future. In particular, if we talk about consciousness in computer programs (e.g. awareness of internal and external existence), it will never emerge from the current deep learning approach. Artificial general intelligence is just another tool, like the microscope or the calculator, to extend humans' power to understand the world and to help humans do things efficiently.

2. Is strong AI possible?

If the ultimate goal of AGI is to give computer programs cognitive abilities like those of humans, then it will not happen in the next few decades, and the current approach of training deep AI models will never cultivate self-awareness. On the one hand, knowledge-based reasoning is built on the existing human knowledge space: it does not generalize well, and its depth and breadth depend on humans' exploration and exploitation of uncharted territory. Thus, its deductive inference over logical symbols cannot exceed the human level. What is more, logical reasoning and matrix multiplication are deterministic mechanical operations; they have nothing to do with consciousness. On the other hand, data-driven approaches learn deep neural models and do generalize well within a specific task. However, their inputs and outputs are also defined by humans, so they can only learn specified skills. For example, the input and output spaces in classification tasks are predefined, so the model can only be applied in the same domain. Reinforcement learning does mimic humans' trial and error to learn new tasks, but its state space, action space and rewards are also defined by humans. So the inference of an AI model can only handle the domain specified by humans, and its deterministic mathematical operations will not cultivate consciousness.
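The point about predefined input and output spaces can be made concrete with a toy sketch (the labels and weights below are hypothetical, not a real trained model): inference is nothing but deterministic arithmetic, and whatever the input, the answer must come from the label set fixed at training time.

```python
import math

# A toy classifier sketch: inference is deterministic matrix multiplication
# followed by softmax, and the output space is fixed by the designer.
LABELS = ["cat", "dog", "bird"]           # predefined by humans
W = [[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]]  # 3 classes x 2 features (toy values)

def predict(features):
    # Matrix-vector product: one logit per predefined class.
    logits = [sum(w * x for w, x in zip(row, features)) for row in W]
    exps = [math.exp(z) for z in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Whatever the input, the model can only answer with one of LABELS.
    return LABELS[probs.index(max(probs))]
```

A model like this can never output "tiger": that word simply does not exist in its output space.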
On the contrary, humans are greedy and selfish from the perspective of economics, pursuing personal rewards, which in turn maximizes the social outcome. Moreover, many individuals devote their whole lives to serving society, driving science and technology forward and exploring unknown spaces and uncharted territory. So human society cannot easily be duplicated by the trial-and-reward loop of reinforcement learning.

In the past few decades we have seen significant progress in deep learning on specific tasks, such as face recognition, object recognition and AlphaGo. Logical inference based on existing knowledge is also expanding as we explore the world. Nevertheless, humans are the upper bound of AGI, in both innovation and creativity.

2.1 Humans' innovation and creativity are unlimited

Humans, as the smartest species on the planet, can explore the world, understand phenomena and find the essence behind appearances to unravel the fabric of the universe. For example, a 17th-century story has it that a young Isaac Newton was bonked on the head by a falling piece of fruit while sitting under an apple tree, which prompted him to suddenly come up with the law of gravity. If we fed the scene/video of an apple falling from a tree to a deep neural network, it might predict "the apple falling action", but never the law of gravity (unless we hardcoded it, e.g. in a knowledge graph). Moreover, humans can make tools, including a range of cognitive tools to extend our minds and enlarge our scope. To overcome our perceptual limitations, humans have developed a suite of tools and techniques: microscopes for microscopic organisms, X-ray film to image internal tissues, Geiger counters to detect and measure ionizing radiation, the Laser Interferometer Gravitational-Wave Observatory, and so forth. In a similar way, humans use physical objects (paper), cameras and computers to disseminate knowledge and vastly increase the memory capacity of our naked brains. Mathematics is another fantastic mind-extension technology: it abstracts the world with axioms and theorems, and enables us to represent concepts that we could not think of with our bare brains. A deep AI model (ultimately AGI) is another tool to help us recognize the world, facilitate our lives and relieve us from repetitive work. At the mathematical level, it is purely matrix multiplication in the inference stage, and its input and output scope is fixed when we train it. Thus, it is a mechanical operation that will not cultivate self-awareness. In addition, humans continuously create new products and services, at a speed never seen before. For example, one task in object detection is to locate and recognize objects in a given image.
We can train our deep neural models to detect airplanes, buses, phones and so on. However, our current models cannot recognize new products. Although transfer learning is available, it still reuses existing neural networks, which are merely adapted to fit the new data. So humans are the upper bound of AGI.
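The transfer-learning point can be sketched as follows (a deliberately tiny toy, with a made-up two-feature "backbone", not a real pretrained network): the existing feature extractor is frozen, and only a new output layer is fitted to the new data. Nothing genuinely new is created; the old network is just redirected.

```python
# Transfer-learning sketch: keep a "pretrained" feature extractor fixed
# and only fit a new linear output head for the new categories.

def pretrained_backbone(x):
    # Stands in for frozen pretrained layers; never updated during training.
    return [x[0] + x[1], x[0] - x[1]]

def train_new_head(data, lr=0.1, epochs=200):
    # Only these weights are learned; the backbone above stays untouched.
    w = [0.0, 0.0]
    for _ in range(epochs):
        for x, y in data:
            f = pretrained_backbone(x)
            pred = w[0] * f[0] + w[1] * f[1]
            err = pred - y
            w[0] -= lr * err * f[0]   # gradient step on the new head only
            w[1] -= lr * err * f[1]
    return w
```

The new "skill" is entirely determined by the frozen features plus the labels we supply.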

2.2 Current machine learning approaches only fit a model to a specific task

We did see AlphaGo beat humans recently, which is often regarded as a breakthrough; a similar event happened 25 years ago, when IBM's chess-playing computer Deep Blue defeated the world chess champion, Garry Kasparov, in 1997. We admit there has been significant progress in AI over the past 30 years, and with the help of these AI models we humans can release ourselves from repetitive and tedious work and shift our focus to creative tasks. However, it is still a tool. At the technical level, it only means we design better algorithms and construct better models to fit the data. For example, on the ImageNet dataset for image recognition, deep learning models can now achieve human-level performance. However, this is still weak AI, because the model cannot make correct predictions on unseen data. In essence, its learning process is based on filters (matrix multiplications), and its input and output spaces are fixed from the beginning. What is more, an AI model's ability to beat humans does not imply brewing consciousness. The calculator is faster than humans at arithmetic; nowadays AlphaGo beats humans again, on a different but much larger search space. It just shows that AI models can handle larger and more complex domains than before, thanks to deep learning and fast GPU computing. Although there are machine learning methods to transfer or extend a model to other domains, the idea is still to fit different datasets from different domains. You can think of the AI model as just another smart tool we designed to enhance humans' ability to recognize the world. We are still in the stage of fitting inputs and outputs to improve performance; it is a totally mechanical approach (mathematical operations behind it), and it is definitely far away from true intelligence and consciousness.
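The claim that learning is "based on filters (matrix multiplications)" is easy to illustrate: a convolutional filter is just a dot product slid along the input. The one-dimensional example below (a hypothetical edge-detecting kernel) is all the arithmetic there is at this level.

```python
# Sketch: a convolutional "filter" is a dot product slid over the input;
# at bottom it is nothing but multiplication and addition.
def conv1d(signal, kernel):
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# A [1, -1] kernel responds to changes in the signal (a crude edge detector).
response = conv1d([0, 0, 1, 0, 0], [1, -1])
```

Stacking millions of such operations changes the scale, not the nature, of the computation.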

2.3 Knowledge explosion requires group collaboration

In the last 200 years we have seen a knowledge explosion expand humans' scope of exploration from microbiology to outer space. Most importantly, we can transfer our own minds to those of our fellow human beings. What makes our species unique is that we are capable of accumulating knowledge by "standing on the shoulders of giants". A population of human brains is much smarter than any individual brain in isolation. On the other hand, it is impossible for any individual to assimilate the entirety of human knowledge. In other words, each individual can master only a small research domain, and must keep learning, as humans' aspiration to discover new worlds is unlimited. Large projects that cross multiple domains require cooperation and collaboration. It goes without saying that no single scientist could unravel the mysteries of the universe, but by collaborating with their peers, scientists can extend the scope of their understanding and achieve much more than any of them could individually. The same is true for our understanding of how the human brain gives rise to consciousness, meaning and intentionality. As for AI models, machine learning approaches are limited by their input and output spaces. For most recognition tasks we have limited datasets, and we train a model to fit the dataset. If we use supervised learning with deep neural networks, the model in fact learns multiple layers of filters.
Reinforcement learning simulates humans' trial-and-error way of learning. For example, AlphaGo was trained using reinforcement learning and Monte Carlo tree search; however, its space is still limited to a full-sized 19×19 board. It beats humans much as IBM's Deep Blue did, with more powerful GPU machines and better algorithms. Collaboration between AI agents is still at an early stage, its space is predefined, and we do not see any sign of self-awareness.
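A minimal Q-learning sketch makes the "predefined space" argument concrete (the four-cell corridor, actions and reward below are all invented for illustration): every state, action and reward is fixed by the designer before learning starts, and the agent can only ever optimize inside that box.

```python
import random

# Toy Q-learning: state space, action space and reward are all fixed
# in advance by a human; the agent learns only within them.
STATES = range(4)        # a 4-cell corridor, designed by us
ACTIONS = [-1, +1]       # move left / move right, designed by us
GOAL = 3

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2):
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    rng = random.Random(0)
    for _ in range(episodes):
        s = 0
        while s != GOAL:
            # Epsilon-greedy action selection over the fixed action set.
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), GOAL)
            r = 1.0 if s2 == GOAL else 0.0    # reward defined by us
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS)
                                  - q[(s, a)])
            s = s2
    return q
```

The learned policy moves right toward the goal, but the agent cannot invent a fifth cell or a new action: the box is sealed from the outside.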

3. What can we do?

Humans are the upper bound of artificial general intelligence, and humans can also design weak AI and improve its ability. In the following, we focus on data-driven AI models and sketch a few ideas to improve them.

3.1 Enlarge the input/output space and model architecture

The current trend in the machine learning community is to design better architectures and enlarge the input/output space. On the one hand, we can devise better deep architectures with larger parameter spaces to fit the training data better; examples include CNNs, capsule networks and transformers. In addition, how to add semantic structure information (points, curves and parts) to deep networks is a challenge, and how to combine generative models with deep AI models is another hot topic. For example, we can generate objects from basic parts in a forward approach, while a data-driven AI model is learned backward from gradients. As science and technology advance and humans explore new territory, it is better to expand the inputs and outputs when training data-driven models. In classification, we can consider more complex inputs under different contexts, i.e. larger training data (more input categories, labeling everything). For example, we can expand the inputs and outputs within a domain and between domains to extend the skills an AI model learns. Today we have ImageNet for image classification; tomorrow we could label everything in any scene to train AI models to recognize everything. In addition, we could label objects at the functional level (traits, characteristics, etc.), instead of pure category labels. In reinforcement learning, it is better to handle large state and action spaces and to design better reward strategies.
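At its simplest, "enlarging the output space" means adding a new row of output weights for each new category (the labels and initial weights below are hypothetical); the new row is useless until labeled data for the new category is collected and trained on, which is exactly the human bottleneck discussed above.

```python
# Sketch: expanding a classifier's output space by appending a new
# category head (a new row of output weights, initialized to zero).
labels = ["airplane", "bus", "phone"]          # existing label set
weights = [[0.5, 0.1], [0.2, 0.7], [0.1, 0.4]] # toy per-class weights

def add_category(name, n_features=2):
    # The model only "knows" the new category after humans name it
    # and supply training data for the new weights.
    labels.append(name)
    weights.append([0.0] * n_features)

add_category("drone")
```

The architecture grows, but only in the direction a human pointed it.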

3.2 Bridge the gap between knowledge and AI model

Logical reasoning is forward inference based on existing knowledge (or a knowledge graph), while a data-driven AI model learns its parameters in a backward fashion (gradient backpropagation). Moreover, nodes in a knowledge graph and in an AI model carry different semantic meanings. For example, each node in a knowledge graph represents a fixed entity, while a hidden node in an AI model is learned and maps to an on/off state. Hence, it is not easy to merge the two approaches. One possible approach is to first map the knowledge graph into an embedding space. Then we can feed the vectorized entities/edges from the knowledge graph as input to the AI model, without updating them in the backpropagation stage. Other domain knowledge, such as taxonomies and hierarchical structures, can also be combined with data-driven AI models to improve performance.
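A minimal sketch of this proposal, with made-up entity vectors: the knowledge-graph embeddings sit in a frozen lookup table and are concatenated onto the learned features, so backpropagation never touches them.

```python
# Sketch: frozen knowledge-graph embeddings fed alongside learned features.
# The vectors below are hypothetical; in practice they would come from a
# KG-embedding method trained beforehand.
KG_EMBEDDING = {            # frozen: excluded from backpropagation
    "tiger": [0.9, 0.1],
    "grass": [0.1, 0.8],
}

def model_input(learned_features, entity):
    # Concatenate the trainable features with the fixed KG vector;
    # gradients would flow only through learned_features.
    return learned_features + KG_EMBEDDING[entity]
```

Keeping the table frozen preserves the fixed semantics of each knowledge-graph entity, which is the point of the proposal.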

Figure 1: (a) We can map a knowledge graph into an embedding space as a vector graph, then connect it to a deep AI model, which can refer to the vector graph in both the learning and inference stages. (b) Multiple agents cooperate in a given environment. Each agent takes an action given its state and receives a reward (which can differ between agents) based on its contribution.

We give an example in Fig. 1(a), where we label only the tiger as a special case (we can do the same for the person, grass, pebbles, etc.). There is much other challenging knowledge, such as mathematics and physics in different domains, that machines need to understand. In particular, we want to build a connection between knowledge and deep AI models.

3.3 Collaboration between AI agents

Humans' basic physical requirements are survival and reproduction, and collaboration is one of the social skills for surviving in the face of threats, such as hunting large animals together for food.

Collaboration is such a sophisticated behavior that it requires an advanced brain, displaying levels of intelligence seen only in humans, chimpanzees, dolphins and other smart creatures capable of higher forms of thinking. A population of human brains is more powerful and much smarter than any individual brain in isolation, and collaboration between agents can reach shared goals, for example solving puzzles or hunting food together. Multi-agent reinforcement learning, shown in Fig. 1(b), is a branch of machine learning that combines multi-agent actions to reach a specific goal in a certain environment. Compared to single-agent reinforcement learning, multi-agent reinforcement learning is evaluated and quantified not only with rewards, but also with social metrics such as collaboration, social influence and discrimination.
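One simple ingredient of the cooperative setting in Fig. 1(b) is splitting a shared team reward by each agent's contribution, as a toy sketch (the proportional rule below is one possible choice, not a standard from any particular framework):

```python
# Sketch: divide a shared team reward among agents in proportion to
# each agent's measured contribution.
def split_reward(total_reward, contributions):
    s = sum(contributions)
    return [total_reward * c / s for c in contributions]
```

Under this rule, agents that contribute more receive more, which is the "reward based on contribution" idea in the figure caption.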

Besides cooperation, AI agents should mimic human characteristics such as self-identification, self-awareness, selfishness and competition. An AI agent needs to identify and define itself, and know what specific traits, abilities and characteristics it has in relation to its social context. Competition between agents will enhance skills by evolving the AI model. So we should give each AI agent a unique ID, and also add randomness and mutation to its model parameters. No two models should be the same, and each agent should have different social traits and abilities, which can evolve (similar to evolutionary algorithms).
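The unique-ID-plus-mutation idea can be sketched in a few lines (the Gaussian mutation and parameter shapes are illustrative choices, in the spirit of evolutionary algorithms):

```python
import random

# Sketch: each agent gets a unique ID and slightly mutated parameters,
# so no two agents are identical.
def spawn_agent(agent_id, base_params, sigma=0.01, seed=None):
    rng = random.Random(seed)
    # Gaussian mutation of the shared base parameters.
    mutated = [p + rng.gauss(0.0, sigma) for p in base_params]
    return {"id": agent_id, "params": mutated}

a = spawn_agent("agent-0", [0.5, 0.5], seed=0)
b = spawn_agent("agent-1", [0.5, 0.5], seed=1)
```

Selection pressure between such agents could then drive the population-level evolution described above.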

3.4 Unify multiple weak AI models

There are many approaches to merging multiple weak models, such as bagging, boosting and feature concatenation. These methods assume that the training and testing datasets come from similar domains. For instance, bagging samples subsets of the training set to train several models, then averages those models to make a prediction, while boosting gives more weight to hard (or misclassified) samples when training the next model.
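The bagging idea described above can be sketched as follows (with a deliberately weak toy "model", the sample mean, standing in for a real learner):

```python
import random

# Bagging sketch: train weak models on bootstrap resamples of the
# training data, then average their predictions.
def train_mean_model(sample):
    # A deliberately weak "model": just the mean of its bootstrap sample.
    return sum(sample) / len(sample)

def bagged_predict(data, n_models=10, seed=0):
    rng = random.Random(seed)
    models = []
    for _ in range(n_models):
        # Bootstrap: sample with replacement, same size as the dataset.
        sample = [rng.choice(data) for _ in data]
        models.append(train_mean_model(sample))
    # Average the weak models to form the final prediction.
    return sum(models) / len(models)
```

Averaging over resamples reduces the variance of the weak learner, which is the standard motivation for bagging.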

Humans use sight, hearing, touch, taste and smell to build a clear picture of the world. In a similar way, we can build a better AI model on top of corresponding weak AI models that act as sensory organs to collect outside information and make decisions. For example, we have computer vision models to understand scenes, natural language processing (NLP) models to understand speech, radar to detect distance, and so on. By combining these models, we can build a more powerful model that overcomes any single model's perceptual limitations. Even within NLP, we can use bagging to train multiple NLP models on different domains and unify them to improve prediction accuracy.

Another issue is how to merge models from different domains. There are many ways to combine such models, such as weighted averaging and products of experts. We can also trigger other models to assist the decision when the current model cannot make a confident prediction, and we can even use one model's output as the input of another model to improve accuracy.
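Two of these combination strategies, weighted averaging and confidence-triggered fallback, can be sketched together (the weight and confidence threshold below are arbitrary illustrative values):

```python
# Sketch: weighted averaging of two models' class probabilities, plus a
# fallback that consults the second model only when the first is unsure.
def combine(probs_a, probs_b, w=0.6):
    # Weighted average of two probability vectors.
    return [w * a + (1 - w) * b for a, b in zip(probs_a, probs_b)]

def predict_with_fallback(primary, secondary, x, threshold=0.7):
    probs = primary(x)
    if max(probs) >= threshold:
        return probs                   # primary model is confident enough
    return combine(probs, secondary(x))  # otherwise blend in the second model
```

Chaining one model's output into another's input, the third strategy mentioned above, is a straightforward extension of the same plumbing.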

We can build a very large logic network, with each weak model as a node in this big decision-making network. The purpose is to boost accuracy by imitating human behavior with corresponding sensors and leveraging multiple AI models to make decisions.

To sum up, computer programs are still far away from having consciousness.
