What does it mean for a machine to “understand”?

Thomas G. Dietterich
Oct 27, 2019

Critics of recent advances in artificial intelligence complain that although these advances have produced remarkable improvements in AI systems, these systems still do not exhibit “real”, “true”, or “genuine” understanding. The use of words like “real”, “true”, and “genuine” implies that “understanding” is binary: a system either exhibits “genuine” understanding or it does not. The difficulty with this way of thinking is that human understanding is never complete and perfect. In this article, I argue that “understanding” exists along a continuous spectrum of capabilities. Consider, for example, the concept of “water”. Most people understand many properties of water: it is wet, you can drink it, plants need it, it forms ice if chilled, and so on. But unfortunately, many people do not understand that water is an electrical conductor and, therefore, that one should not use a blowdryer in the shower. Nonetheless, we do not say of those people that they lack “real”, “true”, or “genuine” understanding of “water”. Instead, we say that their understanding is incomplete.

We should adopt this same attitude toward assessing our AI systems. Existing systems exhibit certain kinds of understanding. For example, when I tell Siri “Call Carol” and it dials the correct number, you will have a hard time convincing me that Siri did not understand my request. When I ask Google “Who did IBM’s Deep Blue system defeat?” and it gives me an infobox with the answer “Kasparov” in big letters, it has correctly understood my question. Of course this understanding is limited. If I follow up my question to Google with “When?”, it gives me the dictionary definition of “when” — it doesn’t interpret my question as part of a dialogue.

The debate over “understanding” goes back to Aristotle and was perhaps most clearly articulated in Searle’s Chinese Room argument (Searle, 1980). I encourage people to read Cole’s excellent article in the Stanford Encyclopedia of Philosophy (Cole, 2014). My position is a form of functionalism: we characterize understanding functionally, and we assess the contribution of the various internal structures in the brain or in an AI system according to their causal role in producing the measured function.

From a software engineering perspective, functionalism encourages us to design a series of tests to measure the functionality of the system. We can ask a system (or a person), “What happens if I chill water to –20 degrees?” or “What could happen if I use a blowdryer in the shower?” and measure the responses. To the extent that the responses are appropriate, we can say that the system understands; to the extent that they are wrong, we have uncovered cases where it does not.
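To make this concrete, here is a minimal sketch (in Python) of what such a battery of tests might look like. The `system.respond` interface and the simple keyword-based judges are assumptions made for illustration only; a real evaluation would need far more careful probes and scoring.

```python
# A minimal sketch of one behavioral test battery. The `respond` method
# and the keyword-matching judges are hypothetical, invented for illustration.

WATER_TESTS = [
    ("What happens if I chill water to -20 degrees?",
     lambda answer: "freeze" in answer.lower() or "ice" in answer.lower()),
    ("What could happen if I use a blowdryer in the shower?",
     lambda answer: "shock" in answer.lower() or "electrocut" in answer.lower()),
]

def score_understanding(system, tests=WATER_TESTS):
    """Return the fraction of probe questions answered appropriately.

    The result is a point on a spectrum, not a binary verdict of
    "genuine" versus "fake" understanding.
    """
    passed = sum(judge(system.respond(question)) for question, judge in tests)
    return passed / len(tests)
```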

In order for a system to understand, it must create linkages between different concepts, states, and actions. Today’s language translation systems correctly link “water” in English to “agua” in Spanish, but they don’t have any links between “water” and “electric shock”.
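As a toy illustration (not a description of any real translation system), one can picture these linkages as edges in a small concept graph. A system whose graph contains only the translation edge can map “water” to “agua” but cannot connect “water” to “electric shock”. The relation names and the `follow` helper below are invented for the example.

```python
# Toy concept graph: (concept, relation) -> linked concept.
CONCEPT_LINKS = {
    ("water", "translation_es"): "agua",
    ("water", "state_below_0C"): "ice",
    ("water", "conducts"): "electricity",
    ("electricity", "contact_when_wet"): "electric shock",
}

def follow(concept, relation):
    """Return the linked concept, or None if the linkage is missing."""
    return CONCEPT_LINKS.get((concept, relation))

# A translation-only system would have the first link but not the last two,
# so it could translate "water" yet fail the blowdryer-in-the-shower probe.
```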

Much of the criticism of the latest AI advances stems from two sources. First, the hype surrounding AI (generated by researchers, the organizations they work for, and even governments and funding agencies) has reached extreme levels. It has even engendered fear that “superintelligence” or the “robot apocalypse” is imminent. Criticism is essential for countering this nonsense.

Second, criticism is part of the ongoing debate about future research directions in artificial intelligence and about the allocation of government funding. On one side are the advocates of connectionism, who developed deep learning and who support continuing that line of research. On the other side are the advocates of AI methods based on the construction and manipulation of symbols (e.g., using formal logic). There is also a growing community arguing for systems that combine both approaches in a hybrid architecture. Criticism is also essential for this discussion, because we in the AI community must continually challenge our assumptions and choose how to invest society’s time and money in advancing AI science and technology. However, I object to the argument that says “Today’s deep learning-based systems don’t exhibit genuine understanding, and therefore deep learning should be abandoned”. This argument is just as faulty as the argument that says “Today’s deep learning-based systems have achieved great advances, and pursuing them further will ‘solve intelligence’.” I like the analysis by Lakatos (1978) that research programmes tend to be pursued until they cease to be fruitful. I think we should continue to pursue the connectionist programme, the symbolic representationalist programme, and the emerging hybrid programmes, because they all continue to be very fruitful.

Criticism of deep learning is already leading to new directions. In particular, the demonstration that deep learning systems can match human performance on various benchmark tasks and yet fail to generalize to superficially very similar tasks has produced a crisis in machine learning (in the sense of Kuhn, 1962). Researchers are responding with new ideas such as learning invariants (Arjovsky, et al., 2019; Vapnik & Izmailov, 2019) and discovering causal models (Peters, et al., 2017). These ideas are applicable to both symbolic and connectionist machine learning.

I believe we should pursue advances in the science and technology of AI without engaging in debates about what counts as “genuine” understanding. I encourage us instead to focus on which system capabilities we should be trying to achieve in the next 5, 10, or 50 years. We should define these capabilities in terms of tests that we could perform on an AI system to measure whether it possesses them. To do this, the capabilities must be operationalized. In short, I’m arguing for test-driven development of AI. This will require us to translate our fuzzy notions of “understanding” and “intelligence” into concrete, measurable capabilities. That in itself can be a very useful exercise.

Operational tests need not consider only the input-output behavior of the AI system. They can also examine the internal structures (data structures, knowledge bases, etc.) that produce this behavior. One of the great advantages that AI has over neuroscience is that it is much easier for us to perform experiments on our AI systems to understand and evaluate their behavior. A word of caution is in order, however. Connectionist methods, including deep learning, often create internal structures that are difficult to interpret, and it seems that this is true of brains as well. Therefore, we should not set as a research goal ensuring that certain structures (e.g., symbolic representations) are present. Rather, we should focus on the desired behavioral capabilities and ask how the internal mechanisms achieve those capabilities. For example, to carry out a successful dialogue, each participant must keep track of the history of the interaction. But there are many ways to do this (one explicit possibility is sketched below), and we should not necessarily expect to find an explicit history memory inside a deep learning system. Conversely, just because we have programmed a specific internal structure doesn’t mean that it behaves the way we intended. Drew McDermott, in his famous critique “Artificial Intelligence Meets Natural Stupidity” (McDermott, 1976), discussed this problem at length.
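As a purely illustrative sketch (the class and method names are invented, and no particular system works this way), here is one explicit way a system could keep track of dialogue history. A deep learning system might realize the same capability implicitly in its hidden state, with nothing resembling this data structure inside.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class DialogueState:
    """One explicit way to track dialogue history: a simple turn log."""
    turns: List[Tuple[str, str]] = field(default_factory=list)  # (speaker, utterance)

    def add(self, speaker: str, utterance: str) -> None:
        self.turns.append((speaker, utterance))

    def last_user_utterance(self) -> Optional[str]:
        """Most recent user utterance, so that a follow-up such as "When?"
        can be interpreted as continuing the previous question."""
        for speaker, utterance in reversed(self.turns):
            if speaker == "user":
                return utterance
        return None
```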

One consequence of the repeated waves of AI advances and criticisms is known as the “AI Effect”, in which the AI field is viewed as a failure because the state-of-the-art systems do not exhibit “true understanding” or “real intelligence”. The result is that AI successes are dismissed, and funding declines. For example, there was a time when playing chess or Go at human level was regarded as a defining criterion of intelligence. But when Deep Blue defeated Kasparov in 1997 (Campbell, et al., 2002), one prominent AI researcher argued that beating humans at chess was easy; to show real intelligence, one must solve the “truck backer-upper problem,” which involves backing an articulated semi-trailer truck into a parking space (personal communication). In fact, Nguyen and Widrow had already solved this problem nearly a decade earlier using reinforcement learning (Nguyen & Widrow, 1989). Today, many thoughtful critics are again proposing new tasks and new sets of necessary or sufficient conditions for declaring that a system “understands”.

Meanwhile, AI research and development is delivering ever-more-capable systems that are providing value to society. It is important, both for intellectual honesty and for continued funding, that we AI researchers claim credit for our successes and take ownership of our shortcomings. We must suppress the hype surrounding new advances, and we must objectively measure the ways in which our systems do and do not understand their users, their goals, and the broader world in which they operate. Let’s stop dismissing our successes as “fake” and not “genuine”, and let’s continue to move forward with honesty and productive self-criticism.

References

Arjovsky, M., Bottou, L., Gulrajani, I., & Lopez-Paz, D. (2019). Invariant Risk Minimization. arXiv:1907.02893. http://arxiv.org/abs/1907.02893

Baudiš, P., & Gailly, J.-L. (2012). Pachi: State of the art open source Go program. In Advances in Computer Games (pp. 24–38). Springer.

Campbell, M., Hoane, A. J., & Hsu, F. H. (2002). Deep Blue. Artificial Intelligence, 134(1–2), 57–83. https://doi.org/10.1016/S0004-3702(01)00129-1

Cole, D. (2014). The Chinese Room Argument. The Stanford Encyclopedia of Philosophy https://plato.stanford.edu/entries/chinese-room/

Kuhn, T. S. (1962). The Structure of Scientific Revolutions (1st ed.). University of Chicago Press.

Lakatos, I. (1978). The Methodology of Scientific Research Programmes: Philosophical Papers Volume 1. Cambridge: Cambridge University Press.

McDermott, D. (1976). Artificial intelligence meets natural stupidity. ACM SIGART Bulletin (57), 4–9.

Nguyen, D. S., & Widrow, B. (1989). The truck backer-upper: An example of self-learning in neural networks. International Joint Conference on Neural Networks (IJCNN 1989), vol. 2, 357–363.

Peters, J., Janzing, D., & Schölkopf, B. (2017). Elements of Causal Inference: Foundations and Learning Algorithms. MIT Press, Cambridge, MA.

Searle, J. (1980). Minds, Brains and Programs. Behavioral and Brain Sciences, 3(3), 417–457.

Vapnik, V., & Izmailov, R. (2019). Rethinking statistical learning theory: learning using statistical invariants. Machine Learning, 108(3), 381–423. https://doi.org/10.1007/s10994-018-5742-0

Written by Thomas G. Dietterich

Distinguished Professor Emeritus, Oregon State University. Former President, AAAI, IMLS. ArXiv moderator for cs.LG.