A Model For Understanding The Concept of Understanding

This post begins with a dog. Our last post was about sapience and learning processes; when we’d toss ideas back and forth, the family dog made for great anecdotes. Occasionally when we’d mention Zoe (the dog), she’d cock her head and look like she was trying to understand what we were saying. Cue the debates about canine cognition. The question “What Is Understanding?” isn’t limited to debates about dogs; it’s going to be a huge stumbling block as automation continues to take hold.

We’re all clear that bread slicers, factory robots, Roombas, and thousands of other machines that are automating routine manual jobs don’t understand anything about the tasks they do. But what about the systems that replace middle managers, software developers, academics, and artists? This issue isn’t purely philosophical. In the end it’s also technical and highly political. And we don’t even agree on the language we’ll use to discuss it.

So what does it mean to understand something? As with many of the terms we use in this blog, it’s easy to fall into a colloquial understanding: a know-it-when-you-see-it conceptualization that feels well-defined but eludes formality. Unfortunately this attitude leads to confusion around terms such as artificial intelligence and machine intelligence.

To remedy this confusion, we introduce a more robust definition of understanding.

Understanding is a property of a model describing both

  • the extent to which that model accurately predicts reality and
  • the extent to which that model simplifies the data it is based upon

Throughout the rest of this blog, we will lead you through the reasoning behind our definition as we discuss whether or not an artificial intelligence can have understanding.

Normally the dictionary is a good starting point for a conversation like this; in this case it almost worsens the situation. Webster’s New World College Dictionary, the dictionary sitting on the bookshelf next to me, offers fully eight substantially different definitions of the word “understand,” along with its etymological roots. These range from a literal reading of the Old English “understandan” (“to stand among, hence observe, understand”) to “to have a sympathetic rapport with.” This lexical variety muddies the waters and does not help with formalization. The second definition, given the context of artificial intelligence and machine learning, may be interesting but isn’t pertinent to this discussion. The first, older definition, however, may hold a kernel of truth: it emphasizes a personal, contextual viewpoint, which is traditionally seen as important to the process of understanding something. We believe this discussion about understanding revolves around a fundamental disconnect between syntax and semantics.

Perhaps the best example of the disconnect between syntax and semantics is the Chinese Room thought experiment.

Imagine a blank white room, furnished with a small table in the center, a chair, and a mail slot in one wall. On the table is a sheaf of paper, a pen, and a fat volume containing a large set of rules on how to transform certain symbols or sets of symbols into other symbols or sets of symbols.

A man lives in this room, and occasionally he finds that a piece of paper has been deposited through the mail slot. When this happens, he dutifully examines the symbols on that piece of paper, writes new symbols on a clean sheet from the sheaf on the table using the symbol transformation rules found in the book, and deposits the paper back through the mail slot when finished.

He does not know what the symbols on the papers he receives mean and does not understand the symbols in the book. He only understands how to use the book to turn some symbols into other symbols.

If we think about that book of rules from the example above, and further imagine that the transformation rules in the book are actually a complete set of rules for carrying on conversations in a Chinese dialect, then something strange happens. We find that when we write messages in Chinese and pass them through the mail slot to the man in the room, he responds in kind with perfectly intelligible Chinese. Thus we can carry out a conversation, while the man in the room understands no Chinese and nothing of the conversation he participates in. The man in the room uses the syntax of Chinese (from his book of rules), but he does not understand the semantics, context, or meaning of those same symbols.
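To make the purely syntactic nature of the room concrete, here is a toy sketch in Python. The rule table, the messages, and the fallback reply are all invented for illustration; a real conversational rulebook would be astronomically larger than any lookup table.

```python
# Toy sketch of the Chinese Room: the "man" mechanically applies lookup rules
# to symbols he cannot read. The RULE_BOOK below is an invented, tiny stand-in
# for the book of transformation rules described above.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",  # "How's the weather?" -> "The weather is nice."
}

def room(message: str) -> str:
    """Apply the rule book mechanically; no meaning is consulted anywhere."""
    # Fallback reply means "Please say that again."
    return RULE_BOOK.get(message, "请再说一遍。")

print(room("你好吗？"))  # prints a fluent reply the "man" cannot understand
```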

Clearly then it is the semantics, and not the syntax, of communication that is interesting. Unfortunately the path to a reasonable definition of understanding is still murky. Some machine learning algorithms seem to be able to parse the semantics of texts in a manner which is eerily human-like.

In 2013 a group of researchers at Google published the word2vec toolset which, using a variety of machine learning techniques, computes vector representations of words. While vector representations sound esoteric, the way they work is actually pretty intuitive. Once computed, the vector representations of words can be added to or subtracted from each other to examine relationships between words.

In a paper published by some of the authors of word2vec, sample relationships included…

Paris - France + Italy = Rome

Einstein - scientist + Picasso = painter

Japan - sushi + Germany = bratwurst

It’s easy to see that the algorithms which generated these word vectors captured genuine analogies, which are solidly semantic relationships.
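For readers who want to try this arithmetic themselves, here is a minimal sketch in Python. It assumes the gensim library and its downloadable pretrained “word2vec-google-news-300” vectors, neither of which the original paper requires; the exact nearest neighbors you get will depend on the vectors you load.

```python
# Minimal sketch of word-vector analogies (assumes: pip install gensim and
# a one-time ~1.6 GB download of pretrained Google News word2vec vectors).
import gensim.downloader as api

model = api.load("word2vec-google-news-300")  # returns a KeyedVectors object

# "Paris - France + Italy ≈ ?": most_similar adds the positive vectors,
# subtracts the negative ones, and returns nearest neighbors by cosine similarity.
print(model.most_similar(positive=["Paris", "Italy"], negative=["France"], topn=3))

# "Einstein - scientist + Picasso ≈ ?"
print(model.most_similar(positive=["Einstein", "Picasso"], negative=["scientist"], topn=3))
```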

Admittedly, the analogical relationships generated by the word2vec toolset represent a fairly basic semantic comprehension of the training text. Human children can reasonably be expected to perform similar analogical-reasoning tasks, as part of a holistic set of cognitive capabilities, between the ages of three and four. Basic comprehension is still comprehension, though. If a three-year-old child has the ability to understand, then it seems fair to say that word2vec’s analogical relationships demonstrate some understanding.

Coming back around to the beginning and looking at the tangle of my dictionary, the closest definition of the bunch seems to be number 7: “to know thoroughly; grasp or perceive clearly and fully the nature, character, functioning, etc. of.” This is still not very analytic (a quality necessary if we want to measure understanding), although it clearly captures the importance of semantics. It does, however, lead into the ideas of Gregory Chaitin, who says “Understanding is compression, comprehension is compression!” By this cryptic phrase he succinctly summarizes the idea that understanding can be seen as the degree to which a theory simplifies the data it attempts to explain. Expanding on these ideas, we arrive at the definition we will use in our blog from here on out:

Understanding is a property of a model describing both

  • the extent to which that model accurately predicts reality and
  • the extent to which that model simplifies the data it is based upon
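One way to make this two-part definition concrete is a rough, minimum-description-length-flavored score: reward a model for predicting well and penalize it for being as large as the data it explains. The sketch below is our own toy illustration with invented data and an invented scoring function, not a standard metric.

```python
# Toy sketch: score two "models" of the same data by (a) how well they predict
# and (b) how much they simplify. The score below is an invented illustration.
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(20.0)
y = 3.0 * x + 1.0 + rng.normal(scale=0.5, size=x.size)  # noisy straight line

# Model A: a 2-parameter linear fit (compresses 20 observations into 2 numbers).
slope, intercept = np.polyfit(x, y, 1)
pred_a, params_a = slope * x + intercept, 2

# Model B: memorize every observation (perfect "fit", zero simplification).
pred_b, params_b = y.copy(), y.size

def description_length(pred, n_params, y):
    """Crude proxy: parameters stored plus squared error still left to explain."""
    return n_params + np.sum((y - pred) ** 2)

print("linear fit:", description_length(pred_a, params_a, y))
print("memorizer :", description_length(pred_b, params_b, y))
# The linear fit predicts nearly as well with far fewer parameters, so under
# this toy score it "understands" the data better than raw memorization.
```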

At first it may seem odd to describe understanding as a property of a static object like a model. As humans, we are used to seeing understanding in light of our own cognitive abilities, which by nature seem dynamic.

Our definition also diminishes the “mystery” factor behind understanding. When you say “I understand,” what you really mean is “I have a mental model which understands.” When you work to develop an understanding of something, you are working to improve a mental model of that thing. Mental models are something that can be, and already are, objectively studied.

As machine learning techniques become more advanced, and robots become more capable, having good measures by which to compare their performance to human performance will become more valuable. Eventually, the issue of whether or not a robot can be conscious will need to be broached; but in the meantime, it will be necessary to have standards by which to judge machine intelligences which don’t rely on the answer to the question of consciousness.

The more we look, the more difficult it becomes to differentiate ourselves from the machines we create, and it is increasingly important to avoid a human bias when thinking about what it is that makes us tick. Not so long ago, it would have been unthinkable to suggest that mere animals could understand abstract concepts, display empathy, reason, or have emotions. While machines may currently lag far behind, they’re quickly catching up to us. It’s going to be a crowded future.

In coming posts, we will discuss some of the major economic forecasts dealing with automation, and look at some attempts to quantify and measure creativity.


Originally published at www.imitatingmachines.com.