LLMs fail without context
Context is key to human language
As children, we quickly master directions. Directional language is flexible in most languages, and important: that flexibility lets us describe spatial relationships in a myriad of ways (near, left of, above, north of, and so on).
For more on spatial language, a great resource on its cognitive semantics is Leonard Talmy's book Toward a Cognitive Semantics, written about 25 years ago. These spatial relationships seem to have fixed meanings once learned, but they rely heavily on context to resolve ambiguity where possible, and to recognize it where it cannot be resolved.
C.S. Peirce had a great model more than 100 years ago
What’s interesting about navigation is how it relates to C.S. Peirce’s theory of signs. In his model, signs can be icons (signs that resemble their objects, the way a map resembles the territory in spatial terms), indices (signs directly connected to their objects, the way smoke indicates fire), or symbols (signs connected to their objects by convention, the way most words are).
For navigation, directional terms behave like icons: their semantics relate objects that are in context, or that need to be, to one another. This is the property least likely to carry over to today’s transformer architecture, because each transformer token is mapped to a word vector: a little like catering to symbols, a little to indices to relate…
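To make that claim concrete, here is a minimal sketch of what "each token is mapped to a word vector" means at a transformer's input layer. It is a toy illustration, not any real model's code: the vocabulary, dimensions, and function names are all invented for the example.

```python
import numpy as np

# Toy illustration, not any real model's code: a transformer's input layer
# maps each token id to a fixed row of an embedding table. The mapping is
# pure convention (symbol-like in Peirce's terms) and is identical in
# every context; relating a token to its neighbors is left to attention.

rng = np.random.default_rng(0)

vocab = {"the": 0, "cup": 1, "is": 2, "left": 3, "of": 4, "plate": 5}
d_model = 8  # toy embedding width
embedding_table = rng.standard_normal((len(vocab), d_model))

def embed(tokens):
    """Context-free lookup: the same token always gets the same vector."""
    ids = [vocab[t] for t in tokens]
    return embedding_table[ids]

a = embed(["the", "cup", "is", "left", "of", "the", "plate"])
b = embed(["left", "of", "the", "cup"])

# "left" receives an identical vector in both sentences, regardless of
# which objects it relates; any disambiguation happens only downstream.
print(np.allclose(a[3], b[0]))  # True
```

The point of the sketch: at the embedding stage the mapping is symbol-like, fixed by convention; any icon-like relating of objects in space has to be reconstructed later by attention over the whole sequence.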