There’s no Ambiguity, but Infinite Intensions: On Ambiguity, Language, and the Language of Thought

Walid Saba, PhD
Published in ONTOLOGIK
6 min read · Jun 23, 2024

Infinite Intensions

Let us start with a simple, and seemingly unambiguous object, namely an arithmetic expression:

(1) 2 + ((2 ^ 3) * (3 + 7))

Like any other object (including a painting, a musical note, a video, a sentence in some natural language, a relation in a database), the expression in (1) has form (think structure, syntax) and content (think meaning, semantics). The form of the expression can be described by a tree-like structure, as shown in Figure 1.

Figure 1. The syntax tree of the arithmetic expression in (1)

But what about the meaning (every form has content)? Well, it depends. The ‘content’ of the expression in (1) is infinite — yes, infinite! One part of the content is the (decimal) value, which in this case is 82. This part of the content is stored in an attribute (loosely speaking, a feature, a property) that we can call VALUE. But there’s a lot more content in that expression — or, there are many ways we can describe that expression other than by its value. For example, the expression in (1) can also be described by NUMBER_OF_OPERATORS, which in this case is 4, or by the NUMBER_OF_ODD_NUMBERS, which in this case is 3. How many attributes do we have in total — that is, in how many ways can we describe the expression in (1)? Again, we have an infinite number of attributes (descriptions)! Consider this:

DECIMAL_VALUE
NUMBER_OF_OPERATORS
NUMBER_OF_OPERANDS
NUMBER_OF_ODD_NUMBERS
NUMBER_OF_EVEN_NUMBERS
NUMBER_OF_PARENTHESES
NUMBER_OF_EVEN_NUMBERS_LESS_THAN_100
NUMBER_OF_ODD_NUMBERS_LESS_THAN_100
NUMBER_OF_EVEN_NUMBERS_LESS_THAN_200
NUMBER_OF_EVEN_NUMBERS_LESS_THAN_202
NUMBER_OF_EVEN_NUMBERS_LESS_THAN_203
NUMBER_OF_EVEN_NUMBERS_LESS_THAN_204
etc.

Sure, most of these attributes are usually not relevant to us (in most cases the VALUE is what matters), but nevertheless there are an infinite number of ways one can describe the ‘content’ of that ‘form’.
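
To make this concrete, here is a minimal sketch in Haskell that represents the expression in (1) and computes just three of its (infinitely many) attributes; the names Expr, value, numOperators, and numOddNumbers are only illustrative choices:

-- the expression in (1) as a small syntax tree
data Expr = Num Int
          | Add Expr Expr
          | Mul Expr Expr
          | Pow Expr Expr

-- 2 + ((2 ^ 3) * (3 + 7))
expr1 :: Expr
expr1 = Add (Num 2) (Mul (Pow (Num 2) (Num 3)) (Add (Num 3) (Num 7)))

-- DECIMAL_VALUE
value :: Expr -> Int
value (Num n)   = n
value (Add a b) = value a + value b
value (Mul a b) = value a * value b
value (Pow a b) = value a ^ value b

-- NUMBER_OF_OPERATORS
numOperators :: Expr -> Int
numOperators (Num _)   = 0
numOperators (Add a b) = 1 + numOperators a + numOperators b
numOperators (Mul a b) = 1 + numOperators a + numOperators b
numOperators (Pow a b) = 1 + numOperators a + numOperators b

-- NUMBER_OF_ODD_NUMBERS
numOddNumbers :: Expr -> Int
numOddNumbers (Num n)   = if odd n then 1 else 0
numOddNumbers (Add a b) = numOddNumbers a + numOddNumbers b
numOddNumbers (Mul a b) = numOddNumbers a + numOddNumbers b
numOddNumbers (Pow a b) = numOddNumbers a + numOddNumbers b

-- value expr1         == 82
-- numOperators expr1  == 4
-- numOddNumbers expr1 == 3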

Now that (I hope) I have convinced you that objects have infinite intensions, that they can potentially be described in an infinite number of ways even when they are as simple and as seemingly unambiguous as an arithmetic expression, let me move on to other types of objects, say my car. While the landlord wants the MODEL, MAKE, and LICENSE_PLATE (and perhaps the COLOR) of my car when assigning me a parking spot, an insurance company (which perhaps does not care about the color) wants attributes the landlord does not care about, such as the PRICE, the YEAR, the MILEAGE, etc. But in general, there are an infinite number of descriptions I can give of my car; that is, the "intension" of my car object is an infinite set (e.g., there's the WEIGHT, the NUM_OF_ROADS_DRIVEN_ON, the NUM_OF_ROADS_DRIVEN_ON_AT_9PM, the NUM_OF_ROADS_DRIVEN_ON_AT_9:05PM, etc.). Again, many of these descriptions are (perhaps!) not relevant to anyone, but they are nonetheless descriptions (attributes) of my car.
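
In programming terms, the landlord and the insurance company simply project different subsets of the same object's attributes. Here is a minimal sketch in Haskell; the Car type and its field names are only illustrative and obviously far from exhaustive:

-- one Car object, and two "views" selecting different subsets of its attributes
data Car = Car
  { make         :: String
  , model        :: String
  , licensePlate :: String
  , color        :: String
  , price        :: Double
  , year         :: Int
  , mileage      :: Int
  }

-- what the landlord cares about
landlordView :: Car -> (String, String, String, String)
landlordView c = (make c, model c, licensePlate c, color c)

-- what the insurance company cares about
insurerView :: Car -> (Double, Int, Int)
insurerView c = (price c, year c, mileage c)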

The fact that objects (all objects) have an infinite intension (an infinite number of ways they can be described) is related to (what we call) ambiguity. Consider the picture in figure 2.

Figure 2. A picture of a farm. In how many ways can you describe this picture?

Like an arithmetic expression (and like a Spanish or a Japanese sentence), the picture in Figure 2 has form (think structure/syntax) and content (think meaning/semantics). And like any other object it has an infinite intension; that is, it can be described in an infinite number of ways, although, admittedly, many of those descriptions (attributes) are not relevant to most of us. For example, while NUM_OF_PIGS might be a relevant attribute to some, NUM_OF_BASEBALL_GLOVES is not. The point I wish to make here is this: different people 'see' different attributes of the content, and the same person might, at different points in time, 'see' different attributes. We mistake this for 'ambiguity': we say the picture is ambiguous because different people read (interpret) it differently, or because the same person interprets it differently at different points in time. But the picture is innocent; it is not ambiguous. It has a single content with an infinite number of attributes, and we decide to see a different subset of them at different points in time, depending on 'our' context, i.e., our 'frame of mind' at that moment. The same applies to a musical note, a painting, a building, or a mountain. Every object has an infinite number of attributes, and we decide what to 'see' in it. Objects are innocent, and their content is not ambiguous; it is only because different people zero in on different subsets of the intension, because different people see different things, that we call this ambiguity.

The Language of Thought

Having established that all objects have form (structure/syntax) and content (meaning/semantics), it follows that there is a formal language that describes any object. When we think of (conceive) these objects we also think of their composition and, depending on the "type" of the object, their negation, their part-whole relationships, their size and enlargements, their rotations, their orientation, their length, etc. These mental operations happen according to some rules, and they occur in some language, a language that Jerry Fodor called the Language of Thought. It is the language we think in. It is not English, nor Persian, nor Japanese, nor Chinese, nor Portuguese; it is Mentalese. And the Language of Thought must be a formal language, since when we agree on a specific attribute we fetch the same 'data'.

The reason for all of the above is a recent article in Nature that concludes that we do not think in language. The authors base their conclusion on evidence from subjects with impairments in their language network who nevertheless demonstrated thinking and reasoning on many tasks. They then conclude that language is a tool for communication rather than a biological faculty of the human species that is central to thinking and reasoning.

The authors are correct if by language they mean the external languages we use to encode our thoughts. But the Language of Thought hypothesis is not about these external languages (E-languages); it is about the internal language of thought, the I-language we use to compose and manipulate the thoughts we form about objects and concepts, objects that, as argued above, have infinite intensions. The selection of a subset of an intension happens according to some rules. For example, while the number of ways an object can be described is potentially infinite, dreams do not have a HEIGHT, tables do not make PLANS, and mountains do not RUN. The attribution, composition, and manipulation of the objects of cognition (the 'thinking') must happen in some formal language, otherwise we could not convey our thoughts to one another.
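
As a rough sketch of such selection rules (all types and names below are merely illustrative), one can think of attributes as typed functions, so that an ill-formed attribution is not just false but cannot even be composed:

-- attributes are typed by the kind of object they apply to
data Mountain = Mountain { mountainName :: String, heightInMeters :: Double }
data Dream    = Dream    { dreamTopic   :: String }
data Person   = Person   { personName   :: String }

-- HEIGHT applies to mountains ...
height :: Mountain -> Double
height = heightInMeters

-- ... and RUN applies to people
runs :: Person -> Bool
runs _ = True

-- height (Dream "flying")              -- rejected: dreams have no HEIGHT
-- runs (Mountain "Mont Blanc" 4808.0)  -- rejected: mountains do not RUN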

That the Language of Thought is a formal language is just a corollary of Immanuel Kant’s very first sentence of his Logic monograph:

Everything in nature, both in the lifeless and in the living world, takes place according to rules, although we are not always acquainted with these rules*.

If Kant and Fodor are correct, then we are still very far from anything we can call AGI, because mere approximation of data by some continuous function is certainly not going to take us very far. Our cognitive and conceptual apparatus, and the language of thought we use to manipulate and think about the world (a world of objects with infinite intensions), still elude us.

__
* Incidentally, I am always amused by those who contrast neural networks with "rule-based systems" as if the two have nothing to do with each other. Besides Kant's statement that "everything happens according to rules", it is easy to show that a neural network, which is nothing but a series of function compositions, is actually a series of composed rules, since every function is, in the end, a rule. For example, the definition

length []       = 0
length (x : xs) = 1 + length xs

can be rewritten as

s.LENGTH = 0 if s ::= [ ]
s.LENGTH = 1 + xs.LENGTH if s ::= (x : xs)

See this article that proves neural networks are a collection of many rules and that all computation is defined, in the end, by rules.
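
For what it is worth, here is a rough sketch of that point in Haskell (the weights, shapes, and names are made up for illustration): a toy feed-forward network is literally a composition of functions, each of which is a rule.

-- one activation function (one rule)
relu :: Double -> Double
relu x = max 0 x

-- one neuron: a weighted sum plus a bias, followed by an activation (one rule)
neuron :: ([Double], Double) -> [Double] -> Double
neuron (ws, b) xs = relu (b + sum (zipWith (*) ws xs))

-- a layer: several such rules applied to the same input
layer :: [([Double], Double)] -> [Double] -> [Double]
layer params xs = [ neuron p xs | p <- params ]

-- a network: layers composed with layers, i.e., rules composed with rules
network :: [Double] -> [Double]
network = layer [([0.5, -1.0], 0.0)]
        . layer [([1.0, 2.0], 0.1), ([0.3, 0.4], -0.2)]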
