Energy and the Utility of Signs
The purpose of the brain is homeostasis. More specifically, a particular variant known by the lesser-used word “allostasis”. Accepting this reveals all that is wrong with machine learning approaches to modeling brains. Permit me to explain…
Allostasis proposes that efficient regulation depends on the anticipation of needs and preparation for their satisfaction. This is a more complex form of homeostasis, which is typically defined as maintaining a system within a narrow operating range.
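The distinction can be made concrete with a minimal sketch (the variable names, rates, and thresholds here are hypothetical illustrations, not a model from the literature): a homeostat reacts only once a regulated variable leaves its operating range, while an allostat acts on a prediction of future need.

```python
# Hypothetical sketch contrasting reactive homeostasis with
# anticipatory allostasis for one regulated variable ("energy").

def homeostat(energy, setpoint=50.0):
    """React only after the variable falls below its operating range."""
    if energy < setpoint:
        return "feed"  # correct the deficit after it occurs
    return "rest"

def allostat(energy, drain_rate, horizon=5, setpoint=50.0):
    """Act on a prediction of future need, before the deficit occurs."""
    predicted = energy - drain_rate * horizon  # anticipate the loss
    if predicted < setpoint:
        return "feed"  # prepare in advance of the anticipated shortfall
    return "rest"

# At energy=60 with a steady drain, the homeostat still waits
# while the allostat already prepares.
print(homeostat(60.0))                 # rest
print(allostat(60.0, drain_rate=3.0))  # feed
```

Both regulators defend the same setpoint; the allostat simply folds a forecast into the decision, which is the essence of “anticipation of needs and preparation for their satisfaction.”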
The problem with machine learning approaches is that the formulation of the domain of stability is performed by a researcher, who explicitly defines an objective function or, in the RL paradigm, a reward function.
It is a great leap of faith to assume that the researcher can formulate a ‘god’s-eye view’.
Reality is unfortunately horrifically complex, and it is the nature of brains to livewire themselves to the environment through interaction. Brains anticipate and prepare by learning about this world.
Machine learning algorithms do very well at learning narrow tasks. That is because they perform a “direct fit” to the objective or reward function. They fit themselves to the very narrow goals of the environment they are forged in.
Objective functions are analogous to equations that describe the energy of a system. Direct-fit algorithms are like physical systems that achieve stability in the minimal-energy configuration.
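The analogy can be sketched in a few lines (a toy illustration, with an arbitrary quadratic chosen as the “energy”): the researcher writes down an energy function, and the algorithm relaxes toward its minimum by gradient descent, like a ball rolling to the bottom of a bowl.

```python
import numpy as np

# Researcher-chosen "energy" (objective) function: a quadratic bowl
# whose minimum at x = 3 represents the designated stable state.
def energy(x):
    return np.sum((x - 3.0) ** 2)

def grad_energy(x):
    return 2.0 * (x - 3.0)

# Gradient descent: the system settles into the minimal-energy
# configuration defined in advance by the researcher.
x = np.array([10.0, -5.0])
for _ in range(200):
    x -= 0.1 * grad_energy(x)

print(x)  # both components converge near 3.0
```

Note that the stability point was fixed before learning began; the algorithm never questions or revises the energy function itself, which is precisely the limitation argued below.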
But are the stability conditions described by physics the same stability conditions we find in biological homeostasis? It may seem a good metaphor, but it is entirely wrong.
It’s as wrong as the saying that thoughts are like ‘data structures’. We have a habit of using familiar metaphors to describe things that we don’t understand. It’s all too easy to fall into the trap of using a bad metaphor.
The history of science is littered with paths that were driven by metaphors that were simply wrong. The concept of the aether and GOFAI are examples of these wrong-headed metaphors.
But of all the metaphors that human civilization has invented, the one that is most insidious is a meta-metaphor. This metaphor originates in philosophy and mathematics. It was aggressively promoted in the early 1900s by Russell and Whitehead.
Gödel revealed the error in Russell and Whitehead’s formalization of mathematics. Ludwig Wittgenstein likewise attempted to solve philosophy, only to later discover the flaws in his own solution.

Wittgenstein thought he had solved philosophy when he wrote his Tractatus Logico-Philosophicus. He promptly retired from philosophy afterward, pursuing home building and elementary school teaching, only to return a second time to rework his mistake.
Wittgenstein realized that to understand words, we have to understand the context in which these words are expressed. More specifically, he called this context a ‘language game’.
In our everyday social interactions with each other, we play many kinds of language games without being explicitly conscious of it. For someone to understand what we say, that someone must unconsciously understand the language game from within which we speak.
Misunderstanding in human communication is a consequence of an impedance mismatch between the language game played by the speaker and the game assumed by the listener.
Biosemiotics is an approach to understanding biology by taking a ‘language-turn’ in its description of the horrific complexity of biology. The molecular machinery of biology achieves coordination via the propagation of signs.
Signs are not composed merely of symbols. As C.S. Peirce formulated, there is a richer milieu of signs: icons, indexes, and symbols.
I believe the correct metaphor for uncovering the mysteries of general intelligence is through the study of signs.
Because hidden underneath signs are the indexical references to energy. In its most abstract description, allostasis is the anticipation of the loss of energy and the preparation for its retrieval. But how would we know what energy is, if not for a sign?