The madness behind the method

Lucius Gregory Meredith
RChain Cooperative
Oct 29, 2020

Recently, Vitalik Buterin posted the following question: "Does living in a universe that runs by laws of physics count as being 'governed by an algorithm'? (I have my own detailed answer to this, but curious what other people's response is)" I responded because the assumptions implicit in this question make me worry that people, especially people who work in technical fields such as software engineering, have a very skewed view of science.

Folks who are not really thinking about the foundations of science may not know that the general scientific consensus is that we never know the laws of physics, or of any other field of science. We have provisional proposals that may be overturned at any juncture. For example, after 400 years of supposed stability, the physics of optics was rewritten in the last decade, exposing new optical phenomena and materials.

Many proposals that we consider to be the cornerstones of our understanding of the natural world are actually not very stable at all. Despite its success and vaunted accuracy, general relativity is not even a scientific theory. The Einstein field equations require knowing the stress-energy tensor, the distribution of matter and energy for the whole universe. That's not even testable. Only approximations of the theory are testable. Furthermore, the theory is not compositional, meaning that you can't piece together more complete descriptions of physical systems from less complete ones.

Further, as many may know, quantum mechanics, another theory with extremely high degrees of accuracy, doesn't interface well with general relativity. One attempt to reconcile the two theories, string theory, has held a hegemonic grip on physics departments around the world despite its lack of testable proposals. That a non-testable theory has held sway and directed the distribution of research resources for so many decades has recently been the subject of many popular texts. So, as soon as we scratch the surface of "algorithmic laws," we find there's not much of a code base there, and what code we do have is brittle and never provably correct.

In fact, the whole point of being a scientist is that you like living with this level of uncertainty. It energizes you and gets you up in the morning. Hand in hand with this orientation towards life goes an orientation towards the "laws" of physics. Since they can't be known, we also cannot accept, prima facie, that there is an algorithmic account of the physical universe. Certainly, many theories in modern physics use frameworks that are far outside what is computable. Thus, even with the current provisional proposals, we do not find algorithmic laws. Not to put too fine a point on it: adopting a view of an algorithmic universe is an act of faith, not science.

Now, around the same time, Vlad Zamfir and @P33RL3SS were discussing whether people need to agree on a model of what it is to be human in order to communicate. Vlad was arguing that no such model need exist in order to manage conflict between agents, while @P33RL3SS was arguing that such models are a necessity for meaningful engagement. Vlad's position was that even describing humans (and lots of other phenomena) as systems (i.e., via models) amounts to hubris, while @P33RL3SS was pointing out the failures that occur when governments don't recognize certain populations as human.

There is a natural resolution to these two positions, but the connection to the arguments above has to do with a number of implicit assumptions about model making and its role in agency, which is the very heart and soul of the scientific endeavor.

To reconcile the two positions, we have only to note that there is a growing consensus around certain aspects of cognition. Specifically, species such as corvids and Homo sapiens appear to enjoy something called "theory of mind." In a nutshell, individuals of such species are genetically wired to build mental models of other individuals of the species. Further, linguistic cognition in humans appears to be built to make use of this theory of mind. Thus, there are two things under discussion between Vlad and @P33RL3SS.

At some fundamental level, humans can't use natural language without a built-in theory of mind, i.e., without a built-in theory of what a human is. But if you ask humans to articulate what a human is, they stumble in roughly the same way they would if you asked them to give the grammar of the language they learned as a child. They know, but they don't know that they know.

There is another kind of model: the sort of model that a physicist or biologist would build to explain a theory of a particular phenomenon to her colleagues. This kind of model is higher order in the sense that the modeler is cognizant of the act of modeling. As a general rule, people do not need this kind of model to make effective decisions. Hunter-gatherers did not have to have a higher-order model of the hunt or the prey, in much the same way that a cheetah does not have to have a model of the hunt or the prey that it is conscious of or consciously manipulates. If anything, such a model would only slow it down.

From the modern scientific perspective on cognition, Vlad and @P33RL3SS are both right and both wrong. Natural selection is not really subject to hubris; it's a method of searching for solutions in a design space, and it developed species with theories of mind. So Vlad's position is much too harsh. Likewise, @P33RL3SS's position is much too extreme: sometimes higher-order modeling will produce better decisions, but emotional response, herd response, tribal response, and other built-in responses often undermine or get in the way of the effective use of these higher-order models, where people know that they know.

Updating the two different kinds of models proceeds on different time scales and via different mechanisms. Updates to the built-in theory of mind happen through natural selection. Updates to higher-order models are relatively easy for individuals, and harder for communities. But rest assured, an individual's actions are more profoundly driven by the built-in models than by the higher-order ones. That's why people can know that smoking is bad for them and still smoke.

But the troubling aspect of Vlad's position has to do with the idea that a model, be it a built-in one or a higher-order one, has to be a proposal of the totality of being of the thing being modeled. Neither natural selection nor science makes that mistake. Proposing a model or a theory isn't about imagining that said proposal describes the totality of being of a given universe of discourse. Instead, such proposals are best seen as focusing on some whole within the totality of being.

Think about it like meditation. Your breath is not the totality of your being. Yet focusing on your breath can open up worlds of experience. Likewise, focusing on some whole, aka some system, such as human cognition or human social behavior, can open up worlds in the same way focusing on your inspiration and expiration can. This is not hubris. This is diligence within a field of humility. Much in the same way that accepting that the universe is subject to algorithmic laws is an act of faith, the kind of hubris Vlad worries about arises when we imagine that our models, our scientific theories, are anything more than a meditation.
