Motivations

Bruno Monteiro
Published in Synesism
Oct 16, 2016

Many might object to the very idea of a theory of everything, and they can summon various highly relevant arguments against one. Scientifically speaking, there is of course the issue that even IF we had one, we'd never know it for sure, because science doesn't work that way: we can at most establish what the case is up to an arbitrary level of precision, but there is no such thing as a final stage of knowledge, only ever-improving approximations. A final theory would also clash with one of science's main tenets, falsifiability: science deals only in contingent truths, which may stand or fall depending on the current state of our knowledge but are never beyond questioning; nothing is set in stone, and any theory can be discarded as soon as a single piece of incontrovertible evidence turns up against it. Science has no pretension of providing insight into what's 'really out there', only into the tiny sliver of objective reality available to both our senses and intellect; in essence, science deals in negatives.

That brings us to yet another point: even if there were a Theory of Everything (TOE), what makes us think we'd be able to grasp it? The universe is far bigger and more complex than we can even imagine, let alone figure out the workings of. Our brains are amazing devices, but they still evolved in a very constrained environment for very specific purposes, and it shouldn't surprise anyone if they fell short of the immense processing power needed to crack the reality code. Of course we're an industrious species, and if anything we're very capable when it comes to augmenting ourselves through technology, but even with things like quantum computers and artificial intelligence there are clear lines drawn in the sand in terms of what we can accomplish. Gödel's incompleteness theorems assure us that any formal picture of arithmetic will always be either incomplete or inconsistent; similarly, the halting problem puts strict limits on our ability to predict how our programs (ourselves?) will behave in the indefinite future.
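
To see why, here is the classic diagonal argument behind the halting problem, sketched in Python; the halts oracle is hypothetical, which is precisely the point:

```python
def halts(program, argument):
    """Hypothetical oracle: returns True iff program(argument) eventually halts."""
    ...  # no such implementation can exist; suppose for contradiction one did

def spite(program):
    """Do the opposite of whatever the oracle predicts for program run on itself."""
    if halts(program, program):
        while True:  # oracle says it halts, so loop forever
            pass
    # oracle says it loops, so halt immediately

# Feed spite to itself: if halts(spite, spite) returns True, spite(spite)
# loops forever; if it returns False, spite(spite) halts. Either way the
# oracle is wrong, so no total, correct halts() can exist (Turing, 1936).
```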

In physics, the uncertainty principle slashes any hope of achieving arbitrary precision in our experiments (conjugate quantities like position and momentum cannot both be pinned down at once), and thermodynamics assures us that we'll always get out less than we chipped in. Worse: the universe seems to be marching relentlessly toward disorder, so even our best efforts to impose structure and meaning on the world will eventually be undone.
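
For the record, the two textbook statements being invoked here are the following (standard forms, not claims original to this essay):

```latex
% Heisenberg uncertainty relation: the spreads of position and momentum
% cannot both be made arbitrarily small.
\Delta x \, \Delta p \ge \frac{\hbar}{2}

% Second law of thermodynamics: the entropy of an isolated system
% never decreases.
\Delta S \ge 0
```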

With a list like that one, I wouldn’t blame anyone for adopting a fatalist approach to the whole question, and yet I find myself confident that it is a worthwhile pursuit. Here are a few reasons why:

  • for all our shortcomings, the universe is still incredibly amenable to inquiry; there has never been a single phenomenon so otherworldly that we couldn't at least think about it (even if many still lack sensible explanations);
  • not only that, it bears an uncanny resemblance to our mental models. Even when the products of the mind seem completely detached from anything tangible, they often turn out to map onto something in the external world with amazing accuracy. The physicist Eugene Wigner dubbed this the 'unreasonable effectiveness of mathematics in the natural sciences', but it stretches well beyond formal reasoning and the natural sciences;
  • in every discipline of human activity there appears to be an underlying theme of unification. Trends and advances often occur when seemingly disparate fields are found to share (or are given) some sort of connection through which their statements can be mutually translated into one another; one needn't look any further than the amazing work done in physics ever since Newton, the Langlands program in mathematics, or what modernism tried to achieve in the arts;
  • what's more, one can argue that there's nothing surprising about that phenomenon at all: nature itself, the ultimate benchmark and source of all human inspiration, is an endlessly integrated whole whose constituents are constantly being reshuffled and repurposed, and where every boundary we try to impose falters on closer examination.

And if the ultimate goal of our models and conjectures is, after all, to reproduce the results we see in nature and to understand the mechanisms behind its phenomena, it seems reasonable to expect that they will also reproduce its most remarkable features (like the seamless integration hinted at above).

Moreover, there are reasons of economy for preferring a single shared explanation to many different ones. A theory's expressiveness (i.e., the amount of information that can be retrieved from its statements) is also a measure of its richness and predictive power, and can be used to increase our leverage over the natural world. Suppose, for instance, that we have two individuals, A and B. A has a good understanding of many different areas of knowledge, but each is like an island, distinct and isolated from the others; B has a similar understanding of many different fields but ignores the divisions between them, treating what A would call completely disconnected disciplines as simply particular cases of an overarching theory. B enjoys some very clear advantages over his fellow A. For one, it would take far less effort (read: resources, of all kinds) for B to become as versed as A: instead of churning through book after book for each subject of interest, he can simply learn the general principles behind them and work out their derivations. He can also see exactly how the fields relate to each other and in what respects they diverge, which lets him transpose techniques and knowledge from one field to another, giving him a far broader set of tools to work with and a way to explore new, yet undiscovered realms with nothing but deductive reasoning.

Think of the student who scoffs at having to learn Maxwell's four very succinct equations; does he imagine the trouble it would be to make sense of electromagnetic phenomena without them, forced instead to account for the individual behavior of every single electron? Would the modern world of electronic devices even be possible if that were the case? The list of examples of this sort goes on and on.
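
For reference, here are the four equations in question, in their textbook differential (SI) form; nothing in them is specific to this essay:

```latex
% Maxwell's equations in differential form (SI units)
\begin{aligned}
\nabla \cdot \mathbf{E} &= \frac{\rho}{\varepsilon_0}
  && \text{(Gauss's law)} \\
\nabla \cdot \mathbf{B} &= 0
  && \text{(no magnetic monopoles)} \\
\nabla \times \mathbf{E} &= -\frac{\partial \mathbf{B}}{\partial t}
  && \text{(Faraday's law)} \\
\nabla \times \mathbf{B} &= \mu_0 \mathbf{J}
  + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}
  && \text{(Ampère--Maxwell law)}
\end{aligned}
```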

Perhaps he might even be capable, in principle, of correctly describing their collective behavior by tirelessly collecting information on each constituent and summing it all up afterwards, instead of considering the ensemble. But what if some phenomena can ONLY be properly described by a unifying approach? What if there's something beyond the individual properties that we fail to retrieve by simply adding them all up? That is the domain of emergent processes, and there the unifying approach is more than a preferred choice: it is the only possible avenue of exploration. It's still unclear how much of the world consists of emergent phenomena (after all, the reductionist approach seemed to work well enough for a time), but as we delve into ever more complex subjects, the analytical toolkit seems increasingly inadequate for the sort of descriptiveness required.
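
To make 'emergence' a little more concrete, here is a minimal sketch in Python of a one-dimensional cellular automaton; the choice of Rule 110 is illustrative (it is famously Turing-complete). Each cell follows a trivial three-neighbor lookup rule, yet the ensemble produces interacting structures that no amount of bookkeeping on a single cell would reveal:

```python
RULE = 110  # illustrative choice; any complex-class rule makes the point

def step(cells):
    """Apply the local rule to every cell simultaneously (wrapping edges)."""
    n = len(cells)
    return [
        # Encode (left, center, right) as a 3-bit index into the rule number.
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 63 + [1]  # start from a single live cell
for _ in range(30):
    print("".join("█" if c else " " for c in cells))
    cells = step(cells)
```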

Finally, it is a pillar of everything we conceive of that it must respect some basic rules of composition (usually referred to as the 'laws of thought'), and though they might seem arbitrary at first, it's hard to think of any instance where nature departs from them. I argue that the most important one, ubiquitous to all there was or can ever be (and versions of which can be found in every field imaginable), is the principle of consistency. Of all the absurd and counterintuitive things either nature or our minds can conjure, none can exist without some sort of accordance (be it with themselves or with something external), for otherwise they'd be devoid of any substance whatsoever. You might argue that's not true and that all sorts of inconsistent notions can be formed at will, but can they really? Language is a very empowering tool, so much so that it can fool us into believing any concoction it comes up with is a legitimate one, but I challenge anyone to think of an actual object that is both a ball and a square at the same time (respecting both their qualities in full, not simply blending some features). Or of 1=2, or anything like it.

Again, you could argue that in whatever language you're using the statement is true because the symbols "1" and "2" have the same meaning, but by doing so you haven't produced an inconsistency; quite the opposite, you've undone it. The fact remains that you can't really get anything out of inconsistency other than its very name. And it's important to say that this is not a matter of opinion but of fact, one that has, at least mathematically, been rigorously demonstrated in the area known as model theory: inconsistent systems are the only ones with no models.
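
For the curious, the result being alluded to is Gödel's completeness theorem for first-order logic, stated here in its standard model-existence form (a paraphrase, not part of the original text):

```latex
% Goedel's completeness theorem (1930), model-existence form:
% a first-order theory T has a model if and only if T is consistent.
T \nvdash \bot \iff \exists \mathcal{M} \;\, \mathcal{M} \models T
```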

Now, I believe there are plenty of reasons to at least suspect there might be something more to the unification effort than OCD freaks trying to bring order to an otherwise chaotic universe, and especially after seeing so many disparate areas of knowledge moving towards what seems like a focal point, I began an effort to discover what this point might be. I now present to you the current state of this enterprise.

tl;dr version: People keep finding ways the universe seems to stack the cards against us and prevent anyone from snooping in on its secrets, but the more we advance our understanding of things, the more tightly knit they seem, in a manner we haven't been able to describe until now. What follows is my pathetic excuse of an attempt.
