“The aim of science is to seek the simplest explanation of complex facts. We are apt to fall into the error of thinking that the facts are simple because simplicity is the goal of our quest. The guiding motto in the life of every natural philosopher should be ‘Seek simplicity and distrust it.’”
— Alfred North Whitehead
Simplicity is powerful. Economists seek minimal models to describe market fluctuations, and our greatest mathematicians use the guiding light of elegance to discover their next great truths. But is this preference a fundamental reflection of nature’s workings, or an aesthetic one? Occam’s razor — positing that the simplest explanation is usually the correct one — is particularly abused in biology. As Eve Marder has long argued, biological systems are equipped with redundant strategies and contingencies that confound the interpretation of even the most tightly controlled biological experiments. Even the idea of probing a “controlled” biological system merits careful reflection. By controlling for variables, we stray from reality, instead describing an abstract, manipulated system, and often still failing to account for the hundreds of cellular mechanisms we don’t yet know about. (Take, for example, surprising work from Didier Stainier and colleagues, who recently reported that knocking out a gene with CRISPR leads to the expression of homologous genes to compensate for the loss.) In a debate held a decade ago, Richard Dawkins and Lynn Margulis argued about the role of symbiosis in evolutionary theory. An exasperated Dawkins asked: “why on earth would you want to drag in symbiogenesis when it’s so unparsimonious and uneconomical?” Margulis replied: “because it’s there.” Parsimony for parsimony’s sake is not parsimonious in the biosciences.
It’s important to recognize that our formalization of biology is fundamentally historical — philosophers like Hans-Jörg Rheinberger, Gaston Bachelard, and others have argued that biology has been primarily technology-driven rather than hypothesis-driven. The ideas of biology are inextricable from the technology that produces them. As Rheinberger put it: “phenomena and instrument, object and experience, concept and method are all engaged in a running process of mutual instruction.” Our resultant understanding of nature is dominated by our choice of experimental system, which includes our instruments, the model organism, even the culture wherein a discovery is made: the knowledge generated is, in some very real ways, as contingent as the processes it describes. We fit discovered phenomena with simple handles — reduced, for practical purposes, to a sort of currency to be exchanged between labs, resorting to pale metaphors when communicating the natural mysteries.
It’s clear why we do this. It’s in part a heuristic shortcut, making things easier to wrap our heads around. We want to understand nature — we, meaning individual humans, not some vague “human collective.” We therefore seek truths comprehensible to a single intellect; and so, as the parable goes, we’re searching for our keys under a streetlamp. Even so, it seems so inherently obvious, so inarguable, that true things should be simpler things. The instinct to discover the rules of nature is older than man: the nervous system is biology’s greatest prediction algorithm, and it dutifully learns patterns whose knowledge might enhance its chance of survival (science being the formal application of this impulse). Thus, human thought is underpinned by an unconscious aesthetic laid down in the nervous system itself.
Studies suggest we naturally tend to find satisfaction in simplicity, in learnability (often through repetition), in that which is easy to process. In music, for example, the most universally enjoyable songs lie somewhere between tedious simplicity (like the worst of pop) and unpredictable entropy (like some modern composers). We’re just following our nervous system’s modus operandi when we seek learnable patterns. Maxims that appear self-evident — e.g. something along Occam’s logic, “a simpler explanation is better” — may only seem so because they’re rewarded within the system that evaluates them. That is, they’re self-reflective: the nervous system, itself effectively a simplifying model of its environment, seeks to uncover patterns that render its existence more manageable. It’s evaluating a reductive internal model against its own implicit function. The mind is a causality-inferring machine: the impetus to ascribe linear causal relationships is inbuilt to our nervous system. Armed with this hammer, the whole messy universe looks like an elegant nail.
Of course, ultimately, what we want to do with science is to uncover what Dawkins has referred to as “economically expressed rules.” We are interested in the objects of life primarily because they point us to the process of life. We don’t count the color bands of a beetle for the sake of knowing this fact, but because our understanding of rules often emerges from collections of observations — in the beetle’s case, for example, untangling the logic of developmental programs. But is there even a clear boundary between biological object and process?
For example, it’s often said that biological entities perform computations (we’ll ignore, for the time being, the fact that no one can agree what is meant by computation): the organism an object, and computation its process. In doing so, we suppose a separation between software and hardware, algorithm and data. But organisms are also the result of computations: cells can be thought of as “testing hypotheses” during the development of an embryo, for example. Both evolution and nervous systems are the results of computations becoming embodied in the architecture of their computers. Even in machine learning, as Sünderhauf et al. recently argued, “there is a spectrum — rather than a dichotomy — between programming and data.” Indeed, the success of machine learning, despite its inelegance, underscores the fact that simplicity isn’t necessarily a useful goal.
Evolution has never (until, perhaps, soon) operated by reason, but by rolls of a die. The resultant systems are rife with feedback loops and interdependencies. Neuroscientists too often conflate observational studies with causal explanations of behaviors, but a description or manipulation of which neurons or networks are active during a behavior is not the explanans of that behavior. ‘Necessary and sufficient’ doesn’t work the way neuroscientists usually use it. Thirty years ago, Randolf DiDomenico and colleagues proposed that we avoid making causal claims in individual papers, and instead build them from multiple studies using various techniques and approaches. Given the sheer complexity of these networks and the amount of data we’ve generated, this is increasingly beyond the scope of individual human intellect. All this is to say: think hard about what it is you wish to show with your studies. Be humble in your claims. Pragmatism may be a more holy grail than Truth. Or, as Hemingway (perhaps apocryphally) advised: kill your darlings.
Want more? Follow us at The Spike