The Fallacy of the “Scientific Proof”

Uncertainty is the bread and butter of good scientists

Stefan Schroedl
Science and Philosophy
5 min read · Mar 10, 2021


Photo by Alexis Gethin on Unsplash

Many people view science as the business of establishing absolute certainty. They see scientists as binary creatures who believe only what they can prove and accept only “hard facts.”

Maybe this view is colored by recollections of high-school mathematics. There, we drew up crystal-clear definitions and axioms and then applied chains of logical deductions to derive theorems from them. In the vast majority of cases, a statement can be proved or disproved; its truth is mostly¹ binary.

Empirical sciences use the language and methods of mathematics. To make a claim testable, it has to be expressed as precisely as possible. But then we proceed in the opposite direction from math: instead of starting from axioms, we reason inductively about the world, starting from observations; we create hypotheses and validate them by how much they explain. Of course, evidence and measurements are always error-prone and partial. It is easy to imagine the abstract concept of a circle, yet any actual drawing of one will deviate from it at some finite level of resolution. All natural sciences, such as physics, chemistry, and biology, are subject to imperfection and incompleteness; so are applied sciences (once they reach a sufficient level of complexity), such as engineering, computer and data science, and the social sciences. Our theories are always approximations and will never cover every particular case.

It is human nature to strive to understand the world around us and to seek as much certainty as possible; ideally, we would all like to get there in a straight line, as fast as possible, and with the least possible effort. Uncertainty is innately uncomfortable. But certainty is exceedingly rare in the real, messy, complicated, and only partially known world. And it becomes ever harder to find in our modern world of information overload and echo chambers. So simple, black-and-white, clear-cut “answers” can look appealing and comforting. Most often, though, they belong to the realm of axiomatic, human-made belief systems: social norms, tradition, ideology, alleged authorities, and religion. It takes significant courage and humility to admit uncertainty and ignorance. Nobody is more willing to face that than good scientists. They are always aware that currently accepted knowledge can change in light of new evidence. There are many things out there, maybe most, that we don’t understand and probably never will. In fact, always leaving the door open to doubt is at the heart of the scientific method. It works precisely because it is self-correcting. The notion of a “scientific proof” is almost an oxymoron.

Embracing these ideas about uncertainty reminds me, as a machine learning scientist, of the role that entropy plays in loss functions and priors when building predictive models. Information entropy measures how spread out a probability distribution is over its different alternatives. Suppose the probability of heads in a coin toss is the same as the probability of tails. In that case, the entropy is as high as it can possibly be for a two-outcome trial (1 bit, in this case). On the other hand, if a weighted coin always landed on the same side, we would have zero entropy and absolute certainty. In a classification problem, binary cross-entropy can grow without limit as the model comes closer and closer to assigning zero probability to the correct class. When fitting a statistical model, there is always a tradeoff between being confident about getting the class right on some examples and hedging its bets to avoid being too confidently wrong on others. The principle of maximum entropy states that in the absence of any information distinguishing several prior alternatives, they should be given equal weight. This is related to Occam’s razor: the simplest explanations are often the best, and we should not make any assumptions that are unnecessary for the final conclusion. Conversely, placing one’s belief in a single source or authority would correspond to low entropy over all the relevant possible sources and interpretations.
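To make this concrete, here is a minimal sketch in Python (assuming NumPy is available; the function names are my own, just for illustration) of the two quantities mentioned above: the entropy of a coin toss, which peaks at 1 bit for a fair coin and drops to zero for a fully weighted one, and the binary cross-entropy loss for a single example, which blows up as a model becomes confidently wrong.

```python
import numpy as np

def coin_entropy(p_heads):
    """Shannon entropy (in bits) of a coin toss with P(heads) = p_heads."""
    probs = np.array([p_heads, 1.0 - p_heads])
    probs = probs[probs > 0]  # 0 * log(0) is treated as 0 by convention
    return float(-np.sum(probs * np.log2(probs)))

def binary_cross_entropy(y_true, p_pred, eps=1e-12):
    """Cross-entropy loss for one example; p_pred is the predicted P(class 1)."""
    p_pred = np.clip(p_pred, eps, 1.0 - eps)  # avoid log(0), as most frameworks do
    return float(-(y_true * np.log(p_pred) + (1 - y_true) * np.log(1.0 - p_pred)))

print(coin_entropy(0.5))   # fair coin: 1.0 bit, maximum uncertainty
print(coin_entropy(1.0))   # always heads: 0.0 bits, absolute certainty

# The loss grows without limit as the model assigns vanishing probability
# to the true class of a positive example.
for p in (0.5, 0.1, 0.01, 1e-6):
    print(f"predicted p={p:<8} loss={binary_cross_entropy(1, p):.2f}")
```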

There is also an element of pragmatism in the question of uncertainty. In the real world, we are continually forced to make decisions and act on incomplete information. The resources we can spend on research, in terms of time, money, energy, and brainpower, are limited. We have to decide what level of detail we actually need to know, and where our exploratory efforts are best spent. It is often falsely claimed that if scientists can’t prove X, they believe not-X. But rather than denial, pragmatism more often leads to an attitude better described as agnostic. There are known unknowns and unknown unknowns. In many cases, we can reserve the right to be indifferent, especially if it is too hard at present to get to the bottom of them, or if they don’t appear to have tangible consequences for the rest of our pursuits and decisions. Extraordinary claims require extraordinary evidence, or at least the absence of alternative, more mundane explanations. It is easy to make an unfounded claim, so the burden of proof should rest on the person who makes it. In other words, science is under no obligation to disprove every possible idea; that would be a spectacularly inefficient and wasteful process.

Some critics of the scientific method paint uncertainty as a weakness rather than an asset. None of our knowledge is either 0 or 100 percent certain, but that is totally OK. We can often be quite sure about the big picture even without having figured out every detail. Non-smokers get lung cancer too, but that does not refute the causal damage smoking does to health. Some evolutionary processes are still being investigated, but this does not put evolution on an equal footing with the idea of intelligent design by any stretch of the imagination. Our climate simulations have bands of uncertainty, but there is no debate among serious scientists about human impact and the urgency to act fast to avoid disaster. Detractors have used uncertainty to shrug their virtual shoulders, resigning themselves to a hypocritical “we just don’t know.” Even in the face of uncertainty, we have to act pragmatically on what the overwhelming evidence and the big picture say. It is valid to dispute model assumptions and verify facts, but that is different from questioning the scientific method altogether. I want to ask these critics: what is the alternative? Taking evidence less seriously and relying more blindly on alleged authorities and influencers? I will gladly admit at any time that the scientific process is not perfect; it has many well-known flaws. However, to borrow from Churchill’s famous dictum: among all methods of approaching understanding and knowledge about the world, the scientific method is the most effective one humans have developed so far.

[1] A well-known exception: Gödel’s famous incompleteness results.


Stefan Schroedl

Head of Machine Learning @ Atomwise — Deep Learning for Better Medicines, Faster. Formerly Amazon, Yahoo, DaimlerChrysler.