Does the Scientific Method need Revision?

Sabine Hossenfelder
Starts With A Bang!
8 min read · Dec 17, 2014


Does the prevalence of untestable theories in cosmology and quantum gravity require us to change what we mean by a scientific theory?

Theoretical physics has problems. That’s nothing new — if it weren’t so, then we’d have nothing left to do. But especially in high energy physics and quantum gravity, progress has basically stalled since the development of the standard model in the mid-70s. Yes, we’ve discovered a new particle every now and then. Yes, we’ve collected loads of data. But the fundamental constituents of our theories, quantum field theory and Riemannian geometry, haven’t changed since that time.

Everybody has their own favorite explanation for why this is so and what can be done about it. One major factor is certainly that the low-hanging fruit has been picked, and progress slows as we have to climb farther up the tree. Today, we have to invest billions of dollars into experiments that test new ranges of parameter space: build colliders, shoot telescopes into orbit, have supercomputer clusters flip their flops. The days in which history was made by watching your bathtub spill over are gone.

Image credit: © NEWSru.com, via http://www.newsru.com/world/07mar2006/otkrr.html.

Another factor is arguably that the questions are getting technically harder while our brains haven’t changed all that much. Yes, now we have computers to help us, but these are, at least for now, chewing and digesting the food we feed them, not cooking their own.

Taken together, this means that return on investment must slow down as we learn more about nature. Not so surprising.

Still, it is a frustrating situation, and it makes you wonder whether there are other reasons for the lack of progress, reasons that we can do something about. Especially in a time when we really need a game changer, some breakthrough technology, clean energy, that warp drive, a transporter! Anything to get us off the road to Facebook, sorry, I meant self-destruction.

Image credit: Pawel Kuczynski, via http://www.pawelkuczynski.com/Strona-g-owna/Home/index.php.

It is our lack of understanding of space, time, matter, and their quantum behavior that prevents us from making better use of what nature has given us. And it is this frustration that has led people inside and outside the community to argue that we’re doing something wrong, that the social dynamics in the field are troubled, that we’ve lost our path, that we are not making progress because we keep working on unscientific theories.

Is that so?

It’s not like we haven’t tried to make headway on finding the quantum nature of space and time. The arXiv categories hep-th and gr-qc fill up every day with supposedly new ideas. But so far, not a single one of the existing approaches towards quantum gravity has any evidence speaking for it.

Image credit: Brianna T. Wedge of deviantART, via http://briannatwedge.deviantart.com/.

To me the reason this has happened is obvious: We haven’t paid enough attention to experimentally testing quantum gravity. One cannot develop a scientific theory without experimental input. It’s never happened before and it will never happen. Without data, a theory isn’t science. Without experimental test, quantum gravity isn’t physics.

Image credit: CERN / IOP publishing, via http://cerncourier.com/cws/article/cern/28263/1/cernphysw1_7-00.

If you think that more attention is now being paid to quantum gravity phenomenology, you are mistaken. Yes, I’ve heard it too, the lip service paid by people who want to keep on dwelling on their fantasies. But the reality is that there is no funding for quantum gravity phenomenology, and there are no jobs either. On the rare occasions that I have seen quantum gravity phenomenology mentioned in a job posting, the position was filled by somebody working on the theory, or, I am tempted to say, on mathematics rather than physics.

It is beyond me that funding agencies invest money into developing a theory of quantum gravity, but not into its experimental test. Yes, experimental tests of quantum gravity are far-fetched. But if you think that you can’t test it, you shouldn’t put money into the theory either. And yes, that’s a community problem, because funding agencies rely on expert opinion. And so the circle closes.

A theory is only scientific if it is useful to describe nature. Image source: http://abstrusegoose.com/275.

To make matters worse, philosopher Richard Dawid has recently argued that it is possible to assess the promise of a theory without any experimental test whatsoever, and that physicists should thus revise the scientific method by taking into account what he calls “non-empirical facts”. By this he seems to mean what we often loosely refer to as internal consistency: theoretical physics is math-heavy and thus has a very stringent logic. This allows one to deduce a lot of, often surprising, consequences from very few assumptions. Clearly, these must be taken into account when assessing the usefulness or range of validity of a theory, and they are being taken into account. But the consequences are irrelevant to the use of the theory unless some aspects of them are observable, because what makes up the use of a scientific theory is its power to describe nature.

Dawid may be confused on this matter because physicists do, in practice, use empirical facts that we do not explicitly collect data on. For example, we discard theories that have an unstable vacuum, singularities, or complex-valued observables. Not because this is an internal inconsistency — it is not. You can deal with this mathematically just fine. We discard these because we have never observed any of that. We discard them because we don’t think they’ll describe what we see. This is not a non-empirical assessment.

A huge problem with the lack of empirical input is that theories remain axiomatically underconstrained. In practice, physicists don’t always start with a set of axioms, but in principle this could be done. If you do not have any axioms, you have no theory, so you need to select some. The whole point of physics is to select axioms to construct a theory that describes observation. This already tells you that the idea of a theory for everything will inevitably lead to what has now been called the “multiverse”. It is just a consequence of stripping away axioms until the theory becomes ambiguous.
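One way to see why, as a rough sketch in model-theoretic notation of my own choosing (the symbol Mod below appears nowhere in the original argument): dropping axioms can only enlarge the set of structures compatible with a theory,

\[
A' \subseteq A \;\Longrightarrow\; \mathrm{Mod}(A) \subseteq \mathrm{Mod}(A'),
\]

where \(\mathrm{Mod}(A)\) stands for the set of models, the possible “universes”, that satisfy the axioms in \(A\). Strip the axioms down too far and \(\mathrm{Mod}(A')\) contains many inequivalent universes with nothing in the theory singling out ours; that ambiguity is what ends up being called a multiverse.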

Image credit: Moonrunner Design, via http://news.nationalgeographic.com/news/2014/03/140318-multiverse-inflation-big-bang-science-space/.

Somewhere along the line many physicists have come to believe that it must be possible to formulate a theory without observational input, based on pure logic and some sense of aesthetics. They must believe their brains have a mystical connection to the universe and pure power of thought will tell them the laws of nature. But the only logical requirement to choose axioms for a theory is that the axioms not be in conflict with each other. You can thus never arrive at a theory that describes our universe without taking into account observations, period. The attempt to reduce axioms too much just leads to a whole “multiverse” of predictions, most of which don’t describe anything we will ever see.

(The only other option is to just use all of mathematics, as Tegmark argues. You might like or not like that; at least it’s logically coherent. But that’s a different story and shall be told another time.)

Now if you have a theory that contains more than one universe, you can still try to find out how likely it is that we find ourselves in a universe just like ours. The multiverse-defenders therefore also argue for a modification of the scientific method, one that takes into account probabilistic predictions. But we have nothing to gain from that. Calculating a probability in the multiverse is just another way of adding an axiom, in this case for the probability distribution. Nothing wrong with this, but you don’t have to change the scientific method to accommodate it.
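To sketch what that added axiom looks like (the notation here is my own illustration, not taken from any particular multiverse proposal): before any probability can be computed, one has to postulate a measure over whatever it is that varies from one universe to the next,

\[
P(\theta \in \Theta_{\mathrm{obs}}) \;=\; \int_{\Theta_{\mathrm{obs}}} \mathrm{d}\mu(\theta),
\]

where \(\theta\) labels the varying parameters, \(\Theta_{\mathrm{obs}}\) is the range compatible with what we observe, and the measure \(\mu\) is the extra assumption. Different choices of \(\mu\) give different “predictions”, which is why postulating \(\mu\) amounts to adding an axiom rather than to changing the scientific method.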

Image credit: screenshot from Nature, via http://www.nature.com/news/scientific-method-defend-the-integrity-of-physics-1.16535.

In a Nature comment out today, George Ellis and Joe Silk argue that the trend of physicists to pursue untestable theories is worrisome. I agree with this, though I would have said the worrisome part is that physicists do not care enough about the testability — and apparently don’t need to care because they are getting published and paid regardless.

See, in practice the origin of the problem is senior researchers not teaching their students that physics is all about describing nature. Instead, the students are taught by example that you can publish and make a living off outright bizarre speculations as long as you wrap them into enough math. I cringe every time a string theorist starts talking about beauty and elegance. Whatever made them think that the human sense for beauty has any relevance for the fundamental laws of nature?

Schematic illustration of the cycle of continually testing and improving scientific hypotheses. Source: Backreaction.

The scientific method is often described as a cycle of formulating and testing hypotheses, but I find this misleading. There isn’t any one scientific method. The only thing that matters is that you honestly assess the use of a theory to describe nature. If it’s useful, keep it. If not, try something else. This method doesn’t have to be changed, it has to be more consistently applied. You can’t assess the use of a scientific theory without comparing it to observation.

A theory might have other uses than describing nature. It might be pretty, artistic even. It might be thought-provoking. Yes, it might be beautiful and elegant. It might be too good to be true, it might be forever promising. If that’s what you are looking for that’s all fine by me. I am not arguing that these theories should not be pursued. Call them mathematics, art, or philosophy, but if they don’t describe nature don’t call them science.
