Choosing our commitments

Because we can only know afterwards.

Mike Brownnutt
Re-Assembling Reality
Jun 15, 2022


Re-Assembling Reality #30. By Mike Brownnutt and David A. Palmer

In the last Essay (#29) we considered what to do in a storm at sea, when abstaining from action is not an option: We can sink or we can swim, but we can’t do nothing. This kind of situation — where you cannot opt out — is what the philosopher William James called a forced option.[1]

If you are given the option of a lifeboat or a life-ring, this is not a forced option: you could take neither, and stay on the ship instead. By contrast, if you are given the option of staying on the ship or not staying on the ship, this is a forced option. You cannot say you won’t do either of them.[2]

The universe requires that you make a choice. Unfortunately for positivists, the universe does not require that you make an informed choice. Worse still, the universe often does not permit you to make an informed choice. But it demands a choice from you nonetheless.

The rationalist in us might want to withhold a decision until we have the relevant information. But we live in a universe that will not let us play that game: it withholds the relevant information until we have made a decision.

This holds true, not only for ships at sea, but also for religion and science. In this essay we shall start to unpack some of the consequences of this.

Forced options in religion

The analysis of forced and unforced options has clear applications in religion. If someone asks whether you believe that the Christian God exists or the Sikh God exists, this is an unforced option. You could say that you do not believe that either exists; you may believe that Norse gods exist, or not believe that any gods exist. You may say that you are agnostic and do not know which gods, if any, exist. In any event, you are not constrained to pick one of the two options initially offered.

Intellectual assent to the existence of any particular god is not a forced option. By contrast, actions taken with respect to those gods do constitute a forced option.

A principled agnostic can claim they do not know which course of action is best. But an agnostic, no matter how principled, cannot fail to act. What they do (or do not do) effectively tips their hand for them. They may not know which god (if any) to worship. But in their actions, they either do or do not worship something. If they choose to not worship any god then, regardless of what they do or do not believe, they are acting as though they believed that it is OK to not worship any god. While they might be (strictly) agnostic, they act as though they were an atheist.

This discussion raises a new question which we can ask, and it is a question that may not have previously seemed significant. Previously, we might have asked,

“How do I know what is true?”

Now we are asking,

“How do I decide what to believe?”

And intimately connected to this, the corollary question,

“How do I act in light of that belief?”

First of all, we must note that these latter two questions are very different from the first question. Next we must note that the latter two questions are so tightly intertwined that something strange has happened to our notion of ‘belief’.

Belief is no longer primarily tied to truth, but to action. If truth is now invoked in connection with belief, it is not by understanding belief in terms of “thinking that something is true,” but in terms of “acting as though you think something is true.”

The person on a ship may look at a lifeboat and claim (in a propositional sense), “I do not believe that this lifeboat would save anyone.” They may later recount their story and insist, “I never believed the lifeboat would save me.” And yet, when they jump into the lifeboat to escape a sinking ship, such propositional truth claims are irrelevant. Of existential significance is their action, on insufficient evidence, to throw in their lot with the lifeboat and act as though it would save them.

This understanding of belief has long been present, even if not explicitly, in our thinking. If a person says they believe something, but does not act in accordance with what they say they believe, we can reasonably ask whether we should look to their actions or their words to evaluate what they really believe. If Bob says that women deserve respect, but his actions denigrate women, surely his actions make his true beliefs clear. Any propositional protestations — “But I really do believe women should be respected” — are undercut by his actions. In like manner, propositional protestations that “I did not believe the lifeboat was a good choice” are undermined by the action of choosing the lifeboat. And in just the same way, the agnostic’s propositional claim, “I do not know if any gods are worthy of worship,” is irrelevant in the face of the effectively atheistic action of not worshiping any of them.

It is not only sailors and worshipers who face tough decisions regarding what they should choose to believe, and how they should then act. Scientists, too, must wrestle with such issues.

Theory choice in science

Compare a scientist faced with multiple theories which may or may not be correct, and a mariner faced with multiple survival strategies which may or may not save him.

At first glance, one might say that the problems faced by the mariner — insufficient information, and no way to step outside the situation and run repeated tests — are the very limitations which scientists do not face, and which science allows us to overcome. On closer inspection, however, the distinction starts to fade. To illustrate this, we shall consider a scientist wanting to test a given theory: trying to decide whether the theory accords with experiment or not.

If a theorist writes down an equation about some aspect of the physical world, we would like to think that this would be amenable to experimental testing. If a theorist who wrote an equation about the physical world insisted that the equation could not be investigated experimentally, one might feel that the theorist had misunderstood how science is supposed to work.

To take a concrete example, consider the following equation, relating different aspects of an electron’s angular momentum:

μ_s = −g_e μ_B S / ℏ

The exact meaning of each term does not concern us here [3], but we might feel that an experimentalist should be able to set up an experiment to empirically determine the values of the quantities in that equation. Given the experimentally determined values, we would then be able to see whether the equation was correct or incorrect.

Scientists are particularly interested in measuring the electron g‑factor, g_e, because different theories predict different values for it. Classical mechanics, for example, predicts that g_e = 1, while quantum mechanics predicts a value of g_e = 2. By experimentally measuring g_e it should therefore be possible to choose between those theories; to say either that classical mechanics is wrong, or that quantum mechanics is wrong or, possibly, that both are wrong.

Quantum mechanics describes the world differently from classical mechanics, and some of those differences should be measurable. Is the electron g-factor 1 or 2? That sounds like a pretty cut-and-dried difference. If only we could perform theory-independent measurements. (Source: Minute Physics.)

The experimentalist goes into the lab with an open mind and starts to set up their experiment. To make an experimental measurement, they need an experimental apparatus. They need, for example, to build a laser. Unfortunately, there is no classical theory of laser operation. To design a laser, they need to invoke quantum mechanics. Their open mind cannot be as open as they might like. There are several options now available to them.

Option one:

Admit that, obviously, they are pretty certain that quantum mechanics is true and classical mechanics is not, so they could just use quantum mechanics and be done. Unfortunately, this rather bypasses the whole ‘experimental testing’ part; they have just picked whichever one they thought was true.

Option one becomes even more tricky if the scientist is not absolutely certain that quantum mechanics is true. What if they are only 90% sure?

Option two:

Calculate the answer twice — once with quantum mechanics, once with classical mechanics — and use a 90:10 combination of the two answers (because they are 90% sure that quantum mechanics is right). By this method, if quantum mechanics says their laser will output 100 mW of light, and classical mechanics says their laser will not output any light, they conclude that the laser outputs 90 mW of light. It is hopefully clear to all readers that this is a terrible idea, and a very bad way to do science.
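The absurdity of Option Two is easy to make concrete. The following is a minimal sketch (not from the essay; the function name and values are illustrative) of what "blending predictions in proportion to credence" would actually compute:

```python
# Option Two, sketched: weight each theory's prediction by our
# confidence in that theory, and report the blend as "the" answer.

def blended_prediction(p_quantum: float,
                       quantum_mW: float,
                       classical_mW: float) -> float:
    """Credence-weighted average of two theories' predicted laser output."""
    return p_quantum * quantum_mW + (1 - p_quantum) * classical_mW

# 90% sure of quantum mechanics (predicts 100 mW of light);
# 10% credence in classical mechanics (predicts no light at all):
output = blended_prediction(0.9, quantum_mW=100.0, classical_mW=0.0)
print(output)  # 90.0 -- a power output that NEITHER theory predicts
```

The blend lands on a value no theory on the table actually predicts, which is exactly why this is a very bad way to do science.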

Option three:

Use classical mechanics 10% of the time and quantum mechanics 90% of the time. Any work done on Monday mornings would assume that the laser outputs no light, and work done from Monday afternoon through to Friday would assume that it outputs 100 mW. Again, it hopefully goes without saying that this is a terrible idea. It is certainly terrible for the laser-safety officer who believes that it is safe to start their work week by looking directly into the laser.

Option four:

Assume that one theory is correct. Either one will do, but let us for the moment assume that quantum mechanics is correct. The scientists would work as though quantum mechanics were true. They would do all of their calculations, and set up all of their experiments, based on that assumption. They would utterly commit to it. Being aware that the theory may be wrong; indeed being aware that they risk talking complete nonsense, they commit to it totally. Under this option, one scientist may commit to quantum mechanics, like a mariner putting out into unknown waters in a life-boat. Another scientist may commit to classical mechanics, like a mariner clinging to a ship’s broken mast. One or both of them may perish in the attempt. But it is a risk they must take.

It is, of course, the fourth option that scientists take. In order to test the claims of quantum mechanics, they commit absolutely to the assumptions of quantum mechanics. If this seems circular, it is because it is circular. If the experiment ultimately ‘showed’ that g_e = 2, as ‘expected’ — that classical mechanics was wrong and that quantum mechanics was correct — the skeptic might reasonably throw their arms in the air and cry foul. It was a fix! Obviously, if you put quantum mechanics in, you get quantum mechanics out! That is hardly a fair test! You have done nothing but show that quantum mechanics is self-consistent. How can science find anything new?

This is where the magic of science occurs: we put quantum mechanics in, but we do not get quantum mechanics out. If we do everything assuming quantum mechanics is correct, the measured answer turns out to be g_e = 2.002. (Well, strictly, it turns out to be -2.0023193043626, give or take four in the last decimal place [4]). Our logic may have been circular on its own, but we do not use our logic on its own. We beat ourselves against a universe which is quite distinct from our logic. By assuming that quantum mechanics was correct, by committing to it, by acting as though we believed it to be true, we discovered something new about the universe: that quantum mechanics is wrong.
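To put a number on how far the measured value departs from the simple quantum-mechanical prediction of exactly 2, here is a small illustration (not from the essay; it uses the magnitude of the CODATA 2018 value cited in note [4]):

```python
# How far does the measured electron g-factor depart from the
# plain quantum-mechanical (Dirac) prediction of exactly 2?

g_dirac = 2.0                    # prediction of relativistic quantum mechanics
g_measured = 2.00231930436256    # magnitude of the CODATA 2018 measured value

# The fractional excess (g - 2)/2 is known as the electron's
# anomalous magnetic moment, a_e.
anomaly = (g_measured - g_dirac) / 2
print(f"a_e = {anomaly:.6f}")    # roughly 0.001160, about a 0.1% excess
```

That one-part-in-a-thousand excess is precisely the "something new about the universe" the committed experiment revealed.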

This, then, is how science works. In order to find out about quantum mechanics, you cannot reserve judgement until all the data is in. The way scientists get the data is to immerse themselves in the world of quantum mechanics. In attempting to find out about the world, a scientist may remain (strictly speaking) agnostic regarding the truth of a theory, but they must still act as though the theory were completely true. There is a direct parallel here with the situation on the ship: in attempting to survive, you may remain (strictly speaking) agnostic about which flotation device will save you, but you have to do something and, in that doing, you act as though you believe one particular approach will save you. And in that acting, you learn whether you are right.

Michael Polanyi, as a philosopher of science, said it this way: “Do not seek to understand in order to believe, but believe so that you may understand.”[5] Belief (certainly in the sense of “acting as though something were true”) comes before understanding the thing being believed. This is necessarily so, and simply follows from the way we come to understand. In order to understand quantum mechanics, you first have to commit to it.

Any scientist who followed Clifford’s Principle, that “It is wrong to believe anything on insufficient evidence,” [6] would never understand anything, because they would never believe anything. They would never commit to a theory, step into the lab and act as though that theory was correct. The scientist who believes is the scientist who will understand. That is how scientific research necessarily works.

It is uncomfortable to realise that we must commit to something. The universe will not let us abstain. It is even more uncomfortable to realise that we cannot understand what we have committed ourselves to until after we have committed ourselves to it. In the next essay we shall consider what that commitment costs us. Then things will get really uncomfortable.

[1] William James, The Will to Believe (1896).

[2] Some options are (strictly speaking) unforced, but are forced for all practical purposes. If a student is told that they must write an essay about either Charles Darwin or William James, this is (strictly) an unforced option. The student could choose instead to jot down a haiku on the Russian revolution, and spend the rest of the afternoon drinking coffee with friends. However, if a student is told that, in order to pass the course, they must write an essay about either Charles Darwin or William James, then it becomes — for students who want to pass the course — a forced option.

[3] But for those of you who really want to know,
\mu_s is the electron’s spin magnetic moment,
g_e is the electron spin g-factor,
\mu_B is the Bohr magneton,
\hbar is the reduced Planck constant, and
S is the electron’s spin angular momentum.

[4] CODATA recommended values (2018).

[5] Michael Polanyi, The Tacit Dimension (1966). Polanyi was not the first to come up with this idea. He borrowed it from the theologian, Anselm of Canterbury (Proslogion, 1078), who in turn picked up the idea from theologian Augustine of Hippo (Tractates on the Gospel of John, c. 400).

[6] William Kingdon Clifford, The Ethics of Belief (1877), and discussed in Essay #29.

Mike Brownnutt
Re-Assembling Reality

I have a Master's in theology and a PhD in physics. I am employed in social work to do philosophy. Sometimes I pretend that's not a bit weird.