The cost of commitment
The necessity of risking everything you are.
Re-Assembling Reality #31. By Mike Brownnutt and David A. Palmer
In Essay #30, we shifted the questions we were asking from “How do I know what is true?” to “How do I decide what to believe?” If “belief” is an uncomfortable word for scientists, we could pose the question, “How do I decide which theory to commit to?”
This might seem relatively simple. Surely the scientist should commit to the theory which they think is most likely to be correct. To draw on the analogies of Essays #29 and #30, if a person on a sinking ship was not certain whether the lifeboat or the life ring would save them, then — as a rational person who wanted to survive — it would make sense for them to take the one that they thought was most likely to save them. If a scientist were not sure which theory is correct, you might imagine they should just opt for the one which was most likely to be correct.
It would seem bizarre to suggest that a rational, competent scientist might — much less should — commit to a theory which they expect to be wrong. Counter-intuitive as it may seem, though, there are many sound scientific reasons why good and entirely reasonable scientists might do exactly that.
What do you want to achieve?
A mariner on a sinking ship would be expected to go for the option that is most likely to save their skin if they wanted to save their skin. But not all mariners are primarily interested in saving their skin.
A sea-captain may believe it is their duty to stay with — and as necessary, go down with — their ship. A ship’s scientist may want to collect data on the seaworthiness of the lifeboat, regardless of their consequent life chances. If the sea-captain and the ship’s scientist happen to stand any chance of surviving their ordeal, this is a quite incidental consideration.
By the same token, the goal of seeking truth is not a goal shared by all scientists (as discussed in Essay #13). A scientist interested in fluid flow might work on classical models for such flow. The scientist would know that the model does not give a factually accurate description of what is happening, and they can gladly ignore such qualms if they hold that a model’s truth is secondary to a model’s usefulness.
Even for scientists who view their task as seeking truth, there are so many true things to seek, they cannot seek them all. They must choose which aspects of truth to seek, and commit to that. And by implication, commit to ignoring certain other aspects of truth. A medical scientist may decide to dedicate themselves to investigating the mechanism of a known medicine, or they may choose to head out and attempt to discover as-yet unknown medicines. They do not have time in their professional career to do both. And they do not know in advance which of the two paths (if either) will bear fruit.
Maybe, we think, if they do not know in advance which work will bear most fruit, they can at least select the one most likely to bear fruit. (Or the one most likely to bring them money, or fame, or tenure, or interesting puzzles, or whatever else is their thing.)
Alas, even if two scientists agree on the thing they want to achieve, it is not trivial for them to agree on which work they should pursue; or to which one they should commit. In order to understand this, let us consider the nature of certainty and commitment. Let us consider two different wagers, each based on the toss of a coin.
Money and blood
If I ask you to predict the result of a coin toss, you may say heads, or you may say tails. If I ask you how certain you are of your prediction, you would probably answer that you are “50% certain.”
Wager #1: A coin will be tossed. To buy into the wager, you must place a financial stake on your prediction of the outcome (heads or tails). If your prediction is correct, you will receive back $100. If your prediction is incorrect, you will receive back nothing.
The question, “How certain are you of your prediction?” might now be rephrased “What is the maximum stake you would place to buy into this game?” Instead of saying “50% certain” you might equivalently say you are “$50 certain.” A rational person would stake up to $50 to buy in, but no more. Anyone who has a grasp of statistics could do the calculation and come up with the same answer. This is possible because the stake ($50), the reward (getting $100 back), and the risk (losing your stake) are commensurable.
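The arithmetic behind Wager #1 can be made explicit. The sketch below (function names are ours, not the essay’s) computes the break-even stake: because stake, reward, and risk are all in dollars, the calculation is well-defined, and every competent calculator lands on the same answer. No analogous function could be written for Wager #2, where the risk is a stab wound and shares no common unit with the reward.

```python
def expected_return(payout: float, p_win: float) -> float:
    """Expected amount received back from the wager.

    A correct call returns `payout`; an incorrect call returns nothing,
    so the losing branch contributes zero to the expectation.
    """
    return p_win * payout


def max_rational_stake(payout: float, p_win: float) -> float:
    """Largest stake a rational player would place.

    Buying in is worthwhile only while the stake does not exceed the
    expected return, so the break-even stake *is* the expected return.
    """
    return expected_return(payout, p_win)


# Wager #1: fair coin, $100 payout on a correct prediction.
stake = max_rational_stake(payout=100.0, p_win=0.5)
print(stake)  # 50.0 -- "I am $50 certain"
```

The point of the exercise is not the number itself but that a number exists at all: commensurable risk and reward reduce certainty to a ratio, which is exactly what Wager #2 refuses to do.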
Wager #2: A coin will be tossed. To buy into the wager, you must place a financial stake on your prediction of the outcome (heads or tails). If your prediction is correct, you will receive back $100. If your prediction is incorrect, you will have your stake returned and be stabbed in the leg.
What is the maximum stake you would place to buy into this game?
The question is entirely meaningful. And yet there is no mathematical calculation which can tell you the ‘rational’ answer. It is not irrational to refuse to play altogether, but neither is it rational. This is because there is no ‘ratio’. The reward (getting $100) and the risk (getting stabbed) are incommensurate.
A university student who needs the money might take the risk and buy into the game. A few years later, after landing a job in finance, the same person probably wouldn’t buy in. This is not because they have become more (or less) rational. Rather, it is because they don’t need the money so badly.
The first wager, with commensurate risks and rewards, can be analysed in terms of probabilities. Faced with the question, “How certain are you that the toss will come up heads?” it is appropriate to answer, “I am 50% certain.” Alternatively, “I am certain enough to risk $50 by joining in your game,” would also be an appropriate answer.
The second wager, with incommensurate risks and rewards, cannot be analysed in terms of probability. Faced with the question, “How certain are you that the toss will come up heads?” the student’s answer is “I am certain enough to risk getting stabbed in the leg for $100.” The financier’s answer is “I am not nearly certain enough to risk joining in your game.”
As discussed in Essay #7, incommensurability suffuses science; and does so necessarily. In deciding which theory they should commit to, scientists are therefore in the situation of Wager #2, not Wager #1. “Confidence in something being true” is not measured in terms of probability. It is measured in terms of what you are willing to risk.
Risk and reward
In deciding which theory to commit to, a scientist must weigh up risk and reward. Consider one scientist weighing up their options regarding a radical new theory that is likely wrong:
— If the theory is wrong, they will have wasted the effort of five PhD students.
— If the theory is right, they will fundamentally change our understanding of science.
Even if you believe that the theory stands a good chance of being wrong, committing to such a theory, given the relevant risks and rewards, is not necessarily a bad choice. It is not even obviously bad science.
Consider another scientist weighing up their options regarding a well-established theory:
— Even if the theory is right, it will probably not revolutionise the world.
— Even if the theory is ultimately wrong, it will lead to a steady stream of papers, a steady stream of research funding, and tenure.
There is no shame in committing to such a theory. And committing to such a theory is not obviously bad science.
Two reasonable people, with the same information available to them, might choose to commit to different theories. And that is OK.
When do I find out what I am signing up for?
Having stated that scientists find themselves in the situation of Wager #2, we must note that the actual situation is not quite that simple. In Wager #2, the risks and rewards are incommensurate, just as they are in science. However, in Wager #2 the risks and rewards are clearly laid out in advance, so that you can make a decision on what you are committing to before you commit to it.
In science, we discover what the risks and rewards are through the process of scientific investigation. This means that discovering what the risks and rewards are for researching a particular theory requires, first, a commitment to researching that theory. Unlike the wager, therefore, scientists find out the terms and conditions of commitment only after they have committed themselves.
Case Study: Blibble Theory
A young scientist, looking for a PhD topic, is approached by a prospective supervisor with an offer: “Come and work with me on blibble theory! It will transform how we zonk!” The young scientist does not know what blibble theory is, much less understand its significance. They do not know what zonking is, and they certainly don’t understand the ramifications of transforming how it is done.
How do they know if blibble theory really is great, or if it is just some hype, or a scam, or a fringe theory propounded by lunatics? They could take their supervisor’s word for it: the supervisor seems to know what they are talking about and they seem trustworthy. But should the young scientist be willing to commit themself to something based on the word of an authority figure? That may start to look remarkably like religion.
There is another academic along the corridor who says that blibble theory is rubbish; barely even real science. But how is the student to understand their arguments against blibble theory? It is one person’s word against another. What can the student do to rise above this and understand for themself whether blibble theory will really transform how we zonk?
The student must commit to it! They must immerse themselves in it. They must act as though they believe it to be true, and act as though they believe it is worth committing to. Only then will they understand it. Only then will they know whether it is, indeed, worth committing to, or not. Only then will they know whether the reward of commitment is worth the risk.
The student thinks that this deal — having to commit before you know what you are committing to — is not so bad. It doesn’t seem like they really lose anything by looking into the subject. If the theory turns out to be wrong or uninteresting, they can just go and do something else. But the student is mistaken. The commitment comes at a cost.
It might take an entire PhD to find out that blibble theory is wrong. At the end of their PhD they will be highly qualified in a subject that no one cares about, and overqualified to go back and do a PhD in a subject that people do care about. They will be four years older, and behind their peers on whatever alternative path they might have chosen.
Do not believe that “there is no harm in just looking into it.” You will never get the time back. Life is a game that plays for keeps.
Let us assume that you are willing to commit to the program, accepting the risks, and knowing that you do not know exactly what the risks are. That commitment involves submission. You must learn certain equations; learn certain terms; learn certain practices. You must read the books you are given. You must come into the lab every day. You must stare down a microscope, hour after hour. It may seem dull. It may seem boring. But, you are told, only by submission to this discipline will you understand the deep truths of the universe. The uninitiated will never understand the true meaning of blibble theory. But the uninitiated may also never know whether it is a scam. The only way to find out one way or the other is to commit — and submit — to it.
After four years of a PhD, some of your fellow students are totally on-board with the program. They get it. They say they understand it. It has clicked for them. But it still seems weird to you. Do you drop out of academia and do something else? You may never know if blibble theory was all it was supposed to be. Maybe, if only you had held in there for another year, you would have finally got it, like your friends did. Maybe it really is a scam and your friends are now in on the conspiracy. Is it worth committing the next 40 years of your life trying to unearth a conspiracy? If you are wrong, you will only find out by committing to the task. And by the time you find out, you may be sixty. And sad. And alone.
In some alternative world, the prospective PhD student chooses not to commit to the scientific path, but instead takes a different way. A wise teacher tells them that the truth is not to be found in books, or laboratories, or microscopes. The only way a person can transform how they zonk is to leave their books. Leave their house. Leave their friends. Leave their clothes. Commune each day with the forests. Contemplate the cosmos, hour after hour.
Maybe that really is the way to enlightenment, and transforming how you zonk. How can you tell? Commit to it! Maybe it is a scam. How can you tell? Commit to it. If you are wrong, you may only find that out when you are sixty and sad and alone. And naked. Unfortunately, by that time it is too late to go back and do a PhD in blibble theory.
The full extent of commitment: our very selves
It may be uncomfortable to think that knowledge requires us to commit to it in advance, without knowing what we are committing to. The description so far has not, however, exhausted the extent to which the implications of that commitment must suffuse all we are.
We saw above that, when we take time to understand something, we do not get that time back. We have irrevocably lost something. While this may seem unfortunate, the thing we have lost seems to be external to us. We have lost time, or opportunities, or friends. But we have not lost ourselves. It is comforting to think that, however daunting the process, the ‘me’ that finished my PhD — while being a little older and knowing a little more than the ‘me’ that started my PhD — is still basically the same person on the inside.
It is comforting. And it is false.
It is often thought that there is a neat divide between facts and values. Facts are things that you know. Values are things that you hold. Facts are the things uncovered by science. Values are things imposed by religion. Facts are universal. Values are personal. Facts can be known by a person quite independent of the values that person holds. Facts and values are viewed as being absolutely and necessarily distinct.
With such a separation of facts and values — public facts outside me, private values inside me — the things I risk to gain knowledge of facts are external to me. Against this view, though, we established in Essay #28 that certain virtues are necessary for attaining certain aspects of knowledge, and that the process of attaining knowledge necessarily develops certain virtues in the knower.
As such, by choosing to know about one thing rather than another — by choosing to investigate something with a microscope in a lab, rather than with meditation on a mountaintop — you are choosing to create within you one set of virtues, and to not create within you another set. By committing to knowledge, you are risking your very self. And you can never know the benefits, or costs, of finding yourself, or losing yourself, or changing yourself until after the event.
A scientist must develop curiosity. Importantly, they must develop a very specific type of curiosity. If you are curious about nothing, you will never bother sitting down to a research puzzle. If you are curious about everything, you will flit from puzzle to puzzle without ever digging down deep enough in any one area to make progress. A scientist must be very curious about their specific puzzle, and broadly incurious about the million projects that they would have been able to solve if only they had a million lifetimes to live.
The pursuit of scientific understanding changes a person. The person who finishes a PhD is not the same person as the one that started it. A person may start on their quest for blibble theory out of their own selfish pride at wanting to be called Dr Smith. But utterly selfish experimentalists do not get far in the collaborative world of experimental science. Either science will break them, or they will break science, or they will conform themselves to the need to play well with others.
To gain scientific knowledge — indeed to gain religious knowledge, cultural knowledge, or any other kind of knowledge — is to change who you are. But you cannot know in advance what those changes will be. And even if you judge, in advance, that the risks of such changes are worth the rewards, you cannot know in advance whether the ‘you’ that emerges from the process will agree with the ‘you’ that went in that it was all worthwhile.
Nonsense, until you accept it
In summary, then, science — like religion — makes statements which seem to outsiders to be outlandish, incomprehensible, or trivially untrue. Quantum mechanics says that Schrödinger’s cat can be alive and dead at the same time. Christianity says that wisdom begins with the fear of the Lord. An outsider can scoff at such claims as silly. But an outsider — exactly because they are an outsider — cannot understand the claims, and so is in no position to pass judgement on the claims.
To understand scientific statements or religious statements, one must first accept that they can be true, and then accept that they are true. Only then can one understand what they mean, how they can be true, and what their truth means.
Commitment to the program means embracing the risk of being utterly mistaken; accepting that, at the point the commitment must be made, you cannot comprehend the potential rewards or potential risks involved; and being bound to the fact that the commitment is both total and irreversible. You cannot go back to how you were before.
In committing to the process of understanding, you must submit to the process. This means accepting certain facts as being true. It means accepting the authority of an epistemic community and of specific individuals within that community. It means adopting certain modes of thought; certain ways of conceiving of, apprehending, and interpreting the world around you. It means adopting certain types of behaviour; certain ways of speaking, dressing, acting or not acting. It means accepting certain requirements and prohibitions. It means submitting to a program in which your character will be shaped, and certain virtues, values, and vices inculcated.
This description of science is not a description of some aberrant travesty that has departed from the pure ideal of objectivity. Rather, this description of science is the only way science can work. This is not even close to the Enlightenment picture of objective, dispassionate, dehumanised scientists working without reference to their opinions, character, or religion; able to impress their knowledge on anyone and everyone, as simply as a printing press reproduces information on a blank page. But that is OK. Because the Enlightenment picture (which was introduced in Essay #5) could never have produced scientific knowledge.
If there is any objectivity in science, that objectivity cannot be obtained by removing humanity and virtue and passion from science and hiding behind an ‘objective’ method. Far from it: scientific knowledge requires us to hone — to heighten — our humanity, our virtue, our passion. I shall leave the closing words of the Essay to Michael Polanyi.
In honour of the sentiment Polanyi expresses, I suggest his words cannot be read quietly and seated. Please read them out loud, standing up. Preferably with bold gesticulations:
“Personal Knowledge in science… commits us, passionately and far beyond our comprehension, to a vision of reality. Of this responsibility we cannot divest ourselves by setting up objective criteria of verifiability or falsifiability, or what you will. For we live in it as in the garment of our own skin. Like love, to which it is akin, this commitment is a shirt of flame, blazing with passion and, also like love, consumed by devotion to a universal demand. Such is the true sense of objectivity in science.”