Why you shouldn’t argue with fundamentalists, or a primer on Bayesian probability

Kareem Amin
5 min read · Sep 4, 2013


I first encountered Bayes’ Theorem (BT) in a probability course in college. I have since forgotten and re-learnt it multiple times. Recently, I came across the concept in Nate Silver’s book The Signal and the Noise, and found its implications fascinating. I decided to spend some time grappling with the seemingly simple equation so that I could arrive at an intuitive understanding of it, and finally commit it to memory. As I worked through a number of examples and hypothetical situations, I found one to be particularly enlightening: BT provides a satisfying explanation for why I find it so difficult to have a meaningful debate with those who have an unwavering commitment to a particular viewpoint. Using Bayes’ theorem, I will explain why you can’t change someone’s mind if they have an absolute a priori belief. In turn, this example will illuminate how BT works.

Let’s start with an intuitive definition of Bayes’ Theorem. It describes how you should adjust your confidence in a hypothesis after being presented with some evidence. You adjust your confidence according to how probable the evidence makes the hypothesis; a high probability warrants high confidence, and vice versa. For example, imagine that you are living with your partner and one day you find a gift box hidden in the closet. Now, you want to develop a hypothesis about the intended recipient of this gift. The question is, is the gift for you? The evidence is the gift box that you found in the closet. What BT allows you to do is to answer the question systematically, given the evidence.

Let’s begin with the tentative hypothesis that the gift is for you. To test it, you first need to estimate the likelihood that the gift is for you. If today is your birthday, then the estimate for getting a gift is high—since your partner is very thoughtful. If, however, today is not your birthday, but is instead your father-in-law’s birthday, then this makes your initial estimate of receiving a gift much lower. The other question you must take into account is how to adjust this initial probability given that you just found a hidden gift box. Finding this evidence should increase your confidence in the hypothesis, because if the gift is meant for you, then it makes sense that your partner tried to conceal the box in the closet. Finally, you have to evaluate: how common is it for a gift box to be in your closet, regardless of whether it is your birthday? In most cases, the chance that a gift box would appear in your closet for any other purpose is low. The evidence thus increases your confidence in the original hypothesis that the gift is for you.

This thought process is the key idea in BT. The theorem gives you a mathematical formula that allows you to calculate how much the evidence should change your original confidence in a hypothesis. Let’s call this modifying factor the evidence confidence factor, or ECF. The ECF is calculated by estimating how strongly linked the evidence is to the hypothesis. We do this by first thinking about how likely it is for this evidence to occur if the hypothesis is true, and then weighing that against how common the evidence is regardless of whether the hypothesis is true or not.

Let’s review the previous example to see how this works. How strongly linked is a gift box in your closet to the hypothesis that the gift is for you? Intuitively it seems obvious that there is a strong link. One of the achievements of BT is that it gives us a concrete, repeatable method for evaluating the strength of this link. The prescription is to assume that the hypothesis is true, and then to evaluate whether the evidence would occur or not. It’s straightforward in this example. If your partner did indeed buy you a gift, then it is quite likely that there would be a gift box hidden somewhere in the apartment. However, it’s not certain. The gift could be at work. Or maybe it’s in the car. In many cases, it’s hard to evaluate how relevant the evidence is, without this crucial exercise. It’s also important to interpret the evidence in the context of the situation—that’s the other piece of information that you need to calculate the ECF. How likely is the evidence to occur regardless of whether the hypothesis is true? If your partner works at a gift shop and always has many gift boxes at home, then even though a gift box is strongly linked to a gift for you, there is a high chance of its being in your closet for other reasons. This makes the evidence less likely to change your confidence in the hypothesis.

Notice that BT does not say anything about the truth of a hypothesis. It only defines a process for figuring out what your subjective belief in a hypothesis should be, given what you previously thought, and what you now know. In our example, there is an objective answer to the question: either your partner bought a gift, or your partner did not buy you a gift. However, BT only gives you an answer to how much confidence you should have that the gift is for you, given the information that you know. It does not tell you how the world is—merely your confidence in how it is, given what you know about it. It is fundamentally a theorem about our epistemological limits.

Now, we can define the theorem more accurately to be the following:

Confidence in a hypothesis = Evidence Confidence Factor x Confidence in the hypothesis before knowing about the evidence

Evidence Confidence Factor = Strength of the evidence’s link to the hypothesis / How common the evidence is in the context of the situation in which the hypothesis is being evaluated
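In standard notation, the two formulas above are just Bayes’ theorem: P(H|E) = [P(E|H) / P(E)] × P(H). Here is a minimal sketch in Python of one such update for the gift-box example. All the numbers are invented for illustration: a 30% prior that the gift is for you, a 70% chance of finding a hidden box if it is, and a 25% chance of a box turning up in the closet regardless.

```python
def bayes_update(prior, p_evidence_given_h, p_evidence):
    """Posterior confidence = ECF * prior, where ECF = P(E|H) / P(E)."""
    ecf = p_evidence_given_h / p_evidence
    return ecf * prior

# Hypothetical numbers for the gift-box example:
prior = 0.30             # confidence the gift is for you, before the evidence
p_box_if_for_you = 0.70  # chance of a hidden box, assuming the gift is for you
p_box_overall = 0.25     # how common a hidden box is, regardless of the hypothesis

posterior = bayes_update(prior, p_box_if_for_you, p_box_overall)
print(round(posterior, 2))  # 0.84 -- the evidence strengthens the hypothesis
```

With these made-up numbers the ECF is 0.70 / 0.25 = 2.8, so finding the box nearly triples your confidence, from 30% to 84%.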

This means that if your confidence in a hypothesis started out at 0%, no amount of evidence can ever change your mind! By definition, fundamentalists have an initial confidence in their beliefs (or their hypotheses about certain aspects of the world) that is very close to certainty; the corollary is that their confidence in any hypothesis that contradicts their belief is close to zero. Evidence in the form of events and arguments has almost no power to alter the confidence of a fundamentalist, according to BT. In this case, you can think of BT as a mathematical way to show that an absolute belief precludes entertaining any evidence that is contrary to that belief.
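The arithmetic behind this claim is trivial but worth seeing once: the posterior is the ECF multiplied by the prior, so a prior of exactly zero yields a posterior of exactly zero no matter how strong the evidence. A self-contained sketch (the probabilities are arbitrary):

```python
def bayes_update(prior, p_evidence_given_h, p_evidence):
    """Posterior confidence = (P(E|H) / P(E)) * prior."""
    return (p_evidence_given_h / p_evidence) * prior

# Even overwhelmingly strong evidence (ECF of 0.99 / 0.01 = 99)
# cannot move a prior of exactly zero:
print(bayes_update(0.0, 0.99, 0.01))  # 0.0
```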

I am not making a statement about whether fundamentalists’ beliefs are correct—as we now know, BT does not allow us to take that kind of stance. I am suggesting, though, that a slightly skeptical mind, one that does not completely believe in any single hypothesis, allows you to interact with the world more fully, and to be affected by it. It allows you to change your mind when the facts change, but at the same time, not be fickle. All things considered, it may make you a better conversationalist, a better partner—and *almost* certainly, a better thinker.
