Confirmation Bias is Rational

Kevin Dorst
Science and Philosophy
5 min read · Oct 17, 2020

It’s September 21, 2020. Justice Ruth Bader Ginsburg has just died. Republicans are moving to fill her seat; Democrats are crying foul.

Fox News publishes an op-ed by Ted Cruz arguing that the Senate has a duty to fill her seat before the election. The New York Times publishes an op-ed on Republicans’ hypocrisy and Democrats’ options.

Becca and I each read both. I — along with my liberal friends — conclude that Republicans are hypocritically and dangerously violating precedent. Becca — along with her conservative friends — concludes that Republicans are doing what needs to be done, and that Democrats are threatening to violate democratic norms (“court packing??”) in response.

In short: we both see the same evidence, but we react in opposite ways — ways that lead each of us to be confident in our opposing beliefs. In doing so, we exhibit a well-known form of confirmation bias known as biased assimilation.

And we are rational to do so: we both are doing what we should expect will make our beliefs most accurate. Here’s why.

Consider what those who exhibit biased assimilation actually do.

They are presented with two pieces of evidence — one telling in favor of a claim, one telling against it. They have limited time and energy to process this evidence. As a result, the group that believes the claim spends more time scrutinizing the evidence against it; the group that disbelieves it spends more time scrutinizing the evidence in favor of it.

In scrutinizing the evidence against their prior belief, what they are doing is looking for a flaw in the argument; a gap in the reasoning; or, more generally, an alternative explanation that could nullify the force of the evidence.

For example, when I read both op-eds, I spent a lot more time thinking about Cruz’s reasons in favor of appointing someone (I even did some googling to fact check them). In doing so, I was able to spot the fact that some of the reasoning was misleadingly worded; for instance:

“Twenty-nine times in our nation’s history we’ve seen a Supreme Court vacancy in an election year or before an inauguration, and in every instance, the president proceeded with a nomination.”

True. But this glosses over the fact that just 4 years ago, Obama did indeed “proceed with a nomination” — and in response Senate Republicans (with Cruz’s support) blocked that nomination using the excuse that it was an election year.

The point? I decided to spend little time thinking about the details of the New York Times’s argument, and so found little reason to object to it; instead, I spent my time scrutinizing Cruz’s argument, and when I did I found reasons to discount it.

Meanwhile, Becca did the opposite: she scrutinized the New York Times’s argument more than Cruz’s, and in doing so no doubt found flaws in the argument.

Notice what that means: although Becca and I were presented with the same evidence initially, the way we chose to process it meant we ended up with different evidence by the end of it. I knew subtle details about Cruz’s argument that Becca didn’t notice; Becca knew subtle details about the New York Times argument that I didn’t notice.

This selective scrutiny leads us to polarize. Why?

Scrutinizing a piece of evidence is a form of cognitive search: you are searching for an alternative explanation that would fit the facts of the argument but remove its force.

If you’ve kept up with this blog, that should sound familiar: it’s a lot like searching your lexicon for a word that fits a string — i.e. a word-completion task. When I look closely at Cruz’s argument and search for flaws, cognitively what I’m doing is just like when I look closely at a string of letters — say, ‘_E_RT’ — and search for a word that completes it. (Hint: what’s in your chest?)

In both cases, if I find what I’m looking for (a problem with Cruz’s argument; a word that completes the string) I get strong, unambiguous evidence, and so I know what to think (the argument is no good; the string is completable). But if I try and fail to find what I’m looking for, I get weak, ambiguous evidence — I should be unsure whether to think the argument is any good; I should be unsure how confident to be that the string is completable.

Scrutinizing an argument thus leads to predictable polarization. If I find a flaw in Cruz’s argument, my confidence in my prior belief goes way up; if I don’t find a flaw, it goes only a little bit down. So, on average, selective scrutiny will increase my confidence.
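To see how that asymmetry adds up, here’s a back-of-the-envelope calculation in Python. The specific numbers — a 70% prior, a 60% chance of finding a flaw, and the sizes of the two updates — are illustrative assumptions of mine, not figures from the argument above:

```python
# Illustrative numbers only; they stand in for the asymmetry described above.
prior = 0.70          # my confidence that Republicans are in the wrong
p_find_flaw = 0.60    # how likely I think I am to find a flaw in Cruz's op-ed

conf_if_flaw_found = 0.85  # finding a flaw: strong, unambiguous evidence, big jump up
conf_if_no_flaw = 0.65     # failing to find one: ambiguous evidence, small step down

expected_confidence = (p_find_flaw * conf_if_flaw_found
                       + (1 - p_find_flaw) * conf_if_no_flaw)
print(round(expected_confidence, 2))  # 0.77 > 0.70: on average, scrutiny raises my confidence
```

The big upward jump happens often enough, and the downward step is small enough, that the expected effect of one round of scrutiny is an increase in confidence.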

Nevertheless, such selective scrutiny is rational. Why?

Because it’s a good way to avoid ambiguous evidence — and, therefore, is often a good way to make your beliefs more accurate.

So if you’re given a choice between scrutinizing Cruz’s argument and scrutinizing the NYT’s, the best way to get accurate beliefs is often to scrutinize the one in which you expect to find a flaw.

Which one is that? More likely than not, the argument that disconfirms your prior beliefs! For, given your prior beliefs, you should think such arguments are more likely to contain flaws, and that their flaws will be easier to recognize.
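Here is the same point as a rough calculation, again with made-up numbers purely to fix ideas (none of these probabilities come from the post itself):

```python
# Illustrative numbers only.
my_confidence = 0.80        # how confident I am that Republicans are in the wrong

p_flaw_if_mistaken = 0.70   # chance an argument for a mistaken conclusion has a findable flaw
p_flaw_if_correct = 0.20    # chance an argument for a correct conclusion has a findable flaw

def chance_of_finding_flaw(p_conclusion_mistaken):
    """Chance that scrutiny turns up a flaw, given how likely I think the
    argument's conclusion is to be mistaken."""
    return (p_conclusion_mistaken * p_flaw_if_mistaken
            + (1 - p_conclusion_mistaken) * p_flaw_if_correct)

# Scrutinizing the argument that disconfirms my belief (Cruz's)...
print(round(chance_of_finding_flaw(my_confidence), 2))        # 0.6
# ...versus the one that confirms it (the NYT's):
print(round(chance_of_finding_flaw(1 - my_confidence), 2))    # 0.3
```

By my own lights, scrutinizing the disconfirming argument is twice as likely to yield clear-cut, unambiguous evidence.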

Thus I expect Cruz’s argument to contain a flaw, so I scrutinize it; and Becca expects the NYT’s argument to contain a flaw, so she scrutinizes it. These choices are rational — despite the fact that they predictably lead our beliefs to polarize.

We can demonstrate this with a simulation.

Suppose Becca and I started out each expecting 50% of the pieces of evidence for/against replacing Ginsburg to contain flaws, but I am slightly better at finding flaws in the supporting evidence, and she is slightly better at finding flaws in the detracting evidence.

Suppose then we are presented with a series of random pairs of pieces of evidence — one in favor, one against — and at each stage we decide to scrutinize the one that we expect to make us more accurate. Since the expected gain in accuracy tracks how likely we are to find a flaw, this means that I will be slightly more likely to scrutinize the supporting evidence, and she will be slightly more likely to scrutinize the detracting evidence.

As a result — even if 50% of the pieces of evidence tell in each direction — we’ll polarize:

[Figure: Simulation of two groups who are better at scrutinizing different types of evidence and who always do their best to be accurate.]
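For readers who want to tinker, here is a minimal Python sketch in the spirit of that simulation. Everything specific in it — the step sizes, the 55/45 skill split, and the rule of scrutinizing wherever a flaw currently seems most likely (a stand-in for the expected-accuracy rule described above) — is my own illustrative assumption, not the exact model behind the figure:

```python
import random

ROUNDS = 200
STEP_FLAW = 0.02      # finding a flaw: strong, unambiguous evidence, so a big update
STEP_NO_FLAW = 0.005  # failing to find one: ambiguous evidence, so a small update

# Skill at spotting flaws, by the side the evidence argues for:
# "fill" = fill the seat now (Cruz's side); "wait" = wait until after the election.
SKILL = {
    "me":    {"fill": 0.55, "wait": 0.45},  # slightly better at catching flaws in "fill" arguments
    "becca": {"fill": 0.45, "wait": 0.55},  # slightly better at catching flaws in "wait" arguments
}

def run(agent, seed=0):
    rng = random.Random(seed)
    conf = 0.5  # confidence that the seat should be filled now
    for _ in range(ROUNDS):
        # Each round presents one "fill" piece of evidence and one "wait" piece.
        # Expected payoff of scrutiny = skill on that kind of evidence times how
        # likely, by my current lights, that kind of argument is to be flawed.
        expect_flaw = {
            "fill": SKILL[agent]["fill"] * (1 - conf),
            "wait": SKILL[agent]["wait"] * conf,
        }
        side = max(expect_flaw, key=expect_flaw.get)  # scrutinize where a flaw seems likelier
        is_flawed = rng.random() < 0.5                # half the evidence on each side is flawed
        found = is_flawed and rng.random() < SKILL[agent][side]
        if side == "fill":
            conf += -STEP_FLAW if found else STEP_NO_FLAW
        else:
            conf += STEP_FLAW if found else -STEP_NO_FLAW
        conf = min(max(conf, 0.0), 1.0)
    return conf

def average_final(agent, trials=50):
    return sum(run(agent, seed) for seed in range(trials)) / trials

# On average, I drift well below 0.5 and Becca drifts well above it: we polarize,
# even though the evidence itself is evenly balanced.
print(average_final("me"), average_final("becca"))
```

Starting from the same 50/50 prior and facing the same evenly split stream of evidence, the two agents typically end up far apart, simply because each scrutinizes the side where she expects flaw-hunting to pay off.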

Upshot: biased assimilation can be rational. People with opposing beliefs who care only about the truth and are presented with the same evidence can be expected to polarize, since the best way to assess that evidence will often be to apply selective scrutiny to the evidence that disconfirms their beliefs.

Originally published at https://www.psychologytoday.com.

Kevin Dorst
Science and Philosophy

Philosopher at University of Pittsburgh, working on the question of how irrational we truly are.