What is “Rational” Polarization?

Kevin Dorst
Science and Philosophy
Oct 8, 2020 · 10 min read

So far in this series, I’ve (1) argued that we need a rational explanation of polarization, (2) described an experiment showing how in principle we could give one, and (3) suggested that this explanation can be applied to the psychological mechanisms that drive polarization.

Over the next two weeks, I’ll put these normative claims on a firm theoretical foundation. Today I’ll explain why ambiguous evidence is both necessary and sufficient for predictable polarization to be rational. Next week I’ll use this theory to explain our experimental results and show how predictable, profound, persistent polarization can emerge from rational processes.

With those theoretical tools in place, we’ll be in a position to use them to explain the psychological mechanisms that in fact drive polarization.

​So: what do I mean by “rational” polarization; and why is “ambiguous” evidence the key?

It’s standard to distinguish practical from epistemic rationality. Practical rationality is doing the best that you can to fulfill your goals, given the options available to you. Epistemic rationality is doing the best that you can to believe the truth, given the evidence available to you.

It’s practically rational to believe that climate change is a hoax if you know that doing otherwise will lead you to be ostracized by your friends and family. It’s not epistemically rational to do so unless your evidence — including the opinions of those you trust — makes it likely that climate change is a hoax.

My claim is about epistemic rationality, not practical rationality. Given how important our political beliefs are to our social identities, it’s not surprising that it’s in our interest to have liberal beliefs if our friends are liberal, and to have conservative beliefs if our friends are conservative. Thus it should be uncontroversial that the mechanisms that drive polarization can be practically rational — as people like Ezra Klein and Dan Kahan claim.

The more surprising claim I want to defend is that ambiguities in political evidence make it so that liberals and conservatives who are doing the best they can to believe the truth will tend to become more confident in their opposing beliefs.

To defend this claim, we need a concrete theory of epistemic rationality.

The Standard Theory

The standard theory is what we can call unambiguous Bayesianism. It says that the rational degrees of confidence at a time can always be represented with a single probability distribution, and that new evidence is always unambiguous, in the sense that you can always know exactly how confident to be in light of that evidence.

Simple example: suppose there’s a fair lottery with 10 tickets. You hold 3 of them, Beth holds 2, and Charlie holds 5. Given that information, how confident should you be in the various outcomes? That’s easy: you should be 30% confident you’ll win, 20% confident Beth will, and 50% confident Charlie will.

Now suppose I give you some unambiguous evidence: I tell you whether or not Charlie won. Again, you’ll know exactly what to do with this information: if I tell you he won, you know you should be 100% confident he won; if I tell you he lost, that means there are 5 tickets remaining, 3 of which belong to you — so you should be 3/5 = 60% confident that you won and 40% confident that Beth did.
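
To make the conditioning step concrete, here’s a minimal Python sketch (my own illustration, not part of the formal framework): learning that Charlie lost just zeroes out his ticket and renormalizes what’s left.

```python
from fractions import Fraction

# Prior: each of the 10 tickets is equally likely to be drawn.
prior = {"You": Fraction(3, 10), "Beth": Fraction(2, 10), "Charlie": Fraction(5, 10)}

def condition(dist, still_possible):
    """Bayesian conditioning: rule out the impossible outcomes and renormalize."""
    total = sum(p for name, p in dist.items() if name in still_possible)
    return {name: (p / total if name in still_possible else Fraction(0))
            for name, p in dist.items()}

# Unambiguous evidence: "Charlie lost".
print(condition(prior, {"You", "Beth"}))
# {'You': Fraction(3, 5), 'Beth': Fraction(2, 5), 'Charlie': Fraction(0, 1)}
```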

In effect, unambiguous Bayesianism assimilates every case of information-gain to a situation like our lottery, wherein you always know what probabilities to have both before and after the evidence comes in.

This has a surprising consequence:

Fact 1. Unambiguous Bayesianism implies that, no matter what evidence you might get, predictable polarization is always irrational.

(​The Technical Appendix contains all formal statements and proofs.)

In particular, consider me back in 2010, thinking about the political attitudes I’d have in 2020. Unambiguous Bayesianism implies that no matter what evidence I might get — no matter that I was going to a liberal university, for instance — I shouldn’t have expected it to be rational for me to become any more liberal than I was then.

Moreover, Fact 1 implies that if Becca and I shared opinions in 2010, then we couldn’t have expected rational forces to lead me to become more liberal than her.

Why is Fact 1 true — and what does it mean?

Why it’s true: Return to the simple lottery case. Suppose you are only allowed to ask questions which you know I’ll give a clear answer to. You’re currently 30% confident that you won. Is there anything you can ask me that you expect will make you more confident of this? No.

You could ask me, “Did I win?” — but although there’s a 30% chance I’ll say ‘Yes’, and your confidence will jump to 100%, there’s a 70% chance I’ll say ‘No’ and it’ll drop to 0%. Notice that (0.3)(1) + (0.7)(0) = 30%.

You could instead ask me something that’s more likely to give you confirming evidence, such as “Did Beth or I win?” In that case it’s 50% likely that I’ll say ‘Yes’ — but if I do your confidence will only jump to 60% (since there’ll still be a 40% chance that Beth won); and if I say ‘No’, your confidence will drop to 0%. And again, (0.5)(0.6) + (0.5)(0) = 30%.

This is no coincidence. Fact 1 implies that if you can only ask questions with unambiguous answers, there’s no question you can ask that you can expect to make you more confident that you won. And recall: unambiguous Bayesianism assimilates every scenario to one like this.
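
In the lottery case you can check this exhaustively. Here’s a quick sketch (again just my own illustration) that runs through every non-trivial yes/no question about who won and confirms that your expected posterior confidence in your own victory is always exactly your current 30%.

```python
from fractions import Fraction
from itertools import product

prior = {"You": Fraction(3, 10), "Beth": Fraction(2, 10), "Charlie": Fraction(5, 10)}
players = list(prior)

def posterior_you(answer_cell):
    """Your confidence that you won, after learning the winner lies in answer_cell."""
    total = sum(prior[p] for p in answer_cell)
    return prior["You"] / total if "You" in answer_cell else Fraction(0)

# Every yes/no question about the winner sorts the players into a 'Yes' cell
# and a 'No' cell. Check the expected posterior for each such question.
for labels in product(["Yes", "No"], repeat=3):
    yes_cell = {p for p, lab in zip(players, labels) if lab == "Yes"}
    no_cell = set(players) - yes_cell
    if not yes_cell or not no_cell:
        continue  # a trivial question teaches you nothing
    expected = (sum(prior[p] for p in yes_cell) * posterior_you(yes_cell)
                + sum(prior[p] for p in no_cell) * posterior_you(no_cell))
    assert expected == Fraction(3, 10)  # always exactly your current 30%

print("Every unambiguous question leaves your expected confidence at 30%.")
```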

What it means: Fact 1 implies that if unambiguous Bayesianism is the right theory of epistemic rationality, then the polarization we observe in politics must be irrational.

After all, a core feature of this polarization is that it is possible to see it coming. When my friend Becca and I went our separate ways in 2010, I expected that her opinions would get more conservative, and mine would get more liberal. Unambiguous Bayesianism implies, therefore, that I must chalk such predictable polarization up to irrationality.

But, as I’ve argued, there’s strong reason to think I can’t chalk it up to irrationality — for if I’m to hold onto my political beliefs now, I can’t think they were formed irrationally.

This — now stated more precisely — is the puzzle of predictable polarization with which I began this series.

Ambiguous Evidence

The solution is ambiguous evidence.

Evidence is ambiguous when it doesn’t wear its verdicts on its sleeve — when even rational people should be unsure how to react to it. Precisely: your evidence is ambiguous if, in light of it, you should be unsure how confident to be in some claim.

(More precisely: letting P be the rational probabilities to have given your evidence, there is some claim q such that P(P(q)=t)<1, for all t. See the Technical Appendix.)

Here is the key result driving this project:

Fact 2. Whenever evidence is ambiguous, there is some claim on which it can be predictably polarizing.

In other words, someone who receives ambiguous evidence can expect it to be rational to increase their confidence in some claim. Therefore, if two people will receive ambiguous evidence, it’s possible for them to expect their beliefs to diverge in a particular direction.
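
Here’s a minimal toy model of how that can work (my own illustration; the official definitions and proofs live in the Technical Appendix). There are two equally likely worlds. In one of them the evidence settles where you are; in the other it leaves both worlds open, so you can’t be sure what the rational credence is. That is ambiguity, and it makes the expected rational posterior higher than the prior.

```python
from fractions import Fraction

# A toy model of ambiguous evidence (an illustration of Fact 2, not the paper's model).
# Two equally likely worlds; q is the claim "we are in world 'a'".
worlds = ["a", "b"]
prior = {w: Fraction(1, 2) for w in worlds}

# Evidence is non-partitional: in world 'a' you learn exactly where you are,
# but in world 'b' you can't rule anything out.
evidence = {"a": {"a"}, "b": {"a", "b"}}

def rational_credence_in_q(world):
    """The rational posterior in q, given the evidence available at `world`."""
    cell = evidence[world]
    return prior["a"] / sum(prior[w] for w in cell) if "a" in cell else Fraction(0)

# Ambiguity: in world 'b' the rational credence in q is 1/2, but from there you
# can't be certain of that, since world 'a' (where the rational credence is 1)
# is still open.
assert rational_credence_in_q("a") == 1
assert rational_credence_in_q("b") == Fraction(1, 2)

# Predictable polarization: the prior expectation of your posterior in q
# exceeds your prior in q.
expected_posterior = sum(prior[w] * rational_credence_in_q(w) for w in worlds)
print(expected_posterior)               # 3/4
assert expected_posterior > prior["a"]  # 3/4 > 1/2
```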

As we saw in our experiment — and as I’ll explain in more depth next week — this means that ambiguous evidence can lead to predictable, rational shifts in your beliefs.

Without going into the formal argument, this is something that I think we all grasp, intuitively. Consider an activity like asking a friend for encouragement. For example, suppose that instead of a lottery, you were competing with Charlie and Beth for a job. I don’t know who will get the offer, but you’re nervous and come to me seeking reassurance. What will I do?

I’ll provide you reasons to think you will get it — help you focus on how your interview went well, how qualified you are, etc.

Of course, when you go to me seeking reassurance you know that I’m going to encourage you in this way. So the mere fact that I’m giving you such reasons isn’t, in itself, evidence that you got the job. Nevertheless, we go to our friends for encouragement in this way because we do tend to feel more confident afterwards. Why is that?

If I’m a good encourager, then I’ll do my best to make the evidence in favor of you getting the position clear and unambiguous, while leaving the evidence against it unclear and ambiguous. I’ll say, “They were really excited about you in the interview, right?” — highlighting unambiguous evidence that you’ll get the job. And when you worry, “But one of the interviewers looked unhappy throughout it”, I’ll say, “Bill? I hear he’s always grumpy, so it’s probably got nothing to do with you” — thus making the evidence that you didn’t get the job more ambiguous, and so weaker. On the whole, this back-and-forth can be expected to make you more confident that you’ll get the job.

This informal account is sketchy, but I hope you can see the rough outlines of how this story will go. We’ll return to filling in the details in due course.

But before we do that, there’s a more fundamental question we need to ask.

I’ve introduced the notion of ambiguous evidence and proved a result connecting it to polarization. But how do we know that the models of ambiguous evidence which allow for predictable polarization are good models of (epistemic) rationality? Unambiguous Bayesianism has a distinguished pedigree as a model of rational belief; how do we know that allowing ambiguous evidence isn’t just a way of distorting it into an irrational model?

The Value of Rationality

This can be given a precise answer in terms of the value of evidence.

What distinguishes rational from irrational transitions in belief? It’s rational to update your beliefs about the lottery by asking me a question about who won. It’s irrational to update your beliefs by hypnotizing yourself to believe you won. Why the difference?

Answer: asking me a question is valuable, in the sense that you can expect it to make your beliefs more accurate, and therefore to improve the quality of your decisions. Conversely, hypnotizing yourself is not valuable in this sense: if right now you’re 30% confident you won, you don’t expect that hypnotizing yourself to become 100% confident will make your opinions more accurate — rather, it’ll just make you certain of something that’s likely false!

This idea can be made formally precise using tools from decision theory. Say that a transition in beliefs is valuable if, no matter what decision you face, you prefer to make the transition before making your decision, rather than simply making your decision now. (See the Technical Appendix for details.)

To illustrate, focus on our simple lottery case. Suppose you’re offered the following:

Bet: If Charlie wins the lottery, you gain 5 dollars; if not, you lose 1 dollar.

Since Charlie is 50% likely to win, this is a bet in your favor.

Would you rather (1) decide whether to take the Bet now, or (2) first ask me a question about who won, and then decide whether to take the Bet?

Obviously the latter. If you must decide now, you’ll take the Bet — with some chance of gaining 5 dollars, and some chance of losing 1 dollar. But what if instead you first ask, “Did Charlie win?”, before making your decision? Then if I say ‘Yes’, you’ll take the Bet and walk away with 5 dollars; and if I say ‘No’, you’ll decline the Bet and avoid losing 1 dollar.

In short: asking a question allows you to keep the benefit and reduce the risk. That’s why asking questions is epistemically rational.

Contrast questions with hypnosis. Would you rather (1) decide whether to take the Bet now, or (2) first hypnotize yourself to believe that you won, and then decide whether to take the Bet?

Obviously you’d rather not hypnotize yourself. After all, if you don’t hypnotize yourself, you’ll take the Bet — and it’s a bet in your favor. If you do hypnotize yourself, the Bet will still be in your favor (it’s still 50% likely that Charlie will win); but, since you’ve hypnotized yourself to think that you won (so Charlie lost), you won’t take the Bet — losing out on a good opportunity.
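
The comparison can be tallied directly. Here’s a quick sketch of the expected payoffs of the three strategies, using nothing beyond the Bet’s payoffs and the 50% chance that Charlie wins:

```python
from fractions import Fraction

p_charlie = Fraction(1, 2)     # Charlie holds 5 of the 10 tickets
win, lose = 5, -1              # the Bet pays 5 dollars if Charlie wins, costs 1 if not

# (1) Decide now: the Bet is favorable, so you take it.
decide_now = p_charlie * win + (1 - p_charlie) * lose     # 2 dollars in expectation

# (2) Ask "Did Charlie win?" first, then take the Bet only on a 'Yes'.
ask_first = p_charlie * win + (1 - p_charlie) * 0         # 2.5 dollars in expectation

# (3) Hypnotize yourself into thinking you won: you'd then decline a Bet
#     that is in fact still favorable, and walk away with nothing.
hypnotize = 0

print(decide_now, ask_first, hypnotize)   # 2  5/2  0
assert ask_first > decide_now > hypnotize
```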

All of this can be generalized and formalized into a theory of epistemic rationality:

Rationality as Value: epistemically rational belief-transitions are those that are valuable, in the sense that you should always expect them to lead to better decisions.

There are two core facts this theory gives us.

First: if we assume that evidence is unambiguous, then Rationality as Value implies unambiguous Bayesianism:

Fact 3. Rationality as Value implies that, when evidence is unambiguous, unambiguous Bayesianism is the right theory of epistemic rationality.

Thus our theory of epistemic rationality subsumes the standard theory as a special case. In particular, it implies that when evidence is unambiguous, predictable polarization is irrational.

Second: once we allow ambiguous evidence, predictable polarization can be rational:

Fact 4. There are belief-transitions that are valuable but contain ambiguous evidence — and which, therefore, are predictably polarizing.

Fact 4 is the foundation for the theory of rational polarization that I’m putting forward. It provides a theoretical “possibility proof” to complement our empirical one: it shows that, when evidence is ambiguous, you can rationally expect it to lead you to the truth, despite expecting it to polarize you. This is how we are going to solve the puzzle of predictable polarization.

In particular, it turns out that we can string together a series of independent questions and pieces of (ambiguous) evidence with the following features:

  • Relative to each question, the evidence you’ll receive is valuable and yet (slightly) predictably polarizing;
  • Yet relative to the collection of questions as a whole, the evidence is predictably and profoundly polarizing.

This, I’ll argue, is how predictable, persistent, and profound polarization can be rational — how, back in 2010, Becca and I could predict that we’d come to disagree radically without predicting that either of us would be systematically irrational.
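
To get a feel for how the compounding works, here’s a rough simulation (a toy construction of my own, not the official model): take a long run of independent questions, let each one deliver a slightly polarizing update of the ambiguous-evidence kind sketched above, and watch how reliably the small per-question drifts add up.

```python
import random
from statistics import mean

random.seed(0)

# Per question: prior credence 1/2 in that question's claim. With a small
# probability the evidence is clear and pushes credence to 1; otherwise it is
# ambiguous and credence stays put -- a slight expected drift per question.
P_CLEAR = 0.1          # chance the evidence disambiguates in the claim's favor
PRIOR = 0.5
N_QUESTIONS = 200      # independent questions
N_TRIALS = 10_000

def total_drift():
    """Sum, over all questions, of (posterior - prior) for that question's claim."""
    drift = 0.0
    for _ in range(N_QUESTIONS):
        posterior = 1.0 if random.random() < P_CLEAR else PRIOR
        drift += posterior - PRIOR
    return drift

drifts = [total_drift() for _ in range(N_TRIALS)]
print(f"mean total drift: {mean(drifts):.1f}")        # about 10 = 200 * 0.05
print(f"fraction of trials with total drift > 5: "
      f"{sum(d > 5 for d in drifts) / N_TRIALS:.3f}")  # close to 1
```

Each question only nudges you a little in expectation, but across the whole collection the aggregate shift is both large and nearly certain.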

What next?
The formal details and proofs of Facts 1–4 can be found in the Technical Appendix.
​If you liked this post, consider signing up for the newsletter, following me on Twitter, or spreading the word.
Coming up next week: an argument that the predictable polarization observed in our word-completion experiment was rational, and an explanation of how predictable, profound, persistent polarization can arise rationally from ambiguous evidence.

Originally published at https://www.kevindorst.com.
