Normative Subjective Probability

“The Philosophy of Probability”, 12/9/2016

Introduction

The subjective interpretation of probability identifies the concept with an agent’s degrees of belief. This is plausible in light of familiar statements such as “the probability that I will pass my exam is 30%”, or “there is a 60% chance I’ll catch a fish today”. This interpretation contrasts with objective theories such as frequency (occurrences of some event A out of total occurrences) or propensity (disposition of a system). After a brief technical setup, I will concede that this interpretation is not meant to be descriptive. Moreover, we will see that under a normative framework, it can still satisfy our desiderata for an interpretation of probability.

Utility/Betting Framework (Technical Setup)

De Finetti, Savage, and Ramsey formalize the notion of measuring subjective credences. Ramsey builds up an account of credences given only an agent’s preference ordering over outcomes. Given numerical utilities and mathematical assumptions regarding the richness of the outcome space, we can find an agent’s degree of belief in any proposition P’ from her indifference between an outcome L for certain and the gamble ‘M if P’, N if not P’’. In friendlier terms of money rather than utility, an agent’s probability for P’ is the amount she would pay for a bet that returns one unit if P’ is true and nothing otherwise.
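This betting operationalization can be sketched concretely (an illustration of my own; the function name is hypothetical, not drawn from any of the authors above): the price an agent is willing to pay for a unit-stake ticket on P’, as a fraction of the stake, just is her revealed degree of belief.

```python
# Illustrative sketch of eliciting a degree of belief from a fair betting
# price, in the de Finetti/Ramsey spirit. Helper name is hypothetical.

def credence_from_betting_price(price: float, stake: float = 1.0) -> float:
    """The agent's degree of belief in P is the price she would pay
    for a bet that returns `stake` if P is true and 0 otherwise,
    normalized by the stake."""
    if not 0 <= price <= stake:
        # A price outside [0, stake] already signals incoherence.
        raise ValueError("betting price must lie between 0 and the stake")
    return price / stake

# An agent who pays $0.30 for a $1 ticket on P reveals credence 0.30.
print(credence_from_betting_price(0.30))  # 0.3
```

The same elicitation scales with the stake: paying $0.50 for a $2 ticket reveals credence 0.25.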

Subjective Credences and Rationality

Having made this notion more precise, why should we think degrees of belief express the concept of probability? Suppose we relate these partial beliefs back to our mathematical or formal axiomatization of the probability function pr:

1. For all propositions P, pr(P) >= 0.

2. If P is a logical truth or tautology, pr(P) = 1.

3. For disjoint P and Q, pr(P or Q) = pr(P) + pr(Q).

From experience, it is clear that real or unconstrained degrees of belief do not follow these axioms (are not “coherent”). In ordinary circumstances, we often fail to assign full confidence to sufficiently complicated tautologies, and we are of course not logically omniscient enough to do so. Moreover, as the work of Kahneman and Tversky (1974) shows, our conception and use of ‘probability’ systematically violates probability theory. In studies, informed and naive subjects alike judge the conjunction of two events to be more likely, and the disjunction of two events to be less likely, than the individual events themselves. Given that real degrees of belief are not coherent, our account can be modified as follows:

S: The probability of p is the degree of belief a rational agent would assign to p.

What is a rational agent? Insofar as rationality means coherence with respect to the axioms, it has been demonstrated that any agent who avoids sure loss in betting scenarios (equivalently, in questions of preference) as described above will automatically conform to the axioms, and vice versa. Yet even with such a minimalist constraint on betting behavior, it is difficult to imagine that such an agent exists, given the finitude of human computation. To escape sure loss, real agents might avoid betting scenarios altogether, and yet their beliefs remain incoherent. Therefore, the pure subjective interpretation of probability does not pick out anything in the real world. Instead of abandoning the mathematical axioms altogether, I propose that we consider the subjective interpretation as normative: it forms a guide for what doxastic states we ought to hold.
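The “sure loss” half of this equivalence can be made concrete with a toy computation (my own sketch with hypothetical $1 tickets, not any author’s formalism): an agent whose prices for tickets on p and on ~p sum to more than one dollar loses money however p turns out.

```python
# Toy Dutch book sketch (illustrative): the agent buys a $1 ticket on p and
# a $1 ticket on not-p at her stated prices; exactly one ticket pays out.

def net_payoff(price_p: float, price_not_p: float, p_true: bool) -> float:
    """Agent's net gain after buying both tickets and observing the outcome."""
    ticket_p = 1.0 if p_true else 0.0        # ticket on p pays $1 iff p is true
    ticket_not_p = 0.0 if p_true else 1.0    # ticket on not-p pays otherwise
    return (ticket_p + ticket_not_p) - (price_p + price_not_p)

# Incoherent prices summing to 1.2: the loss is the same in every outcome.
for outcome in (True, False):
    print(net_payoff(0.6, 0.6, outcome))  # negative either way: a sure loss
```

Coherent prices summing to exactly 1 break even in both outcomes, which is the converse direction of the equivalence.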

The Strength of a Normative Interpretation

How does our revised account fare as an interpretation of probability? We require an interpretation to be (i) admissible, (ii) ascertainable, and (iii) applicable. We have turned from a positive to a normative account to satisfy admissibility: as the Dutch book arguments show, an agent who avoids sure loss (“rational” by our definition) is certain to assign coherent probabilities that map directly onto the axioms. This is more than we can say for some objective accounts such as propensity, where it is unclear what value the propensity of a system takes on, or why we should think it is even related to the axioms.

The criterion of ascertainability requires more discussion. It might be thought that it will be impossible to ascertain, even in principle, the credences of a non-existent rational agent. This misunderstands the account: rationality does not require that agents hold specific (or even similar) distributions of probability values, but only that they be coherent. Though normative, our account is still permissive: ascertaining the probability is not a matter of finding one ‘correct’ probability for most propositions P. We should judge the ascertainability of interpretation S by the following criteria:

A. Can we measure degrees of belief?

B. Can we determine whether S allows for these beliefs?

On point (A), the subjective account is already remarkably strong: finding degrees of belief qua action via betting scenarios, as described above, requires reasonable base assumptions and little extrapolation. It might be objected (see Eriksson and Hájek 2007) that the story is not so simple after all: identifying degrees of belief with betting odds leads to the usual problems with operationalism. If they are just measured by betting prices, the link is tenuous, given that preference orderings might under-determine beliefs for different expected value functions (or, more simply, one might have credences but no preferences). At this step, we have two similar choices:

1. Following Armendt, we can treat partial beliefs as dispositions that influence our choices under certain (and perhaps ideal) conditions. Relaxing a strict empirical notion of ascertainability is not catastrophic, especially since we are still better off than the limiting frequentist and the propensity advocate (Armendt 1993, 9).

2. As Hájek himself does, we can choose to take degrees of belief as a primitive notion, but admit that in using it (sometimes in ‘betting-type’ situations) we have an intuitive way of ascertaining them.

Upon finding an agent’s degrees of belief, point (B) follows naturally: one can apply the criterion of coherence to determine whether those beliefs should be considered formally ‘admissible’. Yet the relevance of this normative aspect leaves more to be said.
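For a finite set of propositions, this coherence check is mechanical. The sketch below is my own simplified illustration (the representation of propositions as sets of possible worlds and all names are assumptions, not from the text): it tests non-negativity, unit credence in the tautology, and additivity over disjoint propositions.

```python
from itertools import combinations

# Simplified coherence check (illustrative): propositions are frozensets of
# "possible worlds"; `cred` maps each proposition to a degree of belief.

def is_coherent(worlds: frozenset, cred: dict) -> bool:
    tol = 1e-9
    # Axiom 1: non-negativity for every rated proposition.
    if any(v < -tol for v in cred.values()):
        return False
    # Axiom 2: the tautology (the full set of worlds) gets credence 1.
    if worlds in cred and abs(cred[worlds] - 1.0) > tol:
        return False
    # Axiom 3: finite additivity for disjoint propositions whose union is rated.
    for a, b in combinations(cred, 2):
        if not (a & b) and (a | b) in cred:
            if abs(cred[a] + cred[b] - cred[a | b]) > tol:
                return False
    return True

W = frozenset({"rain", "shine"})
ok = {frozenset({"rain"}): 0.3, frozenset({"shine"}): 0.7, W: 1.0}
bad = {frozenset({"rain"}): 0.9, frozenset({"shine"}): 0.9, W: 1.0}
print(is_coherent(W, ok), is_coherent(W, bad))  # True False
```

The second belief set, with credence 0.9 in both a proposition and its negation, is exactly the kind the normative account rules out.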

Suppose we are now convinced that degrees of belief have something to do with probability. Nevertheless, can an interpretation of this concept, one that we use so effortlessly, be applicable if we refrain from connecting it with the beliefs of real agents? In other words, can an interpretation in a normative framework serve as a good guide? Consider the counterfactual scenario of interpreting statements of probability through the heuristics and biases of real agents who fail to follow a formal set of axioms. This would render much of communication and reasonable inference in natural language extremely difficult. A claim of probable credence in a proposition would not allow you to infer that the agent thinks its negation improbable. To express a generalized probabilistic notion would require spelling out an entire probability distribution, including conjunctions of events. A non-descriptive interpretation certainly has the weakness that it cannot always give us a description of what an agent means by a probability claim (she might be using another metric, such as ‘representativeness’); however, assuming and enforcing a sense of rationality seems to be the only choice we have in interpreting the probability statements of other actors.

The value in a normative framework also helps individuals sort out inconsistencies in their own partial belief sets. In this sense, the normative interpretation of probability is similar to a normative interpretation of the axioms of logic. Real agents form beliefs that violate the axioms of propositional logic: our beliefs might not commute, we might have full belief in contradictions, or we might not follow standard inference rules such as transitivity or modus ponens. Thinking of the belief function (analogous to the probability function) as an instantiation of the axioms of logic is not meant to describe our real beliefs, since we are not deductively closed. However, consistency of our full beliefs is still an epistemic virtue, and we do not want to be caught having full belief in p and ~p. This is illustrated by the contention in the field surrounding the lottery paradox, which is problematic because it allows a rational agent to hold a set of beliefs that is internally inconsistent. As Ramsey argues, interpreting probability in light of a rational agent is nothing but an extension of this consistency constraint for full beliefs: if we have credence of 90% in both p and ~p, we ought to feel uncomfortable and adjust our credences to resolve this (see also Savage 1972, 20). While probabilistic coherence does not tell us how to adjust our partial beliefs, neither does logic. Logic does have the advantage of advocating that rational agents believe truths (not just a priori truths or tautologies); I will only note here that there are various attempts at providing an analog for partial beliefs, including ‘calibration’, which asks our partial beliefs to track relative frequencies.

Lastly, establishing a ‘criterion of wrongness’ is important for an interpretation, and the normative framework does this in a way that a purely operational and descriptive one cannot. As discussed by Savion, there are infamous examples of ‘clinging to discredited beliefs’, where agents believe propositions more strongly after seeing evidence to the contrary. It is only by appealing to a sense of rationality (though this may require a more thorough Bayesian framework of updating) that we can make sense of our intuition that these agents are problematic.

Even having established that a formal axiomatization is useful, it might still be objected that our normative criterion is mistaken: why should we think that the specific sense of rationality advocated by the Dutch book arguments should influence our everyday probability concept? In a normative framework where we admit idealization, we should not take the Dutch book arguments in a ‘literal-minded’ way. Taken this way, the lack of a physical bookie to make bets against us (and all the artificiality accompanying a two-person interaction) might be sufficient reason to reject S’s normative suggestion about our epistemic states. Christensen has forcefully argued exactly this:

“Dutch Book vulnerability is not a real practical liability […] [and] it would not obviously follow that beliefs were defective from the epistemic standpoint” (Christensen 1996, 2; emphasis not mine).

Instead, Christensen posits the following ‘de-pragmatized’ reasoning in defense of the Dutch book:

J: A degree of belief in p justifies certain betting odds on p.

Christensen uses J to support the following explicit argument, similar to Hájek’s notion of susceptibility:

1. Suppose A has some betting odds on p where it is possible to inflict a sure loss on A.

2. It follows that these betting odds are defective in nature since they allow a sure loss.

3. If we take J to be true, the odds are justified by beliefs about p.

4. It follows that these beliefs are mistaken.

This entire argument is meant to be an a priori rejection of a belief set: we need not imagine a real bookie or any specific real-world scenario, and yet we can still conclude that A’s partial beliefs are epistemically ‘wrong’ in the same way we would if A were deductively inconsistent (Ibid., 7). This argument is a particularly compelling reason to take the normative view seriously, because we are able to connect our intuitions of rationality to the axioms without introducing a practical situation where one loses money. Agents who have no idea what the axioms are can still appreciate the a priori force of the rationality constraint (especially if they buy J).

Conclusion

Some have taken the lack of rational and coherent agents as an obvious way to undermine the subjective interpretation of probability. Here, I have argued that this line of thought loses its force if we think of the interpretation in a normative framework. Normatively, the interpretation is admissible, ascertainable, and thoroughly applicable. Lastly, I have put forth Christensen’s arguments to show that the normative aspect of the interpretation is a reasonable constraint despite the ‘artificial’ scenarios where it would lead to negative consequences. Normative interpretations of concepts abound in all of philosophy. After all, we find normative theories of ethics incredibly useful (there are no perfectly virtuous agents), and normative models of game-theoretic and economic behavior tremendously applicable (despite a lack of ideal economic actors). We should not shy away from a normative interpretation of probability.


References

Armendt, Brad. “Dutch Books, Additivity, and Utility Theory.” Philosophical Topics 21.1 (1993): 1–20. Web.

Christensen, David. “Dutch-Book Arguments Depragmatized: Epistemic Consistency For Partial Believers.” Journal of Philosophy 93.9 (1996): 450–79. Web.

Christensen, David. Putting Logic in Its Place: Formal Constraints on Rational Belief. Oxford: Clarendon, 2004. Print.

Eriksson, Lina, and Alan Hájek. “What Are Degrees of Belief?” Studia Logica: An International Journal for Symbolic Logic 86.2, Formal Epistemology I (2007): 183–213. JSTOR. Web. 09 Dec. 2016.

Grüne-Yanoff, Till. “Rational Choice Theory and Bounded Rationality.” Religion, Economy, and Cooperation. Religion and Reason (2010): 61–82. Web.

Hájek, Alan. “Dutch Book Arguments.” The Handbook of Rational and Social Choice (2009): 173–95. Web.

Hájek, Alan. “Interpretations of Probability.” Stanford Encyclopedia of Philosophy. Stanford University, 21 Oct. 2002. Web. 09 Dec. 2016.

Savage, Leonard J. The Foundations of Statistics. New York: Dover Publications, 1972. Print.

Savion, Leah. “Clinging to Discredited Theories.” The International Journal of Learning: Annual Review 16.2 (2009): 85–94. Web.

Talbott, William. “Bayesian Epistemology.” Stanford Encyclopedia of Philosophy. Stanford University, 12 July 2001. Web. 09 Dec. 2016.