Economists invent models faster than they can understand them. I say this because of a recent(ish) paper by Masatlioglu and Raymond. The paper looks at a model — the Kőszegi-Rabin (KR) version of prospect theory — which has gained popularity in recent years. In early versions of prospect theory, it was proposed that consumption is compared to some reference point r, and that people experience ‘loss aversion’ (a penalty in utility) when their consumption is below r. One of the issues with the earlier version of prospect theory was exactly what determined the reference point, which was left unspecified by Kahneman and Tversky.
KR proposed that the reference point was determined by rational expectations over what could have happened. So if I win £10 when I could have got £0, I won’t experience any loss; but if instead I win £10 when I could have got £100, I will experience some loss. It makes a degree of sense. But mathematically it is quite complicated, requiring a double weighting of all probabilities: first over all the outcomes, then within each outcome over all the outcomes that could have happened, so at every outcome the individual compares themselves to every other outcome (if your eyes are glazing over, you may now feel some of the suffering I feel when reading a microeconomic theory paper).
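To make the double weighting concrete, here is a minimal sketch of the choice-acclimating form of KR utility for a finite lottery. The linear consumption utility, the piecewise-linear gain-loss function and the parameter values are illustrative assumptions on my part, not taken from the paper:

```python
def kr_utility(lottery, lam=2.25, eta=1.0):
    """Choice-acclimating Koszegi-Rabin utility of a finite lottery.

    lottery: list of (probability, outcome) pairs.
    lam:     loss-aversion coefficient (losses hurt lam times as much).
    eta:     weight on gain-loss utility; consumption utility is linear.
    """
    def mu(z):
        # piecewise-linear gain-loss function
        return z if z >= 0 else lam * z

    consumption = sum(p * x for p, x in lottery)
    # The double weighting: every outcome is compared with every outcome
    # that could have happened, weighted by both probabilities.
    gain_loss = sum(pi * pj * mu(xi - xj)
                    for pi, xi in lottery
                    for pj, xj in lottery)
    return consumption + eta * gain_loss

# Winning 10 for sure involves no comparisons, so no loss is felt;
# a 50-50 shot at 100 or 10 carries a gain-loss penalty.
print(kr_utility([(1.0, 10)]))              # 10.0
print(kr_utility([(0.5, 100), (0.5, 10)]))  # 26.875
```

Note the nested loop: that is the double weighting in code, and it is why the model has n² comparison terms for an n-outcome lottery.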
What Masatlioglu and Raymond show is that all of this playing with the utility function imposes some restrictions, and makes KR’s model difficult to distinguish from some of the other models in the literature. In particular it is not clear whether people are following KR’s model or ‘Rank Dependent Utility’ (RDU), used for modelling optimism and pessimism, as mathematically the two are essentially reweightings of one another. This leaves us with essentially equivalent models with different names: any observation which appears to offer support for KR’s theory could plausibly also offer support for RDU.
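For contrast, RDU replaces the double weighting with a single transformation of cumulative probabilities. A sketch, where the specific power weighting function and linear utility are illustrative assumptions rather than anything from the paper:

```python
def rdu(lottery, w=lambda p: p ** 0.7, u=lambda x: x):
    """Rank-dependent utility of a finite lottery.

    lottery: list of (probability, outcome) pairs.
    w: weighting function applied to cumulative probabilities
       (this power form is illustrative only).
    u: utility of outcomes (linear here for simplicity).
    """
    # Rank outcomes from best to worst; the decision weight on each
    # outcome is the increment of w over the cumulative probability of
    # doing at least that well, so weights depend on rank.
    ranked = sorted(lottery, key=lambda pair: pair[1], reverse=True)
    total, cum = 0.0, 0.0
    for p, x in ranked:
        total += (w(cum + p) - w(cum)) * u(x)
        cum += p
    return total

# With the identity weighting, RDU collapses back to expected utility:
print(rdu([(0.5, 100), (0.5, 0)], w=lambda p: p))  # 50.0
# A concave w overweights the best-ranked outcomes: optimism.
print(rdu([(0.5, 100), (0.5, 0)]))
```

Both models end up assigning transformed decision weights to ranked outcomes, which is the sense in which observed choices struggle to tell them apart.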
What’s more, a popular version of the model reproduces the famous critique by Rabin himself of expected utility theory: reasonable choices over small lotteries imply utility functions that will make unreasonable choices over large-stakes lotteries. In Rabin’s example, someone who “turns down gambles where she loses $100 or gains $110, each with 50% probability… will turn down 50–50 bets of losing $1,000 or gaining any sum of money.” Thus the scope of the KR model is severely limited — in some cases the model may even violate first-order stochastic dominance (somebody will choose a lottery that is objectively worse than another option they have).
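Rabin's theorem needs only concavity of utility, but the arithmetic can be illustrated with a CARA utility function, an assumption made here purely to keep the numbers concrete:

```python
import math

def rejects(a, loss, gain, wealth=0.0):
    """True if a CARA agent u(w) = -exp(-a*w) turns down a 50-50
    lose-`loss` / gain-`gain` gamble. (CARA is purely illustrative;
    Rabin's theorem assumes only concavity.)"""
    eu_gamble = -0.5 * math.exp(-a * (wealth + gain)) \
                - 0.5 * math.exp(-a * (wealth - loss))
    return eu_gamble < -math.exp(-a * wealth)

# Find (by bisection) the smallest risk aversion at which the agent
# turns down "lose 100 / gain 110", Rabin's small-stakes gamble:
# the rejection condition is 0.5*exp(100a) + 0.5*exp(-110a) > 1.
lo, hi = 1e-8, 1e-2
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if 0.5 * math.exp(100 * mid) + 0.5 * math.exp(-110 * mid) > 1.0:
        hi = mid
    else:
        lo = mid
a_star = hi  # roughly 9e-4

# That same agent turns down a 50-50 bet losing 1,000 against
# gaining a billion:
print(rejects(a_star, loss=1000, gain=1_000_000_000))  # True
```

The mechanism is that any concave utility modest enough to reject the small gamble at all wealth levels must have marginal utility that decays absurdly fast, so no gain, however large, can compensate a moderate loss.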
The point here is that the KR model was produced to offer an acceptable-looking (to economists) way of determining the reference point, and then justified with some typically vague intuitive statements about its plausibility. All of this was done in place of a serious investigation into the mathematical properties of the model, its relationship to similar existing models, its implications for behaviour and a discussion of the conditions under which it would work and fail. It’s great that Masatlioglu and Raymond have done such a good job of this 10 years later, but it should be par for the course for anybody creating a new model, so that models do not spread before the errors are spotted, as KR’s has. Note that EU theory had existed for hundreds of years, and been formalised for over 50, before Rabin made his (IMO devastating, given its widespread use) critique.
This is just one example, but there are others. One is the life cycle model of consumption and savings, in which people are assumed to plan their consumption over their lifetimes, which is critically reviewed in this paper by Daria Pignalosa. Pignalosa’s basic point is that the various possible functional forms for the utility function are chosen because they are tractable and because they may deliver the right results at the macro level, but that they often imply implausible things about the behaviour of the consumer at the micro level. The quadratic utility function implies that the wealthier take less risk (which is wrong); the popular CRRA utility function implies that every aspect of preference is governed by one parameter, so more risk-averse people necessarily react less to interest rate changes (which can’t be claimed in general). Yes, there are other more complex utility functions which solve some of these problems, but if you want to show me a utility function, make sure to accompany it with a full demonstration of its properties and limitations.
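The CRRA point can be checked numerically: relative risk aversion equals the single parameter gamma at every consumption level, and the elasticity of intertemporal substitution is then locked to 1/gamma. A small finite-difference sketch (the consumption level and step size are arbitrary illustrative choices):

```python
def crra_rra(c, gamma, h=1e-3):
    """Coefficient of relative risk aversion, -c*u''(c)/u'(c), for CRRA
    utility u(c) = (c**(1-gamma) - 1)/(1-gamma), via finite differences.
    (Consumption level c and step h are arbitrary illustrative choices;
    gamma = 1, the log-utility case, is excluded.)"""
    u = lambda x: (x ** (1 - gamma) - 1) / (1 - gamma)
    u1 = (u(c + h) - u(c - h)) / (2 * h)           # first derivative
    u2 = (u(c + h) - 2 * u(c) + u(c - h)) / h**2   # second derivative
    return -c * u2 / u1

# One parameter does all the work: measured risk aversion equals gamma,
# while the interest-rate response (the elasticity of intertemporal
# substitution) is mechanically forced to be its reciprocal.
for gamma in (0.5, 2.0, 5.0):
    print(gamma, round(crra_rra(10.0, gamma), 3), 1 / gamma)
```

So within CRRA it is impossible, even in principle, to represent a consumer who is very risk averse yet responsive to interest rates: the functional form rules it out before any data are consulted.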
Economists may say ‘we use the models to explain one particular thing — like large scale gambling behaviour, or the macro consumption data, not to explain everything!’ Firstly, this could be better specified in classes and papers, as currently it is absent from both and therefore seems like an ad hoc response to this type of criticism. Secondly, the point of these complicated microfounded models is to show us how individual behaviour leads to macroeconomic behaviour. If the individual behaviour in the models is implausible then the models are not advancing our understanding in that respect.
I’m not blaming individual economists for this deluge of half-explored models. My own experience leads me to believe that the impetus of theoretically-informed research is to understand or explain a particular class of observations. A model is formulated to do so, and once the researcher is confident that the model is plausibly related to the phenomena in question, they stop investigating its properties. Most models are left unexamined beyond the paper in which they are written, and even popular ones are rarely dissected at both the macro and micro levels to tease out their full implications. But this creates a tension whereby it is insisted that this type of model is necessary for understanding behaviour, yet the models’ implications for behaviour are not fully understood.
The other response will be ‘well, let’s see your model mr big economics critic who hates everything’. Well, in my opinion we are better off sticking to simple, transparent relationships between aggregates if models are just going to confuse things — Jason Smith’s work has impressed me here*, and some econometrics can be OK too (not too much, mind). As for understanding individual behaviour, I think the use of utility functions in the face of risk/uncertainty is basically a dead end based on a mathematical error, and there are much simpler alternatives which will not produce the level of confusion EU-style theories have. But that discussion is for another time.
*Smith also pointed out another, empirical example of what I’m talking about in this post: a prospect theory based life cycle model predicted a transition to retirement of “about 51 minutes which I found entertaining ‒ you’d probably want to look deeper into that than not at all if you were publishing a paper”