Multi-period Expected Utility Theory Predicts Zero Risk Aversion in Copenhagen Experiment, Same as Ergodicity Economics.
In a previous article, I presented a simplified model of the Copenhagen Experiment (CE) and showed that Ole Peters’ claims about Expected Utility Theory (EUT) do not hold: multi-period EUT does in fact predict a substantial change in risk aversion when the dynamics are changed from multiplicative to additive. However, the first-period risk aversion parameter didn’t decline all the way to zero, as it appears to in the CE.
There was a key difference between my simplified model and the actual CE, however: that model contained what’s known as an “absorbing barrier” at Wmin = 1, which prevents zero or negative wealth values that can leave the utility of final wealth undefined. If wealth Wi ever reaches the barrier (Wi = Wmin), the game ends immediately.
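For concreteness, here is a minimal sketch of the absorbing-barrier dynamics (the function name and defaults are mine, chosen for illustration, not taken from the original model's code):

```python
import random

def play_with_barrier(w0=4.0, w_min=1.0, n_periods=10, seed=0):
    """Simulate the additive +/-1 gamble with an absorbing barrier:
    the game ends immediately if wealth reaches w_min."""
    rng = random.Random(seed)
    w = w0
    for _ in range(n_periods):
        w += rng.choice([1.0, -1.0])
        if w <= w_min:
            # Absorbed: the game is over and wealth is frozen at the barrier.
            return w_min
    return w
```

Because the walk stops the moment wealth touches the barrier, the final payout can never fall below Wmin, so any utility function defined on [Wmin, ∞) remains well defined.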
There’s another way to prevent undefined utility of final wealth, though: always let the game run to the end, which does allow intermediate wealth to become zero or negative, but define the final payout as max(W_N, Wmin), where W_N is wealth at the end of the game (period N) and Wmin is the minimum payout. This method is equivalent to giving the person a put option on final wealth with a strike price of Wmin. As you might expect, this changes the results significantly, as shown in Figure 1 below.
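Under this payout rule, expected utility is well defined even though intermediate wealth can go negative. A minimal sketch for the case where every gamble is accepted, so final wealth is W0 + 2k − N with k ~ Binomial(N, 1/2); the CRRA utility form and the parameter defaults are my assumptions:

```python
from math import comb, log

def crra(w, eta):
    """Isoelastic (CRRA) utility; natural log at eta = 1."""
    return log(w) if eta == 1 else (w**(1 - eta) - 1) / (1 - eta)

def expected_utility_put(w0=4.0, w_min=0.1, n=155, eta=1.0):
    """Expected CRRA utility when all n +/-1 gambles are accepted and
    the final payout is floored at w_min (the embedded put option).
    With k wins out of n, final wealth is w0 + 2k - n."""
    total = 0.0
    for k in range(n + 1):
        p = comb(n, k) * 0.5**n          # Binomial(n, 1/2) probability
        payout = max(w0 + 2 * k - n, w_min)  # put option floors the payout
        total += p * crra(payout, eta)
    return total
```

The floor is what keeps the sum finite: even for log utility, every payout is at least Wmin > 0, no matter how negative W0 + 2k − n gets.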
As shown in the figure, the first-period risk aversion parameter eta_d declines all the way to zero at N = 155 periods when Wmin = 0.1.
I also ran a somewhat different experiment: instead of choosing between a +/- 1 gamble and no change (reject), the person chooses between two gambles of different size, +/- 1 vs. +/- 2. This game therefore never ends early, and the only choice is between a small and a large gamble. The rest of the parameters remained the same: initial wealth = 4, Wmin = 0.1. The results are shown in Fig. 2 below.
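This variant can be solved by backward induction: the agent picks the +/- 1 or +/- 2 gamble each period to maximize expected utility of the floored payout. A sketch under my own assumptions (CRRA utility, the function names, and the defaults are mine, not the article's actual code):

```python
from functools import lru_cache
from math import log

def first_period_choice(w0=4, w_min=0.1, n=39, eta=0.5):
    """Backward induction for the variant game: each period the agent must
    choose a +/-1 or +/-2 gamble (no reject option), and the final payout
    is max(W_N, w_min).  Wealth stays on an integer grid, so states are
    (period, wealth).  Returns the optimal first-period gamble size."""
    def u(w):
        w = max(w, w_min)  # the put option floors the final payout
        return log(w) if eta == 1 else (w**(1 - eta) - 1) / (1 - eta)

    @lru_cache(maxsize=None)
    def value(t, w):
        if t == n:
            return u(w)
        # Optimal continuation: pick the gamble size with higher expected value.
        return max(0.5 * value(t + 1, w + g) + 0.5 * value(t + 1, w - g)
                   for g in (1, 2))

    return max((1, 2), key=lambda g: 0.5 * value(1, w0 + g)
                                     + 0.5 * value(1, w0 - g))
```

Sweeping eta for a given N and finding where the preferred first gamble flips from large to small is one way to locate the indifference point eta_d numerically.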
As you can see, risk aversion declines to zero more quickly in this case (N=39 periods).
I haven’t tried to analytically derive when or why this decline to zero risk aversion occurs, but I presume it’s related to the effective put option given to the person in this experiment. This put option becomes increasingly valuable as N grows large, so it makes intuitive sense that risk aversion would eventually reach zero.
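That intuition is easy to check numerically: the value of the payout floor, E[max(Wmin − W_N, 0)] when all N gambles are taken, grows with N because the zero-drift wealth distribution spreads out while the floor stays fixed. A sketch, with my own parameter defaults:

```python
from math import comb

def put_value(w0=4.0, w_min=0.1, n=10):
    """Expected value of the embedded put, E[max(w_min - W_N, 0)], when
    all n +/-1 gambles are taken (W_N = w0 + 2k - n, k ~ Binomial(n, 1/2))."""
    return sum(comb(n, k) * 0.5**n * max(w_min - (w0 + 2 * k - n), 0.0)
               for k in range(n + 1))
```

With these defaults the put's value rises steadily as n grows, consistent with risk aversion eventually reaching zero once the downside is largely insured away.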
It can be argued that in the real CE, the effective put option given to the test subject was even more valuable than what I’ve modeled here. What they do is the following: gamble outcomes are hidden from the test subject until the end of the 300 gambles, at which point 10 out of the 300 outcomes are randomly selected and applied to initial wealth. If final wealth ends up negative, they randomly pick a new set of 10 outcomes. It seems like a much simpler method would have been to simply set the payout to zero if final wealth was negative, but according to Oliver Hulme that method didn’t occur to them.
Another detail in CE that’s relevant is that participants were given a guaranteed 1,000 DKK payment for their time, in addition to the 1,000 DKK of gambling money they were given at the start of each of the two days. Therefore, even if they lost all gambles on both days, they were still given at least 1,000 DKK. Thus, one could argue that the put-option strike price (Wmin=1,000 DKK) was a significant percentage of starting wealth on each day (W0=2,000 DKK).
Bottom line: I think the Copenhagen Experiment needs to be redesigned to properly distinguish between multi-period EUT and Ergodicity Economics. The two competing theories predict the same thing in the current experiment.
Special thanks to @breakingthemark for an interesting conversation on this topic.