Behavioral Economics #2: Under Attack

Edward Patrick Akinyemi
Edwardp.me
10 min read · Apr 7, 2019

In Part 1 of the Behavioral Economics series, I talked about the foundations of economics and how important philosophy was to its development. I discussed one of the most important assumptions in economics, namely that all humans are homo economicus, or Econs for short. Econs are creatures that never fall victim to cognitive biases, have “an infinite ability to make rational decisions”, are never overconfident, and never struggle with self-control.

I ended Part 1 by talking about how psychologists (and a small subset of economists) began to expose the flaws of this assumption. In fact, many of them shared research that showed that people frequently violated this core tenet of economics.

To continue the story, I’ll take a closer look at the arguments that psychologists (and a new school of economists) used to put some serious dents in the armor of traditional economics. I’ll also discuss the arguments that the economists used to defend their beloved profession from these attacks.

Let’s look at the battle that ensued between the two sides.

“Some days, some nights,
Some live, some die
In the way of the samurai.
Some fight, some bleed,
Sun up to sun down
The sons of a battlecry.”

— Battlecry by Nujabes (Song)

Attack

Traditional economics was under attack.

More and more research was being published that, in one way or another, showed that regular human beings were not at all like the homo economicus that economists believed (or wanted to believe) they were. Examples of people “misbehaving” were piling up and were impossible to ignore.

One of the first points of attack came by means of the Endowment Effect, which showed that, contrary to traditional economic theory, people overvalue items that they own compared to identical ones that they don’t. Something as arbitrary as whether you own something or not should be irrelevant for determining an item’s value, but research showed that the effect was very real.
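The classic demonstrations of this effect involved coffee mugs: people who were handed a mug demanded roughly twice as much to sell it as people without one were willing to pay for the identical item. Here's a toy sketch of that gap, with made-up but representative numbers:

```python
# Endowment effect: owners value an item more than identical non-owners do.
# The prices below are made-up but representative of the classic mug
# experiments, where sellers asked roughly twice what buyers offered.
willingness_to_accept = 7.00  # price owners demanded to sell the mug ($)
willingness_to_pay = 3.00     # price non-owners offered to buy it ($)

print(f"WTA/WTP gap: {willingness_to_accept / willingness_to_pay:.1f}x")  # ~2.3x
```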

The focus then shifted to decision-making. When dealing with decision-making under uncertainty, traditional economics is extremely loyal to expected utility theory. Let’s say I have to decide whether to invest in a new business, but there’s a chance that the market will soon decline.

The change in the market will affect the value of this business and, as a result, my utility (a concept economists use to describe the satisfaction or pleasure we derive from a product, service, or state of being).

State: Decline (40% probability, 0.4)

In the case of a market decline, the business will be worth little. Hence, if I invest, my utility will be 15 but if I don’t invest, my utility will be 25.

State: No Decline (60% probability, 0.6)

In the case of no market decline, the business will be worth a lot. Hence, if I invest, my utility will be 50 but if I don’t invest, my utility will be 10.

According to the theory, for each choice I should multiply the probability of each state by the utility that choice gives me in that state, and sum the results to obtain the expected utility of that choice. Then I should simply go for the choice that has the greatest expected utility.

Expected Utility (Invest) = (0.4*15) + (0.6*50) = 36

Expected Utility (Don’t Invest) = (0.4*25) + (0.6*10) = 16

Expected Utility (Invest) > Expected Utility (Don’t Invest)

Verdict = I should invest.
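To make the arithmetic concrete, here’s the same calculation as a minimal Python sketch (the probabilities and utilities are just the made-up numbers from the example above):

```python
# Expected utility: weight each state's utility by that state's probability,
# then sum. The numbers are the made-up ones from the example above.
probabilities = {"decline": 0.4, "no_decline": 0.6}

utilities = {
    "invest": {"decline": 15, "no_decline": 50},
    "dont_invest": {"decline": 25, "no_decline": 10},
}

def expected_utility(choice: str) -> float:
    """Sum of P(state) * utility(choice, state) over all states."""
    return sum(p * utilities[choice][state] for state, p in probabilities.items())

for choice in utilities:
    print(choice, expected_utility(choice))
# invest      -> 0.4*15 + 0.6*50 = 36.0
# dont_invest -> 0.4*25 + 0.6*10 = 16.0, so the theory says: invest
```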

Expected utility theory is what we call a normative theory, i.e. something that describes what people should rationally do. Traditional economists believed that since this tells us how we should make decisions, it follows that this is how we indeed do make decisions in real life.

However, besides normative theories, there are also descriptive ones. As the name suggests, these describe how people act in reality rather than how they should rationally act. Behavioral economists believed that expected utility theory poorly described how people acted in the real world and, spearheaded by the eccentric Israeli psychologists Amos Tversky and Daniel Kahneman, proposed a more descriptive and realistic theory: prospect theory.

Utility theory was very static because it focused on states of wealth. However, the more dynamic prospect theory showed that people are more concerned with changes in wealth. That is, we make decisions based on how much we gain or lose relative to the reference point of our current state.

We are far more sensitive to a change from $100 to $200 than to a change from $30,000 to $30,100. Traditional economics would have dismissed that claim, but any sensible person knows that the difference between the two is very real due to the reference (starting) point.

The attacks didn’t stop there though. Prospect theory opened the door to a concept that further pierced the armor of traditional economics, namely that of loss aversion. Research showed that people dislike losses more than they like equivalent gains. “The pain of losing is psychologically about twice as powerful as the pleasure of gaining.”

“He [James Bond] shrugged his shoulders to shift the pain of failure, the pain of failure that is so much greater than the pleasure of success.”

— From the book “Moonraker” by Ian Fleming

Some say that loss aversion can be partly explained through our evolutionary history. Specifically, when humans were still hunter-gatherers, they would be more likely to survive by focusing on the negative — every unknown movement could be a predator that’s going to kill me — rather than the positive — that weird movement in the bushes was probably just a harmless bird. Oh crap, it’s actually a lion aaaaaaaand I’m dead.

Regardless of the reason, expected utility theory did not and could not explain that people were far more averse to losses than they were to equivalent gains. Contrary to traditional economics, psychology showed us what was actually happening in the real world, with real humans instead of those mysterious Econs.
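Prospect theory formalizes both ideas, reference dependence and loss aversion, in a single value function. Here’s a minimal sketch of it; the curvature α ≈ 0.88 and loss-aversion coefficient λ ≈ 2.25 are the estimates Tversky and Kahneman published in 1992, not numbers taken from this post:

```python
def prospect_value(x: float, alpha: float = 0.88, lam: float = 2.25) -> float:
    """Tversky & Kahneman's (1992) value function over CHANGES in wealth.

    x is the gain (+) or loss (-) relative to the reference point.
    alpha < 1 gives diminishing sensitivity to ever-larger changes;
    lam > 1 makes losses loom larger than equivalent gains.
    """
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** alpha)

print(prospect_value(100))   # ~57.5: the pleasure of gaining $100
print(prospect_value(-100))  # ~-129.4: the pain of losing $100, about twice as big
```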

Wave after wave of studies that poked holes in various parts of traditional economics continued to be published. The Linda experiment, anchoring, sunk cost fallacy, representativeness, libertarian paternalism (i.e. nudging), behavioral finance, the St. Petersburg paradox, the Allais paradox, the possibility and certainty effects; it was a relentless barrage of attacks.

Explaining all of them would require a book (like Thinking, Fast and Slow or Misbehaving, both quoted below) rather than a blog post, so I’ll limit the discussion to two more pieces of research: the provocatively-named “Asian disease problem” and the mind-bending concept of preference reversals.


The Asian Disease Problem

Two sets of respondents were given two versions of a problem. The first version goes as follows:

“Imagine that the United States is preparing for the outbreak of an unusual Asian disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimates of the consequences of the programs are as follows:

If program A is adopted, 200 people will be saved.
If program B is adopted, there is a 1/3 probability that 600 people will be saved and a 2/3 probability that no people will be saved.”

— Page 368 of the book “Thinking, Fast and Slow” by Daniel Kahneman

A clear majority of respondents opted for certainty over risk and chose program A. The second version of the problem was identical, but the programs were phrased differently:

“If program A is adopted, 400 people will die.
If program B is adopted, there is a 1/3 probability that nobody will die and a 2/3 probability that 600 people will die.”

— Page 368 of the book “Thinking, Fast and Slow” by Daniel Kahneman

Faced with this version, most people chose program B. Who cares, you ask? Well, if you look closely you’ll see that the two versions are exactly the same.

Program A in the first version results in 200 people saved. Program A in the second version results in 400 out of 600 people dying, i.e. 200 people alive. Program B requires a bit more work but leads to the same conclusion. In the first version, its expected outcome is ((1/3) * 600 people saved) + ((2/3) * 0 people saved) = 200 people saved.

In the second version, the expected outcome is ((1/3) * nobody dies) + ((2/3) * everybody dies) = ((1/3) * 600 people saved) + ((2/3) * 0 people saved) = 200 people saved.
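A few lines of Python make the equivalence explicit (all numbers come straight from the problem statements above):

```python
TOTAL = 600  # people expected to die if nothing is done

# Program A, "saved" framing vs. "die" framing
program_a_saved = 200
program_a_alive = TOTAL - 400  # 400 die, so 200 alive

# Program B, expected outcomes under each framing
program_b_saved = (1/3) * 600 + (2/3) * 0                      # expected saved
program_b_alive = (1/3) * (TOTAL - 0) + (2/3) * (TOTAL - 600)  # expected alive

print(program_a_saved, program_a_alive)  # 200 200
print(program_b_saved, program_b_alive)  # 200.0 200.0, identical outcomes
```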

The programs are exactly the same, yet by merely changing the way they were presented, people chose different options. Hence, the Asian disease problem was yet another serious blow to traditional economics, because economists had claimed that the way a problem is worded has no effect on how people make decisions.

Well, it clearly does.


Preference Reversals

Since economics is basically the study of how humans with infinite desires make choices given limited resources, economists like to optimize utility. As mentioned earlier, utility essentially describes the satisfaction or pleasure that a person gains from something.

Depending on your preferences, your utility from a romantic vacation in Spain with your husband/wife could be equal to/greater than/less than your utility from a vacation with you, your husband/wife, kids, and in-laws.

In order to optimize utility, though, we have to know people’s preferences and, more importantly, these preferences have to be stable. If you say you like A over B and B over C, you should logically like A over C (economists call this transitivity). This knowledge will help me optimize your utility.

However, if every ten seconds you change your mind and say you like B over A, and then A over B, and then B over A again, and so on, then optimizing your utility becomes close to impossible. Consequently, as long as preferences remain stable (and logical), economists were happy.
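To illustrate, here’s a small sketch (with hypothetical options A, B, and C) of the consistency check that economists implicitly rely on: given someone’s pairwise preferences, it flags the intransitive cycles that make utility optimization impossible:

```python
from itertools import permutations

def find_intransitive_triples(prefers: dict[tuple[str, str], bool]) -> list[tuple[str, str, str]]:
    """Return every triple (a, b, c) where a > b and b > c but not a > c."""
    items = {x for pair in prefers for x in pair}
    return [
        (a, b, c)
        for a, b, c in permutations(items, 3)
        if prefers.get((a, b)) and prefers.get((b, c)) and not prefers.get((a, c))
    ]

# Hypothetical stated preferences: A > B, B > C, but also C > A, a cycle.
stated = {("A", "B"): True, ("B", "C"): True, ("C", "A"): True}
print(find_intransitive_triples(stated))
# Non-empty output means the preferences are unstable: no utility
# function can represent them, let alone be optimized.
```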

Unfortunately, research from the field of psychology showed that the notion of stable preferences wasn’t true. The likes of Lichtenstein and Slovic (1971), Grether and Plott (1979), Hsee (2000), and Sunstein and co. (2001) showed that people’s preferences were frequently unstable and that, just like in the Asian disease problem, merely changing the framing of a problem could make people violate the theory of stable preferences.

The Defense: Incentives, “As If”, the invisible handwave, and more

“We got to defend ourselves.
We intend to defend ourselves.
We did so in the past,
And we gonna do it today
And the day after that and the day after that.”

— Retaliation Suite by Thievery Corporation (Song)

If you’re under attack then, logically, you have to defend yourself. Let’s look at how economics defended itself.

In his book Misbehaving, Richard Thaler said that the first argument that established economists used to defend their profession was to claim that “even if people are not capable of actually solving the complex problems that economists assume they can handle, they behave ‘as if’ they can” (Page 44). This was most eloquently described by one of the brightest minds in economics, Milton Friedman:

“Excellent predictions would be yielded by the hypothesis that the billiard player made his shots as if he knew the complicated mathematical formulas that would give the optimum direction of travel, could estimate by eye the angles etc., describing the location of the balls, could make lightning calculations from the formulas, and could then make the balls travel in the direction indicated by the formulas.

Our confidence in this hypothesis is not based on the belief that billiard players, even expert ones, can or do go through the process described; it derives rather from the belief that, unless in some way or other they were capable of reaching essentially the same result, they would not in fact be expert billiard players.”

— Milton Friedman (taken from pages 45–46 of “Misbehaving: The Making of Behavioral Economics”)

The way I read this, Friedman seems to be saying that economics only describes the behavior of experts. In other words, if your decisions violated traditional economic behavior, you were not an “expert human.” I guess economics is only for the bourgeois. So much for us regular peasants.

The next argument was that of incentives. Specifically, if the stakes are higher, “people will have greater incentive to think harder, ask for help, or do what is necessary to get the problem right.” Economists claimed that since most of the research listed earlier was done with nothing at stake, its implications could be ignored.

They also argued that in the real world, people have opportunities to learn. Even if they commit the types of mistakes described in this post, they’ll eventually learn and won’t be fooled anymore. Thaler, however, argued that this only applies to “the small stuff.”

Specifically, he said that “we do the small stuff often enough to learn to get it right, but when it comes to choosing a home, a mortgage, or a job, we don’t get much practice or opportunities to learn. And when it comes to saving for retirement, barring reincarnation we do that exactly once” (Page 50).

I saved my favorite argument for last: the invisible handwave argument. It’s my favorite argument because a) the way Thaler discusses it in Chapter 6 of Misbehaving is wonderfully entertaining and b) it is an oh-so-typical economics argument. This argument simply reeks of traditional economics.

In essence, it claims that markets make people rational and that when people operate in them, they will no longer make these mistakes. Because, you know, markets fix everything. Competition and market forces will drive out people that misbehave in the ways that psychologists described or force these unfortunate individuals to hire experts that will make the tough decisions for them. Whatever the case, the market will fix it.

“The speech goes something like this. “Suppose there were people doing silly things like the subjects in your experiments, and those people had to interact in competitive markets, then…”

I call this argument the invisible handwave because, in my experience, no one has ever finished that sentence with both hands remaining still, and it is thought to be somehow related to Adam Smith’s invisible hand, the workings of which are both overstated and mysterious.”

— Richard Thaler (from pages 51–52 of his book “Misbehaving: The Making of Behavioral Economics”)

Next up: Change

Phew.

Since this has been an extremely long post, I think it’s time to take a breather and wrap up this part of the behavioral economics series.

It’s one thing to wage war through research papers published from the comfort of your office, but it’s a whole different thing to engage in face-to-face discussion.

Hence, in Part 3 I’ll discuss what happened when psychologists and economists actually sat together in the same room to make the case for and against revolutionary changes in this centuries-old profession we call economics.

The saga continues next time, so stay tuned for more!

“The world cannot be changed with pretty words alone.”

— Lelouch Lamperouge (from the show “Code Geass”)


See you, Space Cowboy.
