(notes) Gerd Gigerenzer on Gut Feelings on #EconTalk

Edvard Kardelj Jr.
Published in Letters on Liberty
Jan 7, 2020 · 10 min read

I am sharing the top points and principles that the psychologist and author Gerd Gigerenzer makes in his EconTalk conversation with Russ Roberts (“On Gut Feelings”).

What is a gut feeling?

So, a gut feeling, or an intuition, is based on years of experience, and it has the following features. It usually comes to mind quickly, so you know what you might do, but you can’t explain why. Nevertheless, it guides many of our personal and professional decisions. A gut feeling is not something arbitrary. It’s not a sixth sense. And it’s not something only women have; men have gut feelings too. There is a widespread suspicion in the social sciences that gut feelings are always second best and misleading. The problem is, if one did not listen to one’s gut — that is, to the experience stored in those parts of the brain that cannot talk — one would lose lots of important information.

A gut feeling is not the opposite of data

First, an intuition or gut feeling is not the opposite of gathering information. In the end, a gut decision is always one that started out with evidence. But if the evidence doesn’t settle the question, then you listen to your gut.

And listening to your gut only makes sense if you have years of experience. That means all the information is there, but it’s stored in your brain in a way you can’t articulate.

When are heuristics better than complex calculations?

It’s important to distinguish between a world where you can calculate the risks and situations where you can’t. You can calculate the risk when you play roulette or a lottery, where all probabilities are known, all possible outcomes are known, and all consequences are known. Then: better calculate. But many situations in the world involve uncertainty, where you cannot know the probabilities, or even estimate them. Moreover, you can’t know all the possible future states of the world. So if you want to invest, if you want to find out whom to marry, if you decide which professor to hire, these are all situations of uncertainty, and precise calculations are an illusion there. What’s useful are robust heuristics — rules of thumb that have a good chance of hitting, as opposed to a calculation that over-fits the past.

In general, under uncertainty, if you want to predict the future, the advice is to make it simple. If you want to explain the past, make it complicated.
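The maxim above can be illustrated with a toy sketch (not from the episode; the data and both models are invented for illustration): a flexible model that fits the noisy past perfectly predicts the future worse than a simple straight line.

```python
import numpy as np

rng = np.random.default_rng(0)

# The "past": noisy observations of a simple linear trend.
x_train = np.linspace(0.0, 1.0, 10)
y_train = x_train + rng.normal(0.0, 0.1, size=10)

# The "future": points beyond the range we fitted on, noise-free for clarity.
x_test = np.linspace(1.0, 2.0, 10)
y_test = x_test

# Complicated model: a degree-9 polynomial that fits the past almost perfectly.
p_complex = np.polynomial.Polynomial.fit(x_train, y_train, 9)
# Simple model: a straight line.
p_simple = np.polynomial.Polynomial.fit(x_train, y_train, 1)

complex_err = np.mean((p_complex(x_test) - y_test) ** 2)
simple_err = np.mean((p_simple(x_test) - y_test) ** 2)

print(simple_err < complex_err)  # the simple rule predicts the future better
```

The complex model "explains" the past (near-zero error on the training points) but over-fits its noise, so its out-of-sample predictions are far worse than the simple rule's.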

Behavioral economics is wrong — gut feeling is not always second best

For instance, some dear colleagues of mine have shown that, in sports — for instance, if you instruct an experienced golf player to make the putt quickly, in less than three seconds, as opposed to giving the person more time — what do you think? Which condition produces more hits? It’s the less-than-three-seconds one. With little time, there is no room to pay conscious attention, which would interfere with the practiced skill. But this only holds for experienced people. It doesn’t hold for beginners.

Finance & gut feelings

The real world of finance is not one of calculable risk; it’s one of high uncertainty. And Value at Risk — just to give you an idea of the calculations done: the calculation that a large bank has to do to compute its Value at Risk involves estimating thousands of risk parameters and their correlations, which amounts to millions of correlations, based on five or ten years’ data. That borders on astrology. This is not science.

And we have seen that value-at-risk calculations have not prevented any crisis. So, in a world of high uncertainty, we need simple methods that are robust. Value-at-risk calculations also foster an illusion of certainty: the belief that the precise number they calculated is really the true value.
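The “millions of correlations” claim follows from simple combinatorics: for n risk parameters there are n(n−1)/2 pairwise correlations to estimate. A quick sketch (the 3,000-parameter count is an illustrative assumption, not a figure from the episode):

```python
# How many pairwise correlations must a bank estimate for a
# Value-at-Risk model with n risk parameters? n * (n - 1) / 2.
def n_correlations(n_params: int) -> int:
    return n_params * (n_params - 1) // 2

# A few thousand risk parameters already means millions of correlations,
# all estimated from only a handful of years of data.
print(n_correlations(3000))  # 4498500 pairwise correlations
```

The number of parameters to estimate grows quadratically, while the available data does not — which is why the estimates become so fragile.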

Fast and frugal tree

So, a fast and frugal tree is like a decision tree, but much simpler. You start with a single feature — say, the leverage ratio. If the leverage ratio of a bank is higher than a certain level, it gets a red flag. That’s it; nothing else is even looked at. If it is not higher — it’s lower — then a second question is asked. And this is the way you proceed. So a fast and frugal tree doesn’t make any trade-offs. It’s like a body: if you have a failing heart, a good kidney doesn’t help you much. It’s not like a linear regression, where every factor can compensate for every other.

And we have tested these fast and frugal trees in many situations. Meanwhile, they are used in medicine and in many other areas. What’s also very important is that a doctor, a banker, or a central banker using such a fast and frugal tree can actually understand what’s happening.
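A minimal sketch of such a tree in code. Only the first question (the leverage ratio) comes from the discussion; the second feature, both thresholds, and the function name are hypothetical placeholders:

```python
# A fast-and-frugal tree for flagging banks, as described above:
# one question at a time, each with a possible exit, and no trade-offs
# between features. Thresholds and the second feature are made up.

def flag_bank(leverage_ratio: float, liquidity_ratio: float) -> str:
    # First question: leverage. A high ratio gives a red flag immediately;
    # nothing else is even looked at.
    if leverage_ratio > 30.0:
        return "red flag"
    # Second question, asked only if the first one did not exit.
    if liquidity_ratio < 0.1:
        return "red flag"
    return "green flag"

print(flag_bank(35.0, 0.5))   # red flag: leverage alone decides
print(flag_bank(20.0, 0.05))  # red flag: the second question decides
print(flag_bank(20.0, 0.5))   # green flag
```

Note the contrast with a weighted-sum model: a good liquidity ratio can never compensate for bad leverage, just as a good kidney cannot compensate for a failing heart.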

Google Flu Trends

One example I would like to give you is Google Flu Trends. You may recall that Google tried to prove that big data analytics can predict the spread of the flu. It was hailed with fanfare all around the world when they published a Nature article in 2008 or 2009. And they had done everything right: they had fitted four years of data and then tested it. They had about 550 million search terms, tried maybe 100,000 algorithms, took the best one, and also tested it on the following year.

And then they made predictions. And here we are really under uncertainty.

The flu is hard to control, and people’s search terms are also hard to control. What happened was something unexpected: namely, the swine flu came in 2009. The Google Flu Trends algorithm had learned that flu is high in winter and low in summer, but the swine flu came in the summer — it started early in April and had its peak late in September. And of course the algorithm failed, because it was fine-tuned on the past and couldn’t know that.

Now, the Google engineers revised the algorithm. By the way, the algorithm was a secret, a business secret. We only knew that it had 45 variables and was probably a linear algorithm. In our research, what we would do is realize that you are under uncertainty and make it simpler. Not the Google engineers: they had the idea that if a complex algorithm fails, make it more complex.

And they changed it to 160 variables — up from 45 — and made predictions for four years. It didn’t do well. And then it was silently buried.

Alternative to Google Flu Trends?

So, one of the features that humans use to make predictions under uncertainty is recency. They rely on what happened most recently, because you can’t trust the far past. And the most recent information about flu-related doctor visits comes from the CDC, the Centers for Disease Control and Prevention, and is about two weeks old.

So we set up a heuristic that uses this two-weeks-ago variable — this one variable and nothing else: the flu-related doctor visits in a region are the same as those two weeks ago. That’s an absolutely simple heuristic. We then tested it against the revised Google Flu Trends algorithm over the entire four years. And what do you think? How well did it do? It predicted better.

Yes. It has a bias. But it is much more flexible, and it’s also much, much cheaper and much more transparent, and people can actually use it.
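The recency heuristic fits in a few lines of code. This is a sketch, not the researchers’ actual implementation, and the weekly visit counts are invented for illustration:

```python
# The recency heuristic for flu prediction, as described above:
# forecast this week's flu-related doctor visits as the value
# observed two weeks ago (the most recent available report).

def recency_forecast(visits: list[float], lag: int = 2) -> list[float]:
    """Forecast visits[t] as visits[t - lag] for every week where possible."""
    return [visits[t - lag] for t in range(lag, len(visits))]

# Made-up weekly visit counts, most recent last.
visits = [120.0, 135.0, 150.0, 170.0, 160.0, 140.0]
print(recency_forecast(visits))  # [120.0, 135.0, 150.0, 170.0]
```

No parameters are estimated at all, which is what makes the heuristic cheap, transparent, and robust to regime changes like an out-of-season epidemic.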

When is more information better?

More information is not always better. The real question is: when is it better? Again, the distinction between risk and uncertainty can help. It’s an old distinction that goes back to Frank Knight and others; Keynes, Savage, and most decision theorists have made it. It’s just ignored most of the time.

So, in a world of risk, you can fine-tune, and that means more information is always better — if you ignore the costs and the time you spend.

In a world of uncertainty, that doesn’t hold, even if you ignore the time and costs.

We have shown for a number of heuristics that they do better if you have only a certain amount of information — that doesn’t mean no information; something in between. So ‘more is always better’ is an illusion. More is sometimes better, namely when you are in a situation where the past is like the future and the future like the past. The saying ‘more information is always better’ also assumes that the information is true. But most of the time, that’s not the case. There are misses. There are false alarms.

Chesterton Fence

… the Chesterton Fence is this idea that you come across a fence in the middle of a field; you think, ‘Well, this looks like it just gets in the way. I’m going to tear it down,’ because you don’t know why it’s there. And when you tear it down, you find out it had a purpose. But it doesn’t make sense to you, so you just decide it must be irrational.

And many heuristics, many rules of thumb, are like that. They’ve evolved over time. They are consistent with the way our brains work. And yet, as arrogant experts, we often say, ‘Oh, well, this just must be a mistake,’ and we change policy or make decisions accordingly. I think a respect for some traditions is incredibly important; it’s a way to access information you wouldn’t otherwise get.

Defensive decision-making

That is, a decision-maker such as a doctor does not recommend to the patient what he or she thinks is best, but something second best. Why would doctors do that? Defensive decision-making is done to protect yourself, as a doctor, from the patient who might sue you if something happens. And it usually leads to unnecessary imaging, unnecessary cancer screenings, unnecessary antibiotics — mostly too much, too much, too much.

In studies in the United States, when doctors are asked, about 90–95% of them say, ‘Yes, that’s what I’m doing, and I have no choice.’

So it’s very important that a patient is aware of the situation the doctor is in, because the patient is the problem: it’s the patient who sues, or the lawyer who runs around.

And so this kind of structural understanding is very important. Sometimes a simple heuristic helps here: don’t ask your doctor what he or she recommends to you; ask the doctor what she would recommend to her own brother or sister or mother. The mother wouldn’t sue. I have typically gotten very different answers to these two questions.

Nudging in general

So, I’m not a fan of nudging as a government policy. First, the question is: which government? Is it Obama or Trump who nudges you? And second, I bet much more on informing people and making them risk savvy than on steering them.

On nudging in general: there is nothing new about it. It’s basically the method that marketing and others have used before to influence us. What we really need in the 21st century are people who understand what’s being done to them — people who are risk literate, who are health literate, who are able to deal with money, and who are able to control the digital media. We need to invest in making people stronger. This is my view. We don’t need more paternalism in the 21st century; we had enough of it in the last one.

Most people who make bad decisions don’t, in my experience, make them because something has misfired in their brain — that’s the usual take of the nudging people — but because there is an industry that sells them products that are unhealthy. There is a tobacco industry that sells cigarettes that are unhealthy, and so on. And to nudge people — using the same methods against big industry — has little prospect of success.

The bias in behavioral economics

That literature, which is broad in psychology, is not well known in behavioral economics. I’ve written a paper called “The Bias Bias in Behavioral Economics.” There is a tendency to find biases even where there are none, and it would be well to realize that people aren’t so stupid. They can be seduced, yes. But most of the arguments are about people’s probability judgments, intuitions about chance, and so on. And these intuitions are fairly good, as psychological research has documented for decades. The few studies that claim the opposite are the ones that get featured. That’s part of the bias bias.

How to be careful and not go too far in the other direction

I think the first step is to distinguish between situations of risk and situations of uncertainty. In situations of risk, do your calculations. That’s also the world where big data is most promising, and the world of machine learning.

So this assumes a stable world. The more you have to do with situations of uncertainty, the more you need to simplify and to make things more robust because you cannot know how the future will be.

What you said is correct: heuristics can fail. But they can also be excellent. The important thing is to use them in the right situation — for instance, in situations of uncertainty.

So, the important point is, it’s often said: heuristics can do well, but they can fail. But it’s almost never said that Bayesian analysis can do well or fail, or that Value at Risk can do well or fail.

The point is: whatever tool we use — whether it’s a heuristic such as a fast-and-frugal tree, or some analytical calculation — it’s a tool that is good for a certain type of environment and bad for another one. A hammer needs a nail; a screwdriver needs a screw.
