Highlights from “Thinking, Fast and Slow” (Parts I and II)

Pavel Sokolovsky
22 min read · Dec 30, 2017


This book by Daniel Kahneman was a great read! If you have time for the whole thing, I totally recommend it.

If you don’t have that much time, I highlighted parts and pieces that I believe helped convey the most useful messages in the book (and that particularly resonated with me).

The book was long, and my highlights were copious, so these notes are split into a couple of posts to make them easier to read. This first post contains highlights and my commentary from Part I (Two Systems) and Part II (Heuristics and Biases).

Introduction

The psychology of accurate intuition involves no magic. Perhaps the best short statement of it is by the great Herbert Simon, who studied chess masters and showed that after thousands of hours of practice they come to see the pieces on the board differently from the rest of us. You can feel Simon’s impatience with the mythologizing of expert intuition when he writes: “The situation has provided a cue; this cue has given the expert access to information stored in memory, and the information provides the answer. Intuition is nothing more and nothing less than recognition.”

Part I. Two Systems

1. The Characters of the Story

Kahneman uses an analogy throughout the book of the brain as two systems.

System 1 operates automatically and quickly, with little or no effort and no sense of voluntary control. System 2 allocates attention to the effortful mental activities that demand it, including complex computations. The operations of System 2 are often associated with the subjective experience of agency, choice, and concentration.

So, different behaviors can be attributed to the different systems. Each has its own properties. As an example, attention tends to be the domain of System 2 because it requires effort and concentration.

The often-used phrase “pay attention” is apt: you dispose of a limited budget of attention that you can allocate to activities, and if you try to go beyond your budget, you will fail.

The author offers a couple of interesting anecdotes here…

First, the invisible gorilla… Watch this video before reading on: https://www.youtube.com/watch?v=vJG698U2Mvo

The most dramatic demonstration was offered by Christopher Chabris and Daniel Simons in their book The Invisible Gorilla. They constructed a short film of two teams passing basketballs, one team wearing white shirts, the other wearing black. The viewers of the film are instructed to count the number of passes made by the white team, ignoring the black players. This task is difficult and completely absorbing. Halfway through the video, a woman wearing a gorilla suit appears, crosses the court, thumps her chest, and moves on. The gorilla is in view for 9 seconds. Many thousands of people have seen the video, and about half of them do not notice anything unusual. It is the counting task — and especially the instruction to ignore one of the teams — that causes the blindness.

Second, the length of lines…

(Image in the original post: the Müller-Lyer illusion, two lines of equal length that appear to be different lengths.)

Now that you have measured the lines, you — your System 2, the conscious being you call “I” — have a new belief: you know that the lines are equally long. If asked about their length, you will say what you know. But you still see the bottom line as longer. You have chosen to believe the measurement, but you cannot prevent System 1 from doing its thing; you cannot decide to see the lines as equal, although you know they are.

2. Attention and Effort

I thought this factoid was super interesting!

Much like the electricity meter outside your house or apartment, the pupils offer an index of the current rate at which mental energy is used.

And this one is telling about how humans behave…

A general “law of least effort” applies to cognitive as well as physical exertion. The law asserts that if there are several ways of achieving the same goal, people will eventually gravitate to the least demanding course of action.

3. The Lazy Controller

Baumeister’s group has repeatedly found that an effort of will or self-control is tiring; if you have had to force yourself to do something, you are less willing or less able to exert self-control when the next challenge comes around. The phenomenon has been named ego depletion.

The testers found that training attention not only improved executive control; scores on nonverbal tests of intelligence also improved and the improvement was maintained for several months.

4. The Associative Machine

I decided not to share any highlights from this chapter because the research cited was questionable. I say this because other researchers publicly questioned it (their critique is worth a skim), and Daniel Kahneman himself responded directly in the comments (worth a full read), acknowledging the weak sample sizes of the referenced studies.

5. Cognitive Ease

It’s still jarring to see this written so bluntly:

A reliable way to make people believe in falsehoods is frequent repetition, because familiarity is not easily distinguished from truth. Authoritarian institutions and marketers have always known this fact.

Useful tips (that some politicians have already mastered):

If you care about being thought credible and intelligent, do not use complex language where simpler language will do.

Put your ideas in verse if you can; they will be more likely to be taken as truth.

Finally, if you quote a source, choose one with a name that is easy to pronounce.

More tips for gaming people’s behavior by keeping their System 1 engaged and System 2 dormant:

Remember that System 2 is lazy and that mental effort is aversive. If possible, the recipients of your message want to stay away from anything that reminds them of effort, including a source with a complicated name.

Cognitive strain, whatever its source, mobilizes System 2, which is more likely to reject the intuitive answer suggested by System 1.

The mere exposure effect occurs, Zajonc claimed, because the repeated exposure of a stimulus is followed by nothing bad. Such a stimulus will eventually become a safety signal, and safety is good.

Mood evidently affects the operation of System 1: when we are uncomfortable and unhappy, we lose touch with our intuition.

At the other pole, sadness, vigilance, suspicion, an analytic approach, and increased effort also go together.

At the end of each chapter, Kahneman offers some example comments about how to apply these psychological lessons in meetings and conversation. I highlighted some of them. This one reminded me of Elon Musk and what he calls “First Principles Thinking”.

“We must be inclined to believe it because it has been repeated so often, but let’s think it through again.”

6. Norms, Surprises, and Causes

Have you ever been pulled over? I have. From that point forward, I expected to see police at that exact spot almost every time I drove past it. Kahneman once saw a burning car on the road…

Because the circumstances of the recurrence were the same, the second incident was sufficient to create an active expectation: for months, perhaps for years, after the event we were reminded of burning cars whenever we reached that spot of the road and were quite prepared to see another one (but of course we never did).

On stories. Humans LOVE a good story.

Finding such causal connections is part of understanding a story and is an automatic operation of System 1. System 2, your conscious self, was offered the causal interpretation and accepted it.

If we can’t find one, we’ll make one up.

We have limited information about what happened on a day, and System 1 is adept at finding a coherent causal story that links the fragments of knowledge at its disposal.

This isn’t learned behavior, it’s instinctive.

We are evidently ready from birth to have impressions of causality, which do not depend on reasoning about patterns of causation. They are products of System 1.

Our minds are always on the starting block, ready to race to make up stories.

Your mind is ready and even eager to identify agents, assign them personality traits and specific intentions, and view their actions as expressing individual propensities.

Another “talking about” example quote…

“She can’t accept that she was just unlucky; she needs a causal story. She will end up thinking that someone intentionally sabotaged her work.”

7. A Machine for Jumping to Conclusions

Think you’re outsmarting this process? Nope…

System 1 does not keep track of alternatives that it rejects, or even of the fact that there were alternatives.

System 1 is gullible and biased to believe, System 2 is in charge of doubting and unbelieving, but System 2 is sometimes busy, and often lazy.

You’re not in as much control of your likes as you think.

If you like the president’s politics, you probably like his voice and his appearance as well. The tendency to like (or dislike) everything about a person — including things you have not observed — is known as the halo effect.

So first impressions are important…

The sequence in which we observe characteristics of a person is often determined by chance. Sequence matters, however, because the halo effect increases the weight of first impressions, sometimes to the point that subsequent information is mostly wasted.

Never forget this about your first impressions:

System 1 is radically insensitive to both the quality and the quantity of the information that gives rise to impressions and intuitions.

8. How Judgments Happen

Todorov has found that people judge competence by combining the two dimensions of strength and trustworthiness. The faces that exude competence combine a strong chin with a slight confident-appearing smile.

Political scientists followed up on Todorov’s initial research by identifying a category of voters for whom the automatic preferences of System 1 are particularly likely to play a large role. They found what they were looking for among politically uninformed voters who watch a great deal of television.

9. Answering an Easier Question

This insight hit me like a ton of bricks. Stop for a moment and think about it.

A remarkable aspect of your mental life is that you are rarely stumped.

With so much in the world that we don’t know, how does this happen?

We concluded that people must somehow simplify that impossible task, and we set out to find how they do it. Our answer was that when called upon to judge probability, people actually judge something else and believe they have judged probability.

The dominance of conclusions over arguments is most pronounced where emotions are involved. The psychologist Paul Slovic has proposed an affect heuristic in which people let their likes and dislikes determine their beliefs about the world.

In summary: Characteristics of System 1

  • generates impressions, feelings, and inclinations; when endorsed by System 2 these become beliefs, attitudes, and intentions
  • operates automatically and quickly, with little or no effort, and no sense of voluntary control
  • can be programmed by System 2 to mobilize attention when a particular pattern is detected (search)
  • executes skilled responses and generates skilled intuitions, after adequate training
  • creates a coherent pattern of activated ideas in associative memory
  • links a sense of cognitive ease to illusions of truth, pleasant feelings, and reduced vigilance
  • distinguishes the surprising from the normal
  • infers and invents causes and intentions
  • neglects ambiguity and suppresses doubt
  • is biased to believe and confirm
  • exaggerates emotional consistency (halo effect)
  • focuses on existing evidence and ignores absent evidence (WYSIATI)
  • generates a limited set of basic assessments
  • represents sets by norms and prototypes, does not integrate
  • matches intensities across scales (e.g., size to loudness)
  • computes more than intended (mental shotgun)
  • sometimes substitutes an easier question for a difficult one (heuristics)
  • is more sensitive to changes than to states (prospect theory)
  • overweights low probabilities
  • shows diminishing sensitivity to quantity (psychophysics)
  • responds more strongly to losses than to gains (loss aversion)
  • frames decision problems narrowly, in isolation from one another

Part II. Heuristics and Biases

10. The Law of Small Numbers

Humans are quite bad at intuitive statistics. Smart, educated guys like Kahneman are no exception.

I had recently discovered that I was not a good intuitive statistician, and I did not believe that I was worse than others.

He brought this discovery to his colleague Amos Tversky to debate.

An article I had read shortly before the debate with Amos demonstrated the mistake that researchers made (they still do) by a dramatic observation. The author pointed out that psychologists commonly chose samples so small that they exposed themselves to a 50% risk of failing to confirm their true hypotheses! No researcher in his right mind would accept such a risk.
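For intuition, here is a quick simulation (my own sketch, not from the book; the effect size and sample size are hypothetical) of how often a small-sample experiment manages to detect an effect that really exists:

    # My own illustration, not from the book: how often does a small-sample
    # experiment "confirm" a hypothesis that is actually true? (Hypothetical numbers.)
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(0)
    true_effect = 0.5      # a genuine difference of half a standard deviation
    n_per_group = 20       # the kind of small sample Kahneman is criticizing
    n_experiments = 10_000

    successes = 0
    for _ in range(n_experiments):
        control = rng.normal(0.0, 1.0, n_per_group)
        treated = rng.normal(true_effect, 1.0, n_per_group)
        _, p_value = ttest_ind(treated, control)
        if p_value < 0.05:
            successes += 1

    print(f"Chance of confirming the true effect: {successes / n_experiments:.0%}")
    # With these assumptions the answer is only about a third: the researcher
    # usually fails to confirm a hypothesis that is actually true.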

The false conclusions of System 1 can spread like a virus throughout your thinking, unless countered quickly.

System 1 is not prone to doubt. It suppresses ambiguity and spontaneously constructs stories that are as coherent as possible. Unless the message is immediately negated, the associations that it evokes will spread as if the message were true.

Here’s a remarkable example:

For an example, take the sex of six babies born in sequence at a hospital. The sequence of boys and girls is obviously random; the events are independent of each other, and the number of boys and girls who were born in the hospital in the last few hours has no effect whatsoever on the sex of the next baby. Now consider three possible sequences: BBBGGG, GGGGGG, BGBBGB. Are the sequences equally likely? The intuitive answer — “of course not!” — is false. Because the events are independent and because the outcomes B and G are (approximately) equally likely, then any possible sequence of six births is as likely as any other. Even now that you know this conclusion is true, it remains counterintuitive, because only the third sequence appears random. As expected, BGBBGB is judged much more likely than the other two sequences.
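To convince yourself, note that every specific sequence of six independent, 50/50 births has probability (1/2)^6 = 1/64. A tiny sketch (my own, not from the book):

    # My own sketch, not from the book: with P(B) = P(G) = 0.5 and independent
    # births, every specific six-birth sequence is equally likely.
    from itertools import product

    all_sequences = ["".join(s) for s in product("BG", repeat=6)]
    p_each = 0.5 ** 6   # 1/64 = 0.015625

    for seq in ["BBBGGG", "GGGGGG", "BGBBGB"]:
        print(f"P({seq}) = {p_each}  (one of {len(all_sequences)} equally likely sequences)")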

The following excerpts are in reference to “The Hot Hand”. Amos Tversky claimed to have discredited the theory. I’m not a good enough statistician to know whether Tversky and Kahneman are right, or whether the phenomenon is real. This article presents a credible counterpoint: https://theconversation.com/momentum-isnt-magic-vindicating-the-hot-hand-with-the-mathematics-of-streaks-74786

Some years later, Amos and his students Tom Gilovich and Robert Vallone caused a stir with their study of misperceptions of randomness in basketball. The “fact” that players occasionally acquire a hot hand is generally accepted by players, coaches, and fans.

Analysis of thousands of sequences of shots led to a disappointing conclusion: there is no such thing as a hot hand in professional basketball, either in shooting from the field or scoring from the foul line.

The hot hand is a massive and widespread cognitive illusion.

This is interesting because of the trap Kahneman fell into in Chapter 4 above.

The exaggerated faith in small samples is only one example of a more general illusion — we pay more attention to the content of messages than to information about their reliability, and as a result end up with a view of the world around us that is simpler and more coherent than the data justify. Jumping to conclusions is a safer sport in the world of our imagination than it is in reality.

11. Anchors

The phenomenon we were studying is so common and so important in the everyday world that you should know its name: it is an anchoring effect. It occurs when people consider a particular value for an unknown quantity before estimating that quantity. What happens is one of the most reliable and robust results of experimental psychology: the estimates stay close to the number that people considered — hence the image of an anchor.

How does anchoring actually work in humans?

Amos liked the idea of an adjust-and-anchor heuristic as a strategy for estimating uncertain quantities: start from an anchoring number, assess whether it is too high or too low, and gradually adjust your estimate by mentally “moving” from the anchor. The adjustment typically ends prematurely, because people stop when they are no longer certain that they should move farther.

People adjust less (stay closer to the anchor) when their mental resources are depleted, either because their memory is loaded with digits or because they are slightly drunk. Insufficient adjustment is a failure of a weak or lazy System 2.

Okay, but how big can this effect be? Well, glad you asked!

Some visitors at the San Francisco Exploratorium were asked the following two questions: Is the height of the tallest redwood more or less than 1,200 feet? What is your best guess about the height of the tallest redwood? The “high anchor” in this experiment was 1,200 feet. For other participants, the first question referred to a “low anchor” of 180 feet. The difference between the two anchors was 1,020 feet. As expected, the two groups produced very different mean estimates: 844 and 282 feet. The difference between them was 562 feet. The anchoring index is simply the ratio of the two differences (562/1,020) expressed as a percentage: 55%. The anchoring measure would be 100% for people who slavishly adopt the anchor as an estimate, and zero for people who are able to ignore the anchor altogether. The value of 55% that was observed in this example is typical. Similar values have been observed in numerous other problems.
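The arithmetic behind the anchoring index, restated as a tiny helper (the function is my own sketch; the numbers are the ones quoted above and in the judges study quoted a bit further down):

    # Anchoring index = (difference between mean estimates) divided by
    # (difference between anchors). The helper is my own sketch; the numbers
    # come from the studies quoted in this chapter.
    def anchoring_index(high_anchor, low_anchor, estimate_high, estimate_low):
        return (estimate_high - estimate_low) / (high_anchor - low_anchor)

    print(f"Redwoods: {anchoring_index(1200, 180, 844, 282):.0%}")  # 55%
    print(f"Judges:   {anchoring_index(9, 3, 8, 5):.0%}")           # 50%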

And one more for good measure. It will make you rethink how much weight to put on your real estate agent’s suggestions…

In an experiment conducted some years ago, real-estate agents were given an opportunity to assess the value of a house that was actually on the market. They visited the house and studied a comprehensive booklet of information that included an asking price. Half the agents saw an asking price that was substantially higher than the listed price of the house; the other half saw an asking price that was substantially lower. Each agent gave her opinion about a reasonable buying price for the house and the lowest price at which she would agree to sell the house if she owned it. The agents were then asked about the factors that had affected their judgment. Remarkably, the asking price was not one of these factors; the agents took pride in their ability to ignore it. They insisted that the listing price had no effect on their responses, but they were wrong: the anchoring effect was 41%.

You should avoid being at the mercy of judges’ sentencing, but in case you ever are…

The power of random anchors has been demonstrated in some unsettling ways. German judges with an average of more than fifteen years of experience on the bench first read a description of a woman who had been caught shoplifting, then rolled a pair of dice that were loaded so every roll resulted in either a 3 or a 9. As soon as the dice came to a stop, the judges were asked whether they would sentence the woman to a term in prison greater or lesser, in months, than the number showing on the dice. Finally, the judges were instructed to specify the exact prison sentence they would give to the shoplifter. On average, those who had rolled a 9 said they would sentence her to 8 months; those who rolled a 3 said they would sentence her to 5 months; the anchoring effect was 50%.

How to fight anchoring in your own life…

The psychologists Adam Galinsky and Thomas Mussweiler proposed more subtle ways to resist the anchoring effect in negotiations. They instructed negotiators to focus their attention and search their memory for arguments against the anchor. The instruction to activate System 2 was successful. For example, the anchoring effect is reduced or eliminated when the second mover focuses his attention on the minimal offer that the opponent would accept, or on the costs to the opponent of failing to reach an agreement. In general, a strategy of deliberately “thinking the opposite” may be a good defense against anchoring effects, because it negates the biased recruitment of thoughts that produces these effects.

12. The Science of Availability

One of our projects was the study of what we called the availability heuristic. We thought of that heuristic when we asked ourselves what people actually do when they wish to estimate the frequency of a category, such as “people who divorce after the age of 60” or “dangerous plants.” The answer was straightforward: instances of the class will be retrieved from memory, and if retrieval is easy and fluent, the category will be judged to be large.

A tactic to get a higher rating from survey takers…

A professor at UCLA found an ingenious way to exploit the availability bias. He asked different groups of students to list ways to improve the course, and he varied the required number of improvements. As expected, the students who listed more ways to improve the class rated it higher!

13. Availability, Emotion, and Risk

How humans react to natural disasters.

After each significant earthquake, Californians are for a while diligent in purchasing insurance and adopting measures of protection and mitigation. They tie down their boiler to reduce quake damage, seal their basement doors against floods, and maintain emergency supplies in good order. However, the memories of the disaster dim over time, and so do worry and diligence. The dynamics of memory help explain the recurrent cycles of disaster, concern, and growing complacency that are familiar to students of large-scale emergencies.

As long ago as pharaonic Egypt, societies have tracked the high-water mark of rivers that periodically flood — and have always prepared accordingly, apparently assuming that floods will not rise higher than the existing high-water mark. Images of a worse disaster do not come easily to mind.

An interesting finding about opinions about benefits/risk of technologies…

Slovic’s research team surveyed opinions about various technologies, including water fluoridation, chemical plants, food preservatives, and cars, and asked their respondents to list both the benefits and the risks of each technology. They observed an implausibly high negative correlation between two estimates that their respondents made: the level of benefit and the level of risk that they attributed to the technologies. When people were favorably disposed toward a technology, they rated it as offering large benefits and imposing little risk; when they disliked a technology, they could think only of its disadvantages, and few advantages came to mind.

The implication is clear: as the psychologist Jonathan Haidt said in another context, “The emotional tail wags the rational dog.”

As Slovic has argued, the amount of concern is not adequately sensitive to the probability of harm; you are imagining the numerator — the tragic story you saw on the news — and not thinking about the denominator. Sunstein has coined the phrase “probability neglect” to describe the pattern.

14. Tom W’s Specialty

The title of this chapter is a reference to a study about “representativeness”. I didn’t include it here, but a Google search will turn up the details if you’re interested.

One thing that I liked is that Kahneman offers a reference to Michael Lewis, who later wrote The Undoing Project about Kahneman and Tversky.

Although it is common, prediction by representativeness is not statistically optimal. Michael Lewis’s bestselling Moneyball is a story about the inefficiency of this mode of prediction. Professional baseball scouts traditionally forecast the success of possible players in part by their build and look. The hero of Lewis’s book is Billy Beane, the manager of the Oakland A’s, who made the unpopular decision to overrule his scouts and to select players by the statistics of past performance. The players the A’s picked were inexpensive, because other teams had rejected them for not looking the part. The team soon achieved excellent results at low cost.

Bayesian reasoning is a critical concept to understand in decision making. Take into account “base rates” (what are the odds, assuming I know nothing specific about this case?), and then update those odds with any new information presented. Oftentimes, people ignore base rates entirely, to their detriment.

There are two ideas to keep in mind about Bayesian reasoning and how we tend to mess it up. The first is that base rates matter, even in the presence of evidence about the case at hand. This is often not intuitively obvious. The second is that intuitive impressions of the diagnosticity of evidence are often exaggerated. The combination of WYSIATI and associative coherence tends to make us believe in the stories we spin for ourselves.
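A minimal numerical sketch of both ideas (my own made-up numbers, not an example from the book): combine the base rate with the diagnosticity of the evidence using Bayes’ rule, and a plausible-sounding clue often moves the odds much less than intuition suggests.

    # My own sketch with made-up numbers, not an example from the book:
    # Bayes' rule combines a base rate with the diagnosticity of the evidence.
    def posterior(base_rate, p_evidence_if_true, p_evidence_if_false):
        """P(hypothesis | evidence), starting from the base rate."""
        p_evidence = (base_rate * p_evidence_if_true
                      + (1 - base_rate) * p_evidence_if_false)
        return base_rate * p_evidence_if_true / p_evidence

    # Hypothetical: only 3% of students are in the field the description
    # "sounds like", and the description is 4x more likely for students who
    # really are in that field (80% vs. 20%).
    print(f"{posterior(0.03, 0.80, 0.20):.0%}")   # ~11%, still quite unlikely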

WYSIATI stands for “What You See Is All There Is”, and it is a recurring theme in the book, even though I didn’t highlight much of it. Google it to learn more; it’s amazing how many times I’ve encountered it since learning about it.

15. Linda: Less is More

This chapter title is also an homage to an experiment called The Linda Problem. I didn’t highlight it, but again you can google to learn more. Here’s a different example of the same concept it covers…

Consider these two scenarios, which were presented to different groups, with a request to evaluate their probability:

  • A massive flood somewhere in North America next year, in which more than 1,000 people drown
  • An earthquake in California sometime next year, causing a flood in which more than 1,000 people drown

The California earthquake scenario is more plausible than the North America scenario, although its probability is certainly smaller. As expected, probability judgments were higher for the richer and more detailed scenario, contrary to logic. This is a trap for forecasters and their clients: adding detail to scenarios makes them more persuasive, but less likely to come true.

The laziness of System 2 is an important fact of life, and the observation that representativeness can block the application of an obvious logical rule is also of some interest.

16. Causes Trump Statistics

Another one: the taxicab problem, which is worth looking up for the details. From one summary of it: “The problem is meant to illustrate that we should always place the accuracy of information in relation to the circumstances of the environment. Even an accurate witness will have a hard time avoiding false-positives.”

Resistance to stereotyping is a laudable moral position, but the simplistic idea that the resistance is costless is wrong. The costs are worth paying to achieve a better society, but denying that the costs exist, while satisfying to the soul and politically correct, is not scientifically defensible. Reliance on the affect heuristic is common in politically charged arguments. The positions we favor have no cost and those we oppose have no benefits. We should be able to do better.

There’s a reference to a study called the Helping Experiment, which was taught to psychology students to demonstrate that our expectations of people’s willingness to help are misplaced. The teaching was immediately followed by a test of those same students to see what they had learned. The answer was very little. “Now, you might think that the students who were apprised of the experiment’s gloomy results would have been more likely to guess that the individuals in the video didn’t rush to the aid of the seizure victim. But they weren’t. In defiance of the facts, both groups maintained their rosy outlook of human nature.” (quoted from a write-up of the experiment)

Teachers of psychology should not despair, however, because Nisbett and Borgida report a way to make their students appreciate the point of the helping experiment. They took a new group of students and taught them the procedure of the experiment but did not tell them the group results. They showed the two videos and simply told their students that the two individuals they had just seen had not helped the stranger, then asked them to guess the global results. The outcome was dramatic: the students’ guesses were extremely accurate. To teach students any psychology they did not know before, you must surprise them. But which surprise will do? Nisbett and Borgida found that when they presented their students with a surprising statistical fact, the students managed to learn nothing at all. But when the students were surprised by individual cases — two nice people who had not helped — they immediately made the generalization and inferred that helping is more difficult than they had thought.

“We can’t assume that they will really learn anything from mere statistics. Let’s show them one or two representative individual cases to influence their System 1.”

17. Regression to the Mean

Kahneman tells a story about IAF flight instructors who believed that praise made pilots perform worse, while scolding made them perform better. He was brought in to consult about this and instead showed that the belief was misplaced.

Instead, I used chalk to mark a target on the floor. I asked every officer in the room to turn his back to the target and throw two coins at it in immediate succession, without looking. We measured the distances from the target and wrote the two results of each contestant on the blackboard. Then we rewrote the results in order, from the best to the worst performance on the first try. It was apparent that most (but not all) of those who had done best the first time deteriorated on their second try, and those who had done poorly on the first attempt generally improved. I pointed out to the instructors that what they saw on the board coincided with what we had heard about the performance of aerobatic maneuvers on successive attempts: poor performance was typically followed by improvement and good performance by deterioration, without any help from either praise or punishment.
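The coin-toss demonstration is easy to simulate (my own sketch, not Kahneman’s): two throws per officer, both pure luck, and the best first throws are still followed by worse second throws on average.

    # My own simulation of the demonstration quoted above: two throws per
    # officer, both pure luck, scored as distance from the target (lower is better).
    import numpy as np

    rng = np.random.default_rng(1)
    n = 1000
    first = np.abs(rng.normal(0.0, 1.0, n))    # distance on throw 1
    second = np.abs(rng.normal(0.0, 1.0, n))   # distance on throw 2, independent of throw 1

    order = np.argsort(first)
    best, worst = order[: n // 10], order[-(n // 10):]   # top / bottom 10% on throw 1

    print(f"Best first throws:  {first[best].mean():.2f} -> {second[best].mean():.2f}")
    print(f"Worst first throws: {first[worst].mean():.2f} -> {second[worst].mean():.2f}")
    # The "best" group gets worse and the "worst" group improves on the second
    # throw: regression to the mean, with no praise or scolding involved.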

An example of the same concept, as demonstrated by the Sports Illustrated jinx.

Regression effects are ubiquitous, and so are misguided causal stories to explain them. A well-known example is the “Sports Illustrated jinx,” the claim that an athlete whose picture appears on the cover of the magazine is doomed to perform poorly the following season. Overconfidence and the pressure of meeting high expectations are often offered as explanations. But there is a simpler account of the jinx: an athlete who gets to be on the cover of Sports Illustrated must have performed exceptionally well in the preceding season, probably with the assistance of a nudge from luck — and luck is fickle.

[It] took Francis Galton several years to figure out that correlation and regression are not two concepts — they are different perspectives on the same concept. The general rule is straightforward but has surprising consequences: whenever the correlation between two scores is imperfect, there will be regression to the mean.

The concept of regression isn’t necessarily intuitive, because remember — we love a good story.

Indeed, the statistician David Freedman used to say that if the topic of regression comes up in a criminal or civil trial, the side that must explain regression to the jury will lose the case.

Depressed children treated with an energy drink improve significantly over a three-month period. I made up this newspaper headline, but the fact it reports is true: if you treated a group of depressed children for some time with an energy drink, they would show a clinically significant improvement. It is also the case that depressed children who spend some time standing on their head or hug a cat for twenty minutes a day will also show improvement. Most readers of such headlines will automatically infer that the energy drink or the cat hugging caused an improvement, but this conclusion is completely unjustified.

18. Taming Intuitive Predictions

People are asked for a prediction but they substitute an evaluation of the evidence, without noticing that the question they answer is not the one they were asked. This process is guaranteed to generate predictions that are systematically biased; they completely ignore regression to the mean.
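Kahneman’s fix, described elsewhere in this chapter (my paraphrase, since I didn’t highlight it): anchor on the baseline and move toward your intuitive prediction only in proportion to how well the evidence actually predicts the outcome. A minimal sketch with hypothetical numbers:

    # My paraphrase of the chapter's corrective procedure (not quoted above):
    # start from the baseline and move toward the intuitive prediction only in
    # proportion to the correlation between the evidence and the outcome.
    def regressed_prediction(baseline, intuitive_prediction, correlation):
        return baseline + correlation * (intuitive_prediction - baseline)

    # Hypothetical numbers: the average GPA is 3.0, the evidence makes you want
    # to predict 3.8, but that kind of evidence correlates with GPA only ~0.3.
    print(regressed_prediction(3.0, 3.8, 0.3))   # 3.24, a much less extreme prediction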

On predicting rare events…

[C]haracteristic of unbiased predictions is that they permit the prediction of rare or extreme events only when the information is very good. If you expect your predictions to be of modest validity, you will never guess an outcome that is either rare or far from the mean. If your predictions are unbiased, you will never have the satisfying experience of correctly calling an extreme case.

Extreme predictions and a willingness to predict rare events from weak evidence are both manifestations of System 1. It is natural for the associative machinery to match the extremeness of predictions to the perceived extremeness of evidence on which it is based — this is how substitution works. And it is natural for System 1 to generate overconfident judgments, because confidence, as we have seen, is determined by the coherence of the best story you can tell from the evidence at hand. Be warned: your intuitions will deliver predictions that are too extreme and you will be inclined to put far too much faith in them.

We will not learn to understand regression from experience. Even when a regression is identified, as we saw in the story of the flight instructors, it will be given a causal interpretation that is almost always wrong.

That’s all for this post! Stay tuned for highlights from the rest of the book.
