WTF Is Rationality?

mindlevelup
Sep 5, 2017 · 11 min read

[This section starts off with a quick intro to what rationality is. We go over the difference between epistemic and instrumental rationality.]

This is a book on getting your shit together.

Ahem.

Actually, this is a book on rationality. But “getting your shit together” isn’t far from the mark. (It’s catchier, too.)

So. Let’s clear up a potential misunderstanding first: The word “rationality” as I’m using it here isn’t the academic term (like how it’s used in economics) or the story trope term (like the emotionless Spock from Star Trek).

Actually, both of those images give off the wrong connotations.

Rather, it’s about a loose collection of ideas, sprouted over the past decade or so, that centers on human minds and how they work.

It’s usually split into two branches: epistemic rationality and instrumental rationality.

Epistemic rationality is about trying to figure out what’s true. This is important because arguments (as just one example) can easily slip into poor reasoning and logical fallacies. Ideas in epistemic rationality try to help ease some of these problems. It looks into questions like what exactly it means to “have evidence” for your side, or how to actually listen to other people’s counterarguments.

While epistemic rationality looks at truth, instrumental rationality looks at achieving our goals. Sometimes, for example, our energy to get things done falters. Other times, poor time management keeps us from finishing our tasks. Instrumental rationality is about finding the best ways to resolve these obstacles and get what we want. Motivation, productivity, and habits all fall into this field.

The need for both types of rationality arises because our reasoning process isn’t perfect. Our thinking is subject to cognitive biases: psychological quirks of the brain that can lead us astray.

(Don’t worry if you don’t know much about biases. We’ll go over a few of them more in the next section.)

While the two branches of rationality might initially seem rather separate, linked only by cognitive biases, the distinction between the two is really quite fuzzy.

After all, a productivity hack only works if it’s based on things that really are true. You could write “I got 10 hours of work done today!” on a sheet of paper, but writing it down doesn’t actually get 10 hours of work done.

It just won’t happen, even if you believe that reality works like that. You’ll only end up with broken expectations.

We want things that are based in the real world because those are the things that really work. To figure out what things “really work”, we might need evidence and reasons. And for that, we’re back again to epistemic rationality.

The two, then, are intertwined.

But I’m not going to take the cop-out of just saying “both things are important” and leaving it at that. There are certain times when you really would prefer one over the other, and it’s good to take note of that.

For example, it’s probably not a good idea to spend all your time trying to verify all the information you read in textbooks (i.e. practicing epistemic rationality). At some point, the balance tips over to just using what you already know (i.e. practicing instrumental rationality).

We’ll be exploring other dichotomies later on, and I just think it’s important to explicitly state that “acknowledging that both sides are important” ≠ “you always want an equal balance of both”.

In the case of this book, I’ll be focusing more on instrumental rationality because I think that’s usually more interesting to read, and I also think it’s typically more readily applicable to everyday life.

My approach throughout this book is to use the instrumental rationality stuff as the main thread, slowly interspersing some of the sections with stuff on epistemic rationality.

Instrumental Rationality 101:

[This section first gives a deeper explanation of instrumental rationality. It then looks at 3 potential ways our thinking can go wrong and why this means it’s important to care about debiasing.]

I’ve sort of pointed at this idea of instrumental rationality (motivation, achieving our goals, and other things), but what is it really about?

If I had to summarize in one sentence, I’d say that instrumental rationality basically bottoms out to being able to make the decisions that get you what you want.

(“Want” is actually a tricky term here, but let’s go with the naïve definition of “something you desire as a goal” for now.)

In one sense, it’s to be able to choose and act on better options.

For example, imagine a person who has sworn off sweets. He tries to keep to his commitment. Yet, in the moment, facing a candy bar, he decides to eat it anyway, unable to control himself. After eating it, though, he soon regrets having given into temptation.

To an outside observer, it sort of feels like he “could have” found a way to stick to his commitment so that, later on, he would avoid regret. Part of instrumental rationality’s goal is to help with these sorts of situations.

These are situations where, upon closer examination, we see potential ways to improve our behavior.

We want, then, to find ways to take more actions that we reflectively endorse, i.e. actions that we’d still support even if we thought about them or spent some time introspecting on them.

Still, what’s wrong with our naïve decision-making process?

Why focus on all these areas of instrumental rationality to try and boost our abilities? Well, as we’ll see both here and later, humans aren’t that great at naïvely achieving our goals. Those pesky cognitive biases I mentioned earlier can lead our thinking astray.

For a quick crash course, here are three instances of how our thinking can go wrong:

1) We’re terrible at planning:

In a famous study, students were asked to estimate a date by which they were 99% certain they’d finish an academic project. Yet, when the time came, only 45% of them had finished by their own “certain” estimate.

Even worse, students in another study were asked to predict when they’d finish their senior thesis (a major project) if “everything went as poorly as it possibly could”. Less than a third of them finished by their own self-appointed worst-case estimate *1.

It’s far from just the students. Our overconfidence in planning has been replicated across many fields, from financial decisions to software projects to major government ventures. Just because something feels certain doesn’t mean it’s really so.
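
To get a feel for how badly miscalibrated that is, here’s a tiny back-of-the-envelope check. (This is my own illustration using the headline numbers above, not a calculation from the paper.)

```python
# A rough calibration check (illustrative only): how often *should* a
# "99% certain" prediction fail, versus how often the students' predictions did?
claimed_confidence = 0.99  # "I'm 99% sure I'll be done by this date"
observed_hit_rate = 0.45   # fraction of students who actually finished by that date

expected_misses_per_100 = (1 - claimed_confidence) * 100  # about 1 expected miss
observed_misses_per_100 = (1 - observed_hit_rate) * 100   # 55 observed misses

print(f"Expected misses per 100 predictions: {expected_misses_per_100:.0f}")
print(f"Observed misses per 100 predictions: {observed_misses_per_100:.0f}")
```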

2) We’re screwed over by herd mentality:

For another study, participants were placed either alone or in a group intercom discussion (but in separate rooms, so they couldn’t see one another). One of the “participants” then appeared to have a seizure over the intercom. (That person was actually working with the experimenter, and the seizure was faked.)

When alone, people went to help 85% of the time. But when participants knew there were four other people in different rooms who had also heard the seizure, only 31% of them helped out *2.

Alas, there are also many real-life examples of our inability to handle responsibility in a group, often with disastrous results. Being in a group can make it harder to make good decisions.

3) We’re really inconsistent:

In yet another study, people were asked how much they’d pay to help out one child. Then, they were asked how much they’d pay to save one child out of two, so it’d be uncertain which one. On the second question, people were willing to pay less, even though they’d be saving one person in both scenarios *3.

In another study, people were willing to pay $80 to save 2,000 drowning birds, but a similar group of people came up with basically the same number, $78 (actually a little less!), when asked the same question about 20,000 birds, ten times the initial number *4.
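
Doing the division makes the inconsistency jump out. (This is my own back-of-the-envelope arithmetic using the figures above, not a calculation from the study.)

```python
# Implied value per bird from the willingness-to-pay figures above.
wtp_small, birds_small = 80, 2_000    # $80 to save 2,000 birds
wtp_large, birds_large = 78, 20_000   # $78 to save 20,000 birds

per_bird_small = wtp_small / birds_small  # $0.0400 per bird
per_bird_large = wtp_large / birds_large  # $0.0039 per bird

print(f"Per-bird value, 2,000 birds:  ${per_bird_small:.4f}")
print(f"Per-bird value, 20,000 birds: ${per_bird_large:.4f}")
# A consistent per-bird valuation would put the second payment near $800, not $78.
```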

Even though we may like to consider ourselves “rational creatures”, it’s clear that we’re still easily tripped up by things like uncertainty.

Or even just big numbers.

Hopefully those three examples give you a gut-level feel for where our naïve decision-making can go wrong. And this barely scratches the surface. There are many, many more cognitive biases I didn’t cover here.

The main point is that simply relying on our typical mental faculties means relying on a process that can easily make mistakes.

However, these biases aren’t necessarily bad. They have “good intentions”. Sort of.

Things like the overconfidence exhibited in 1) might once have been useful for scaring off rival tribes. And the herd mentality in 2) might actually have been a good indicator of when to help someone out.

But things have changed.

Bluffing doesn’t scare off advancing deadlines. Now we can have friendships with people literally across the globe, past our local “tribes”.

Yet, we’re still stuck with pretty much the same squishy mammal brain as that of our distant ancestors. Mental adaptations that once might have proved helpful on the savannah have become poorly suited for our modern world.

Our brains are nothing more than lumps of wet meat cobbled together from years and years of iteration by evolution. They’re powerful, yes, but they’ve also got a whole bunch of leftover legacy code that isn’t necessarily useful in today’s world.

Thankfully, our brains can also look into themselves. Remember that the brain named itself! Instrumental rationality is about having us examine how our brains work and saying, “Hey, it seems a little weird that our thinking works this way.”

Then we can try to figure out how to do a little better.

Then we get a little better. Maybe make fewer mistakes.

Then we learn a little more, and we realize we’ve still been doing everything wrong.

Then we start all over again.

That’s what I think best captures the spirit of instrumental rationality.

References:

1. Buehler, Roger, Dale Griffin, and Johanna Peetz. “The Planning Fallacy: Cognitive, Motivational, and Social Origins.” Advances in Experimental Social Psychology 43 (2010): 1–62.
https://www.researchgate.net/publication/251449615_The_Planning_Fallacy

2. Darley, John M., and Bibb Latane. “Bystander Intervention in Emergencies: Diffusion of Responsibility.” Journal of Personality and Social Psychology 8.4, Pt. 1 (1968): 377–383.
http://www.wadsworth.com/psychology_d/templates/student_resources/0155060678_rathus/ps/ps19.html

3. Västfjäll, Daniel, Paul Slovic, and Marcus Mayorga. “Whoever Saves One Life Saves the World: Confronting the Challenge of Pseudoinefficacy.” Manuscript submitted for publication (2014).
http://globaljustice.uoregon.edu/files/2014/07/Whoever-Saves-One-Life-Saves-the-World-1wda5u6.pdf

4. Desvousges, William H., et al. “Measuring Nonuse Damages using Contingent Valuation: An Experimental Evaluation of Accuracy.” (1992).
http://www.rti.org/sites/default/files/resources/bk-0001-1009_web.pdf

System 1 and System 2: A Quick Summary

[This is a fast and loose summary of the well-known dual process theory of the brain popularized in Daniel Kahneman’s book Thinking Fast and Slow. It revolves around System 1 (“fast thinking”) and System 2 (“slow thinking”).]

Your brain does a lot of things, but it’s often useful to try and fit your brain’s activities into categories. One such useful distinction is that of System 1 (also known as “S1” or “fast thinking”) and System 2 (also known as “S2” or “slow thinking”).

Let’s start with a quick example:

Draw your attention from these words you’re reading here and look at the clothes you’re wearing.

<Look at clothes>

You could probably instantly recognize the color of your clothes. There’s no conscious input — the color is just how things are.

This sort of instant recognition is an example of System 1 thinking.

In contrast, draw your attention to what you had for breakfast yesterday.

<Think of yesterday’s breakfast>

Thinking about this requires a little more effort. Your eyes might have rolled up, and it might have taken a little longer to come up with an answer.

This is an example of System 2 thinking.

Here’s a rough look at the sorts of things that set the two apart:

System 1 refers to the class of mental operations that are quick and often instinctive:

Wordless gut feelings (think of the immediate revulsion you might feel towards a pile of garbage) fall into this class, as does pattern-matching. It’s things like recognizing that “2 + 2 = 4”, or looking at a precariously balanced laptop in a busy room and knowing that it’s going to fall. System 1 is the effortless side of thinking that switches on almost immediately.

System 2 refers to things that take longer, require more attention, and might be seen as more effortful:

Tasks like walking at a pace faster than usual, figuring out what a word is when spelled backwards, and general recall all fall into this category. System 2-type thinking is what we might typically refer to when we use the word “thinking”. It’s the sort of deliberate mental action that requires us to zone in on the subject.

When learning about this distinction, some people jump to the immediate conclusion that S1 is “bad” because it’s often responsible for many of our mental errors (although S2 has its own fair share!). So this is a good place to stress that both Systems are vital to doing well in life.

The immediate recognition from System 1 is very important if you’re comforting a friend. It’s System 1 that allows you to quickly read facial cues. If your S1 registers that they look sad, you can quickly adjust your attitude without having to spend many minutes trying to piece together their emotional state using S2.

Likewise, when you’re comparing two products, like two refrigerators, it’s System 2 that helps you see which factors are relevant. It allows you to go through the actual calculations to figure out which one is better for you. Rather than letting your S1 be swayed by any immediate factors like how “cool” each product looks, S2 allows you to make a more informed decision.

It’s not a perfect classification system, but it leads to some very good insights.

Just remember that it’s a human-made classification (for the benefit of our understanding), not a biological one, so there’s not exactly a hard distinction between the two.

Neither of these systems is “real”: there are no real-world brain structures that directly correspond to them. Like some other abstractions we’ll meet later on, S1 and S2 are merely useful simplifications of reality.

This classification of two systems is our attempt to draw intuitive boundaries around our thinking to help compress what might otherwise be a complicated collection of phenomena.

In return for better compression of ideas, we’re sacrificing some accuracy.

I think that’s a useful tradeoff — having S1 and S2 in our vocabulary will make many of the later explanations easier — but it’s still important to remember that these are merely models, like how a map merely approximates the true territory.

You don’t need to know where every pebble on the road is for a map to be a good guide to where you want to go. Thus, all models are technically wrong, on some fundamental level of reality.

But, by pointing out the important features and landmarks along the way, some models can still be useful.
