The reason wars are over. Reason won

Hanno Sauer
Science and Philosophy
Aug 13, 2020

For much of the 20th and 21st centuries, psychologists and cognitive scientists were busy trying to explain why people are so strange. Rather than studying how human beings think and reason properly, those interested in human cognition and rationality arrived at a grim conclusion: human beings are hopelessly irrational, as their thinking regularly fails to comply with the rules of logic, rational decision making, probability theory, numeracy, or the scientific method. Name a principle of rationality, and it can be shown how humans consistently violate it.[i]

This focus on cognitive malfunction is justified to some extent, because we can often learn a great deal by looking at deviations from normal performance. If we want to know how perception works, it is often a good idea to start with optical illusions. But just as it would be ludicrous to use optical illusions as evidence that human perception is fundamentally distorted, misleading, or inaccurate, it would be silly to infer from the fact that humans are not perfectly rational that their cognition is somehow fundamentally broken.

Modern psychology teaches us that we are prone to all sorts of biases and reasoning errors; that we are terrifyingly gullible one moment and marvelously unwilling to change our minds the next; that we are conformist, myopic, tribal, and shallow reasoners. Our judgments are highly susceptible to extraneous influences, and we routinely ignore pertinent evidence. All of this seemed to show that we cannot trust the buggy software our minds run on.

But in recent years, the tide has started to turn. Increasingly, a new generation of researchers is casting doubt on the irrationality paradigm and coming to see the idea that human beings are fundamentally irrational as blown out of proportion or just plain false. It remains true that our cognition is far from flawless, and that our thinking could benefit from a lot more training, fine-tuning and scaffolding. But it is becoming more and more clear that many of the most surprising and spectacular claims of the irrationalists do not hold water. Many early results did not replicate, or turned out to be artifacts of experimental design with little bearing on real-life behavior. Some of the evidence has been outright refuted, and almost all of it is open to much more plausible interpretations that make the alleged effects appear far less damning, and far more reasonable.

We wanted to know whether we are rational beings or not, so we went to the lab to find out. We tried hard to debunk our flattering self-image, and to show that an animal calling itself the rational animal is probably not being entirely impartial. But what we found was even more surprising: despite many flaws and imperfections, we are fundamentally rational creatures.

Not so fast

Perhaps the most profound attack on human rationality worth taking seriously — I am ignoring “postmodernist” anti-rationalism here, with its peculiar mix of confusion and anxiety — came from the so-called “heuristics and biases” approach pioneered by Daniel Kahneman and Amos Tversky.[ii]

Here, the main idea is that our mind operates on two tracks: System 1 is fast and efficient, but inflexible; System 2 is flexible and precise, but slow and easily exhausted. We can show with cleverly designed studies how much of our thinking is driven by automatic and often subconscious System 1 processes, and how deeply and frequently our intuitive cognition leads us astray.

This paradigm proved particularly fruitful in economics, where it promised to finally do away with the much ridiculed and wildly unpopular theory of homo economicus, according to which people by and large make decisions on the basis of rationally shaped utility functions. The heuristics and biases approach, together with its partner in crime, behavioral economics[iii], tried to expose this idea as a myth, and to show that people couldn’t care less about the axioms of probability theory or cost-benefit analysis. Instead, they almost always go with their gut, and make snap decisions on the basis of quick-and-dirty rules of thumb, rather than weighing the pros and cons of all available options and going with the optimal one.

But many of the most famous and striking effects generated in this paradigm don’t seem so damning on a closer look. Consider the famous “Linda the bank teller” scenario. People are introduced (on paper) to Linda, and are given all sorts of information about her: that she majored in philosophy, cares about social justice, was active in various environmental causes, and so on. People are then asked whether it is more likely that a) Linda is a bank teller or that b) Linda is a bank teller and active in the feminist movement. Many go for b), thereby apparently committing the “conjunction fallacy”: A&B can never be more likely than A alone.
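
The arithmetic behind the verdict “A&B can never be more likely than A alone” is easy to check for oneself. Here is a minimal sketch in Python (all proportions invented purely for illustration) of why the conjunction can never beat the single event: every feminist bank teller is, by definition, also a bank teller.

```python
import random

# Toy population of 100,000 hypothetical "Lindas" (all proportions invented for illustration).
random.seed(0)
population = [
    {
        "bank_teller": random.random() < 0.05,  # assume 5% are bank tellers
        "feminist": random.random() < 0.60,     # assume 60% are feminists
    }
    for _ in range(100_000)
]

tellers = sum(p["bank_teller"] for p in population)
feminist_tellers = sum(p["bank_teller"] and p["feminist"] for p in population)

# The conjunction picks out a subset of the single event, so its count (and hence
# its probability) can never be larger: P(teller & feminist) <= P(teller).
print(f"P(bank teller)            = {tellers / len(population):.3f}")
print(f"P(bank teller & feminist) = {feminist_tellers / len(population):.3f}")
assert feminist_tellers <= tellers
```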

This is fun stuff, and if it showed that people disregard the most basic rules of probability, it would be very striking indeed. But it shows no such thing, of course, as it is probably just an effect of conversational pragmatics. When presented with the two options like this, people take a) to mean (based on something like Grice’s maxim of quantity) that Linda is a bank teller who is not active in the feminist movement. And when people are confronted with the two options in a way that removes the implicature, the number of people who have no idea how probabilities work drops dramatically.[iv]

I knew it

In the meantime, many other allegedly biased patterns of thought have been vindicated as well. Hindsight bias is the phenomenon that after an event has occurred, people think it was more likely to happen than they would have thought ex ante. But this is not a bias. When an event occurs, people thereby acquire new evidence about how likely it was, and about how good their total evidence regarding its likelihood was beforehand. People take this into account, as they should.[v]
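
A toy Bayesian model makes the point concrete. The sketch below is my own illustration, not the argument from the cited paper: it assumes the agent’s uncertainty about the event’s chance is a Beta(2, 2) prior and treats the event’s occurrence as a single Bernoulli observation. Under those assumptions, the rational estimate of how likely the event was goes up after it happens, which is exactly the pattern usually labeled “hindsight bias”.

```python
# Toy illustration: why a higher ex post probability estimate can be rational.
# Assumptions (mine, for illustration): Beta(2, 2) prior over the event's chance,
# and the event's occurrence counts as one Bernoulli observation.

def beta_mean(alpha: float, beta: float) -> float:
    """Mean of a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

# Before the event: best estimate of how likely it is.
prior_alpha, prior_beta = 2.0, 2.0
print(f"Ex ante estimate: {beta_mean(prior_alpha, prior_beta):.2f}")  # 0.50

# The event occurs. Conjugate update: alpha -> alpha + 1.
post_alpha, post_beta = prior_alpha + 1.0, prior_beta
print(f"Ex post estimate: {beta_mean(post_alpha, post_beta):.2f}")   # 0.60

# The estimate rises after the fact -- not because memory is distorted, but because
# the event itself is evidence about its own likelihood.
```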

Confirmation bias is the tendency to seek out and overestimate evidence that confirms, rather than disconfirms, one’s prior beliefs. But there is nothing irrational about this, either. Rather, it reflects a perfectly sensible and efficient distribution of epistemic labor. People try to come up with the best case for their respective opinions.[vi] Who else is going to do it? Open-mindedness is a virtue, and we should remain prepared to revise our beliefs when necessary. But people who constantly try to disconfirm their own beliefs are not rational; they are unwell.

Implicit bias typically refers to unconscious discriminatory attitudes. It is supposedly measured by the so-called Implicit Association Test, which you can take right now to check how (depending on which version you pick) racist, sexist, xenophobic or fatphobic you are.[vii] But the IAT doesn’t measure bias. It measures response times, and it’s far from clear what those mean. More importantly, and perhaps more damagingly, the IAT’s test-retest reliability appears to lie between .4 and .6, meaning that when you take the test multiple times, you won’t always get the same result.[viii] Finally, it is far from clear what the predictive value of implicit bias is. Knowing someone’s IAT results tells you very little about how they will behave in real life, which is the thing we are after. There is no doubt that people harbor discriminatory attitudes. But the IAT doesn’t seem to measure them, and instead appears to distract from the real problems, such as discriminatory social structures and explicit racism or sexism.
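
What a test-retest reliability in that range means in practice can be simulated in a few lines. The sketch below is a back-of-the-envelope illustration, not a model of the actual IAT: the 0.5 correlation between sessions and the “biased” cutoff are assumptions of mine. It simply counts how often a person’s label would flip between two administrations of such a test.

```python
import math
import random

random.seed(1)

R = 0.5        # assumed correlation between two administrations (illustrative)
N = 100_000    # simulated test-takers
CUTOFF = 0.5   # hypothetical threshold above which a score counts as "biased"

flips = 0
for _ in range(N):
    true_score = random.gauss(0, 1)
    # Each session's score mixes the shared true component with independent noise,
    # weighted so the two sessions correlate at roughly R.
    s1 = math.sqrt(R) * true_score + math.sqrt(1 - R) * random.gauss(0, 1)
    s2 = math.sqrt(R) * true_score + math.sqrt(1 - R) * random.gauss(0, 1)
    if (s1 > CUTOFF) != (s2 > CUTOFF):
        flips += 1

print(f"Share of people whose label flips between sessions: {flips / N:.1%}")
```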

Law and Order

The paradigmatic case of irrational thinking is when people’s judgments are influenced by factors that have nothing whatsoever to do with the truth. Order effects happen when people form different beliefs about two scenarios depending on the order in which they look at them. Framing effects make people’s beliefs vary with how different options are “framed”: for instance, a 1/3 chance of winning is the same as a 2/3 chance of losing, but people may still prefer the former to the latter simply because “winning” sounds more appealing.

We know now that framing effects are typically small, and a careful recent analysis of the literature showed that the likelihood of people arriving at a different judgment than they would have had they been exposed to a different frame hovers around 20%.[ix] 80% reliability is certainly far from perfect, but it’s even further from the damning indictment framing effects are often presented as.

Order effects are similarly awkward. Who likes to find out that they are inclined to believe one thing about A-then-B but another thing about B-then-A?[x] Even more awkwardly, professional expertise about the subject matter does not seem to immunize against the influence of presentation.[xi] But many of the most famous studies on order effects did not discover anything irrational. Suppose we find an apparent order effect in people’s moral judgments about sacrificial dilemmas such as the “Trolley” problem. There is a statistically significant difference in the mean permissibility ratings for diverting the trolley depending on whether people receive this scenario or the Footbridge case first. But in the former case, people pass their verdict about the Trolley case before they have seen the Footbridge dilemma. In the latter, they make a judgment about the Trolley scenario after having already seen the other. This is benign updating rather than a pernicious order effect.[xii] People do not irrationally form different beliefs depending on the order of presentation; rather, they learn about a relevant distinction, which then ends up informing their judgment. This is textbook rational behavior.

You don’t know what you’re doing

Some studies seemed to suggest that our irrationality runs so deep, we don’t even hold stable beliefs about many things in the first place. People are often bad at detecting change, even when it happens in plain sight. This is known as “change blindness”, and it also applies to our decisions: in cases of choice blindness, people can be tricked into believing they chose one option when they actually chose a different one.[xiii]

This extends to moral and political beliefs as well. People can make judgments about all sorts of normative issues, such as whether it is ok to sacrifice a person for the common good, whether to support Israel or Palestine, or even which political party to prefer, and when they are given false feedback about which belief they supported, they end up accepting and even justifying those “beliefs” as their own.[xiv]

But here, too, it is far from clear how unreasonable this is. It seems pretty obvious that when one has no reason to suspect being tricked, seeing which option one picked five minutes ago is excellent evidence for what one believes, especially in a low-stakes situation like a brief survey study. And it is not irrational to politely explain to others why one believes what one has good reason to think one believes. Choice blindness doesn’t show that people’s beliefs are flimsy, their justifications even flimsier, and that people change their minds for no reason whatsoever. It shows that with a lot of elaborate effort, a certain number of people can be tricked into false self-attributions. But such a nuanced and modest description of what’s going on doesn’t make headlines.

I’m not making this up

Moral judgment and reasoning have been especially fertile ground for irrationalists. Perhaps the suspicion that it’s not just our beliefs, but our values that are arbitrary and contingent is particularly captivating.

The most striking piece of evidence for the irrationality of moral judgment comes from moral dumbfounding.[xv] Sure, people offer considerations in favor of their moral beliefs, but these are probably just confabulatory rationalizations, rather than genuine reasons that played a role in how they arrived at their judgments. If not, then why don’t people revise their judgments when their reasons are shot down?

But even in the original dumbfounding study, 20% of people did revise their judgments. Of those who didn’t, many probably just didn’t buy the far-fetched tales about harmless taboo violations[xvi]; others subtly appreciated the riskiness of the described actions.[xvii] A closer look at whether people can be made to question their moral beliefs shows that they can, even though it is hard for them to do so.[xviii]

Facts don’t care about your feelings

Another way of showing that morality is not based on reason could be to show that it is based on emotion. So-called sentimentalists about moral judgment hold that that’s all there is to it: our values are grounded in feelings, nothing else.[xix]

The case for sentimentalism also looks much less robust now than it did a decade ago. For one thing, some of the studies most frequently used to support broadly sentimentalist conclusions have been ground zero for the replication crisis in psychology.[xx] These days, using artificially induced affect to manipulate moral judgments has pretty much died off as a research paradigm.

But even if replicability weren’t an issue, sentimentalism would be in dire straits. Effect sizes have always been small, and were largely restricted to some subgroups of participants, such as people who are especially easily disgusted or frightened. Worse still, most of the relevant studies fail to show what they promised, which is to actually change people’s moral judgments. At most, they can make people’s attitudes slightly more severe.[xxi] And when studies that didn’t find any effect of incidental emotions on moral judgments are taken into account as well, the effect basically vanishes.[xxii]

Improving automatic mode

One particular subset of moral beliefs has been a favorite target of irrationalist debunkers. Deontological moral judgments are judgments that are sensitive to more than merely the outcome of an action. Deontologists, for instance, think that intentions matter, too, or that an action can be strictly morally forbidden when and because it violates an individual’s rights, even though it may bring about the best consequences.

When moral psychologists started putting people into brain scanners while they were engaged in moral judgment tasks, anti-deontologists started to rejoice. It seemed that when people endorsed the deontological option in a moral dilemma, the emotional areas of their brain lit up. When they made consequentialist judgments, those same regions remained (comparatively) inactive.[xxiii] This seemed to show that deontological ethics is all about emotion, and therefore (allegedly) no good.

The first attempts in this direction, however, were a bit of a mess: one may see consequentialists as heartless bean counters, but even this theory does not deserve the allegation that it could justify hiring a rapist to restore a relationship, supposedly all in the name of impartial utility-maximization.[xxiv] The tendency toward such pseudo-utilitarian judgments hardly appears to be connected to actions favored by genuine utilitarians, such as philanthropic donations or vegetarianism, but harmonizes with various much more Machiavellian traits.[xxv] Later attempts struggled to get rid of the problem that the stimuli conflated the distinction between deontological and consequentialist judgments with the distinction between intuitive and counterintuitive judgments. Once one considers that there can be counterintuitive deontological and intuitive consequentialist judgments, the envisioned correlations (intuitive/deontological vs. counterintuitive/consequentialist) turn out to be largely spurious.[xxvi]

What remains is that snap judgments, regardless of their content, can be problematic, at least when our automatic-intuitive cognition is confronted with an ethical problem it was not prepared for by evolutionary, cultural or individual learning. This is certainly not nothing, but still a disappointing result for those consequentialists who, in light of the apparent denunciation of their theoretical opponents as touchy-feely confabulators, had already put the champagne on ice.

The nail in the coffin of anti-deontological irrationalism came when researchers were able to show that deontological moral distinctions are actually the upshot of perfectly rational statistical learning mechanisms. We extract our moral rules from the evidence we are given, and moral instructions almost always refer to what people are supposed to do (or not), not what they are supposed to let happen (or not). Using very simple rules of statistical inference, children rationally infer that moral rules are about actions rather than outcomes. Other non-consequentialist moral principles are acquired in similar ways.[xxvii]
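
To give a flavor of the kind of inference involved (the following is my own toy reconstruction, not the actual model from the cited paper), consider a learner who only ever hears prohibitions phrased in terms of actions. Comparing a narrow act-based rule with a broad outcome-based rule, the familiar Bayesian “size principle” favors the narrower rule that still fits all the evidence.

```python
# Toy Bayesian "size principle" sketch (assumptions are mine, purely illustrative):
# two hypotheses about what a moral rule covers, plus the examples a child hears.

# Hypothetical cases the rule could in principle apply to.
act_cases = ["hit someone", "push someone", "take someone's toy"]
outcome_cases = act_cases + ["let someone get hit", "let someone's toy be taken"]

hypotheses = {
    "act-based rule (only doings forbidden)": act_cases,
    "outcome-based rule (doings and allowings forbidden)": outcome_cases,
}

# Evidence: every prohibition the child actually hears concerns an action.
observed = ["hit someone", "push someone", "take someone's toy"]

# Size principle: each example is treated as drawn uniformly from the cases the rule
# covers, so a smaller hypothesis assigns higher probability to the same data.
posteriors = {}
for name, extension in hypotheses.items():
    likelihood = 1.0
    for example in observed:
        likelihood *= (1 / len(extension)) if example in extension else 0.0
    posteriors[name] = 0.5 * likelihood  # flat 50/50 prior over the two rules

total = sum(posteriors.values())
for name, p in posteriors.items():
    print(f"{name}: {p / total:.2f}")
# The act-based rule comes out far more probable, even though both rules fit the data.
```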

What’s next?

What’s left of the irrationalist paradigm? I don’t want to overstate my case here. We certainly do make plenty of mistakes. I am not saying that there are no framing or order effects, that people never rationalize their moral gut reactions, or that they are never biased or irrational. But the evidence, as it stands now, does not support the more radical conclusions of the irrationalist movement in psychology, cognitive science and empirically informed philosophy.

Rather, the data support something that could be referred to as rationalist pessimism: rationality is real, but rare. Human beings are not fundamentally irrational, but complying with the rules of reason is difficult and cognitively exhausting, so we tend to be pretty bad at it. And by “we”, I mean all people, most of the time, and most people, all of the time.

Humans have long wondered how rational they are. That question has now been put to a test, and the results are in. Is rationality a myth? No, it is not. Rationality is very much real, and makes an actual difference to how people think, feel and act. But it often works in surprising and unexpected ways, and that’s what our research energy should be directed towards: finding out how human reason works, and how to improve it.

The question whether reason works, on the other hand, should be retired. The reason wars are over. And reason won.

[i] Ariely, D. (2008). Predictably Irrational. The Hidden Forces that Shape Our Decisions. New York, HarperCollins.

[ii] Kahneman, D. (2011). Thinking, Fast and Slow. New York, Farrar, Strauss and Giroux.

[iii] Thaler R. (2015). Misbehaving. The Making of Behavioral Economics. New York, Norton & Company.

[iv] Fiedler, K. (1988). The dependence of the conjunction fallacy on subtle linguistic factors. Psychological Research, 50, 123–129; see also Gigerenzer, G. (1991). How to make cognitive illusions disappear: Beyond “heuristics and biases”. European Review of Social Psychology, 2(1), 83–115.

[v] Hedden, B. (2019). Hindsight bias is not a bias. Analysis, 79(1), 43–52.

[vi] Mercier, H., & Sperber, D. (2017). The enigma of reason. Harvard University Press.

[vii] Take the test here: https://implicit.harvard.edu

[viii] For a nice write-up of the current situation with regard to the IAT, see https://www.thecut.com/2017/01/psychologys-racism-measuring-tool-isnt-up-to-the-job.html.

[ix] Demaree-Cotton, J. (2016). Do framing effects make moral intuitions unreliable?. Philosophical Psychology, 29(1), 1–22.

[x] Liao, S. M., Wiegmann, A., Alexander, J., & Vong, G. (2012). Putting the trolley in order: Experimental philosophy and the loop case. Philosophical Psychology, 25(5), 661–671.

[xi] Schwitzgebel, E., & Cushman, F. (2012). Expertise in moral reasoning? Order effects on moral judgment in professional philosophers and non‐philosophers. Mind & Language, 27(2), 135–153.

[xii] Horne, Z., & Livengood, J. (2017). Ordering effects, updating effects, and the specter of global skepticism. Synthese, 194(4), 1189–1218.

[xiii] Johansson, P., Hall, L., Sikström, S., Tärning, B., & Lind, A. (2006). How something can be said about telling more than we can know: On choice blindness and introspection. Consciousness and cognition, 15(4), 673–692.

[xiv] Hall, L., Johansson, P., & Strandberg, T. (2012). Lifting the veil of morality: Choice blindness and attitude reversals on a self-transforming survey. PloS one, 7(9), e45457.

[xv] Haidt, J. (2001). The emotional dog and its rational tail: a social intuitionist approach to moral judgment. Psychological Review, 108(4), 814.

[xvi] Royzman, E. B., Kim, K., & Leeman, R. F. (2015). The curious tale of Julie and Mark: unraveling the moral dumbfounding effect. Judgment & Decision Making, 10(4).

[xvii] Stanley, M. L., Yin, S., & Sinnott-Armstrong, W. (2019). A reason-based explanation for moral dumbfounding. Judgment and Decision Making, 14(2), 120–129.

[xviii] Stanley, M. L., Dougherty, A. M., Yang, B. W., Henne, P., & De Brigard, F. (2018). Reasons probably won’t change your mind: The role of reasons in revising moral decisions. Journal of Experimental Psychology: General, 147(7), 962.

[xix] Prinz, J. (2007). The emotional construction of morals. Oxford University Press.

[xx] For the “reproducibility project”, see https://osf.io/ezcuj/.

[xxi] May, J. (2014). Does disgust influence moral judgment?. Australasian Journal of Philosophy, 92(1), 125–141.

[xxii] Landy, J. F., & Goodwin, G. P. (2015). Does incidental disgust amplify moral judgment? A meta-analytic review of experimental evidence. Perspectives on Psychological Science, 10(4), 518–536.

[xxiii] Greene, J. D., Nystrom, L. E., Engell, A. D., Darley, J. M., & Cohen, J. D. (2004). The neural bases of cognitive conflict and control in moral judgment. Neuron, 44(2), 389–400.

[xxiv] McGuire, J., Langdon, R., Coltheart, M., & Mackenzie, C. (2009). A reanalysis of the personal/impersonal distinction in moral psychology research. Journal of Experimental Social Psychology, 45(3), 577–580.

[xxv] Kahane, G., Everett, J. A., Earp, B. D., Farias, M., & Savulescu, J. (2015). ‘Utilitarian’ judgments in sacrificial moral dilemmas do not reflect impartial concern for the greater good. Cognition, 134, 193–209.

[xxvi] Kahane, G., Wiech, K., Shackel, N., Farias, M., Savulescu, J., & Tracey, I. (2012). The neural basis of intuitive and counterintuitive moral judgment. Social Cognitive and Affective Neuroscience, 7(4), 393–402; see also Paxton, J. M., Bruni, T., & Greene, J. D. (2014). Are ‘counter-intuitive’ deontological judgments really counter-intuitive? An empirical reply to Kahane et al. (2012). Social Cognitive and Affective Neuroscience, 9(9), 1368–1371.

[xxvii] Nichols, S., Kumar, S., Lopez, T., Ayars, A., & Chan, H. Y. (2016). Rational learners and moral rules. Mind & Language, 31(5), 530–554.
