The Ideological Turing Test: How to Be Less Wrong

Here’s something that took me half a decade to figure out: It is really, really, really hard to be right.

If you want to be right all the time, go be an accountant. The rest of us — paleontologists, internet dating specialists, serial entrepreneurs (read: homeless millennials), policymakers, and scholars of Japanese religions — will just have to get used to being wrong.

There’s a strange paradox about wrongness: We go about our lives feeling like we’re right, but — in reality — we spend most of our lives being wrong.

Everyone is Wrong… Except for Me

One of my favorite introductions to human error is Kathryn Schulz’s Being Wrong.

Our default state, says Schulz, is to feel like we’re right all the time:

“A whole lot of us go through life assuming that we are basically right, basically all the time, about basically everything: about our political and intellectual convictions, our religious and moral beliefs, our assessment of other people, our memories, our grasp of facts. As absurd as it sounds when we stop to think about it, our steady state seems to be one of unconsciously assuming that we are very close to omniscient.”

Why do we feel this way?

Check out Schulz’s book for the full story, but here’s the general idea. The world is complex. The future is unknown. To make decisions and act at all, we humans don’t have the time to wait for all the evidence.

So what do we do? We jump to conclusions:

“We don’t gather the maximum possible evidence in order to reach a conclusion; we reach the maximum possible conclusion based on the barest minimum of evidence. … We don’t assess evidence neutrally; we assess it in light of whatever theories we’ve already formed on the basis of whatever other, earlier evidence we have encountered.”

Jumping to conclusions is a feature, not a flaw. We need it to survive.

But sometimes this feature fails us badly. We form theories with little evidence and then — because theories change how we view the world — start to blur, distort, and reshape our vision.

Every one of us is a teenager in love. We wear rose-colored glasses and spend most of our lives seeing only what we want to see.

Never Ask a Philosopher for Baby Names

Now, here’s the second part of the paradox. We all feel like we’re right, but — speaking in probabilities — there’s a good chance that all of us are actually wrong.

Let me try to explain.

Remember how stupid you were as a teenager? If you’re like me, you wanted all sorts of stuff (emo hair, orange sports car, girlfriend with shaved head and neck tattoos, etc.) that you want nothing to do with now.

Naked in the shower a few months ago, I realized something.

Ten years ago, almost everything I believed was wrong. If this is the case, then I have a really bad track record. Sure, I might be a little smarter, but there’s a good chance that I’m still wrong about almost everything today.

When I come up with these things, I always think I’m a genius. But, as usual, I discovered last week that this idea has been around for a while. It’s called the “end-of-history illusion”, and even big-name political philosophers fall for it.

Now here’s the cool thing: This principle transfers to our knowledge of the world.

Take science, for example. Here’s Schulz again:

“Here’s the gist: because so many scientific theories from bygone eras have turned out to be wrong, we must assume that most of today’s theories will eventually prove incorrect as well.”

This is crazy stuff. What we accept as ‘true’ today is just a small blade of grass in a miles-wide graveyard of ideas.

Much of what we believe today is doomed to join other infamous dead theories like Lamarckism (“Giraffes have long necks because they used them a lot.”), bloodletting (“Let me put a leech on your forehead. It’ll cure your allergies. I promise.”), and phrenology (“I’m better than you because I have a bigger head.”).

Philosophers have a name for this concept. To help make it memorable for undergraduates, they kindly titled it the “Pessimistic Meta-Induction from the History of Science”.

Thanks, philosophers. You guys are the best.

This principle, says Schulz, isn’t limited to science. It applies everywhere:

“…what goes for science goes in general. Politics, economics, technology, law, religion, medicine, child-rearing, education: no matter the domain of life, one generation’s verities so often become the next generation’s falsehoods that we might as well have a Pessimistic Meta-Induction from the History of Everything.”

Isn’t this so exciting? My compulsory science education left me with the impression that we had all the big problems figured out. It’s not true at all. We’re just getting started.

Everyone Should Have an Existential Crisis

Okay, so that was fun. But here’s my favorite part of all this.

When you both recognize and admit that you might be wrong, something magical happens. Here’s Schulz again:

“The idea behind the meta-induction is that all of our theories are fundamentally provisional and quite possibly wrong. If we can add that idea to our cognitive toolkit, we will be better able to listen with curiosity and empathy to those whose theories contradict our own. We will be better able to pay attention to counterevidence, those anomalous bits of data that make our picture of the world a little weirder, more mysterious, less clean, less done. And we will be able to hold our own beliefs a bit more humbly, in the happy knowledge that better ideas are almost certainly on the way.”

Admitting you might be wrong can transform your personality, making you both less arrogant and less dogmatic.

I love writing about this, because I experienced this “shift” firsthand during a small existential crisis in my early twenties. Here’s the three-second summary: There was a girl. I thought I was right about everything. She didn’t. I was wrong.

I lost ten pounds and cried a lot (you know, all the usual tough-guy-discovers-he-is-actually-weak stuff), but I’m glad it happened.

Thanks to my crisis, I’m more empathetic, more humble, and more fun to be around.

A Lot More at Steak

There’s a lot more at stake here, though, than one Asian man’s feelings.

There’s a dark relationship between the feeling of “I know I’m right” and violence, which Schulz captures in this paragraph:

“If I believe unshakably in the rightness of my own convictions, it follows that those who hold opposing views are denying the truth and luring others into falsehood. From there, it is a short step to thinking that I am morally entitled — or even morally obliged — to silence such people any way I can, including through conversion, coercion, and, if necessary, murder. It is such a short step, in fact, that history is rife with instances where absolute convictions fomented and rationalized violence.”

I’ll write about morality and violence some other time, but here’s a quote from an essay in Aeon:

“Across practices, across cultures, and throughout historical periods, when people support and engage in violence, their primary motivations are moral. By ‘moral’, I mean that people are violent because they feel they must be; because they feel that their violence is obligatory. They know that they are harming fully human beings. Nonetheless, they believe they should. Violence does not stem from a psychopathic lack of morality. Quite the reverse: it comes from the exercise of perceived moral rights and obligations.”

People who kill people aren’t evil. In their own eyes and in the eyes of their peers, they are good people fighting for what’s right.

Which means, scarily enough, that “people who kill people” could just as well be you or me.

How to Be Less Wrong

Okay, so we’ve taken a brief look at (a) why it’s so easy to think you’re right and (b) why it’s so hard to actually be right.

Since it’s so hard to be right, I prefer to forget about being right at all and just focus on being less wrong.

A Short Tour of Self-Inflicted Confusion

One way — perhaps the best way — of being less wrong is to expose your ideas to testing.

This requires some emotional maturity. People who hate being wrong will “close their ears” to outside ideas. By disposition, I hate criticism, so this has always been extra-hard for me.

This stuff may be extra-hard, but it isn’t impossible. All sorts of great figures in history learned to do it.

For example, here’s an excerpt from the autobiography of Charles Darwin, one of history’s most-often-referenced and least-often-read authors:

“I had, during many years, followed a golden rule, namely, that whenever a published fact, a new observation or thought came across me, which was opposed to my general results, to make a memorandum of it without fail and at once; for I had found by experience that such facts and thoughts were far more apt to escape from the memory than favorable ones.”

This state of mind, I guess, is especially important for scientists.

Here’s another example from Nassim Taleb’s Antifragile, where he writes about two legendary 20th century thinkers:

“The great Karl Popper often started with an unerring representation of the opponent’s positions, often exhaustive, as if he were marketing them as his own ideas, before proceeding to systematically dismantle them. Also, take Hayek’s diatribes Contra Keynes and Cambridge: it was a “contra” but not a single line misrepresents Keynes or makes an overt attempt at sensationalizing. (I have to say that it helped that people were too intimidated by Keynes’ intellect and aggressive personality to risk triggering his ire.)”

When most people argue, they find the weakest version of their opponent’s argument and attack that. But that’s like beating up a fourth-grader and stealing his shoes — the kid ends up crying; you look like a coward; and, hell, the shoes don’t even fit.

You know, I don’t really like the idea of arguing at all. There’s too much focus on winning or losing. It’s much more important to improve the thinking of both parties.

The New York Times captures the spirit of this stance pretty well:

“Most importantly, [good disagreements] are never based on a misunderstanding. On the contrary, the disagreements arise from perfect comprehension; from having chewed over the ideas of your intellectual opponent so thoroughly that you can properly spit them out. In other words, to disagree well you must first understand well. You have to read deeply, listen carefully, watch closely. You need to grant your adversary moral respect; give him the intellectual benefit of the doubt; have sympathy for his motives and participate empathically with his line of reasoning. And you need to allow for the possibility that you might yet be persuaded of what he has to say.”

Sexy Time: Enter the Ideological Turing Test

In a previous essay, I wrote about how sexy names help us remember abstract concepts.

The best ‘sexy name’ I know for this stance of “understand before you disagree” comes from the economist Bryan Caplan. He came up with the name “Ideological Turing Test”.

The original Turing Test comes from the field of artificial intelligence and works something like this:

“If you go on a date with a well-disguised robot named Sally and, after a few cocktails and several rounds of flirting, you still haven’t figured out that Sally isn’t human, she’s passed the test.”

Basically, true artificial intelligence should be so good that we can’t tell the difference between a human and a robot.

Likewise, Caplan says that I should understand my opponents’ ideas so well that they can’t tell the difference between what I am saying and what they believe:

“The Ideological Turing Test — this is an idea that I came up with something like five years ago. There’s the original Turing Test … [and] I was saying you could actually do a similar thing for a human being. You could say, ‘Is it possible for you to successfully mimic the holder of a view that you disagree with?’ Sort of jumping off of [John Stuart] Mill’s famous line of, ‘He who knows only his own side of the case, knows but little of that.’ … Once you can pass that test, then you have at least indicated that you understand the view that you disagree with.”

I don’t know if it’s practical to actually test people… You could probably fool lots of folks without truly understanding what they’re saying. That’s how we fooled our teachers into letting us graduate high school, isn’t it?

But I like the spirit of it. And I like the name — it’s memorable.

If you want even more metaphorical “oomph”, another term I like is the steel man argument. Instead of attacking the weakest version of someone’s argument (the fourth-grader whose shoes don’t fit), you find the best and strongest argument and try to refute that.

There’s some nice robot-steel overlap going on here… and also probably a joke about Iron Man that I’m not talented enough to make.

Last Words

So it’s not impossible to be less wrong, but it is really, really hard.

These ideas have been around for thousands of years, and I suspect that it’s just too much effort for most people to question themselves all the time.

Here’s John Stuart Mill writing in the 19th century:

“Ninety-nine in a hundred of what are called educated men are in this condition, even of those who can argue fluently for their opinions. Their conclusion may be true, but it might be false for anything they know: they have never thrown themselves into the mental position of those who think differently from them, and considered what such persons may have to say; and consequently they do not, in any proper sense of the word, know the doctrine which they themselves profess.”

I tend to swing between pessimism and optimism on our ability to improve, but here’s something I’m fairly sure about.

With how connected the world is now, we have no choice but to be exposed to the ideas of people we disagree with. If we choose to ignore what they say, the result is fragmentation, isolation, and — in some cases — righteous violence.

It’s more important than ever for us to learn that it’s both okay and necessary to be wrong.

I end with a final quote from Schulz:

“Of all the things we are wrong about, this idea of error might well top the list. It is our meta-mistake: we are wrong about what it means to be wrong. Far from being a sign of intellectual inferiority, the capacity to err is crucial to human cognition. Far from being a moral flaw, it is inextricable from some of our most humane and honorable qualities: empathy, optimism, imagination, conviction, and courage. And far from being a mark of indifference or intolerance, wrongness is a vital part of how we learn and change. Thanks to error, we can revise our understanding of ourselves and amend our ideas about the world.”

Three cheers for being wrong, and a gold star for admitting it.


