Mental Models, Dragonfloxes, and How to Think Real Good

A lot has been said recently about mental models. If you are at all interested in clear thinking and decision-making, the term seems to show up everywhere.

The most famous proponent of the concept is Warren Buffett’s business partner, Charlie Munger. He considers mental models the key to ‘elementary worldly wisdom.’

Here he is:

“I’ve long believed that a certain [decision-making] system — which almost any intelligent person can learn — works way better than the systems that most people use. …what you need is a latticework of mental models in your head. And you hang your actual experience and your vicarious experience (that you get from reading and so forth) on this latticework of powerful models. And, with that system, things gradually get to fit together in a way that enhances cognition.”

This is a fantastic start, but I have two complaints.

First, I think mental models are about much, much more than just enhanced cognition or better decision-making. More on this later.

Second, I don’t think the idea of the mental model is very beginner-friendly. Many beginners, I suspect, see the term and walk away asking, “What the hell is a mental model, anyway?”

Let’s try to answer that question.

What the hell is a mental model, anyway?

A not-so-bad starting point to help ourselves understand the mental model comes from Peter Bevelin’s Seeking Wisdom:

“A model is an idea that helps us better understand how the world works. Models illustrate consequences and answer questions like ‘why’ and ‘how’. Take the model of social proof as an example. What happens? When people are uncertain they often automatically do what others do without thinking about the correct thing to do. This idea helps explain ‘why’ and predict ‘how’ people are likely to behave in certain situations.”

There’s a seed of meaning here, but I think things are still unclear.

To ‘pump’ some more intuition, let’s use a few metaphors to get a better sense of what a mental model is and does.

Nerdy aside: You can think of this as lifting the mental model up “by its bootstraps” — we’re going to use mental models to, reflexively, understand what a mental model is.

Through the Looking Glass

The most common, and perhaps most useful, metaphor for mental models is to think of them as a set of lenses.

In my most popular essay ever (which I thought would be a flop), I wrote:

“You can think of mental models as psychological lenses that color and shape what we see. Some of this is genetic or cultural (Americans focus on very different parts of a picture than the Japanese do), but much of our perception is also shaped by experience — and experience includes the books we read.”

Our eyes are constantly assaulted by a never-ending stream of data. Without some way to filter and organize all this information, it would be impossible for us to do anything at all.

The same goes for our understanding of the world. I always shake my head when someone says, “Just give me the facts.” Facts are everywhere. If I gave you all the facts, they’d fill up your bedroom, your garage, your neighborhood, and eventually crush the entire world under their combined weight. Facts are nothing without interpretation.

Mental models help us filter, organize, and understand. This gives us a simple — but hopefully useful — representation of the world.

Mapmaker, Mapmaker

Another way to think of mental models is to think of them as maps.

Above, we mentioned that a mental model is a simplification of the world. Maps are a lot like this — they simplify the territory they represent.

In fact, maps are only useful because they simplify.

If your boss (who you secretly dislike, despite what you tell yourself) asks for a ‘perfectly accurate map of Tokyo’ on his desk by 3 pm, your only option would be to transport the entire city of Tokyo on top of him — ‘accidentally’ crushing him under several million tons of asphalt and concrete (along with thousands of drunk salarymen and some very bewildered sushi chefs).

The only perfect representation of Tokyo is, well, Tokyo itself.

Here’s some more intuition. I use Google Maps a lot to navigate, but I almost never use satellite mode, which looks like this:

Just looking at it raises my blood pressure.

Instead, I much prefer the map format, which is much less noisy and puts me in a Zen state of calm:

It works because it simplifies.

Another reason I like to think of mental models as maps is because maps often distort reality on purpose. Maps don’t have to be a perfect representation of the world, and neither do mental models.

Take subway maps, for example. Here’s a subway map of Tokyo:

If I were to lay this map on top of a same-scale map of Tokyo, none of the stations would line up — subway lines don’t run straight, and stations certainly aren’t equidistant.

The subway map filters out information useless for most subway riders (distance, absolute location), making it easier for them to do what they need to do — catch a train from A to B.

Mental Models are Tools

In the above sense, maps are tools that have a function — getting you from A to B.

It’s better, I think, to treat mental models as tools rather than pictures. We don’t want a perfect picture or representation of reality. We want what works.

A visual example from an essay by Nassim Taleb. Purposely introducing distortions can improve aim.

A subway map can distort reality to better help you navigate. Likewise, we humans can distort our view of the world to better help ourselves navigate life.

Some argue that religious beliefs serve this very purpose. It doesn’t matter if karma isn’t real or vengeful spirits of the ancestral dead do not exist — if such beliefs make you love thy neighbor, pay thy taxes, and avoid pickpocketing thy would-be victim on the Tokyo Metro, they are doing their job.

Or take marriage vows. With the divorce rate at what it is, you might argue that it is irrational to vow to your soon-to-be spouse, “till death do us part.”

However, this is beside the point. Believing that you want to be together forever — holding it as an ideal — can help strengthen your relationship.

Mental models don’t have to perfectly represent the world. They just have to be good enough to work.

You judge a model by how useful it is, not by how right it is.

One more thing. Thinking about mental models as tools helps emphasize how they translate to action. Mental models affect our perceptions (lenses), which affect how we see the world (our maps), which in turn affects how we think and act.

In other words: beliefs have consequences.

Now, enough intuition. It’s time to shift gears and take our concept of the mental model to the playing field.


Time to Play

Now that we have our metaphors — mental models as lenses (perception organizes, but always leaves something out), mental models as maps (simplified, imperfect representations of reality), and mental models as tools (distortions linked to useful actions) — let’s take a look at how mental models apply to real life.

The rest of this essay is a sort of creative shotgun blast — I’ll be throwing a bunch of fun, possibly valuable, ideas against a wall with the hope that some of them stick.

Owls Are Out. Dragonfloxes Are In.

First, let’s apply mental models to decision-making.

By the end of this section, the following should make sense: To be a good decision-maker, become a dragonflox.

In case you’ve never seen a dragonflox before, here’s a photo of one I took in the wild:

Not very pretty, but looks can be deceiving — these guys are excellent decision-makers.

The intuition for this comes from Philip Tetlock’s masterful book Superforecasting: The Art and Science of Prediction, which changed my life when I read it years ago.

Tetlock’s team looked at pundits and other pseudo-experts — particularly in the political domain — and studied the accuracy of their predictions. What Tetlock found was that most experts were no better at forecasting than chimpanzees throwing darts. Talk about humiliating. (Note: Of course, I simplify here. Most experts — surgeons and pilots, for example — are not chimps. Only some are.)

It’s always fun to bully experts, but here’s what really gets me excited: Tetlock’s book argues that you can systematically train someone to be a better decision-maker.

So how do you do this?

Well, there are many moving parts here, but one way to think about it is the hedgehog-fox distinction. The distinction goes all the way back to the ancient Greek poet Archilochus but, for our purposes, here’s what you need to know. There are (to generalize) two kinds of decision-makers: foxes, who can do many things, and hedgehogs, who can do only one thing — curl up into a ball.

Here’s Tetlock, who uses the same lens analogy to describe how hedgehogs see the future:

“…hedgehog forecasters first see things from the tip-of-your-nose perspective. That’s natural enough. But the hedgehog also “knows one big thing,” the Big Idea he uses over and over when trying to figure out what will happen next. Think of that Big Idea like a pair of glasses that the hedgehog never takes off. The hedgehog sees everything through those glasses.”

Foxes are the opposite of hedgehogs. Put simply: Hedgehogs have one BIG mental model, foxes have many mental models.

When you only have a hammer, the whole world starts to look like a nail. Hedgehogs have a hard time seeing outside their own mental models. This makes them vulnerable to confirmation bias — they start trying to make the world fit their own theories.

Foxes, on the other hand, tend to have a more balanced view.

Here’s Tetlock with a great analogy to “pump” our intuition:

“Each [dragonfly] eye is an enormous, bulging sphere, the surface of which is covered with tiny lenses. Depending on the species, there may be as many as thirty thousand of these lenses on a single eye, each one occupying a physical space slightly different from those of the adjacent lenses, giving it a unique perspective. Information from these thousands of unique perspectives flows into the dragonfly’s brain where it is synthesized into vision so superb that the dragonfly can see in almost every direction simultaneously, with the clarity and precision it needs to pick off flying insects at high speed.
“A fox with the bulging eyes of a dragonfly is an ugly mixed metaphor but it captures a key reason why the foresight of foxes is superior to that of hedgehogs with their green-tinted glasses. Foxes aggregate perspectives.”

To help fix this image in my mind, I invented the term dragonflox to capture this idea of someone with a many-models, many-lenses view of the world. Part dragonfly, part fox — hence, dragonflox.

Now, here’s where most articles on mental models stop. But I’m just getting started. Next, let’s see what happens when a dragonflox starts to bend over and look inwards…

The Dragonflox Introspects: Mental Models & Psychological Freedom

So far, we’ve looked at how mental models can be used to look ‘outward’ to better understand the world around us.

Recently, I’ve been reading a lot about personality theory, and I realized something: You can also use mental models to look inwards.

In fact, there’s a subset of personality theory called personal construct theory that looks a lot like what we’ve learned so far about mental models:

“According to psychologist George Kelly, personality is composed of the various mental constructs through which each person views reality. Kelly believed that each person was much like a scientist. Just like scientists, we want to understand the world around us, make predictions about what will happen next, and create theories to explain events.” (Source)

This sounds a lot like Tetlock’s superforecasters, who have models of reality that are tested, updated and improved by experience (unless they’re chimps). There are more similarities:

“…according to Kelly, we experience the world through the ‘lens’ of our constructs. These constructs are used to predict and anticipate events, which in turn determines our behaviors, feelings, and thoughts.
“Kelly also believed that all events that happen are open to multiple interpretations, which he referred to as constructive alternativism. When we are trying to make sense of an event or situation, he suggested that we are also able to pick and choose which construct we want to use.”

Earlier, we mentioned that beliefs have consequences. Mental models affect our actions, which in turn affect the world. Since mental models also affect us inwardly, it seems mental models also impact our well-being.

To understand this, let me share a passage from Me, Myself, and Us by Harvard personality psychologist Brian Little. Little writes:

“The way you construe others has consequences for your well-being. Generally speaking, the more numerous the lenses or frames through which you can make sense of the world, the more adaptive it is. Having too few constructs or insufficiently validated ones can create problems, particularly when life is moving quickly and you are trying to make sense of it. … The reason personal constructs matter is because they determine, in part, the degrees of freedom we have for shaping our lives.”

Dragonfloxes don’t just make better decisions; they’re also more emotionally robust and — in a very real way — more free.

Where does this freedom come from? Well, freedom is not just about having choices. It’s also about seeing choices. If you are asleep in prison and the guard opens your gate, are you really free? No, because you didn’t wake up to realize it.

The multi-modeled world view that dragonfloxes have lets them see many more paths branching forward. This is important for both anxiety and perceived freedom.

I’ll let Little finish:

“Those who have more constructs available for anticipating events or the challenges of changed environments are less at risk for experiencing anxiety. Those with very few personal constructs, particularly if those constructs have a very narrow range of convenience, may frequently be upended in their anticipation of events: their constructs just don’t apply to many of the new situations they need to deal with in life.
“In other words, the more limited one’s repertoire of personal constructs, the greater the anxiety and the fewer the degrees of freedom one has in anticipating and acting upon events in your daily life. This helps explain why your sister can’t seem to move beyond her divorce, in spite of all your attempts to give her new things to do. She treats everyone in terms of a simple construct, ‘trustworthy vs. will leave me in a flash like Sam did’ and in so doing she reduces her degrees of freedom and retreats from re-engaging with life and moving ahead.”

Now, let’s take our notion of the mental model and move one step higher — to groups.

Models as Ideologies

As we saw in the last sections, the danger of being a one-model hedgehog is that you are highly vulnerable to confirmation bias. All a hammer can see are nails.

This is pretty bad already, but things get worse when you realize that people share models. Flawed, low-resolution models can spread from person to person, infecting whole populations like a lethal strain of E. coli.

This isn’t just about crazy cults. It’s about all of us. Take just the last few decades. We’ve collectively suffered from all sorts of illusions: 9/11 challenged our belief that the world was getting more secular; the ’08 financial crisis challenged our beliefs about real estate & risk management; Brexit challenged our beliefs about globalization. And so on.

Who knows what other shared delusions we carry, just waiting to explode in our faces?

When your wife, uncle, boss, barber, bicycle repairman, Shinto priest, and coffee shop barista are all running on the same mental model, it’s pretty hard to stop and challenge it.

Here’s a much-needed warning from Alan Jacobs in his book How to Think:

“The most dangerous metaphors for us are the ones that cease to be recognizable as metaphors. For many people the analogy between brain and computer has reached that point: the brain isn’t like a computer, they think, it is a computer. (“A computer made of meat,” some say.) When that happens to us, we are in a bad way, because those screens become permanently implanted, and we lose the ability to redirect our attention toward those elements of reality we have ignored.”

It’s terrifying to think that two groups can watch the same series of events in the same place at the same time and come away with two completely different interpretations.

A Peek Behind the Fabric of Reality

Now, let’s go one step higher and explore what mental models have to say about reality itself.

So I was lying in bed, half-naked in my underwear, reading the philosopher Isaiah Berlin’s book Concepts & Categories when a certain passage made me jump out of my bed and scramble for pen & paper.

Berlin, by the way, is the guy who made the hedgehog-fox distinction famous in his 1953 essay The Hedgehog and the Fox. However, what got me excited was a certain passage from The Purpose of Philosophy, another essay of his.

Here’s one small bit of it:

“Men’s views of one another will differ profoundly as a very consequence of their general conception of the world: the notions of cause and purpose, good and evil, freedom and slavery, things and persons, rights, duties, laws, justice, truth, falsehood, to take some central ideas completely at random, depend directly upon the general framework within which they form, as it were, nodal points.”

“Well,” I thought to myself as I read (at this point I was still in bed), “that sounds a lot like this concept of the mental model.”

Then, I read the following, which is what sent me scrambling:

“These models often collide; some are rendered inadequate by failing to account for too many aspects of experience, and are in their turn replaced by other models which emphasize what these last have omitted but in their turn may obscure what the others have rendered clear. The task of philosophy, often a difficult and painful one, is to extricate and bring to light the hidden categories and models in terms of which human beings think, to reveal what is obscure or contradictory in them, to discern the conflicts between them that prevent the construction of more adequate ways of organising and describing and explaining experience (for all description as well as explanation involves some model in terms of which the describing and explaining is done); and then, at a still ‘higher’ level, to examine the nature of this activity itself (epistemology, philosophical logic, linguistic analysis), and to bring to light the concealed models that operate in this second-order, philosophical, activity itself.”

I hope this quote helps illuminate what I meant when, at the beginning of this essay, I mentioned that mental models are about much, much more than just cognition or decision-making.

And I hope you can feel what Berlin means when he writes:

“The models [men] use [to describe and explain] the world must deeply affect their lives, not least when they are unconscious; much of the misery and frustration of men is due to the mechanical or unconscious, as well as the deliberate, application of models where they do not work.”

Oftentimes, a writer starts writing first and only then figures out what she is writing about. When I started writing two years ago, I didn’t really know why I was doing it. If The Polymath Project is to be about anything, I want it to be about this — clarifying and discovering our different & varied ways of seeing the world, exploring the consequences of such ‘lenses’ and ‘maps’, and maybe, just maybe, making the world a little less bad.

I end with a final sentence from Berlin’s essay:

“The goal of philosophy is always the same, to assist men to understand themselves and thus operate in the open, and not wildly, in the dark.”

