Cached Thoughts and Variations on a Theme (Revised Draft)

Ryan Shmeizer
21 min read · Jul 29, 2016

Or Why Charlie Munger has known no wise people who didn’t read all the time

Trigger warning: this essay is long. I attempt to build a first-principles argument that reading will make you better at life because of the way the wetware in your head (i.e. your brain) works. People who have read it to completion seemed to find it compelling.

Let’s begin.

I.

There’s a scene in The Princess Bride where Westley, the hero, challenges Vizzini, the villain, to a battle of wits to save the princess’s life. Westley places two glasses on a table, each containing wine and one purportedly containing Iocane poison. He challenges Vizzini to drink from the glass that does not lead to immediate death. Westley will drink from the other glass.

And so ensues a lesson in game theory as Vizzini shuffles glasses back and forth, with amusing recursive reasoning like:

“Iocane comes from Australia, as everyone knows, and Australia is entirely peopled with criminals, and criminals are used to having people not trust them, as you are not trusted by me, so I can clearly not choose the wine in front of you...And you must have suspected I would have known the powder’s origin, so I can clearly not choose the wine in front of me.”

In the end, Vizzini makes his selection and dies.

It’s crazy for Westley to stake everything on a 50–50 choice between two cups. And, indeed, he doesn’t. Both cups were poisoned. Westley had spent the preceding years building up an immunity to Iocane powder. Vizzini never had a chance.

Fun scene. But there’s a deeper didactic point: fortune favors the prepared mind.

II.

Much has been written on the benefits of reading. It strengthens memory, reduces stress, improves empathy, expands vocabulary, and makes you wiser, smarter, and probably more useful. Victrix fortunae sapientia, goes the aphorism: wisdom conquers fortune. A doubtful panacea, you may say, yet I feel that conventional wisdom still underrates reading’s usefulness.

“Reading will improve your mind” is about as banal a platitude as “exercise will sculpt your abs.” But, as David Foster Wallace says, “in the day to day trenches of adult existence, banal platitudes can have a life or death importance.” Total banality causes us to accept a claim as given, without meditating on its nuance, thereby missing out on the depth of its value. The value of reading becomes more apparent if we consider ways the brain and mind may work; it takes on newfound gravitas if we seek to maximize the likelihood of our usefulness.

I’ll use a pair of metaphors to make my argument. The first compares a brain’s thoughts to a computer’s memory cache — what feels to us like real-time thinking is probably just our brains retrieving stored memories in response to particular triggers. The second metaphor considers how new ideas are formed and how we optimize creativity. It imagines concepts as dynamic, catalytic things surrounded by a sphere of hypothetical variations of what each concept could become. Bear with me; this will (hopefully) make sense once built out more fully in context. The gist is that each “new” idea is a variation on preexisting concepts, which are themselves variations on other preexisting concepts.

If, after reading the argument below, you find yourself agreeing with the diagnosis, then you may benefit from my prescription: a simple strategy for maximizing our reading effectiveness called Charlie Munger’s Latticework of Mental Models.

III. Metaphor One: Cached Thoughts

There is a concept in neuroscience called the “100-step rule,” which postulates a constraint on the real-time processing speed of the brain. A typical neuron can transmit an impulse to a neighboring neuron about once every five milliseconds, or around 200 times a second. If we assume that what feels to us like “real time” thinking happens in about half a second, then information entering your brain can only traverse a chain about 100 neurons long as you compute a real-time solution/action/thought. From the moment the light enters your eye to the moment you recognize you are looking at Donald Trump strangling a cat with his bare hands, a chain no longer than 100 neurons could be involved. In other words, there cannot be more than 100 serial (i.e. one after the other) “steps.” For comparison, a single core of the Intel Core i7 chip in your MacBook can execute billions of serial instructions per second.
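
To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python (the five-millisecond firing interval and the half-second window are the assumptions from the paragraph above, not measured values):

```python
# Back-of-the-envelope arithmetic behind the "100-step rule".
neuron_interval_s = 0.005    # a neuron fires roughly once every 5 ms
realtime_window_s = 0.5      # duration of a "real time" thought

max_serial_steps = realtime_window_s / neuron_interval_s
print(max_serial_steps)      # 100.0 -- the longest serial chain available
```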

“Not to worry,” retorts some eccentric-looking gentleman at the back of the party, “the brain is a parallel computer. While each neuron can only trigger a 100-neuron-long chain in real time, billions of neural cells can simultaneously fire 100-neuron chains in parallel. This parallelism vastly multiplies the real-time processing power of the brain.” He then points to your MacBook, which has multiple processing cores, and describes how it breaks a computational problem into discrete parts that can be solved concurrently (i.e. in parallel) by the different cores, each running billions of serial calculations per second. “Like a neural network!” he shouts.

It’s here that our brain-as-a-computer analogy begins to break down. The brain can compute in 100 steps or fewer what would take a computer billions of steps to solve.

Indeed, the largest conceivable parallel computer can’t do anything useful in 100 steps, no matter how many parallel processors you add.

To understand why, imagine you had to get 100 bohemian nonconformists a distance of five million steps from Times Square, New York, to Burning Man, Nevada, by pushing them one by one in a wheelbarrow (if you have seen pictures of Burning Man, this scenario might make more sense). You decide that this would take a long time (and no one deserves that much exposure to conversations about “non-GMO cruelty-free vegan pumpkin spice squad goals”). One way to speed this up would be to hire 99 Uber wheelbarrow pushers to each take a passenger. Now the task goes 100 times faster. However, it still takes a minimum of five million steps to actually cross the country. Hiring ten million more Uber wheelbarrow pushers would provide no additional gain in speed, since the problem cannot be solved in less time than it takes to walk the five million steps. So too in parallel computing: past a certain point, adding more processors provides no speedup, and no matter how many you add, a computer cannot compute anything useful in 100 steps.
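
The wheelbarrow bound is easy to sanity-check in code. A toy sketch (the step and passenger counts are the made-up numbers from the story above): total time is floored by the length of a single trip, no matter how many workers you hire.

```python
# Toy model: parallel workers cannot beat the serial path length.
TRIP_STEPS = 5_000_000   # steps per one-way wheelbarrow trip
PASSENGERS = 100

def total_steps(workers: int) -> int:
    """Workers push passengers one at a time, in parallel across workers;
    the busiest worker's trips determine the elapsed step count."""
    trips_per_worker = -(-PASSENGERS // workers)  # ceiling division
    return trips_per_worker * TRIP_STEPS

for workers in (1, 100, 10_000_000):
    print(f"{workers:>10,} workers -> {total_steps(workers):>11,} steps")
# 1 worker: 500,000,000 steps; 100 workers: 5,000,000 steps;
# 10,000,000 workers: still 5,000,000 steps. The floor is one trip.
```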

How then does our brain, that most miraculous three-pound grey blob, achieve in fewer than 100 steps what the fastest parallel computer imaginable cannot solve in a billion steps?

Well, if you had to write real-time programs for billions of 100 Hz (using hertz here as a proxy for serial actions per second) parallel processors, one trick you’d use as heavily as possible is caching. That’s when you store the results of previous operations and look them up in memory the next time you need them, rather than recomputing from scratch. “It’s a good guess that the actual majority of human cognition consists of cache lookups,” says artificial intelligence researcher Eliezer Yudkowsky. In other words, the brain does not “compute” answers to most problems; it retrieves answers that were stored in memory. When I throw you a ball and your hand moves to catch it, that is not your brain computing Newtonian physics in real time. You are smart, but no one is that smart. Rather, your brain has stored in memory, from years of repetitive practice, the muscle commands required to catch a ball, and this temporal sequence is automatically recalled by the sight of the ball.
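
In software this trick is called memoization: compute a result once, store it, and answer every later request with a lookup instead of a recomputation. A minimal sketch (the Fibonacci example is mine, chosen only because it is the canonical illustration):

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # store each answer the first time it is computed
def fib(n: int) -> int:
    """Naive recursion recomputes the same values exponentially often;
    with the cache, each value is computed once and then just looked up."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(100))  # instant: 101 computations, everything else is cache hits
```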

Something similar likely happens with cognition. Somebody says “gun control” and your mind automatically dips into your memory cache to withdraw precomputed thoughts. Recognition, association, pattern completion. Kahneman terms this System 1 thinking: fast, instinctive, and emotional, as compared to its slower, more deliberate, and more logical System 2 counterpart.

Say we have a debate about politics or religion or some similar light topic. The discussion flows rapidly back and forth. We each offer arguments, evidence, thoughts, facts, counterarguments. To an observer, our mental volleying seems like an incredible amount of real time cognitive processing, especially given we could not have fully anticipated each other’s exact arguments.

But it’s a good guess that most of this debate is a battle of cached thoughts pulled out in response to invariant trigger words, and that very little new real-time thinking occurs; that our effectiveness as interlocutors is largely determined by precomputed work. Combine cached-thought pattern completion with the cognitive limits imposed by the 100-step rule and it’s no wonder debates on contentious topics are so maddeningly ineffective. We change our minds less often than we think and repeat cached thoughts that we have accepted as truth without deriving them ourselves from first principles.

One cynical conclusion is that debates, particularly political ones, are hardly about convincing your opponent to change beliefs. Indeed this pursuit is often pointless since confirmation bias, commitment and consistency, hindsight bias, narrative fallacy, availability bias, scope insensitivity, anchoring, affect heuristics, and a host of other System 1 malfunctions will trump your 100-neuron chain attempt at seriously considering disconfirming evidence.

The recent presidential debates have been an acute reminder of this futility: we mistake cleverness for content as candidates throw out evocative soundbites to elicit “applause light” reactions from the audience. It reminds me of the scene in Thank You for Smoking where the protagonist (Nick) is teaching his kid (Joey) how to win debates:

Nick: Okay, let’s say that you’re defending chocolate and I’m defending vanilla. Now, if I were to say to you, “Vanilla’s the best flavor ice cream”, you’d say …?

Joey: “No, chocolate is.”

Nick: Exactly. But you can’t win that argument. So, I’ll ask you: So you think chocolate is the end-all and be-all of ice cream, do you?

Joey: It’s the best ice cream; I wouldn’t order any other.

Nick: Oh. So it’s all chocolate for you, is it?

Joey: Yes, chocolate is all I need.

Nick: Well, I need more than chocolate. And for that matter, I need more than vanilla. I believe that we need freedom and choice when it comes to our ice cream, and that, Joey Naylor, that is the definition of liberty.

Joey: But that’s not what we’re talking about.

Nick: Ah, but that’s what I’m talking about.

Joey: But … you didn’t prove that vanilla’s the best.

Nick: I didn’t have to. I proved that you’re wrong, and if you’re wrong, I’m right.

Joey: But you still didn’t convince me.

Nick: Because I’m not after you. I’m after them.

“I’d never fall for a trick like that,” you may say, but unless you are trained to do otherwise — to consider disconfirming evidence in the tiny window where intelligence has a chance to act — you will likely rely on cached thoughts, and repeat fragments of other people’s beliefs without doing any real thinking yourself.

Jonathan Haidt illustrates this point painfully and hilariously in his book The Righteous Mind, where he asks subjects bizarre questions like “Is it wrong to have sex with a dead chicken? How about your sister?” Most people agree these things are wrong when under interrogation in psychology experiments. But almost none can explain why. It’s as if they penciled a conclusion to an exam question at the bottom of the page and, when pressed to justify this conclusion, went to the top of the page and started scribbling down confirming cached thoughts. It’s kind of like that guy who has to go on TV and automatically justify any position taken by the president.

“Well, what the hell does this have to do with reading?” Glad you asked, and thanks for the smooth segue. French microbiologist Louis Pasteur once observed that chance favors the prepared mind. This is particularly true in a mind constrained by the 100-step rule, where preparedness is largely a function of the breadth, depth, and intermingling of cached thoughts. Given these limitations, one obvious strategy for mental preparedness is voracious reading and an accumulation of vicarious experience. The more you read, the more effective you become and, paradoxically, the more humbled you will be by how little you know. You will begin to see novel links and better understand the world around you. You will be quicker and more useful in real-time discussion. Your base of cached thoughts will build, and these thoughts will intermingle and combine in novel ways, often resulting in serendipitous invention.

In essence, you will maximize your likelihood of creative thought. After all, variations on a theme are the crux of creativity.

IV. Metaphor Two: Conceptual Skeletons and Implicospheres

Every once in a while, I read or see or hear something that instantly crystallizes into a single idea a bunch of half-articulated thoughts that were floating around in the back of my mind. It’s kind of like a micro-eureka moment where the new tidbit provides the final piece of a conceptual puzzle that I had no idea I was working on, and I am left with a “new” idea.

Until recently, I did not have a good framework for understanding this seemingly spontaneous process and could only describe it in abstract terms. Then I read Douglas Hofstadter’s essay, Variations on a Theme as the Crux of Creativity. It left a deep impression on me. In the essay, Hofstadter advances a stunningly cogent first-principles framework for understanding the essence of creativity. And, as the title of the essay suggests, it is predicated on viewing each new theme as a unique combination of the many themes that preceded it.

“There is nothing new under the sun,” says Ecclesiastes. Or, if your tastes are less canonical, perhaps you prefer Newton: “If I have seen further it is by standing on the shoulders of giants.” Every idea is built upon a thousand related ideas. Each new theme is itself some sort of variation of previous themes. The more we read, the greater the opportunity for theme intermingling and creative breakthrough.

Hofstadter begins his essay by differentiating between an object and a concept using the example of a Rubik’s Cube. In this case, the object is the 3 × 3 × 3 cube with little colored faces that turn. It’s what we see when we look with our eyes. However, at the “core” of the Rubik’s Cube is an essence or theme or concept that arises in the mind of the person who perceives the cube. This concept is not the same in each mind, just as not everyone has the same concept of Miley Cyrus or a couch (if I say “think of an elephant,” what you imagine in your mind will be different from what I imagine in mine). And it is the concept at the heart of the object, not the object itself, upon which our minds make variations to come up with new themes.

Let’s say that at the heart of a Rubik’s Cube lies a concept called “Rubik’s-Cubicity.” When I looked at the cube I saw an object. But some people saw something deeper, a concept of Rubik’s-Cubicity upon which to make new variations. And thus was born the 4 × 4 × 4 Rubik’s Cube, the triangular Rubik’s Pyramid, and the myriad subsequent variations. It’s as if these inventors were twiddling knobs on a machine, with Rubik’s-Cubicity at its center, to come up with new variations on a concept.

Yet there is a sense in which this trivial example is somehow profoundly different from the “magic spark” behind Einstein’s general relativity or Newton’s laws of motion. There’s a seductive notion of the lone genius, somehow forged from different stuff than us mere mortals, able to conjure a beautiful idea from a transcendent plane through some unanalyzable, ungraspable mental alchemy. But why should it be so?

Well, of course, inventing a 4 × 4 × 4 cube is far less deep than coming up with a breakthrough in physics, but, in all likelihood, a similar path-dependent process is at work in each case. The trick is being able to see the deeply hidden prior themes.

In the case of the 4 × 4 × 4 cube, it is easy to see the 3 × 3 × 3 cube as its predecessor. But the more technically complex the breakthrough, the more deeply hidden the preceding themes are to the layperson, and thus the more tempting it is to attribute the breakthrough to an unattainable and seemingly magical insight. To see the preceding themes upon which general relativity is a variation requires familiarity with Newtonian physics, Hamiltonians, Lagrangians, differential geometry, linear algebra, vector calculus, and language, to name but a few. Each of these, in turn, is itself a derivation of thousands of preceding themes.

To actually come up with general relativity requires FAR more than a cursory understanding of prior themes. It likely requires a recipe of themes mixed in exactly the right amounts, at exactly the right depth of mastery, in exactly the right order, at exactly the right time. It’s probably impossible to reverse engineer or to force this type of creative breakthrough. You cannot come up with general relativity by thinking, “Gee, I must really strain my mind to its limits and come up with a novel, game-changing breakthrough in physics.” No! Einstein likely just did what came naturally to him, which was to meditate on problems he was attracted to and tackle them with ideas, themes, and frameworks that he had stored in his memory cache.

It’s possible that when a great intellect, with the right predisposition, is born in the right era, is exposed to the right stimuli, and ingests the right themes in the right amounts and right order, that person is set on a path-dependent course where the concepts in their cached thoughts intermingle in just the right way to create a new variation. That is, a creative breakthrough that becomes a standalone theme in its own right.

Luck seems pivotal. And path dependence seems inexorable. But that is not a bad thing. We can pull some mental jujitsu and use this cognitive characteristic in our favor. Read more to plant more ideas in your head and maximize the likelihood of useful serendipitous theme intermingling.

It is strange that the idea of path-dependent theme mixing resulting in a new variation (or theme) is seen as mundane and boring next to the seductive notion of divinely inspired genius. We do not like to see how the sausage is made and are invariably disappointed when a “genius” describes their creative process.

And yet the refrain is recurring: Gauss once said, “If others would but reflect on mathematical truths as deeply and as continuously as I have, they would make my discoveries.” Edison posited that genius is 1 percent inspiration and 99 percent perspiration. Fields Medalist Timothy Gowers describes Andrew Wiles’s proof of Fermat’s Last Theorem similarly:

“Andrew Wiles, who proved Fermat’s Last Theorem … and thereby solved the world’s most famous unsolved mathematical problem is undoubtedly very clever, but he is not a genius in my sense. How, you might ask, could he possibly have done what he did without some sort of mysterious extra brainpower? The answer is that, remarkable though his achievement was, it is not so remarkable as to defy explanation. I do not know precisely what enabled him to succeed, but he would have needed a great deal of courage, determination, and patience, a wide knowledge of some very difficult work done by others, the good fortune to be in the right mathematical area at the right time, and an exceptional strategic ability.”

However, whilst mastery of existing concepts is a necessary condition for creative breakthrough, it is not a sufficient condition. Recognizing creativity as the outcome of a path-dependent process conditional on combining preceding themes says nothing about how to actually have a creative breakthrough or great new idea.

Simply put: you cannot solve Fermat’s Last Theorem without a wide knowledge of work done by others, but having this knowledge is no guarantee that you will be able to solve Fermat’s Last Theorem!

The actual process of coming up with a new concept seems to follow quite an unpredictable path where concepts “slip” from one into another in a rather nondeliberate yet nonaccidental way. This nondeliberate yet nonaccidental “slippage” of concepts may sound esoteric or vague, but it is the very crux of fluid thought and something that goes on day and night in each of us, usually without our slightest awareness of it. In Hofstadter’s words, “nondeliberate yet nonaccidental slippage… is one of those things that, like air or gravity or three-dimensionality, tend to elude our perception because they define the very fabric of our lives.”

Think of how easily you imagine counterfactuals where scenarios play out differently, sometimes nonsensically, and consider how often and automatically you do this. Ever imagine yourself with wings, gliding through the air? Or fantasize about sipping mai tais on the shores of Capri with your celebrity crush? Or your car taking off like a rocket when you are stuck in traffic? Much of this is nondeliberate yet nonaccidental slippage of concepts.

The slippage is nondeliberate in that you cannot force the epiphany of general relativity by purposefully mixing themes from math and physics. Yet it is nonaccidental in that a certain mastery of math and physics concepts is required in order to maximize chances of useful slippage in the first place.

Hofstadter’s idea of slippage suggests that a concept is not a static, frozen perception. It does not exist unchanging in a vacuum. Rather, a concept is a dynamic thing, surrounded by a sphere of hypothetical variations of what that concept could become.

He calls these imaginary spheres “implicospheres,” which stands for “implicit counterfactual spheres,” referring to things that never were but that we cannot help seeing anyway. At the center of an implicosphere lies a conceptual skeleton, the core of a concept.

I imagine a conceptual skeleton as the nucleus of an atom, and the flickering, shifting hypothetical variations of the concept as a dense electron cloud orbiting the nucleus. When the clouds of two or more implicospheres overlap, they may combine to form a new molecule, or new idea. I’ve heard this process described as idea sex — when ideas mate and have children. The creative process, by extension, consists of millions of overlapping and intermingling implicospheres, at the center of each of which is a conceptual skeleton.

When a new idea is implanted in the mind, an implicosphere grows around it and intermingles with older implicospheres. This opens the door to previously inaccessible theme combinations and the creation of new variations.

There may be no way to force the creative process, but we can maximize our chances of serendipitous breakthrough. The more you read, the more implicospheres you add to your mind and the greater the opportunity for serendipitous theme combination.

Borrowing from Metaphor One, a cache of thoughts is a group of implicospheres that have been tagged as belonging to the same category. When we debate the ethics of gun control, I draw on thoughts from my gun-control cache, but these are colored by overlapping implicospheres from my psychology, fear, and paternalism caches (among others).

Reading implants new ideas and their associated implicospheres in my mind, and if I read strategically I can overlap implicospheres from unrelated caches in a synergistic manner (behavioral economics, as an oversimplification, is an entire field of study derived from the combined ideas of economics, psychology, and neuroscience).

Reading won’t make you Einstein, but it will improve the probability of your mind nondeliberately yet nonaccidentally slipping concepts from one implicosphere to another, giving you more shots at creative discovery and effective problem solving. The process is haphazard, more blunt object than surgical knife.

To that end, not all reading is created equal; certain types of reading probably confer more benefits than others. If you are trying to solve a business problem creatively, you will probably be better served reading Competition Demystified than Green Eggs and Ham. Books on psychology will likely have broader benefits (i.e. better maximize chances of serendipitous implicosphere mixing) than reading the news, and so on.

Thus the next step is figuring out what’s worth reading. This is, of course, subjective, but I will offer the framework that I have found most useful. I also happen to believe that it is widely accessible, easily adoptable, and quick to deliver results. It’s called Charlie Munger’s Latticework of Mental Models.

V. A Reading Strategy: Charlie Munger’s Latticework of Mental Models

After graduating college, I spent a lot of time reading blogs and the news. College learning had been a regimen of structured intensity, and in the absence of that intensity my mind felt adrift. I craved intellectual stimulation but spent little time thinking about what would be useful reading. Instead, I followed the path of least resistance, which typically led to news aggregators and industry blogs. I would read these things for hours and, at the end of the day, my mind would feel “full” in the same way it did after spending six hours in the college library studying for a finance exam. However, after a year of this effort, my mental tools felt dulled: I had forgotten some of the things I had learned in college, and my reading that year had failed to act as a counterbalance. In fact, I could not think of a single instance where something I had read in the news had made me more effective that year (keep in mind I had read hundreds of hours of news). This realization was depressing. So I started thinking about why that might be so.

My theory is that I was underprepared for the responsibility of leveling up my skills in the “real world.” A college class does all the hard work of curation: it hands you on a silver platter the sequence in which you have to learn things in order to achieve a semblance of mastery. The so-called “real world” does no such favors. I was unprepared to be my own information curator and substituted vast quantities of intellectual junk food for curated frameworks. It took me an embarrassingly long time to realize I was doing this, and to understand the important distinction between frameworks and facts. A framework (or theory) is a lens through which facts can be viewed. Facts in the absence of frameworks are near useless. The computing metaphor is facts as data inputs and frameworks as software algorithms — you need the software to analyze the data.
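
To make that computing metaphor concrete, here is a tiny illustration of my own (the numbers and the outlier rule are invented for the example): the same facts sit inert until a framework, i.e. an algorithm, gives them meaning.

```python
# Facts without a framework are just inert data points.
facts = [2.1, 2.4, 1.9, 8.7, 2.2]  # raw observations of some quantity

# A framework is the algorithm that makes the data speak.
def flag_outliers(data: list[float], threshold: float = 2.0) -> list[float]:
    """Flag values that sit far from the mean of the sample."""
    mean = sum(data) / len(data)
    return [x for x in data if abs(x - mean) > threshold]

print(flag_outliers(facts))  # [8.7] -- the framework surfaces the signal
```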

This introspection led me to the works of Charlie Munger, the little-known vice chairman of Berkshire Hathaway and longtime business partner of Warren Buffett. Charlie is inspiring in many ways (genius, billionaire, ethicist, philanthropist, etc.), perhaps most so for his generosity as a teacher and his advocacy of acquiring “worldly wisdom.” He is obviously a special mind — an extreme outlier — but he offers a system of self-improvement that’s accessible to us all. He calls it the latticework of mental models.

The latticework system requires familiarity with the big (i.e. core or important) models from each of the big academic disciplines, and the organization of these frameworks into an interconnected mental network. The multidisciplinary emphasis is important and intentional. In the words of Charlie: “The first rule is that you’ve got to have multiple models — because if you just have one or two that you’re using, the nature of human psychology is such that you’ll torture reality so that it fits your models, or at least you’ll think it does.”

The economics professor will approach problems using the models of equilibria that he has spent 20 years drilling into his head, even when a borrowed model from the psychology department would have done a better job. Similarly, the chiropractor will recommend a back cracking for a scratched knee. All the world’s wisdom is not to be found in one little academic department and there is a risk of torturing problems to fit models we’re familiar with. Psychologist Abraham Maslow is famous for saying “I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.”

The models have to come from an array of disciplines. It’s not enough to memorize isolated facts and repeat them back (i.e. cached-thought pattern completion) — a memory cache of facts is not a useful tool for thinking. The facts have to hang on a latticework of theory, or an array of mental models, in order to be usable.

If you are already thinking, “Agh, that sounds too hard, I’m going back to reading Business Insider,” do not fret. It turns out that ~100 models will lift the bulk of the mental freight, and a small subset of those ~100 will do the plurality of the lifting. A good starting point is to learn all the big frameworks from all the big academic departments. The 101 intro class should suffice in most cases. If finding a starting point still sounds like too much work, here is a link to a talk Charlie gave at USC outlining a few key models from mathematics, statistics, accounting, engineering, physics, economics, biology, psychology, and finance. Here is Charlie’s talk focused specifically on the psychology frameworks, which he often says are greatly underrated.

The best anthology I have found is Poor Charlie’s Almanack.

Finally, Gabriel Weinberg did an excellent public service by collecting many of the frameworks in this Medium post.

Per Charlie, once you have added the models to your mind, you are better positioned to analyze situations and think through what you are reading. Each discipline can act as a memory cache category and each model can be a conceptual skeleton, around which an implicosphere will flicker to life.

Charlie’s approach will almost certainly make you a better thinker. You can go through the latticework of models in your head, checklist style, each time you read new information, try to solve a problem, have a debate, or need to make a decision. Make the cached-thought structure of your mind work in your favor. Pick the right models for the situation at hand. Combine models across disciplines to come up with novel solutions. Then enjoy the added benefit of nondeliberate yet nonaccidental implicosphere slippage and the emergence of new ideas. Use the way your brain works to stack the deck in your favor.

VI.

I began with a thesis that the value of reading becomes more apparent if we consider ways the brain and mind may work. This notion encompasses cached thoughts, 100-step rules, memory retrieval, pattern completion, variations on a theme, conceptual skeletons, implicospheres, and counterfactuals. I proposed the latticework of mental models as a strategy to effectively utilize our mental machinery, acquire worldly wisdom, and maximize opportunities for serendipitous slippage. I’ll end with two thoughts from Charlie:

“In my whole life, I have known no wise people (over a broad subject matter area) who didn’t read all the time — none, zero.”

and

“Spend each day trying to be a little wiser than you were when you woke up. Day by day, and at the end of the day — if you live long enough — like most people, you will get out of life what you deserve.”

If you found this essay useful, please consider hitting the heart-shaped button below so others may find the essay as well.

And check out my post on heuristic biases for something a bit lighter: 10 Reasons You Will Read This Medium Post.


Ryan Shmeizer

These are my views, and if you don't like them... well, I have others.