It’s all downhill from here.

What is “Peak Reality” — and have we hit it?

In the mid-2000s, “Peak Oil” was frequently in the news. The idea (for those of you reading this from off the grid on a solar-powered laptop) was that at some point, humanity would start losing its battle to pull more oil out of the ground. The easy pickings would have been picked — the easiest first, of course — and no amount of improved technology, political will, or sheer desperation would ever again make next year a bigger year for oil extraction than this year. Ever.

The barrels-by-year chart would have a “peak,” and the future would be a long slide into the foothills of the mountain-shaped data.

Comparing numeric charts to the silhouette of a mountain range is nothing new, but only certain types of data give that distinct, Mount-Fuji-style peak. In most charts, we can imagine more mountains rising in the future. Stock prices can bounce back, a dieter’s weight can yo-yo, and falling academic test scores can be raised through better teaching, smarter students, or even pharmacology. ;)

Sometimes, though, there’s no bouncing back. 1971 was Peak Bell-Bottoms. LaserDisc sales maxed out in 1996, and a resurgence is impossible to imagine. Peaks of this sort happen all the time, with us only dimly aware, like a mountain range gliding by outside a car window. We call this progress, and it’s generally a good thing.

But I’m a little concerned about one particular peak, which seems to be looming in the windshield.

Feathers, Balls, Pisa, and Intuition

Galileo is the plucky young hero and Aristotle the stern-faced villain in the story of the Leaning Tower of Pisa. Galileo dropped two balls of differing weights, and they struck the ground at exactly the same time — amazing the onlookers.

That’s two small ker-plunks for man, one giant mic-drop for science.

When we hear this story today, it’s told as a celebrated, watershed moment in our understanding of the natural world, with the baked-in moral that experiment trumps authority. It doesn’t matter who says so; what matters is whether any regular Joe with the same experimental set-up can reliably make the same thing happen.

That’s the way the story normally gets told, and I’m not here to disagree.

But it’s easy to overlook the fact that it took nearly two thousand years for somebody to come along and successfully nuh-uh Aristotle’s yeah-huh.

This is not just because Aristotle had intellectual street cred (although he did). It’s mostly because Aristotle’s assertion about falling objects — leaves and feathers fall slowly, anvils and rutabagas fast, because the latter are heavy — is damned good as a rule of thumb.

Rules of thumb stick around. If Aristotle had said “rainwater is flammable” or “beavers can fly,” he wouldn’t have been a celebrity you can name-drop thousands of years later; he would have been a forgotten crank.

And Galileo could never have carved his own place in history by debunking a crank.

Aristotle vs. Galileo was a fair fight.

Aristotle had been flicking off challengers for the better part of two thousand years. Technically, anybody with two rocks could have “been” Galileo. So what was it that kept Aristotle sitting so proud on (in?) his laurels all that time when he was demonstrably wrong?

Aristotle was wrong, but his claim was plausible.

Let’s be honest: The “heavy stuff falling hits hard” rule is good enough for most of the people, most of the time. That’s still true now. If you’re looking to ruin someone’s day from your high-altitude perch, take it from Aristotle and grab something heavy.

Yes, you and I both know that, technically, this has to do with momentum, not speed. More particularly, it’s about the kinetic energy transferred from the grand piano you dropped into the skeletal structure of your unsuspecting victim. But those are just details.
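(For the curious, those details sketch out quickly. This is just a back-of-the-envelope version, assuming a drop from height h with no air resistance, where g is gravitational acceleration and m is the mass of whatever you dropped.)

% Galileo’s point: impact speed depends only on drop height h,
% never on mass, so the piano and a pebble hit at the same speed:
\[ v = \sqrt{2gh} \]
% Aristotle’s consolation: the kinetic energy delivered on impact
% scales directly with mass m:
\[ E = \tfrac{1}{2}mv^2 = mgh \]

In other words, everything falls at the same rate, yet the heavy thing still does proportionally more damage on arrival. Galileo and Aristotle each get to keep a piece of the truth.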

Those sorts of details, however, are emblematic of our much-improved understanding of reality nowadays.

And it’s in this very understanding of reality that I think we’re probably nearing a peak.

Pick an author, any author.

Who is your favorite novelist? Maybe it’s Michael Crichton or Alice Walker or Elmore Leonard. Or we can broaden the field to stage and screen, so you can draft a Shakespeare or a Spielberg — but let’s keep things restricted to fiction.

Okay, got a favorite in mind?

Now imagine a person who only knew about other humans from your favorite author’s books (or movies, or whatever). We’ll skip the details of how this theoretical person learned to read or any of that. But ponder for a moment: Without ever participating in actual human social interactions, could even the best author’s work give our imagined person a passable understanding of what other people are really like?

There’s room for argument, but my answer is “no.”

With all due respect to the world’s great authors, when you read their work, you are getting their interpretive-dance version of a human being. It’s an artfully constructed approximation, convincing only when viewed from the proper perspective.

Like Aristotle’s Rule of Rocks and Feathers, a well-written fictional character isn’t a close match to reality, but it is a good fit for the circumstances.

The Inevitable Objection.

I can hear someone saying: “Wait a minute. I deny the findings in your thought experiment. Who says that someone raised on fiction wouldn’t understand real humanity? Fictional characters may not have cholesterol numbers, second cousins twice removed, and other trivial details — but they can be better than reality. Real reality is mostly banal, unremarkable crap. That’s why most of it isn’t worth turning into fiction.”

I hear you, but that’s a different argument.

There’s an old saying in philosophy, usually credited to David Hume: “You can’t get an ought from an is.” Put another way: Morality seekers can’t infer how things should be from the matter-of-fact state of the world.

I’m flipping that argument here. You also can’t make an is from an ought. The best fiction in the world is still just a rendering of how the author thinks the world should be portrayed. That rendering can be extremely worthwhile, but worthwhile doesn’t make it accurate.

Accuracy and usefulness are both nice ideas. It’s natural to think of them as moving forward together and being mutually reinforcing. And to a point, they are. But somewhere toward the “very accurate” and “very useful” ends of the spectrum, their paths diverge. More accurate becomes less useful, and vice versa.

(If you can’t imagine factual accuracy being counterproductive, I’ll be happy to introduce you to my thirteen-year-old nephew, who would like to recite for you all the digits of pi he has memorized.)

I’ve now laid out all the pieces of my case for “Peak Reality,” a case I’ll bring together in Part 2 of this post, which will be published in just a couple of days. By now, you’ve probably got the gist: Galileo may be right, but Aristotle is making a comeback.


This post was originally published in the Brain Breakfast newsletter from Smart Drug Smarts. ;)

