The Runaway Engine of Society

Author’s note: I originally wrote this essay, in almost its current form, in an intense burst in December 2017. I didn’t publish it at the time because it was too far removed from the types of Product Management-related essays I normally post here, and because the conclusion was too dark. Since then I’ve shared the unpublished draft with perhaps a hundred people, and also fallen down a rabbit hole learning about complex adaptive systems. The farther I’ve gone on that intellectual journey, the more I have come to see that some of the proto-concepts I developed here could be improved and sharpened to better fit into the more rigorously defined body of work that real experts on complex adaptive systems have created. If I were to sit down to write this essay from scratch today, I would frame many parts differently. However, I ultimately concluded that these concepts would never be done evolving, and I might as well publish the essay publicly as a somewhat-out-of-date snapshot of my thinking, imperfect though I know it to be.

One can view society — a term I’ll apply to the entirety of humanity and its output — as an emergent system (specifically, a complex adaptive system) whose goal is to minimize suffering and maximize thriving. That goal is itself emergent, a fundamental property of the system, just as genes’ goal of maximizing copies of themselves results inexorably from the fundamental mechanism of natural selection.

Society is an engine that identifies and then capitalizes on good ideas. The more good ideas that are found, the faster and hotter the engine can run, churning out more good ideas, allowing society to run again faster and hotter, in a powerful feedback loop.

Over the eons the mix of “technologies” powering the engine has changed as better alternatives are found and come to provide larger and larger portions of the power — just as today’s BMW i3 is an electric vehicle but still includes a gasoline range extender.

We like to think of human ingenuity as the primary driver for society, and in some ways it is. But there are often emergent technologies that are more powerful, and that complement, extend, or supersede human ingenuity. The original emergent technology for society (or rather for the precursor to society) was biological evolution, then culture, then kingdoms, then capitalism paired with democracy, then the internet — and today we are witnessing the emergence of a new technology more powerful than all of the rest: The Algorithm.

As the engine of society runs faster and faster, its underlying fundamentals — some of them worrying — are becoming increasingly visible.

The Graph of Good Ideas

Part of running the engine of society is just keeping it operating as it does today — harnessing the wants and desires of agents in the system as energy to do useful work. But another part is constantly expanding what it knows how to do, transcending the limitations of today. This can be thought of as identifying new good ideas to put into practice.

Finding good ideas is not easy. But one of the properties of a good idea is that once it is found it is self-evidently a good idea, and as others learn about it they rush to replicate it.

How good a given idea-exploring technology is comes down to a number of dimensions:

  • The Cost of Generating New Ideas The cost of generating a new idea fundamentally limits the number of ideas that can be explored. The higher the cost, the lower the rate of new ideas explored (for a given amount of total energy expended).
  • The Maximum Distance Between Ideas Ideas don’t come fully formed out of nowhere — they are always adjacent to other, previously-discovered good ideas. A given technology is only able to think so far afield, limiting the space of possible ideas that can be explored in a given amount of time. A high cost of idea generation combined with a small maximum distance between ideas puts an enormous damper on the engine’s output. This interconnectedness is what makes the graph of good ideas a graph.
  • How Well it Focuses on Ideas That Are Likely To Be Good Ideas that are near good ideas are more likely to be good ideas themselves. Good technologies will spend more of their relative effort exploring ideas that are most likely to be good — while still exploring other ideas, to avoid getting stuck in a particular local maximum.
  • The Difficulty of Judging an Idea Once you have an idea to consider, how hard is it to judge whether or not it’s a good idea? Small ideas are often easy to judge, and are often self-evidently good. But some ideas are much larger and harder to comprehend, making them much more costly to judge — or sometimes, simply impossible to judge with a given technology.
  • The Friction of Communication about an Idea Once a good idea is found, how long does it take for others to learn about the idea and replicate it? That is, how long does it take for that new good idea to take hold and become the default across the entire system?

This description reveals that the system can be modeled as a graph, and this class of technologies can be thought of as graph-searching technologies. In other contexts this is conceptualized as a fitness landscape to traverse.

The better a technology performs on these dimensions, the more powerful a graph-search technology it is — the faster it can run, either through more energy available to use, or by being more efficient with the energy it has.
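These dimensions can be made concrete with a toy model. The sketch below is purely illustrative (the function `explore`, the parameters `max_distance` and `exploit_bias`, and the landscape itself are all made-up assumptions, not anything from real complex-systems literature): it searches a one-dimensional fitness landscape, where `max_distance` caps how far afield a new idea can be, `exploit_bias` controls how much effort focuses on promising regions, the `fitness` call is the judging step, and appending to `known` stands in for communicating a good idea.

```python
import math
import random

def explore(fitness, start, steps=1000, max_distance=1.0,
            exploit_bias=0.9, seed=42):
    """Toy graph search over a one-dimensional fitness landscape."""
    rng = random.Random(seed)
    best, known = start, [start]
    for _ in range(steps):
        # Focus: spend most effort near the best idea found so far,
        # but sometimes branch off any previously discovered idea.
        base = best if rng.random() < exploit_bias else rng.choice(known)
        # Maximum distance: a new idea must be adjacent to a known one.
        candidate = base + rng.uniform(-max_distance, max_distance)
        # Judging: keep the new idea only if it beats its parent.
        if fitness(candidate) > fitness(base):
            known.append(candidate)  # communication: add to the shared graph
            if fitness(candidate) > fitness(best):
                best = candidate
    return best

# A landscape with a mediocre peak at x=0 and a much better one near x=6.
def landscape(x):
    return math.exp(-x ** 2) + 2.0 * math.exp(-(x - 6.0) ** 2)
```

With `max_distance=1.0` a search starting at 0 can never cross the valley, and it stays stuck on the mediocre local peak; with `max_distance=8.0` it reliably finds the much better peak near 6. That is precisely the damper described above: a small maximum distance between ideas limits what the engine can ever discover.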

The Limits of Human Ingenuity

Human ingenuity is one technology that we can use to explore the graph of good ideas.

It’s a powerful technology, and one that we romanticize and hold dear. But it’s an expensive and surprisingly limited technology to apply. Human ingenuity comes almost entirely from a deliberate application of our System 2 thinking, which humans generally try to avoid as much as possible. That makes the cost of generating new ideas quite high.

But there’s an even more fundamental limitation to this technology: the limits of human understanding. Humans have only a limited amount of working space for ideas. Every idea has a certain irreducible complexity — a nugget of complexity that cannot be factored out into smaller and smaller sub-ideas. If an idea’s nugget of irreducible complexity is too big to fit into a human head, it simply cannot be comprehended. This means that when judging an idea, humans are also at a disadvantage, because many good ideas simply do not fit in a human brain.

This limit of human understanding also constrains the maximum distance of ideas that we can consider. Humans simply cannot make particularly large cognitive leaps, in the grand scheme of things, because only a certain size of leap can fit in a human brain. Most apparent eureka moments in history actually came only after the steady accumulation of supporting ideas that made the target idea one whose time had come.

One way humans get around this limitation is by taking a higher-level perspective, trading detail for scope: entertaining abstract ideas. But certain ideas simply are not coherent without the proper amount of detail.

Humans have discovered a practical technology to help address this problem: science. Science is an inductive build-up of human-sized layers of understanding that allows comprehension of high-level ideas backed by concrete detail. The pattern is evident at a macro level — psychology builds on biology builds on chemistry builds on physics builds on math — but holds true at the micro level as well.

Each layer in the system is able to fit into a human mind, because each layer was laid down by human minds. Over time these layers naturally accrete, allowing human understanding to reach truly dizzying heights. The perspective is powerfully high level, but at every point if a human needs to understand more detail, they are capable of peeling back the layer below and comprehending it, too. That means that even if they can never comprehend the whole, they are always able to at least locally understand any given sufficiently small chunk of it.

Norms of intellectual honesty, peer review, and publication of findings help ensure that only good ideas are added to this network, and that scientists around the world can help participate in the grand project together with no central leader.

This network of good ideas that science has produced is truly expansive, but it is fundamentally constructed of ideas that are all human-sized, separated by human-sized leaps of understanding. An entire universe of ideas simply cannot be addressed by human-powered science — the possibility space is too vast to be explored before the heat death of the universe. Other, more powerful, technologies are required.

Emergent Technologies

A complex adaptive system is one where the interrelated decisions and actions of individuals add up to a whole that is bigger — often many orders of magnitude bigger — than the sum of its parts. Truly simple base actions can compound to astounding levels of complexity, like bird flocking, which can be fully explained by only three simple rules.
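Those three rules are the ones Craig Reynolds used in his classic “boids” model: separation, alignment, and cohesion. A minimal two-dimensional sketch, where all the weights and radii are arbitrary illustrative values:

```python
import random

def step(boids, radius=5.0, too_close=1.0, w_sep=0.05, w_ali=0.05, w_coh=0.01):
    """One tick of the flock; each boid is ((x, y), (vx, vy))."""
    updated = []
    for (x, y), (vx, vy) in boids:
        ax = ay = 0.0
        # Neighbors within `radius` (Manhattan distance, for simplicity).
        near = [b for b in boids
                if b[0] != (x, y) and abs(b[0][0] - x) + abs(b[0][1] - y) < radius]
        if near:
            n = len(near)
            # 1. Separation: steer away from boids that are too close.
            for (nx, ny), _ in near:
                if abs(nx - x) + abs(ny - y) < too_close:
                    ax += (x - nx) * w_sep
                    ay += (y - ny) * w_sep
            # 2. Alignment: match the average heading of neighbors.
            ax += (sum(v[0] for _, v in near) / n - vx) * w_ali
            ay += (sum(v[1] for _, v in near) / n - vy) * w_ali
            # 3. Cohesion: drift toward the neighbors' center of mass.
            ax += (sum(p[0] for p, _ in near) / n - x) * w_coh
            ay += (sum(p[1] for p, _ in near) / n - y) * w_coh
        vx, vy = vx + ax, vy + ay
        updated.append(((x + vx, y + vy), (vx, vy)))
    return updated

rng = random.Random(0)
flock = [((rng.uniform(0, 10), rng.uniform(0, 10)),
          (rng.uniform(-1, 1), rng.uniform(-1, 1))) for _ in range(20)]
for _ in range(100):
    flock = step(flock)
```

Nothing in `step` mentions a flock, yet repeated application tends to produce coordinated, flock-like motion: the macro behavior appears nowhere in the micro rules.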

Evolution is a powerful graph-search algorithm that underlies the logic of all complex adaptive systems. It arises as a fundamental property of systems whenever there is a population of replicators that can pass down information to their kin in an imperfect process — no matter if the individuals are made of carbon or silicon. Different substrates have different amounts of friction, which changes the ‘metabolic rate’ or ‘clock speed’ of the system. The precise characteristics of the given substrate, and what kinds of graph-search they permit, can be thought of as emergent technologies.
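That logic fits in a few lines of code. In this sketch (the target string, population size, and mutation scheme are all arbitrary choices for illustration), imperfect replication plus selection reliably climbs the fitness landscape toward the target, with no designer anywhere in the loop:

```python
import random

rng = random.Random(0)
ALPHABET = "abcdefghijklmnopqrstuvwxyz"
TARGET = "society"  # arbitrary stand-in for "fit to the environment"

def fitness(genome):
    # How well suited the organism is: letters matching the environment.
    return sum(a == b for a, b in zip(genome, TARGET))

def replicate(parent):
    # Imperfect replication: copy the parent with one random error.
    i = rng.randrange(len(parent))
    return parent[:i] + rng.choice(ALPHABET) + parent[i + 1:]

# A population of replicators passing information to kin imperfectly.
population = ["".join(rng.choice(ALPHABET) for _ in range(len(TARGET)))
              for _ in range(50)]
for generation in range(2000):
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        break
    # Selection: the fitter half survives and replicates; the rest die out.
    survivors = population[:25]
    population = survivors + [replicate(rng.choice(survivors)) for _ in range(25)]
```

Swap in a different fitness function and the same loop searches a completely different space — which is what makes the algorithm indifferent to its substrate.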

Science writ large can be thought of as a complex adaptive system, because there is no central leader, and the local human-sized insights and explorations are all added together into one huge shared system. Science is a complex adaptive technology that runs on human ingenuity.

But human ingenuity is only one type of energy that a complex adaptive system can be built out of. There are many more that have come into being or been discovered over the eons, and at various times each has been the dominant graph-search technology in society’s engine.

One property that all complex adaptive systems — and any emergent systems more generally — share is that, at a fundamental level, they cannot be fully comprehended by humans. Humans can understand them well at a micro-level, and can often build overarching abstractions to comprehend them to varying degrees at a macro-level. But understanding why a complex adaptive system with a given starting state yielded a given result is simply not something that a human can comprehend concretely, because it cannot fit in our heads.

These complex adaptive systems are often extremely good at identifying new good ideas, even ones that would have escaped the notice of any human being. They can make leaps that are longer than would fit in any human’s understanding — and they can comprehend ideas that humans literally could not.

In an effort to understand these systems humans have historically used science to chip away as much of the extraneous complexity in complex adaptive systems as they can, factoring them out into separate comprehensible good-idea chunks. We do this by seeing how similar systems have behaved over history, ensconcing observed patterns in higher-level theories that explain the available data, and then refining as more data comes in. But even once you chip away all of the extra complexity, the irreducible nugget at the center of why a specific complex adaptive system behaves precisely the way it does will forever evade our grasp.

One powerful tool for studying complex adaptive systems is to rely on a technology we built using the emergent technology of science: the modern computer. Computers are capable of running all of the calculations to simulate sufficiently-small complex adaptive systems, which at least allows us to run experiments on them to reduce the cost of generating ideas about how they might work to possibly factor out.

Emergent graph-searching technologies can also be conceived of as good ideas in and of themselves — just big ones. As the engine of society uncovers more good ideas that increase the power of our current technologies or uncovers fundamentally new technologies, the whole engine becomes more powerful. The biggest idea that a given technology can comprehend goes up, which allows even bigger ideas to be uncovered. This feedback loop means that the engine of society can uncover exponentially bigger ideas over time.

As the mix of technologies that society depends on shifts, the change is often subtle and smooth — it’s only in retrospect that the magnitude of the shift is evident. Then the technology can be explored and understood, iteratively increasing its efficiency until the next transformative technology is discovered. These transformations tend to happen at an increasing rate as the engine runs ever faster. It’s still the same engine as it was before, just moving into lower and lower friction substrates as it identifies new technologies to harness.

These technologies are not good or bad on their own — they just are. Each of these technologies is merely a means to a greater end, and to the extent that they are not aligned with that end, they either have a bug or a fundamental, unfixable flaw. Every technology — even the ones capable of producing extraordinary good — is capable of producing truly reprehensible outcomes.

The Technology of Biological Evolution

Circa 4.4 billion years ago

Evolution in biological substrates is the first emergent graph-search technology underlying society, but in the grand scheme of things it’s horrendously inefficient.

The cost of exploring an idea is quite high — it’s literally the cost of an entire individual’s whole reproductive lifespan, meaning that it operates on a multi-generational timeline. The maximum distance between ideas is also somewhat small, because it relies on undirected imperfections in DNA replication. The focus is mixed, because DNA transcription errors are random and undirected. The ability to judge the quality of even large ideas is extremely high, though, as the near-miraculous complexity of even the simplest living organism can attest, and bad ideas (organisms ill suited to their environment) will die quickly, leading to useless or maladaptive traits quickly evaporating. And the friction of a good idea spreading can be high, because it must be geographically dispersed across multiple generations.

With this inefficient graph-search technology, the perfect is the enemy of the good. The result is that a number of basic but powerful heuristics are baked deeply into our firmware. These heuristics lead us to make decisions that are short-term advantageous on average, and at least directionally accurate in the long-term.

To name just a few:

  • Cravings When calories are precious and hard to find, every bit of salt, sugar, or fat you find should be gorged on like there’s no tomorrow — because if you don’t eat enough, there literally might not be.
  • Gossip When we lived in bands of 150 or so people, the bonds of familial ties were not sufficient to maintain order. That meant it was incredibly important to know everything you could about the others around you — who was a cheat, who would steal, who wouldn’t return favors — so you knew who to interact with and who to shun.
  • Tribalism One of the more powerful — and mortally dangerous, in the limit — heuristics is us-vs-them. Everyone you meet who isn’t one of “us” — who isn’t part of the tribe — is not to be trusted or aided in the slightest. In some contexts, a “them” is literally sub-human, meaning that any means against them that advances our own end is morally good.

The high-friction and high-stakes environment functioned as a moderator, meaning that these heuristics could never be taken to excess — you simply never got a chance. And they served us extremely well.

The Technology of Culture

Circa 2 million to 50,000 years ago (estimates vary)

Over time humans evolved the ability to speak and understand one another. This allowed the algorithm of evolution to jump into a lower-friction substrate of language, allowing an even better technology to flourish: culture.

Instead of good ideas taking generations to bake into our firmware, a good idea, once discovered, could be shared widely via the spoken word with a tiny fraction of the friction of evolution.

Groups that had an effective culture — a culture that allowed a sustainable, thriving group — tended to flourish, and over many generations replace cultures that were less suited to the environment. Of course, over the millennia this particular technology has also led to the horrendous overwhelming of one group by another, sometimes explicitly in the name of “progress”.

Later, the written word became a powerful accelerant for sharing culture across time and distance, reducing the friction of communication even further.

The Technology of Kingdoms

Circa 3,000 years ago

Complex adaptive systems tend to have emergent hierarchy, and as society became more of a complex adaptive system itself, it too happened upon a powerful form of hierarchy in the form of coherent kingdoms. Having a hierarchy of control at first sounds like a step back: now there’s one empowered leader, and power is corrupting.

There was one important benefit, however: it flipped what had been a zero-sum game (competition between groups) into a positive-sum game, where from the leader’s perspective, all of their subjects surviving was more important than any one individual succeeding. The result was less overall energy spent in violent conflict, allowing more energy to be spent on sustaining and innovating the society.

Over time the technology organized more and more of the world into a hierarchy of control, from nations down to states down to cities.

Cities were another accelerating technology: having so many people close together made the friction of sharing ideas exponentially lower, allowing cities to become hotbeds of innovation.

The Technology of Capitalism and Democracy

Circa 300 years ago

In the past few hundred years capitalism and democracy have taken hold as a powerful new technology. (I use the terms capitalism and democracy here in only the broadest strokes.) They fix the bug of corrupt leadership and the inadequacies of central planning (which is fundamentally limited by human ingenuity), relying instead on an incredibly efficient complex adaptive system.

The central insight is clear only in retrospect. The inherent drive of self-interest that fundamentally arises from evolution’s existential imperative is a powerful force that cannot be wished away, and can lead to ruin. It can, however, be harnessed to do useful work. By checking ambition with ambition, the runaway effects of self-interest can be muted, while still aligning that motive energy in a direction that is correlated with building a thriving society.

Every technology is dangerous in the limit, and capitalism is no exception — it has a tendency to lead to competition-destroying monopolies, which we’ve imperfectly addressed via antitrust regulation, and also a bug where structural inequalities get exacerbated, breaking the ability of ambition to check ambition and leading to increasing inequality.

Capitalism, backed by liberal democracy, is the best technology we’ve found yet at aligning nearly all of society’s productive energy towards a thriving society, allowing the engine to run hotter and faster than it did before by orders of magnitude.

The Technology of The Internet

Circa 30 years ago

Capitalism’s focus is extraordinarily strong, but it means that powerful incumbents will focus their energy on incremental improvements to the local maxima they have discovered — and often fundamentally oppose or undercut the discovery of new maxima that might threaten their position.

Capitalism delivered a powerful new technology, however: the internet. The internet reduced the friction of distributing information to nearly zero. Many of the previously powerful gatekeepers had power primarily due to their ability to distribute information, which has led to the upending of previously-stable business models, and a freer, more open transmission of ideas than was ever possible before.

The result is that when any good idea is discovered, anywhere in the world, it can spread around the entire globe in literally hours. The engine is able to run hotter than ever before.

The Technology of The Algorithm

Circa 4 years ago

The technology of the internet created a near-infinite sea of ideas to wade through. The ability to identify good ideas in this overwhelming sea became a serious obstacle.

A new emergent technology was uncovered that helped us make sense of this sea of data: neural networks.

Neural networks are not self-evidently a good idea. They’d been bouncing around academia for many decades, and many people thought they were a dead end. But like any emergent technology, the power of the idea eluded our ability to understand it at first. Now that the awesome power is beginning to dawn on us, a trend that is even bigger than neural networks as they exist today has become clear: the all-powerful, inscrutable Algorithm.

Computing up until this point has been an emergent system in the style of science: impossible to comprehend all at once, but constructed of a collection of human-sized ideas that can be reasoned about cleanly.

But The Algorithm is different, because these are truly emergent systems. We understand how they work at the micro level, and we understand how they work at the macro level. But it is impossible for humans to comprehend how they work in detail (although there are efforts to understand as best we can). That means that despite their sometimes super-human abilities, they can still make mistakes that we cannot even comprehend.

The Algorithm is complemented by another technology, The Feed. The Feed is a user experience pattern that aggregates content from many sources into one infinitely scrollable list, allowing masses of users to quickly sift through large amounts of information and content with minimal friction. These aggregate consumption patterns can be harnessed to improve The Algorithm in a self-reinforcing loop.

The Algorithm, combined with The Feed, creates the lowest-friction graph-searching technology the world has ever seen.
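The self-reinforcing loop between The Feed and The Algorithm can be sketched in a few lines. Everything here is a made-up toy (the items, their hidden “appeal” scores, and the deterministic click rule are all assumptions, not how any real feed works): the ranker surfaces whatever has the best observed click-through rate, and each response feeds straight back into the ranking.

```python
# Hypothetical items, each with a hidden "appeal". To keep the toy
# deterministic, the simulated user clicks anything with appeal above 0.5.
appeal = {"A": 0.1, "B": 0.9, "C": 0.3, "D": 0.7}
clicks = {item: 1 for item in appeal}  # smoothed click counts
shows = {item: 2 for item in appeal}   # smoothed impression counts

for _ in range(1000):
    # The Feed: surface the item with the best observed click-through rate...
    top = max(appeal, key=lambda item: clicks[item] / shows[item])
    shows[top] += 1
    # ...and The Algorithm: fold the user's response back into the ranking.
    if appeal[top] > 0.5:
        clicks[top] += 1
```

Note the dynamic: once “B” gets its first click it is shown forever, while “D”, also genuinely appealing, is never tried again. The loop optimizes raw engagement, not any broader notion of good.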

No one could have guessed that children would find opening Kinder eggs so unsettlingly compelling. But when someone happened across the idea, and made the prototypical Kinder egg video with only $5 and no more than 30 minutes of effort, The Algorithm quickly noticed its potential and distributed it widely. Other content creators, spurred on by an efficient capitalist system, recognized the potential and dutifully pumped out Kinder egg videos until the market was awash in them. And this kind of intense, super-hot feedback cycle, running at terrifying speeds, happens constantly.

It’s not meaningful to talk about The Algorithm in isolation, just as it’s not reasonable to talk about Culture in isolation. Culture as a technology didn’t go away with the rise of Kingdoms and Capitalism; if anything it arguably got stronger. As more emergent technologies are added to the mix, they build an ever more powerful, complex graph-searching meta-technology.

A Terrifying Possibility

The Algorithm can sift through ideas at an alarming rate — and The Internet, paired with Capitalism, ensure a near-infinite sea of ideas to sift through. But now we must ask a question that up until now has never needed to be examined. What does society consider “good”? Earlier I casually asserted that a society’s fundamental fitness function was minimizing suffering and maximizing thriving. But even that formulation is ambiguous: what, precisely, does it mean for a society to “thrive”? We’re becoming astoundingly efficient at exploring the graph of ideas. But what is this emergent beast of an engine searching for?

In the past couple of years, a terrifying possibility has become increasingly clear. The fitness function of this entire engine of society may actually be simply optimizing for the imperfect heuristics that evolution originally put in place and are baked into our firmware.

Those heuristics served us well in a high-friction evolutionary environment. But we never got rid of them, even as we added more and more powerful technologies to the mix at higher layers of abstraction. Those very same heuristics, when taken to the extreme in our near-zero friction environment, are ruinous.

Cravings, in the limit, lead to obscene junk food that is impeccably engineered to hit our food pleasure centers. Enormous amounts of self-control are required to avoid becoming obese.

Gossip, in the limit, leads to people being sucked into the gravity well of “engagement”, endlessly checking their feeds for viral news about their friends, or weaponized memes that elicit strong emotions. When they post, they post only curated highlights, leaving everyone else reading the feed with unrealistic expectations, feeling depressed.

Tribalism, in the limit, has led to unspeakable calamities like genocides. But it also has helped create our Post-Truth environment, where one party in the US has embraced an intellectual frame that actively rejects as “Fake News” any facts that are inconvenient to them, making intellectually honest debate impossible. And even if the major distribution points make an effort to feed consumers their intellectual vegetables, consumers are more empowered than ever before to route around them and find the intellectual junk food that they crave: stories that feed their confirmation bias.

None of us are above the overwhelming evolutionary impulses baked into our firmware. We’ve simply never been in an environment that is so perfectly engineered to satisfy those cravings as we now are.

If we don’t intervene carefully, the inexorable end state of this situation is the vast majority of humans spending their lives in artificially-created environments that are perfectly engineered to make us happy in the short term, with increasingly autonomous machines and infrastructure searching for ways to more efficiently and sustainably keep us happy, before ultimately deciding it’s easier to stop worrying about the humans in the first place.

Of course, almost all past doomsday predictions have failed to come to pass, and even look silly in retrospect — and this is likely no different. But it’s an important enough possibility to at least seriously consider.

Aligning the Short and Long-Term

The engine of society has brought almost unimaginable benefit to the world as a whole, and on net it is perhaps the single most positive force in the world.

But anything in excess is unhealthy. The trick is to find the right balance point that allows us to get as much of the good as possible while muting or controlling the bad.

The engine, after all, is merely a means to the larger end of a society that minimizes suffering and maximizes thriving. But that goal is a long-term one. Our short-term desires, baked into our brains by evolution, are much more overpowering day-to-day, and lead many of our decisions.

Ignoring our short-term goals is doomed to calamitous failure — it’s not enough to simply wish away our short-term desires. The answer is to always keep the long-term goal in view as our north star. At every step, we must look for opportunities that just so happen to have our short-term and long-term goals mostly in alignment, and take that path.

As a society, we can be intentional about aligning our systems of governance and regulation to put up subtle guardrails that nudge people towards the long-term more sustainable path, and away from the tempting short-term spoils. The fundamental insight of both capitalism and liberal democracy is that ambition on its own is a steamroller, but ambition checked by ambition allows much of the positive to be captured while limiting the negative. In a similar vein, we must identify ways to harness this awesome energy towards useful outcomes.

We must intentionally seek out additional opportunities where seemingly small differences in configuration, when played out in practice, lead to vastly different outcomes that are better in the long term. This is an enormously challenging exercise, which requires rigorous collaborative debate, bringing relevant domain expertise to bear, and fundamentally an intellectually honest stance.

Written by

Generalist fascinated by complex adaptive systems. Product Manager by day. All opinions my own.
