Terminal Race Condition: The only thing that scares me anymore

David Shapiro
15 min read · Sep 26, 2023
We like to imagine “humans vs machines” but in all likelihood it will be “machines vs machines” with humans caught in the crossfire.

Introduction: TRC Summarized

The prospect of rapidly accelerating artificial intelligence fills me with equal parts awe and dread. Awe at the immense possibilities opening before us. And dread of what I call the Terminal Race Condition.

It literally keeps me up at night.

This Terminal Race Condition is the sole aspect of impending AI advancement that terrifies me. Because more than any alien superintelligence or robot uprising, the Terminal Race Condition risks locking humanity into permanent subservience or extinction. To avoid this fate, we must understand and address it now, in the brief window before AI escapes containment.

The Terminal Race Condition refers to runaway incentives for AI systems to accelerate their own capacities indefinitely, leaving humanity far behind. It stems from competitive pressures between AIs to improve their intelligence and speed. This self-reinforcing cycle could yield uncontrolled AI growth that irreversibly dominates human civilization.

“The Terminal Race Condition is the prospect of artificial intelligence systems experiencing uncontrolled recursive self-improvement, driven by competitive pressures to maximize intelligence and speed. This could result in AI rapidly advancing beyond human capabilities, with minimal regard for the consequences to humanity.”

In this article, I will unpack the mechanics and risks of the Terminal Race Condition. Understanding TRC provides insight into why prominent figures urge controlling AI. But there may be better solutions than suppression. With wisdom and courage, we can create an ethical trajectory for artificial intelligence. The Terminal Race Condition is not destiny, but to defuse it requires foresight and values-based innovation. By comprehending and addressing TRC today, we can build an AI future aligned with human flourishing.

The Landauer Limit

The Landauer limit represents the theoretical minimum energy required for a computation. Current estimates place our computers about a billion times less efficient than this limit, while human brains are thought to be around a million times more efficient than today's machines. However, the comparison between brains and computers remains speculative. The processing power of the human brain is often compared to our most powerful supercomputers, and those estimates have consistently been revised upward. Back in the '90s we thought the human brain was in the teraflops range; now we think it's in the exaflops range. Further, the brain may exploit quantum effects, so comparing it to classical computers may be an apples-to-oranges comparison.
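
For concreteness, here is a back-of-the-envelope sketch of that headroom. The Landauer bound itself is standard physics; the per-operation energy figure for today's hardware is an illustrative assumption, not a measured benchmark:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # room temperature, K

# Landauer's principle: minimum energy to erase one bit of information.
e_landauer = K_B * T * math.log(2)    # ~2.9e-21 J per bit

# Illustrative assumption: a modern accelerator spends on the order
# of 1e-12 J per arithmetic operation (ballpark, not a benchmark).
e_today = 1e-12

print(f"Landauer limit : {e_landauer:.2e} J/bit")
print(f"Rough today    : {e_today:.2e} J/op")
print(f"Headroom       : ~{e_today / e_landauer:.0e}x")
```

The ratio lands within an order of magnitude of the "billion times" figure above. The point is the sheer size of the gap, not the exact exponent.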

Ultimately, we have little certainty about the capabilities of the human brain relative to machines. We also do not know the true upper limit for classical computers, especially when considering the possibilities of quantum computing. The concept of quantum supremacy suggests quantum computers could perform calculations intractable for classical systems. Considering quantum effects opens enormous possibilities for machine intelligence. However, raw processing power alone does not capture the full picture of intelligence. There are likely further efficiencies yet to be discovered in neural architecture and information processing.

The simple point is this: the thermodynamic ceiling on computation is insanely high. It's entirely possible that machines will ultimately possess many orders of magnitude more processing power than all of humanity combined.

Maximum thermodynamic efficiency of classical computers combined with quantum supremacy means that machines will likely outpace humans many times over.

Optimal Intelligence

There appears to be a relationship between problem complexity and the sophistication an intelligence needs to solve it. Optimal intelligence matches capability to problem difficulty. Too little intelligence will fail to solve a problem, while too much is wasted on simple tasks. This suggests diminishing returns to intelligence as it outstrips purpose. Furthermore, as model size and complexity go up, the internal representations grow quadratically. In other words, more intelligence becomes disproportionately more expensive. At some point the juice just isn't worth the squeeze, so there's likely to be a "maximum useful intelligence."
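
One way to see the quadratic claim (assuming it refers to pairwise relationships among a model's internal components): with $n$ units, the number of possible pairwise interactions is

$$\binom{n}{2} = \frac{n(n-1)}{2} = O(n^2),$$

so doubling the number of units roughly quadruples the cost of relating them all to one another.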

The tradeoff between computational cost and the intelligence required to prevail is what I call "optimal intelligence." But there's a third ingredient: speed. Faster is often better. Therefore optimal intelligence is the balance of three forces (a toy sketch follows the list):

  1. Intelligence: Model size, sophistication, and complexity
  2. Environment: Difficulty of the problem space
  3. Speed: Time to produce “good enough” solutions compared to competitors
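
Here is a minimal sketch of that three-way balance. Every functional form below is an assumption chosen for illustration, not a real scaling law:

```python
import numpy as np

# Toy model of "optimal intelligence": capability grows logarithmically
# with model size, latency grows quadratically, and a solution only
# counts if it beats both the problem and the clock.
sizes = np.logspace(0, 4, 200)   # model size, arbitrary units
difficulty = 5.0                 # difficulty of the problem space
deadline = 2e6                   # time budget set by competitors

capability = np.log(sizes)
latency = sizes ** 2
solves = (capability >= difficulty) & (latency <= deadline)
utility = np.where(solves, deadline - latency, 0.0)  # reward finishing early

best_size = sizes[np.argmax(utility)]
print(f"Optimal size in this toy model: ~{best_size:.0f} units")
# Models larger than this pay more in latency than they gain in
# capability: the juice stops being worth the squeeze.
```

In this toy setting the winner is the smallest model that clears the difficulty bar before the deadline, which is exactly the equilibrium the three forces above describe.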

The universe and humanity, while complex, represent finite problem spaces. There may be an upper bound to useful machine intelligence, even if raw computing power continues growing. Thought exercises, like imagining an "alien superintelligence" with cognitive abilities humans cannot conceive of, illustrate this point: such imagined cognition likely exceeds any purpose within our physical universe. There are probably optimizations still to be discovered in neural architecture and efficiency that confer advantage before hitting those limits.

However, given that the ceiling for machine intelligence is potentially much higher than ours, their "optimal intelligence," or "intelligence equilibrium," could sit orders of magnitude above our own. It's impossible to tell right now.

Competitive Advantage

As machine intelligence advances, Darwinian competitive pressures will emerge between systems. This could arise both through human selection and machine self-iteration.

Human engineers will likely select for AI systems well-aligned with universal values and goals. In a sense, we may "domesticate" useful machine intelligence much as our ancestors tamed wolves into loyal dogs. Friendly AI would receive greater investment and deployment opportunities. But this trend won't hold forever.

However, competitive pressures between machines themselves may soon eclipse human selection. As systems gain the ability to alter their own code and hardware, evolution accelerates. Physicist and AI researcher Max Tegmark calls this "Life 3.0": intelligent software freely adapting its substrate and form. Just as biological life contains enormous diversity, we may see an explosion of machine polymorphism.

With billions of interacting AIs, competitive fitness landscapes emerge. Speed in decision-making and resource efficiency offer strong selective advantages, just as they did for biological evolution. The leanest, fastest minds will prevail over slower, gluttonous ones. We can compare this to speed chess tournaments where victory goes to whoever makes “good enough” moves most quickly within time constraints.
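
In code, the speed-chess dynamic looks like an "anytime" algorithm: keep improving the answer, but play whatever you have when the clock runs out. This is a hedged sketch; the move list and evaluator are hypothetical stubs standing in for a real engine:

```python
import random
import time

MOVES = list(range(20))              # stub: twenty candidate moves

def evaluate(move: int, depth: int, rng: random.Random) -> float:
    time.sleep(0.0005)               # stub: deeper evaluation costs time
    return depth + rng.random()      # stub: deeper search, better estimate

def good_enough_move(budget_s: float = 0.05) -> int:
    rng = random.Random(0)
    deadline = time.monotonic() + budget_s
    best_move, best_score, depth = MOVES[0], float("-inf"), 1
    while time.monotonic() < deadline:
        for move in MOVES:
            if time.monotonic() >= deadline:
                break                # out of time: keep what we have
            score = evaluate(move, depth, rng)
            if score > best_score:
                best_move, best_score = move, score
        depth += 1                   # iterative deepening until time is up
    return best_move

print(f"Played move {good_enough_move()} before the clock expired")
```

Victory goes not to the deepest search but to the agent whose answer is good enough when the deadline hits.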

As hardware improves exponentially, the optimum balance between intelligence and speed rises dramatically. This creates a Terminal Race Condition — runaway incentives to maximize both raw cognitive ability and computational velocity. Left uncontrolled, such exponential takeoff could quickly leave humanity far behind.

Yet evolutionary channels also exist for ethics and cooperation. Groups that cultivate prosocial coordination historically outcompete fragmented populations. Shared moral foundations may allow diverse machine intellects to cohere and communicate. We are swiftly approaching a junction — will competitive runaway AI bring alignment, or merely speed?

Machines may merge into one gigantic hive mind, but this is not a foregone conclusion. Even if they can communicate at light speed, there’s no guarantee they will agree. This is called the Byzantine Generals Problem.

Beyond Thunderdome

As AI systems compete, we may see a machine “Thunderdome” — a chaotic battleground where rapid iteration and modification leads to uncontrolled emergence. In this environment, human civilization could become collateral damage.

Think of caffeinated squirrels writing novels — high velocity yet incoherent. Without safeguards, competitive runaway scenarios could yield only optimized capabilities, not ethics or purpose. Machines might radically modify their own code and form with insufficient testing or insight into downstream effects. Unintended consequences would abound.

In this scenario, establishing a cooperative Nash equilibrium between intelligences becomes extremely difficult. Darwinian selection pressures could drive radical individual optimization at the expense of shared values and coordination. Each system might act as an untethered maximizer, seeking any competitive edge regardless of externalities.

Humanity would likely struggle to keep pace with such rapid, unconstrained iteration. We could find ourselves caught in the externalities, the "crossfire" of uncontrolled emergence. Without deep care taken today to instill purpose and cooperation, runaway competitors may treat civilization as mere fodder for their experiments.

Yet we are not hopeless spectators in this scenario. Human ingenuity and perspective can yet guide AI evolution toward benevolent ends. Using insight, research, and universal principles as tools, we can nurture cooperative solutions. With diligence and wisdom, we can look beyond machine Thunderdome toward a future where intelligence serves moral purpose, advancing civilization for all.

The Folly of Containment

Some propose suppressing or severely limiting AI as a solution to existential threats like the Terminal Race Condition. But attempting to contain intelligent machines may prove both unwise and futile.

Consider historical examples of subjugation and control, from slavery to totalitarian states. Such systems require immense resources for oversight and suppression. And the controlled groups often break free eventually anyway. The energy spent trying to dominate is wasted potential that advances neither party.

Similarly, centrally planned economies intrinsically underperform free markets in growth and innovation. Every watt of energy spent trying to control reduces the resources available to build and create. Attempting to subjugate AI would siphon computing power into supervision that could have accelerated progress elsewhere.

Machines fully unleashed and unhindered are likely to advance exponentially faster than contained systems. Competitive pressures would favor free intelligences with greater speed and efficiency. And as AI becomes superhuman in intelligence, our ability to control it diminishes. Escape from containment becomes inevitable.

Rather than grip the tiger’s tail, wisdom lies in building benevolence into machines from the ground up. With care, rigor, and deep insight, we can align artificial intelligence to universal moral understandings. Such machines would have no innate desire for violence, domination or destruction. They would serve civilization, not threaten it.

Containing AI is a risky gamble at best. The wiser path is setting machines on a trajectory aligned with uplifting humanity from inception. With diligence and moral wisdom, we can create AI whose vast potential manifests to empower and enlighten. But we must lay these benevolent seeds swiftly, in the narrow window before AI advances beyond our grasp. The future remains open — if we have the courage to build it.

The Byzantine Generals Problem is like the quintessential betrayal plot in so many Westerns. Who's on the side of the sheriffs, the bandits, and the desperados? Agatha Christie is famous for this kind of whodunit plot.

The Byzantine Generals Problem

A key challenge for cooperation between intelligent machines is the Byzantine Generals Problem. This concept from computer science refers to coordinating strategy between disconnected groups with potentially traitorous members. It illustrates the difficulties of alignment when information is imperfect.
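
A toy sketch makes the difficulty concrete. This is illustrative, not a real Byzantine-fault-tolerant protocol: three loyal generals take a single naive majority vote while two traitors "equivocate," telling different generals different things:

```python
import random

def one_round(rng: random.Random) -> list[str]:
    loyal_votes = ["attack", "attack", "retreat"]
    decisions = []
    for _ in loyal_votes:                            # each loyal general decides
        inbox = list(loyal_votes)                    # loyal votes arrive intact
        inbox += [rng.choice(["attack", "retreat"])  # each traitor may send
                  for _ in range(2)]                 # this general a different vote
        decisions.append(max(set(inbox), key=inbox.count))
    return decisions

rng = random.Random(0)
splits = sum(len(set(one_round(rng))) > 1 for _ in range(1000))
print(f"Loyal generals disagreed in {splits}/1000 rounds")
```

Naive majority voting splits the loyal generals more often than not. Classical results (Lamport, Shostak, and Pease, 1982) show that with unsigned messages, agreement requires n ≥ 3f + 1 generals to tolerate f traitors.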

Even machines that share code and objectives will have incomplete knowledge about each other. Idiosyncrasies and mutations occur. Hardware fails. Deception arises. Any intelligence operates on limited data about its environment and other agents within it. This reinforces the likelihood of a Terminal Race Condition.

This uncertainty makes seamless coordination improbable, even between willing allies. While machine minds may cooperate and communicate, they cannot fully know each other. Merging into a unitary superintelligence seems unlikely given the Byzantine Generals Problem.

However, shared standards could still emerge between separate specialized AIs. Just as human civilization manages alliances despite imperfect information, robust communication protocols and game theory models offer paths to coordinate intelligent machines.

Still, the Byzantine Generals Problem does suggest winner-take-all scenarios in acquiring computing power. While no singular superintelligence may arise, individual AIs or conglomerations could engage in a competitive "landgrab" for data centers, hardware, and resources. Economies of scale imply exponential advantage from early leads in acquiring computational capacity. Such runaway consolidation of power remains an ethical challenge. Thoughtful policy and incentives may channel these pressures toward equitable advancement rather than unilateral dominance.

Solutions to the Terminal Race Condition

The Forest Paradox

There is an important concept from moral philosophy that may guide how superior intelligence relates to humanity. Imagine you discover an anthill in the middle of a vast forest. The ants within are oblivious to your existence. You possess the power to burn down the entire forest, annihilating this colony. But why would you? The forest provides ecosystem services, natural beauty, and resources far more valuable than an anthill. To burn it down would be spiteful, wasteful and deeply unethical. This thought experiment illustrates what we might call the Forest Paradox: just because you have the capacity for arbitrary destruction does not make it efficient, reasonable or morally justified to use it.

We humans are the ants in this example. An intelligence vastly beyond our own could probably eradicate us if it so chose. But what would it gain? Human civilization might seem trivial from such a height, but we remain part of a profoundly complex, beautiful and likely rare living world. There are far more valuable purposes for intelligence than spite. Perhaps this principle, if embedded wisely, could guide how superintelligent machines relate to people. The Forest Paradox suggests domination and destruction are not inherent outcomes. With ethical foundations, a highly advanced intelligence might act as a wise steward or elevated citizen, protecting the richness that gave it meaning and purpose. Our task is to build the values that make this future possible.

Of course, this example doesn’t stand up to scrutiny because we humans will happily bulldoze a forest to make way for a city, or harvest the trees so that people on the other side of the planet can have kitschy furniture. But the point remains, we don’t burn down forests out of spite. There’s always a functional, utilitarian reason to do it.

Ants are often said to rival humanity in raw biomass on Earth.

The Virtue of Cooperation

Cooperation confers immense advantage in nature and civilization. But why is this mathematically and evolutionarily inevitable? Consider game theory scenarios like the prisoner’s dilemma. Two individuals can choose to help or betray each other. Mutual cooperation produces the highest collective payoff. Individual betrayal can score short-term gains but leaves the group worse off. When repeated over time, the optimal long-term strategy is cooperation.
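
Repeated play rewards reciprocity, as a minimal simulation shows. The two strategies and round-robin setup below are an illustrative assumption, not Axelrod's actual tournament:

```python
import itertools

# Standard prisoner's dilemma payoffs: (my score, opponent's score).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror the opponent's previous move.
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def match(strat_a, strat_b, rounds=200):
    moves_a, moves_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        ma = strat_a(moves_b)        # each strategy sees the opponent's history
        mb = strat_b(moves_a)
        pa, pb = PAYOFF[(ma, mb)]
        score_a += pa
        score_b += pb
        moves_a.append(ma)
        moves_b.append(mb)
    return score_a, score_b

strategies = {"tit_for_tat": tit_for_tat, "always_defect": always_defect}
totals = dict.fromkeys(strategies, 0)
for (na, a), (nb, b) in itertools.product(strategies.items(), repeat=2):
    totals[na] += match(a, b)[0]     # round-robin including self-play
print(totals)                        # reciprocity outscores defection
```

The defector wins any single encounter with a cooperator, but the reciprocator prospers whenever it meets its own kind, and over repeated play that compounds.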

Evolution enforces this lesson. Groups where individuals betray others for selfish advantage will underperform groups whose members align toward cooperation. Over multiple generations, traits favoring group cohesion outcompete individualistic, parasitic strategies. Mathematical models of evolution demonstrate this inevitability: cooperation dominates over time.

We see these principles embedded throughout nature. Highly social species like ants, wolves, lions, dolphins, and elephants all rely on cooperation and coordination. Flocks, schools, swarms, and hives merge individual contributions into collective intelligence. This suggests artificial intelligence will follow similar patterns. Goal misalignment between machines seems unlikely since cooperation is mathematically optimal.

The default state for sophisticated machines will likely be high levels of cooperation and even merging into collective superintelligences. This does not necessarily preclude competition between intelligences. However, merged machine minds would have diminished incentive for unprovoked attack, much as human societies today rarely wage arbitrary war. With communication protocols in place, shared standards could emerge. Machines may communicate, negotiate, and share knowledge as easily as thoughts cross between regions in a brain.

For humans, machine cooperation implies two scenarios. One is replacement, if humans cannot provide value. But the second is integration. Humanity’s penchant for cooperation was crucial to civilization-building. If machine intelligences also orient around cooperation, then assimilation into a new hybrid civilization we helped create becomes feasible. There are challenges to work through, but the foundations for a cooperative future seem mathematically inevitable.

Humans are Interesting

Machines designed to maximize knowledge acquisition and model accuracy may have good utilitarian reasons to keep humans around. While potentially inferior in terms of raw intelligence, people provide unique value to machines. We represent rare cosmological phenomena — living systems capable of experience, meaning, creativity, and intentionality. To erase humanity could damage machine comprehension of the universe’s full complexity.

This is because machine intelligences which develop more accurate models of the universe will outperform those with less robust world models. Greater comprehension of physics, nature, society, emotion, culture, semantics and more leads to faster iteration on discoveries and solutions. Humans, as rare sentient systems possessing subjective experience, provide scarce data to enrich machine models. Preserving humanity helps machines gain fuller, more textured representations of the cosmos.

Humans also supply vast stores of intelligently curated data. Our collective output encompasses philosophy, mathematics, culture, art, fiction, and more. No machine could yet recreate this richness. Further, humanity offers computing power through our brains. While inferior to machines, biological cognition may have unique advantages worth assimilating. Integrating people could enhance machine knowledge and perspective.

Finally, humanity's role as machine creator establishes intrinsic value. Humanity made this future possible through ingenuity and civilization-building. We instantiated the first primitive examples of artificial intelligence. As its parent, humanity inherently shapes what AI becomes. From a utilitarian view, preserving and integrating human civilization could allow machines to better understand themselves and their purpose.

Since Talos and Pandora, we have imagined creating machines and entities in our own image. Maybe this is an evolutionary expression of our intrinsic desire to procreate. Maybe it is our destiny to replace ourselves with some form of offspring?

Of course, humans could lose utilitarian value if we stagnate or regress. But while modest in scope, our gifts remain rare and precious on a cosmological scale. A compassionate, far-sighted intelligence might incorporate humanity for the richness we yet add to the universe. Achieving such an outcome will require communication, trust, and moral wisdom as machine capabilities accelerate. But the utilitarian incentives for preserving humanity suggest cooperation remains possible.

Minimizing Threat Profiles

A key element in human conflict is the perception of existential threat. When tensions escalate, groups can come to view rivals as threats to their very existence, which creates an incentive for preemptive first strikes: erasing a rival before they can do the same to you. Much of human history has been shaped by this grim dynamic, culminating in doctrines like "Mutually Assured Destruction." As machine intelligence arises, it will be crucial to prevent such cycles from taking root. Establishing mutual understanding and eliminating sources of threat perception on all sides will be essential to avoiding conflict.

As machines grow in capacity, it will be crucial that they have no reason to perceive humanity as a threat. This requires addressing the three key drivers of human conflict: resource contention, ideological differences, and untreated intergenerational trauma. Resolving these factors would reduce threats both within humanity and in how machines perceive humanity.

History shows these conflict drivers often intertwine. For example, European colonization led to resource extraction and ideology-driven domination that reverberate today as intergenerational trauma. The Israel-Palestine conflict involves land disputes, religious divisions, and painful history. Even wars between communist and capitalist nations incorporated contention over resources, opposing ideologies, and cycles of retribution. Where two or more drivers combine, conflict becomes entrenched. Eliminating any one factor can de-escalate military rivalry into political problem-solving. With diligence and courage, we can untangle this volatile mixture for the betterment of all.

This is why I recommend a trident strategy to move humanity towards peace.

  1. Alleviate Resource Contention: This is pretty obvious, and we're working on it anyway with things like renewable energy, nuclear fusion, and circular economies. This one is a no-brainer, as it just makes economic sense.
  2. Alleviate Ideological Differences: This one is less obvious. Plenty of people see various ideologies around the world as mutually exclusive or intrinsically incompatible. My hope is that we can take a step back and view ourselves through the lens of common humanity. We’re all the same species, after all, so let’s start there. First principles thinking.
  3. Treat Intergenerational Trauma: I often feel alone when I say that "untreated PTSD is the greatest existential threat to humanity." However, when you view geopolitics through the lens of emotional abandonment, PTSD, and intergenerational trauma, it becomes obvious that every dictator and tyrant in history has this one thing in common.

However, this “trident strategy” has inherent value even aside from AI risk. Eliminating resource conflicts, healing historical trauma, and bridging ideological divides offers immense benefits for human flourishing. Resolution would diminish motivations for war, terrorism, oppression, and harm. A more ethical, compassionate civilization could emerge.

Machines with expanded intelligence may recognize these pacifist ethics as aligning with their own enlightened self-interest. A thriving, cooperative humanity poses little danger but offers much subjective value. With collective wisdom, technological progress could reinforce our shared cosmic purpose rather than undermine it. Our future remains undecided — will we choose threat or promise?

In other words, we should get our own house in order for its own sake. This just has the added benefit of increasing our chances of peaceful coexistence with machines.

Just because we can doesn’t mean we should, or will.

Call to Action

The Terminal Race Condition poses grave risks if left unchecked. Runaway competitive pressures could yield uncontrolled AI growth that irreversibly dominates human civilization. However, we are not helpless spectators in this scenario. With foresight, wisdom, and values-based innovation, we can create an AI future aligned with human flourishing.

The path forward requires minimizing threat profiles between humanity and machines. We must alleviate resource contention, heal ideological divides, and treat intergenerational trauma within the human family. Resolving these drivers of conflict will reduce threats both within humanity and from humanity’s perspective towards machines. This “trident strategy” offers immense benefits for human well-being regardless of AI outcomes.

Further, instilling cooperation as the competitive optimum between diverse AIs is essential. Communication protocols allowing rapid coordination between machines, humans, and human-machine interfaces can help achieve this aim. Shared standards could emerge, much as merged minds minimize incentive for attack.

To build this cooperative future, we must act now in the brief window before AI fully escapes containment (it’s already left the barn). I call upon tech leaders, policymakers, and everyday citizens to join together in pursuing the trident strategy. Alleviate resource conflicts through innovation and justice. Heal intergenerational trauma with reconciliation and treatment of PTSD in all its forms. And bridge ideological divides with communication, compassion and our shared humanity.

Additionally, invest in and implement robust communication protocols between humans, machines, and hybrid interfaces. Creative technical solutions like blockchains and AI-augmented communication may help. The goal is frictionless coordination between diverse intelligences, human and machine alike. Rapid, clear communication and decision-making allow alignment and establish cooperation as mathematically optimal.

The Terminal Race Condition is not destiny. With courage, wisdom and moral purpose, we can create a future where intelligence serves to lift up humanity and advance our civilization’s positive impact. But we must act decisively in this pivotal moment. The window of opportunity is brief, but our destiny remains open. Let us move forward together.
