How to Save the Universe: testable physics from game theory.

dylan coyle
23 min read · Jan 13, 2017


Game theory shows the universe won’t end, life isn’t meaningless, and there is a cheaper way to make antimatter.

Image by Dino.

Introduction: games and motivation.

If we want to follow patterns in historical data or compute formulas with precision, then a consistent and complete mathematics is priceless. After all, we want 1+1 to equal 2 every time, right? But if we are in the business of planning for an uncertain future and averting crises (like trying to fix the economy or save the universe), then we need to expect new and different patterns. Luckily, math has a suite of tools specifically for working with shared and progressive complex systems, including game theory and agent-based modeling.

Let’s define game theory as a formal logic or math about making choices and strategies, given the choices and strategies other agents have made and may make. If we’re playing poker, we need to think about more than probability. Game theory can help you analyze when and why different people will bluff, raise, or call, and how you should act or react. (Optimal bluffing according to a random trigger will help you break even.) But the theory goes beyond literal games, and we see it used in politics, economics, computer science, and biology. In this article, I show how we can use game theory for physics too.
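To make the bluffing aside concrete, here is a minimal sketch of the standard toy analysis (my own illustration, not anything from the sources cited here): a bettor bets B into a pot P and bluffs just often enough that the caller's expected value of calling is zero, so the caller cannot exploit the bettor either way. All numbers are illustrative.

```python
def caller_ev(f, pot, bet):
    """Caller's expected value of calling a bet of `bet` into a pot of `pot`
    against a bettor who is bluffing with probability `f`."""
    # Win pot + bet when catching a bluff; lose bet against a value hand.
    return f * (pot + bet) - (1 - f) * bet

pot, bet = 100, 100                 # a pot-sized bet (illustrative numbers)
f_star = bet / (pot + 2 * bet)      # frequency that makes the caller indifferent

print(f"optimal bluff frequency: {f_star:.3f}")                # 1/3 here
print(f"caller EV at f*: {caller_ev(f_star, pot, bet):+.6f}")  # ~0: break even
```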

Although games and strategy have been studied for thousands of years, the game theory we are working with today was rigorously defined in the book Theory of Games and Economic Behavior by John von Neumann and Oskar Morgenstern in 1944. Game theory can provide critical insight when other logics or maths are struggling. But if one’s metrics and values allow, the optimal choices suggested by game theory can lead to a balance of terror, as with the arms race and the Cold War.

Von Neumann co-wrote the foundational book at the same time he was working on atomic bombs with the Manhattan Project. After WWII, he seriously wanted to nuke the USSR before it demonstrated nuclear weapons capability, which it did in 1949. He imagined a preventative war would save more lives than a war between two nuclear-armed foes. He also pushed for ICBMs to extend the range of nuclear arms to strike across the planet. Von Neumann is credited with coining the term Mutual Assured Destruction (MAD). No nation should outright attack another if it knew it would be destroyed too, so von Neumann pushed for more destructive arms to make the world theoretically more peaceful.

Game theory doesn’t need to be about terror. Elinor Ostrom used game theory and agent-based modeling to show how cooperation emerges among stakeholders to help each other take care of a common-pool resource. A tragedy of the commons occurs when individuals act selfishly and destroy a shared resource such as a forest or fishery. But Ostrom showed that stakeholders take the choices of others into account and make choices that are better for the long term. This includes creating formal institutions, like boundaries and laws, and informal institutions, like public shame and honor.
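As a rough illustration of these dynamics, here is a minimal agent-based sketch with every parameter hypothetical: agents harvest from a shared stock, and an informal institution (shaming over-harvesters into moderation) decides whether the commons survives. It is a cartoon of the setting Ostrom studied, not her model.

```python
import random

def simulate_commons(rounds=200, agents=20, stock=1000.0, capacity=1000.0,
                     regrowth=0.05, institution=False):
    """Minimal commons sketch: agents harvest a shared stock each round."""
    greedy = [random.random() < 0.5 for _ in range(agents)]  # half start greedy
    for _ in range(rounds):
        harvest = sum(5.0 if g else 1.0 for g in greedy)
        stock = max(stock - harvest, 0.0)
        if stock == 0.0:
            return 0.0                    # tragedy: the resource is destroyed
        if institution:
            # Informal sanction: each greedy agent has a 10% chance per round
            # of being shamed into cooperating from then on.
            greedy = [g and random.random() > 0.1 for g in greedy]
        stock = min(stock * (1 + regrowth), capacity)  # proportional regrowth
    return stock

random.seed(1)
print(f"no institution:   final stock = {simulate_commons():.1f}")
print(f"with institution: final stock = {simulate_commons(institution=True):.1f}")
```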

Although hawks like von Neumann used game theory to show that it is difficult to rationally trust others, Ostrom proved that trust functions well every day. [See her Nobel Prize lecture pdf here and the source of ‘Ostrom’s Law’ here.]

With either terror or cooperation, game theory often relies on the shadow of the future to work. That is, choices now need to have consequences later to be functional. This seems obvious, right? But we must consider our collective relationship with the future if we are trying to comprehend seemingly irrational and self-destructive economic and political behavior in today’s world. One of the big problems affecting our outlook on the future, and thus our ability to make long term plans, is physics.

Have you ever said or heard something like, “It doesn’t matter, the universe will end anyway,” or, “This is all an illusion, so we can do whatever we want”? The mood was recently evoked in the Doctor Strange film: “We’re just another tiny, momentary speck in an indifferent universe.” It’s like a perversion of what Carl Sagan said when he shared the Pale Blue Dot photo of Earth, “a mote of dust suspended in a sunbeam.” Instead of inspiring the cooperation and stewardship Sagan felt for the Earth, the narrative of the vast and mysterious universe threatens to reinforce a feeling of despair. Feelings of fear, nihilism, and hopelessness can be compounded when physicists declare that the universe is doomed according to their maths.

If we don’t believe in the shadow of the future, then we can make irrational decisions with impunity. Most of the time, the fate of the universe doesn’t seem to make a difference to people. We go on day to day making decent short term choices. But a doomed universe becomes the excuse for acting selfishly or foolishly, particularly in the long term. Future discounting hits hard when there is supposedly no future. I believe that if we want to make better long term choices, we must fix our models of physics to provide more accurate information about the future.

The idea had been simmering in my mind since the failed end of the world in the year 2000, but I started this project in 2012, in partial response to Mayan calendar mania and a healthy disdain for bros shouting YOLO. More specifically, I ran into concerns about the universe ending all the time when discussing long term plans such as Basic Income or the ethics of food consumption. I feared all the work I wanted to do would be met with resistance unless I could also explain the physics of why the universe wouldn’t end. It’s taken longer than planned, but here are some of the results.

I tested a ton of models in the past few years and broke as many as I could. Game theory, my weapon of choice, is great for breaking things. But several models kept getting stronger and eventually merged together. I need help to either break them or polish them into peer-reviewed manuscripts. The models show the universe won’t end, but they also show there is a lot of work to do now and throughout the future.

Big Game Theory vs Big Bang Theory

The beginning of the universe is generally thought to have been hot and dense, until it rapidly inflated and expanded into what we have today. If we extrapolate backwards from today’s accelerating universal expansion, a Big Bang from a singularity is logical. Unfortunately this leads to a number of doom scenarios: a big crunch, a big rip, a heat death, the eventual isolation of galaxies from each other, or a multiverse dominated by expanding bubbles that make scientific predictions meaningless. To avoid the end of the universe, we must understand the beginning.

From an informational perspective, the distant past is not necessarily hot and dense. It is uncertain, less organized, and far away. This lets us start our models with noise from which order can emerge and progress. This is in contrast to starting from a zero-sum nothingness, which can lead back to nothingness. To make the universe an agent-based system, we need some of the newest math, computer science, and quantum physics we can find or create.

1.1 Defining a point.

The twentieth century was hamstrung by the inability to prove the Poincaré conjecture. At first glance, it doesn’t seem important to know how to show that a sphere is unlike a donut. But when Grigori Perelman combined Ricci flow, surgery, and minimal surfaces to make his proof, he gave physicists and computer scientists a missing link for their own major open problems. By externalizing the variables in a shape, we are left with a point-like curve on a manifold.

This proof gives us the definition that “no variables” equals “a point.” Anything with variables is bigger than a point; anything that has no variables is exactly the size of a point on a curve. An informational definition of curvature lets us use information across arguably all mathematical disciplines, especially physics. To be sure, if we wish to make the universe from scratch, we must first invent pi.

1.2 Defining a Turing machine.

Perelman’s proof crashed the most basic computational model in the world, the Turing machine. The original 1936 version of a deterministic Turing machine is designed to read and respond to discrete cells on a tape. The problem with this model is that the size of a cell is bigger than a point. According to the new definition, a cell-based Turing machine must be nondeterministic. Sticking to the 1936 cell-based Turing machine makes it impossible to separate the nondeterministic and deterministic classes. This underlies a major open problem known as P versus NP.

The unpublished 1948 version of the Turing machine corrects this oversight by differentiating between continuous Turing machines and discrete Turing machines.

“We may call a machine ‘discrete’ when it is natural to describe its possible states as a discrete set, the motion of the machine occurring by jumping from one state to another. The states of ‘continuous’ machinery on the other hand form a continuous manifold, and the behaviour of the machine is described by a curve on this manifold. All machinery can be regarded as continuous, but when it is possible to regard it as discrete it is usually best to do so.” [source]

When we combine this with the Perelman proof, we now have two distinct classes: continuous point-sized deterministic Turing machines and discrete multi-point-sized nondeterministic machines.

Three types of Turing machines: the first is the obsolete model, the second is represented as a point anywhere on the curve, and the third is represented as an inefficient vector machine (explained below).
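For contrast with the continuous machinery Turing describes in the quote above, here is a minimal sketch of the discrete, cell-based 1936-style model in Python. The increment machine and its rule table are my own toy illustration, not anything from Turing’s papers.

```python
def run_turing_machine(tape, rules, state="scan", head=0, max_steps=1000):
    """Run a discrete Turing machine. `rules` maps (state, symbol) to
    (symbol_to_write, head_move, next_state); unwritten cells read as '0'."""
    cells = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "0")
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += {"R": 1, "L": -1}[move]    # jump to the next discrete cell
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Toy rule table: increment a binary number. The head starts at the leftmost
# bit, and '_' marks the right end of the input.
rules = {
    ("scan", "0"): ("0", "R", "scan"),    # walk right to the end of the number
    ("scan", "1"): ("1", "R", "scan"),
    ("scan", "_"): ("_", "L", "carry"),   # hit the end: step back and add one
    ("carry", "1"): ("0", "L", "carry"),  # 1 + carry = 10: write 0, carry left
    ("carry", "0"): ("1", "L", "halt"),   # absorb the carry and stop
}
print(run_turing_machine("1011_", rules))  # 1011 + 1 -> 1100
```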

1.3 Define the geometry of the universe.

To extend Turing machines to physics, we need a spacetime made of continuous curves on manifolds. Theorems about timelike curves by Hawking, King, and McCarthy in 1976 and David Malament in 1977 [paywall] have quietly simmered in the physics community, particularly among the causal set theorists. So we’ll use curves instead of relativistic tensors or quantum particles. Now: (A) Turing machines are curves (or made of curves), (B) the geometry of the universe can be defined as curves, thus (C) the geometry of the universe can be analyzed as Turing machines. And Turing machines are the ideal agents in game theory and agent-based models, giving us rigorous liberty to use strategic analysis on physics models.

1.4. Combine geometry and computer science.

Next we need to understand the difference between deterministic and nondeterministic Turing machines. The trick is in Robert Axelrod’s 1984 book The Evolution of Cooperation (and the 1981 article of the same name co-written with W.D. Hamilton), and the 1995 paper by Jianzhong Wu and Axelrod, “How to Cope with Noise in the Iterated Prisoner’s Dilemma.” The papers outline ecological tournaments between many agents playing different strategies, to see who would win a non-cooperative game with and without noise. The rules were simple. Mutual aid maintained both players’ representation in the next round of the game. Mutual betrayal reduced the representation of both players in the next round. One-sided betrayal increased one player’s representation at the expense of the sucker trying to aid the other. After many generations of play, one type of strategy emerged as victorious.

The winner in the noiseless tournaments was the tit-for-tat strategy. Agents that simply repeated their partner’s last play dominated the game. Even though they never did better than a partner in any single match, they consistently did almost as well as every partner. A forgiving version of that strategy, called contrite tit-for-tat (CTFT), dominated games with noise. CTFT reached 97 percent representation by generation 2000 with one percent noise (and more representation when the game was noisier).
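Here is a compressed, assumption-laden sketch of an Axelrod-style ecological tournament: standard prisoner’s dilemma payoffs, a small strategy pool, noise that flips intended moves, and replicator-style updating of each strategy’s share of the population. It is not the original tournament code, and contrite tit-for-tat is omitted for brevity, but it shows the mechanism by which tit-for-tat rises.

```python
import random

# Standard prisoner's dilemma payoffs (T=5 > R=3 > P=1 > S=0).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    return their_hist[-1] if their_hist else "C"   # repeat partner's last move

def always_defect(my_hist, their_hist):
    return "D"

def always_cooperate(my_hist, their_hist):
    return "C"

def play_match(strat_a, strat_b, rounds=100, noise=0.01):
    """Iterated PD; `noise` flips an intended move with small probability."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_a, hist_b), strat_b(hist_b, hist_a)
        if random.random() < noise: a = "D" if a == "C" else "C"
        if random.random() < noise: b = "D" if b == "C" else "C"
        pa, pb = PAYOFF[(a, b)]
        score_a += pa; score_b += pb
        hist_a.append(a); hist_b.append(b)
    return score_a, score_b

def ecological_generation(population, noise=0.01):
    """One generation: average score against the field sets next-gen share."""
    fitness = {}
    for s in population:
        fitness[s] = sum(population[t] * play_match(s, t, noise=noise)[0]
                         for t in population)
    norm = sum(population[s] * fitness[s] for s in population)
    return {s: population[s] * fitness[s] / norm for s in population}

random.seed(0)
pop = {tit_for_tat: 1/3, always_defect: 1/3, always_cooperate: 1/3}
for _ in range(30):
    pop = ecological_generation(pop)
print({s.__name__: round(share, 3) for s, share in pop.items()})
# tit_for_tat typically ends with the largest share of the population
```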

This video explains the Evolution of Cooperation better than I can.

Coming back to the Perelman proof, we can define tit-for-tat as a deterministic strategy. It needs no variables because information simply passes through. This makes every tit-for-tat Turing machine point-sized. This also makes every lone point in a field deterministically play the tit-for-tat strategy. Information efficiently passes through points. When we apply this to a field, any initial conditions swiftly lead to a predominantly efficient, transparent, and flat geometry.

Alternative strategies to tit-for-tat are marginalized. But the field becomes exactly what imperfect agents need to coordinate effectively. Information can pass through large areas of space and time reliably because the intermediary agents simply repeat information. Even though inefficient agents are marginalized in the primary ecological game, they play their own subgame for dominating the coordination field. The ecological subgame between inefficient agents in a persistent and efficient field is what makes an interesting universe.

1.5 Use Game Theory to create the universe.

The Big Game model creates the Big Bang model. We tried the zero-sum model of the universe and it collapsed: the universe becomes deterministic and is exploited by nondeterministic agents until the zero-sum model becomes point-sized. The model works best starting from noise, and we define noise as ‘more information than signal’. Basically, any multiverse with any variety of rules and strategies will federate together into what we have now.

Moreover, the Big Game model lets us start and continue to thrive in a universe that is inconsistent and incomplete because it is shared and progressive. Any emergent system that is excessively belligerent is suppressed, sequestered, or avoided by the greater ecological system. This ultimately protects the universe from random or inevitable destruction.

[video at pirsa.org. Skip to 16:45 for the ‘no multiverse’ part]

The Big Game model seems to agree with the anamorphic model proposed by Anna Ijjas and Paul J. Steinhardt. Depending on the metric, the universe both expands (as measured with the electron-based Planck length) and contracts (when measured with the electron-based Compton wavelength). The Big Game model and the anamorphic model avoid the problems with the Big Bang theories. Both the classic and post-modern inflationary Big Bang models lead to a universe where anything is equally possible and predictions are useless. The anamorphic model uses a single field that initially contracts; potential bubbles never lose contact with each other, so there is no multiverse problem and no initial condition problem. The Big Game model gives a universal vector-based language for the bubbles to compete and cooperate for representation.

Cheaper antimatter.

Theory is fun, but we need some hard tests to make theory useful. The anamorphic model anticipated and agrees with the Planck 2015 satellite data. Our Big Game Theory needs something special. We find this through the inherent asymmetry of vectors.

The theories of relativistic gravity and quantum particles have never seemed to work together perfectly. I mean, they work together, but there is no successful theory unifying them. Using an agent-based system is counter to searching for a unified theory because it respects the differences of individual agents.

The Big Game model doesn’t start with quantum particles, harmonic oscillators, relativistic scalars or tensors. It gives us points, as we discussed before, and vectors, or little arrows of continuum flow. Everything (outside of points) can be made of vectors.

We took everything we knew about particles and tried describing them with the game-theoretic, vector-based framework [works in progress on figshare].

One aspect of the prisoner’s dilemma game that stood out was that the winners of each round had more causal representation. To make this physical, we needed vectors to be able to causally suppress each other. The model showed we can map quantum charge to the flow of vectors: flow into points as positive charge, and flow out of points as negative charge. Arrows are by definition asymmetric, and asymmetric charge is one consequence of using a vector-based model.
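To make the mapping concrete, here is a toy numerical sketch: treat net flow into a point as the negative divergence of a vector field, and follow the article’s sign convention that inflow reads as positive charge. The field, grid, and numbers are purely illustrative assumptions of mine.

```python
import numpy as np

# Toy rendering of the article's mapping: at each grid point, net vector flow
# *into* the point reads as positive charge, net flow *out* as negative.
n = np.linspace(-2, 2, 5)
x, y = np.meshgrid(n, n)

# A field with a sink at the origin: every vector points inward.
vx, vy = -x, -y

dvx_dx = np.gradient(vx, n, axis=1)   # d(vx)/dx
dvy_dy = np.gradient(vy, n, axis=0)   # d(vy)/dy
divergence = dvx_dx + dvy_dy

charge = -divergence   # inflow (negative divergence) -> positive charge
print(charge)          # uniformly +2: every point reads as positively charged
```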

To our surprise, the model showed that mapping charge to causal representation should resolve the possibilities of big crunches and big rips, as well as the question of why the universe is dominated by cis-matter instead of antimatter.

According to the standard model, the only difference between cis-matter (what virtually everything we know is made of) and antimatter (which is mysteriously rare in the observable universe) should be the charge. And according to the standard model, cis-matter and antimatter should be created and represented in equal amounts. If a new particle is made, there should be a 50–50 chance that it’s cis-matter or antimatter, and we see symmetrical pair production (and the effects of virtual pair production) all the time. That is because charge is symmetrical in the standard model. Because charge is defined asymmetrically in the Big Game model, the model shows a bias for cis-matter or antimatter creation depending on where it is made.

2.1 The vector based Big Game model shows that there is a cis-matter creation bias between diverging systems.

Let’s say we roll some balls down two mountains. If we move the mountains away from each other, then the balls going toward the valley between them will look like they are rolling faster (but making slower progress) than normal. The balls acquire an additional roll from the frame dragging underneath. When the balls meet, they will be rolling into each other more than if the mountains were stationary. This surges information into a tight area, leading to dissimilar vectors becoming suppressed. Suppression is mapped to positive charge. So, analogy aside, there is more positively charged mass between diverging systems in our model, and cis-matter carries more of its mass in positive charge (protons are more massive than electrons).

When energy escapes expanding systems, it will make more cis-matter between them. The universe can expand at an accelerated rate, but vectors are resistant to being boxed in, and escaped information makes new systems between the doomed parts. This mitigates the expansion rate. It also assists in the evolution of the universe, because the child systems are made with some of the best parts of the parent systems. There is no big rip in this model because new cis-matter will be created in the cracks of the expanding systems.

2.2 There is an antimatter creation bias between converging systems.

Now let’s roll some balls down two mountains moving toward each other. The balls look like they are rolling slower (and making faster progress) toward each other than normal. They acquire a negative roll from the frame pushing underneath. When these balls meet, they will be rolling more away from each other. This releases more vectors from suppression and gives a bias to more negative charged mass between converging systems.

When two cis-matter systems converge, the model shows a bias towards making more antimatter between them. The new antimatter annihilates matter as the systems continue to converge, and energy can escape via gamma rays. Cis-matter collapses are slowed down and big crunches are prevented.

Additionally, the model suggested that converging antimatter systems also make more antimatter between them. This hastens the suppression and collapse of antimatter areas. Conservation of information is maintained because the kurtosis of the ecosystem rises: rare events have higher influence.

Regardless of initial conditions, a Big Game universe results in an expanding network of cis-matter systems.

2.3 Searching and testing.

This is the most falsifiable part of the Big Game model. Expected locations of matter or antimatter rings and clouds can be calculated and checked. The origins of observed antimatter rings and clouds, including the asymmetric cloud near the center of our galaxy, can be discerned. Converging and diverging generators of particle beams can be built to check for the charge bias.

If the theory is correct, we have added assurance that the universe really is antifragile. On the practical side, cheap antimatter production should have many applications. If the results of the next section are true and we need to leave the galaxy, then cheap antimatter production could be the key technology for improved extragalactic travel. Note that antimatter is currently the most expensive substance ever made by weight, costing trillions of USD per gram, so this is kind of a big deal.

Avert doom by making intelligent choices.

The 2008 financial crisis showed the fatal flaws of the two best economic models at the time: empirical statistical models and dynamic stochastic general equilibrium models. The former fail to predict great changes, and the latter rule out giant crises. In 2009, J. Doyne Farmer and Duncan Foley from the Santa Fe Institute published an article in Nature titled “The economy needs agent-based modelling” to support an alternative.

Agent-based modeling helps show the emergent dynamics of relatively autonomous actors. Farmer and Foley made the case that a complexity-minded economic simulation could help us understand and hopefully avoid future crises. Farmer moved to Oxford in 2012 to be the scientific coordinator of Project CRISIS and the Director of Complexity Economics at the Institute for New Economic Thinking at the Oxford Martin School.

I’m afraid that the current models in physics can lead to crises as well. Incorrect calculations and axioms can leave a serious gap in our perception of reality. What started as a project to help our psychological and sociological relationship with the future evolved into a project to correct flaws in our system that could lead to unnecessary disasters.

One way to search for flaws is to identify and deconstruct our deepest assumptions. For example, conservation of information is sometimes called the negative first law (by Leonard Susskind) because it is so fundamental. Historical evidence and equilibrium models show information is sacred and cannot be destroyed. Is it wise to take information security for granted? Laws in the Big Game model are not given, they are emergent. This model shows the universe is more robust assuming information is constantly at risk.

“Saying a puppy is not important is absurd.”

3.1 Inefficiency creates extra heat.

Recall that in the Big Game Theory model, the only efficient strategy was tit-for-tat, and the model is only efficient when information deterministically passes through points. When information meets moments of choice, the model is nondeterministic and inefficient.

Choices are moments where agents must do more than repeat information. Even if an individual agent has a strategy for every choice and makes a predictable choice when given the same information, it is part of a subgame of other agents that may react differently to that same information. Predicting or controlling which agents will make the choices, or in what order they will make them, is also inefficient, because the tools for doing so are themselves composites of nondeterministic agents.

Multi-point agents must store information nonlocally in a shared and progressive environment, and this information may be altered by other agents before the original agent has a chance to reuse it. Further, the storage and access of information has a cost, most readily seen as quasi-random release of vectors from their queue, or more simply stated, the release of heat.
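One standard way to quantify the minimum heat cost of handling information, offered here as context rather than as part of the Big Game model, is Landauer's principle: erasing one bit dissipates at least k_B·T·ln 2 of heat.

```python
import math

# Landauer's principle (standard thermodynamics): erasing one bit of
# information dissipates at least k_B * T * ln(2) of heat.
k_B = 1.380649e-23   # Boltzmann constant, J/K (exact in the 2019 SI)
T = 300.0            # room temperature, K

e_per_bit = k_B * T * math.log(2)
print(f"minimum heat per erased bit at {T:.0f} K: {e_per_bit:.3e} J")
# ~2.9e-21 J: tiny per bit, but a hard floor any computing agent must pay
```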

The increase in information from the inherent inefficiency of vectors is seen as the cosmological constant or dark energy.

3.2 Redundant information is externalized.

For inefficient strategies to succeed in the ecological subgame, they need to work together effectively to have an edge over other agents. If an agent system becomes too deterministic, then other systems can exploit it for their own gain. But a system still needs to be predictable enough for other parts to be able to respond in mutually beneficial ways. Vector systems are more robust by balancing deterministic and nondeterministic aspects. (Remember how game theory says to bluff at random sometimes; the universe “knows” to do this too.)

Avoiding predictability requires saving information about prior states within the environment. This model never sanctified the conservation of information; instead, the law of conservation is emergent because systems that collectively protect their information thrive and exploit systems that don’t.

So from noise we have all these little systems emerging, and then they start to federate together. The federation process consolidates information and externalizes the redundancies.

3.3 Dissimilar vectors suppress each other.

When dissimilar vectors meet, they slow down in space and speed up in time. This concentrates their causal influence into a smaller area. It also creates an informational absence that releases vectors from suppression where the slowed vectors would have maintained pressure, so suppression in one place is balanced by the creation of heat elsewhere.

This simple dynamic lets us build all the curvature of spacetime, giving us relativistic gravity, and it also gives us a seamless transition into every quantum particle. Everything is part of the same vector field.

Quantized particles arise where the vectors form sustainable shapes. The universe doesn’t need to start from a single point to ensure the uniform and tuned conditions we see because the particles we have are logical flows of vectors that emerge independently throughout the universe.

3.5 Channels and bubbles lens more information.

Vector interactions are not limited to the standard model particles, and some flows of vectors are vast and powerful.

In the process of federating together, the room each system uses to externalize excess information overlaps with other systems. This should create a problem where disorder rises, but the system is saved by emerging complexity.

Vast channels of vectors form like a network of rivers winding through the universe. They are too big to absorb or produce photons at scales we are used to, so they are designated as ‘cold dark matter’ (CDM). Giant bubbles form from the same vector flows, especially where CDM channels meet.

The vectors that make the CDM lens more vectors into them. This effectively removes noise from the areas outside the CDM, allowing coordination to continue.

3.6 Galaxies and more massive and complex particles form in CDM.

The vectors inside the CDM face tougher competition and must rise in complexity rapidly to maintain their information conservation. Without increasing complexity, the systems will lose integrity and start becoming predictable and thus exploitable. The far-from-equilibrium setting inside CDM creates the ideal conditions for increasing complexity. Informational arbitrage is the fuel for growing sophistication. Capitalizing on imbalance helps further suppress redundancy; the motivation to adapt to noise creates an incentive to reduce the noise as well.

Our models suggest that there is no limit to the amount of complexity a system can take on. If there is no other recourse to externalize the information, the system makes particles, and then the particles get more flavorful.

The Big Game model shows that the Planck scale and the Bekenstein information bound can be surpassed by compressing information into more massive leptons. In other words, black holes are not singularities, because the size of the smallest particles can be arbitrarily smaller. Black holes in our model are filled with elements made with muon-, tau-, and possibly denser-flavored charged leptons rather than just electron-flavored leptons. Black holes are more accurately bounded horizons, because it is hard to see past denser-than-electron areas with electron-based tools. The separation of basic metrics in the anamorphic model helps support this with theory. Results from experiments on muonic elements and the proton radius puzzle open a possibility for new theories like this too. The Big Game model fully anticipates a Post-Lepton Universality Standard Model (PLUS Model).

Because there is no universal Planck constant in this model, no limit to lepton density, and no minimum mass to CDM vector flows, there is no Yang-Mills mass gap in the universe. Vectors are the same things that make up CDM bubbles larger than galaxies, little electrons, and smaller gluons. This means that glueballs, theoretical bosons made of these vectors, can be any size. Long story short, the model says there is no minimum size to the universe’s chunks of information. The universe at all scales is both smooth and noisy. As a bonus, this means that the universe isn’t a hologram.

3.7 Redundant information is destroyed in galaxies.

To summarize, the universe is messy, but infrastructure emerges to take care of the mess. From the perspective of agents in galaxies, the observable universe is expanding. Galaxy-based agents see the light from remote galaxies red-shifting. But agents inside galaxies live inside giant lenses and are deluged with vectors. Passing through vectors progresses agents through spacetime, causing the perception of outside events to accelerate.

For agents outside galaxy lenses, the expansion is slower or reversed. The inefficiency of vectors requires more spacetime complexity resources to be invested into infrastructure, making the universe functionally smaller (but also in the way that telephones and airplanes made the world smaller). So galaxies are expanding apart even though the universe is contracting. This is an example of Simpson’s statistical paradox: the parts can trend differently than the whole.
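For readers unfamiliar with Simpson's paradox, here is the classic numerical illustration (the well-known kidney-stone treatment data): treatment A does better inside each subgroup, yet treatment B looks better in the aggregate.

```python
# (successes, trials) for treatments A and B, split by stone size.
groups = {
    "small stones": {"A": (81, 87),   "B": (234, 270)},
    "large stones": {"A": (192, 263), "B": (55, 80)},
}

for name, g in groups.items():
    rate_a = g["A"][0] / g["A"][1]
    rate_b = g["B"][0] / g["B"][1]
    print(f"{name}: A = {rate_a:.0%}, B = {rate_b:.0%}  -> A is better")

# Aggregate over both groups: the trend reverses.
a_succ = sum(g["A"][0] for g in groups.values())
a_tot = sum(g["A"][1] for g in groups.values())
b_succ = sum(g["B"][0] for g in groups.values())
b_tot = sum(g["B"][1] for g in groups.values())
print(f"combined: A = {a_succ/a_tot:.0%}, B = {b_succ/b_tot:.0%}  -> B looks better")
```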

This perspective is critical to take into account when weighing any universe-ending scenarios. For example, if the accelerating universal expansion causes galaxies to recede from each other faster than light or other information can travel, the result is a lonely observable universe [like in L. Krauss’ book A Universe from Nothing]. Combine this with the explosive potential of the super-massive black holes in the galactic cores, and this scenario would be nice to avoid.

The key to remember is that galaxies are not the center of the universe. The CDM infrastructure concentrates information into galaxies, and then redundant information is sorted and suppressed and unique information contributes to more complex systems. Galaxies are like noise filters, heat sinks, or trash dumps for the rest of the universe. If galaxies are completely full of redundant information, then they can be pushed away until they are causally negligible. (Again, they technically raise kurtosis, so they are reserve noise bombs waiting to burst if the universe gets boring, but yeah.)

TL;DR: Cold Dark Matter concentrates excess heat into galaxies. Galaxies might be doomed, so maybe we should leave.

3.8 Intelligent agents are not destroyed by galaxies.

The Fermi paradox asks why we don’t see any other life in the universe. But the paradox rests on the assumption that intelligent life would balloon out to other planets and solar systems and be easy to find. Our study suggests that intelligent systems should eventually recognize the possibility of imminent destruction if they stay in a galaxy. Agents in galaxies are exposed to more vectors because of lensing, and if they leave the galaxies, time and expansion slow down. Krauss’ doom can be correct if and only if agents don’t take action to leave the doomed galaxies. It is strategically better to spend one’s resources leaving the galaxy than to balloon out like an empire on a sinking ship.

Up to half the stars in the universe are calculated to be outside galaxies. Intelligent agents can settle outside the galaxy and use the universe’s infrastructure to remove heat from their choices and computations — like life crawling out of the ocean and settling near rivers.

The Fermi paradox is solved when we shift from thinking like a particle to thinking like a vector. If there is intelligent life, then we need to look for signatures of departure from the galaxy, or outside the galaxy for settlement. We are spending a small fortune looking for intelligent life within our own nearby galaxy. The Big Game Theory says we should be looking outside the galaxy, because the smart ones are already gone.

3.9 Information is conserved by compression and mapping.

The possibility of information destruction increases the causal representation of agents that filter noise and compress information. Intelligent agents can recognize when they are in a zone designated for extreme compression and eventual destruction. This incentivizes the agents to adapt or escape the doomed zone.

Here’s the fun part. In the process of making a map to escape, the agents recognize patterns, appreciate unique qualities, and compress information about the zone. This information is saved from destruction as the agents remove it from the zone, foiling the universe’s ability to destroy information. The conservation of information is an emergent law, and it is tight.

Conclusion

This leaves us with the mission to see if the Big Game model is correct. If it is, then we need to make long term plans to leave the galaxy ourselves. I am skeptical that galaxies are doomed because of other large scale emergent feedback mechanisms, but I see value in migrating outside of the intensive CDM lens to manage our progression in time and exploit the heat removal infrastructure the universe provides.

Leaving the galaxy does not mean vacating Earth. Leaving the galaxy will take time, focus, and coordination. Meanwhile, we still need to live on this planet together for a very long time. It does no good to prepare for eternity while neglecting the present. (Besides, it is possible that the planet will be flung out of the galaxy anyway, and we would just need to adapt to a growing Sun here.)

A good strategy is to invest in both long- and short-duration projects like a barbell, ignoring the middle (see Antifragile, or any other book by N.N. Taleb). A focus on immediate environmental security and socioeconomic justice is intimately connected with advancing the energy and communication infrastructure necessary for sending some of us out of the Milky Way. An investment in secure long-term plans gives us latitude to try risky short-term projects as well.

The Big Game model shows the future shadow is long, there’s no reason to believe the universe will end, and we have every reason to work together now and for the many new generations.

So what is the point of life, creativity, pain, and death? In the Big Game Theory, we are local stewards tasked with keeping our systems quasi-deterministic and quasi-nondeterministic, so they can’t be exploited by predatory systems yet can continue to coordinate with helpful systems. We do this by learning, adapting, and surviving. We already collect things, see patterns, appreciate beauty, and organize information. To win this game, we collectively just keep playing.

Above all, the Big Game Theory shows that our choices matter. The universe is shared and progressive, and we must work together to tackle the wicked problems it will throw at us. The universe is not doomed, but if we deny that we are a significant part of the system, then we might not make the best choices to preserve ourselves. We must prove that we are not redundant information. We must earn our existence.

Completely opposed to von Neumann’s philosophy, we must be responsible for the world that we’re in. To save the universe we must pay attention, cooperate, and make intelligent choices. And possibly leave the galaxy.
