Why You Are Probably Not Living In A Simulation

Higgs event at the LHC (image by Lucas Taylor)

It has recently become a popular idea among public figures like Neil deGrasse Tyson and Elon Musk that we are likely living in a simulation. The rationale is that computing power will continue to grow exponentially, as it has so far, until we reach the point of being able to run a full simulation of the entire universe. From there on, this becomes so commonplace that the Playstation 928376 can do it and everyone will be doing it. Since there is only one reality, the one you are experiencing is most likely one of the millions of simulations and not the real thing.

For one, this argument rests on the flawed assumption that the current exponential growth of computing power will continue forever. But let's explore what running such a simulation would actually mean in practice.

Problem Magnitude

Since we are able to observe and manipulate the fundamental particles around us, a full simulation would need to track and store the state of all 10⁸⁰ of them in the known universe [3]. Since we're currently bound to earth and can't get close enough to other particles to observe them, you could argue that we only need to simulate the 10⁵¹ particles of earth [4], with the rest of the universe handled by some kind of simpler, incomplete simulation.

Now, for the simulation to seem continuous, we need to update the state of each particle faster than the people inside the simulation can make observations. Currently we can measure time at a resolution of up to 10¹⁸ ticks per second [2], which we can use as an extremely conservative lower limit for the simulation's update rate. The theoretical limit, set by the Planck time, is around 10⁴⁴ ticks per second, but time measurement is not a focus of major investment and R&D, so progress towards that limit may be sporadic.

So to simulate the earth we need to update the state of 10⁵¹ particles 10¹⁸ times a second, i.e. we need a computer that can perform at least 10⁶⁹ operations per second and has at least 10⁵¹ bits of memory. These are conservative estimates giving a hard lower limit, as every particle would certainly need more than one bit of memory and more than one operation per state update.
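
As a sanity check, the arithmetic behind these lower bounds can be written out in a few lines of Python, with the deliberately optimistic assumption of one bit and one operation per particle per update:

    PARTICLES_EARTH = 1e51     # particles making up the earth [4]
    UPDATES_PER_SECOND = 1e18  # current limit of time measurement, used as the update-rate floor
    BITS_PER_PARTICLE = 1      # deliberately (unrealistically) optimistic
    OPS_PER_UPDATE = 1         # deliberately (unrealistically) optimistic

    ops_per_second = PARTICLES_EARTH * UPDATES_PER_SECOND * OPS_PER_UPDATE
    memory_bits = PARTICLES_EARTH * BITS_PER_PARTICLE

    print(f"required speed : {ops_per_second:.0e} operations per second")  # 1e+69
    print(f"required memory: {memory_bits:.0e} bits")                      # 1e+51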

Silicon Fabrication

The critical path in computational efficiency is the silicon fabrication process, as computation speed is primarily limited by heat generation and dissipation. The bigger the area of a transistor, the more electrons you need to push through it for each operation and the more heat is generated. When you shrink the process, you can decrease the voltage and increase the speed or the transistor count. Similarly, the smaller the transistors, the more memory we can fit on a chip.

The fabrication process is measured in line width, i.e. how thin the wires are that we can create in the silicon. Currently the industry is at 10 nanometers, which corresponds to about 50 silicon atoms. The hard physical limit will be around 0.2 nanometers, and assuming we can maintain the current pace we'll hit it around 2042. Even that is optimistic, as progress will likely become increasingly hard the closer we get to the limit.
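
As a rough sketch of that extrapolation, here is the arithmetic with an assumed starting year and halving pace chosen to match the 2042 estimate; both are my assumptions, not figures from the text:

    import math

    # Rough extrapolation of the fabrication limit. The start year and the
    # halving period are assumptions picked to match the ~2042 estimate above.
    current_width_nm = 10.0     # where the industry is today
    limit_width_nm = 0.2        # roughly the size of a silicon atom
    halving_period_years = 4.4  # assumed pace of line-width halving
    start_year = 2017           # assumed "current" year

    halvings_needed = math.log2(current_width_nm / limit_width_nm)
    year_of_limit = start_year + halvings_needed * halving_period_years

    print(f"halvings needed: {halvings_needed:.1f}")     # ~5.6 halvings
    print(f"limit reached around {year_of_limit:.0f}")   # ~2042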

Silicon fabrication process (m, logarithmic)

It will still be possible to increase computational power and memory capacity after that, but only by making things bigger and more parallel. In other words, twice as fast computation will use twice as much energy, and twice as much memory will need twice as much space.

Computational Efficiency

For computational power we are interested in computational efficiency, i.e. how many operations we get per unit of power. Like the fabrication process, this has been improving exponentially and would peak at about 1.7×10¹¹ FLOPS/W in 2042.

Computation efficiency progress (FLOPS / W, logarithmic)

This means that, using current technologies, a computer capable of 10⁶⁹ FLOPS would need 5.8×10⁵⁷ W of power. To put this into perspective, the total solar input from the Sun to earth is only about 10¹⁷ W [5]. In other words, even if we could somehow generate this much energy, the resulting heat would equal the sunlight of more than 10⁴⁰ suns and would instantly end all life on earth. It is such an unbelievable amount of power that it could produce the mass-energy of a new Milky Way galaxy every ten seconds.
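
The power figure follows directly from dividing the required operation rate by the peak efficiency; a quick sketch of that arithmetic, using the ~10¹⁷ W solar input from [5]:

    required_flops = 1e69        # from the problem-magnitude estimate above
    peak_efficiency = 1.7e11     # FLOPS per watt, the extrapolated 2042 peak
    solar_input_to_earth = 1e17  # watts of sunlight hitting the earth [5]

    required_power = required_flops / peak_efficiency
    print(f"required power  : {required_power:.1e} W")    # ~5.9e+57 W (the text rounds to 5.8e57)
    print(f"vs. solar input : {required_power / solar_input_to_earth:.0e}x")  # ~6e+40, i.e. more than 10^40 suns' worth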

Memory Density

Memory is essentially made of transistors, and their density has also been scaling exponentially with the fabrication process. The peak density of transistors in 2042 would be about 3.4×10¹⁵ transistors/m².

Memory capacity (bits / m², logarithmic)

Storing the state of 10⁵¹ particles would therefore require 2.9×10³⁵ m² of memory circuits, which is more than 10²⁰ times the surface area of the earth.
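
The division behind that area figure, using earth's surface area of about 5.1×10¹⁴ m² (a standard figure, not from the text) as a quick check:

    state_bits = 1e51         # one bit per particle, the optimistic floor
    peak_density = 3.4e15     # transistors (bits) per m^2, the extrapolated 2042 peak
    earth_surface_m2 = 5.1e14 # approximate surface area of the earth

    memory_area = state_bits / peak_density
    print(f"memory area    : {memory_area:.1e} m^2")                 # ~2.9e+35 m^2
    print(f"earth surfaces : {memory_area / earth_surface_m2:.0e}")  # ~6e+20, more than 10^20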

If we assume that 3D memory becomes a reality, we could optimally increase density by several orders of magnitude (i.e. reach the same density in the third dimension) and store 2.0×10²³ bits/m³. Even then, storing the simulation state would still require the volume of over 4 million earths. Intuitively, the particles themselves are the simplest representation, the "lowest energy configuration", of that information.
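
And the same check for hypothetical 3D memory, reading "the same density in the third dimension" as extending the 2D density's linear resolution into depth, and using a rough 1.08×10²¹ m³ for earth's volume (my figure, not from the text):

    state_bits = 1e51                    # one bit per particle, the optimistic floor
    peak_density_2d = 3.4e15             # bits per m^2, the 2042 peak from above
    density_3d = peak_density_2d ** 1.5  # same linear density in the third dimension
    earth_volume_m3 = 1.08e21            # approximate volume of the earth

    memory_volume = state_bits / density_3d
    print(f"3D density    : {density_3d:.1e} bits/m^3")              # ~2.0e+23
    print(f"memory volume : {memory_volume:.0e} m^3")                # ~5e+27 m^3
    print(f"earths needed : {memory_volume / earth_volume_m3:.1e}")  # ~4.7e+06, i.e. millions of earths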

Quantum Computing

It would be tempting to say that breakthroughs in quantum computing will change this, but there is no evidence of that at the moment. Quantum computing offers new solutions to certain mathematical problems, not a general speedup for arbitrary computation; it does not help you evaluate the conditions and rules of the simulation. And even if it were somehow possible to simulate all 10⁵¹ particles instantaneously, that would not solve the problem of storing the state of the simulation: in a quantum computer, reading a state also destroys it, so you would still need traditional memory to maintain the state.

Cost Of Computation

Another interesting angle to this is that we are getting close to the point of "cheap enough computation". A decade ago, running an internet service typically meant purchasing and operating your own servers, with upfront investments measured in millions. Nowadays a dedicated multi-instance setup capable of running a small production service on AWS or Azure can be set up in minutes, with costs in the thousands annually. For more and more companies, shaving another zero off that cost is dwarfed by development costs and is no longer an enabling factor. There will always be a need for faster and more efficient hardware, but it will not be the critical path that justifies billion-dollar investments in new fabrication plants. If we simply continue on the current path for another decade or so, we will reach the point where the cost of hardware is insignificant and only electricity and development costs matter.

Conclusions

I don't know if we will ever reach the point of running a simulation of reality that is indistinguishable from reality, but it seems impossible to do so on the current technological path. Whether some technical revolution will make it happen, or whether these are hard physical limits that cannot be overridden, is up for speculation. But you can't take past progress that rests on certain assumptions and limits and expect it to continue forever while ignoring those same limits. It is certainly not justified to claim that a simulated reality is the probable outcome.

Any thoughts, comments, complaints? Leave a reply or write an email to [matias at rational dot zone]; I want to hear your opinion.

References

  1. Transistor counts: https://en.wikipedia.org/wiki/Transistor_count
  2. Units of time: https://en.wikipedia.org/wiki/Unit_of_time#cite_note-3
  3. Particles in universe: http://www.physicsoftheuniverse.com/numbers.html
  4. Particles in earth: http://www.fnal.gov/pub/science/inquiring/questions/atoms.html
  5. Orders of energy magnitudes: https://en.wikipedia.org/wiki/Orders_of_magnitude_(energy)