Conway’s Game of Life is a popular and very simple algorithm for encoding the evolution of a system, leading to complex but stable/quasi-stable patterns (MrJavaFrank / YouTube)

It From Bit: Is The Universe A Cellular Automaton?

From a few simple rules to complex existence, could it be that the Universe isn’t ruled by physics, but by binary?

Paul Halpern
7 min read · Oct 3, 2017


“It’s always seemed like a big mystery how nature, seemingly so effortlessly, manages to produce so much that seems to us so complex. Well, I think we found its secret. It’s just sampling what’s out there in the computational universe.” -Stephen Wolfram

In the early 1970s, computer programmers around the world were entranced by the strange happenings taking place on their screens. Right before their eyes, they observed digital creatures emerging from the void, crawling around, propagating, gobbling each other up, and vanishing into nothingness. It was all part of John Conway’s popular Game of Life: a simple algorithm that produced remarkably complex patterns.

The Game of Life starts with a checkerboard grid in which each square, or “cell,” is populated by a binary piece of digital information — a bit — set to either zero or one. For the purposes of the game, zero represents non-existent or “dead” (an empty cell) and one represents “alive” (an occupied cell). After the initial values are seeded, the screen is updated repeatedly — with each iteration depending on the previous one according to rules based upon each cell’s immediate neighborhood.

If a living cell has fewer than two neighbors, it dies; if it has four or more neighbors, it dies; if it has two or three, it remains alive. If a non-living cell has exactly three neighbors, it comes to life. These are the basic rules of Conway’s Game of Life (David Hua, Yonatan Biel, and Martin Pelikan / STARS-2012 program)

Whenever an empty cell (with a zero) is surrounded by precisely three occupied cells (with ones), a “birth” takes place upon the next iteration, and it becomes occupied. If a living cell has fewer than two other living cells in its vicinity, it “dies” of “underpopulation.” More than three, and it “dies” of “overpopulation.” Only if a living cell has two or three others in its neighborhood does it survive until the next round. The result is an ever-evolving series of patterns, resembling living creatures. Some of these configurations, such as the so-called “glider gun” (developed by Bill Gosper and sometimes known as Gosper’s glider gun), seem to produce recurrent streams of “organisms,” spawned like tadpoles from digital “parents.”
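
For readers who’d like to watch these rules play out on their own screens, here is a minimal sketch of one update step in Python (using NumPy, and assuming wrap-around edges as a simplification; Conway’s original game is played on an unbounded grid):

```python
import numpy as np

def life_step(grid):
    """One iteration of the Game of Life on a 2D array of 0s (dead) and 1s (alive)."""
    # Count each cell's eight neighbors; edges wrap around, torus-style.
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Birth: an empty cell with exactly three neighbors becomes occupied.
    # Survival: a living cell with two or three neighbors stays alive.
    # Everything else dies of under- or overpopulation, or stays empty.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

# Seed a small board with a "glider" and watch it crawl one diagonal step
# every four iterations.
grid = np.zeros((10, 10), dtype=int)
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1
for _ in range(4):
    grid = life_step(grid)
```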

Conway’s brilliant game constitutes a subclass of a more general mechanism called a “cellular automaton,” developed in the late 1940s and early 1950s by John von Neumann and Stanislaw Ulam as an outgrowth of the computer revolution. The necessary ingredients for such a system include a network (one-dimensional, two-dimensional, or even more intricate), a set of values (usually binary), and a set of simple, deterministic rules for iterations that most often depend on the values in each neighborhood of cells. While the original cellular automata had irreversible rules, some are fully reversible — acting identically forward and backward in time. The reversible versions thus bear some similarity to Newtonian mechanics, albeit with discrete time-steps, rather than continuous behavior governed by differential equations.
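
To make that general recipe concrete, and to show how the reversible variant works, here is a small sketch of a one-dimensional, binary cellular automaton together with a second-order update, a trick associated with Ed Fredkin (who appears below), that runs identically forward and backward. The rule number and grid size are arbitrary choices for illustration, not anything taken from von Neumann’s or Ulam’s original constructions:

```python
import numpy as np

def elementary_step(cells, rule=110):
    """One step of a one-dimensional, binary cellular automaton, using
    Wolfram's rule-numbering convention and wrap-around edges."""
    left, right = np.roll(cells, 1), np.roll(cells, -1)
    neighborhood = 4 * left + 2 * cells + right   # each cell's 3-bit neighborhood
    return (rule >> neighborhood) & 1             # look the new value up in the rule

def reversible_step(current, previous, rule=110):
    """Second-order update: XOR-ing with the previous generation makes the
    dynamics exactly reversible, like a discrete analogue of Newtonian mechanics."""
    return elementary_step(current, rule) ^ previous, current

# Run forward five steps from a single occupied cell...
state = np.zeros(32, dtype=int)
state[16] = 1
previous = np.zeros_like(state)
for _ in range(5):
    state, previous = reversible_step(state, previous)

# ...then run backward by swapping the roles of the two generations.
state, previous = previous, state
for _ in range(5):
    state, previous = reversible_step(state, previous)
# "previous" is now the original single-cell seed again.
```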

Ed Fredkin joined the contract research firm Bolt Beranek & Newman (BBN) in the early 1960s, where he wrote a PDP-1 assembler (FRAP) and participated in early projects using the machine. He went on to become a major contributor in the field of artificial intelligence (Computer History Museum, ca. 1960).

Computers execute complex functions based on rudimentary operations involving bits. Could the universe itself, at its deepest level, operate on the basis of similarly discrete digital rules? Such a scenario was first put forward in the 1960s by the innovative thinker Ed Fredkin, and was later dubbed “It from Bit” by the accomplished physicist John Wheeler. It has remained the focus of much debate surrounding the significance of cellular automata.

One key difference between cellular automata and the universe is that the former are discontinuous in both space and time. Spatially, that means they are pixelated, like the small-scale appearance of the screen of a television, computer monitor, or smartphone.

When viewed extremely close-up, a television, smartphone, or computer screen appears highly pixelated. At a fine enough level, so may the Universe (ln736637 / Pixabay).

The road that led Wheeler to “It from Bit” started with a joke that he made to his student Jacob Bekenstein in the early 1970s. Alluding to the law of non-decreasing entropy (a measure of the lack of usable energy as bodies of disparate temperatures approach equilibrium), Wheeler used to jest that if he placed a hot cup of tea next to an iced cup and allowed them to even out their temperatures, he’d be committing a crime by raising the amount of entropy in the universe. Once temperatures even out (a state of high entropy), no thermodynamic work can be done without an external source (of low entropy). Ultimately, for the universe as a whole, that would lead to an inert state called “heat death.” Hastening that day was the offense Wheeler joked that he had committed.

The simulated decay of a black hole results not only in the emission of radiation, but also in the decay of the central mass that keeps most orbiting objects stable. In the far future, all the stars and gravitationally bound objects will be ejected from the galaxy, the black holes will decay, and there will be no matter or energy left capable of doing any work. This is the heat death of the Universe (EU’s Communicate Science).

Yet, as he remarked to Bekenstein, if he tossed both cups into a black hole, the gravitationally collapsed core of a massive star, no trace of his “crime” would remain. There’d be no visible sign of the entropy increase. In that case, Wheeler wondered, what would happen to the extra entropy?

Bekenstein brilliantly developed a solution that connected the surface area of a black hole’s event horizon (the spherical surface inside which light cannot escape, according to the classical picture) with a measure of the black hole’s gravitational entropy. Any material falling into the black hole would deposit entropy via a corresponding expansion of the event horizon’s girth. Consequently, Wheeler’s dropping of the cups into the black hole would enlarge its invisible frontier and thereby lead to a net gain in entropy after all.

Encoded on the surface of the black hole can be bits of information, proportional to the event horizon’s surface area (T.B. Bakker / Dr. J.P. van der Schaar, Universiteit van Amsterdam)

Via the work of Claude Shannon, Bekenstein and Wheeler learned about another form of entropy, called information entropy, which quantified the content of strings of bits. That connection led them to an insight: if the black hole event horizon’s surface area is pixelated into squares, each with sides the length of the Planck length (about 6 × 10⁻³⁴ inches, the lower limit of measurement according to quantum theory), then its information content could be depicted as a single bit (0 or 1) in each square. Therefore, as the event horizon grew, its array of bits would increase as well, leading to greater and greater information entropy.
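
For readers who want the modern bookkeeping, the Bekenstein-Hawking entropy makes this pixel-counting picture quantitative (the “one bit per Planck-sized square” above is a rounded version of it; the exact count works out to one bit per 4 ln 2 Planck areas):

$$S_{BH} = \frac{k_B c^3 A}{4 G \hbar} = \frac{k_B A}{4 \ell_P^2}, \qquad \ell_P = \sqrt{\frac{\hbar G}{c^3}} \approx 1.6 \times 10^{-35}\ \mathrm{m},$$

and dividing by $k_B \ln 2$ converts thermodynamic entropy into bits:

$$N_{\mathrm{bits}} = \frac{S_{BH}}{k_B \ln 2} = \frac{A}{4 \ell_P^2 \ln 2},$$

a number that grows in direct proportion to the horizon’s area A, just as the argument above requires.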

The black hole at the center of the Milky Way, along with the actual, physical size of the Event Horizon pictured in white. The visual extent of darkness will appear to be roughly 5/2 times as large as the event horizon itself. The entropy of the black hole is proportional to the Event Horizon’s size, irrespective of the apparent horizon (Ute Kraus, Physics education group Kraus, Universität Hildesheim; background: Axel Mellinger).

Fredkin’s work, developed independently, involved the hypothesis that such digital information could represent reality on the particle level. The stuff of atoms — electrons, protons, and neutrons (and the quarks and gluons that constitute the latter two) — would in reality be composites of bits, which organize themselves and interact with each other as a universal cellular automaton. Matter and energy would be illusions; only digital information would be real.

For a researcher proposing such a radical new concept in science, Fredkin’s background was unusual. Without even an undergraduate degree, but with considerable computer acumen, he was appointed to head MIT’s computer science laboratory and teach a range of classes. He developed his own course in “digital physics” at the intersection of the physical and computational sciences. Few knew about his hypothesis, until science writer Robert Wright interviewed him in the late 1980s.

In Wright’s article in the April 1988 issue of The Atlantic, “Did the Universe Just Happen?”, Fredkin revealed his philosophy of nature:

I don’t believe that there are objects like electrons and photons, and things which are themselves and nothing else. What I believe is that there’s an information process, and the bits, when they’re in certain configurations, behave like the thing we call the electron, or the hydrogen atom, or whatever.

IBM’s Four Qubit Square Circuit, a pioneering advance in computation, could lead to computers powerful enough to simulate a fully quantum Universe (IBM research)

As Richard Feynman pointed out, Fredkin’s notion of modeling nature as a reversible cellular automaton had a profound limitation: it failed to account for quantum processes. Quantum mechanics (according to most mainstream interpretations) is not deterministic at the point of measurement. Rather, there are often many possible outcomes, which transpire according to probabilistic transitions rather than predictable rules. Feynman showed that a quantum system could not be fully simulated using a classical computer and classical algorithms; rather, one needed what came to be known as a quantum computer. Instead of bits, it would be based on qubits, which would exist in a superposition of states (for example, a superposition of spin up and spin down) until a measurement was taken at the end of the computation.
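
The contrast can be made concrete with a toy example. Below is a short classical simulation (in Python; not a real quantum computation, and the 10,000-shot count is an arbitrary choice for illustration) of measuring a single qubit prepared in an equal superposition: each individual outcome is unpredictable, and only the statistics obey a rule.

```python
import numpy as np

rng = np.random.default_rng()

# A qubit's state is a pair of complex amplitudes (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1 -- here, an equal superposition of "up" and "down".
alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)

def measure(alpha, beta, shots=10_000):
    """Simulate repeated measurements: each shot returns 0 ("up") with
    probability |alpha|^2 and 1 ("down") with probability |beta|^2 (the Born rule)."""
    return rng.choice([0, 1], size=shots, p=[abs(alpha) ** 2, abs(beta) ** 2])

outcomes = measure(alpha, beta)
print(outcomes.mean())   # close to 0.5, but no single outcome was predetermined
```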

But if the universe itself is a digital system involving superpositions of zeroes and ones, who would take the measurement that triggered “collapse” into definitive values? Wheeler knew that it couldn’t be someone outside of the universe, so it would have to be activated from within by internal observers. (He considered and later rejected the idea of many worlds, his student Hugh Everett’s hypothesis, which would not require collapse.)

The idea of a self-excited circuit was first presented by Wheeler: as an observer views the Universe, the act of observation causes reality, in a certain sense, to create itself. This was an incredible implication of the ‘It from Bit’ idea (Christopher Langan)

Wheeler pondered, therefore, the concept of a “self-excited” circuit, in which astronomical measurements made in the present, of light emitted in the distant past, force the digital information in the early universe to assume particular values. He called his model “the participatory universe.”

Today, as laboratories around the world are striving to perfect quantum computers, it will be interesting to see how the notion of the universe as a cellular automaton (albeit one governed by quantum rules) develops. The idea that information is fundamental — though still very far from adequately explaining the laws of nature — remains an intriguing hypothesis worthy of continued thought.

Paul Halpern is the author of fifteen popular science books, including The Quantum Labyrinth: How Richard Feynman and John Wheeler Revolutionized Time and Reality.

Starts With A Bang is now on Forbes, and republished on Medium thanks to our Patreon supporters. Ethan has authored two books, Beyond The Galaxy, and Treknology: The Science of Star Trek from Tricorders to Warp Drive.


Paul Halpern
Starts With A Bang!

Physicist and science writer. Author of Synchronicity: The Epic Quest to Understand the Quantum Nature of Cause and Effect