A New Kind of Physics

The unifying theory of the Wolfram Model

Nicholas Teague
From the Diaries of John Henry
Oct 7, 2020


There have been a few paradigm shifts of note in modern physics. The principles of relativity bent the constancy of space and time at extreme scales, then quantum mechanics broke point-wise precision at the nanoscale. The library of atoms and constituent particles was eventually revealed as an abstraction for aggregations of the subatomic, whose newest member, the Higgs boson, required near-light-speed particle collisions for evidence.

Marriages between these domains have long been sought by researchers, as macro-scale relativity and nano-scale quantum mechanics have trouble reconciling the nature of gravity, one of the four fundamental forces. One channel of investigation has been the invention of new kinds of mathematics: higher dimensions in which particles manifest from the vibrations of strings and membranes, and symmetries between dimensions demonstrated through the AdS/CFT correspondence, translations that may yet be shown to form a kind of Penrose triangle, with the direction of travel determining the destination.

In general, a sort of constancy amongst these different frameworks is their foundation in mathematics: equations, distributions, and properties. Although in some cases measurement errors may interfere, in general if we can successfully isolate a system from surrounding interactions and estimate its current state with precision, we can at least know a distribution of potential evolutions that may be realized as it progresses. In many cases we may even retain tractability by evaluating a system in a coarse-grained representation that abstracts away many of the finer details, predicting macro-state distributions by way of renormalization.

A New Kind of Science

It was in this context that Stephen Wolfram released his 2002 magnum opus "A New Kind of Science". At a high level, the premise of this work was the exploration of unexpected properties of cellular automata: simple rules that evolve rows of pixel states based on the states of adjacent cells in the preceding row, conventionally visualized as a 2D grid in which each successive row is a successive time step. One of the findings was that even the simplest kinds of elementary rules, i.e. single binary pixel states derived as a function of three adjacent cells in the preceding row, may evolve collective pixel states exhibiting many different kinds of macro patterns, such as uniformity, repetition, fractals, randomness, or complexity, depending on the rule applied and the seeded input row used as a starting point for derivations. The higher tiers of complexity were of particular note, for it was demonstrated that a computational irreducibility comes into play: predictions of pattern progressions could not be abbreviated by mathematical equations, and it was only through fine-grained derivations of every intermediate time step that later states could be reached.

This finding was of added interest considering that at least one of the rules was proven capable of acting as a universal Turing machine through its state progressions. In fact the book extrapolates from this finding to propose that all systems, natural and artificial alike, can be considered through their evolution to be performing a kind of computation that could fully simulate a Turing machine, and although some of these systems may be vastly more efficient in their computations, there may still be some consistency between what is taking place in, say, weather patterns found in nature and the workings of a human brain, to give two examples, a principle the book refers to as computational equivalence.

Rule 30 elementary cellular automata (image via wikipedia)
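
To make the "elementary rule" idea concrete, here is a minimal sketch in Python (my own illustration, not code from the book) that evolves Rule 30 from a single seeded cell. The binary digits of the rule number supply the output for each of the eight possible three-cell neighborhoods.

```python
# Minimal Rule 30 elementary cellular automaton sketch (illustrative only).
# Each new row's cell is a function of the three cells above it; the binary
# expansion of the rule number (here 30) supplies the lookup table.

def evolve(rule=30, width=63, steps=31):
    table = [(rule >> i) & 1 for i in range(8)]  # outputs for neighborhoods 0..7
    row = [0] * width
    row[width // 2] = 1  # single seeded cell in the center
    rows = [row]
    for _ in range(steps):
        prev = rows[-1]
        new = [
            table[(prev[(i - 1) % width] << 2) | (prev[i] << 1) | prev[(i + 1) % width]]
            for i in range(width)
        ]
        rows.append(new)
    return rows

if __name__ == "__main__":
    for r in evolve():
        print("".join("#" if c else " " for c in r))
```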

One of the key chapters of A New Kind of Science sought to bridge the domains of cellular automata and fundamental physics, or more particularly offered a hypothesis that underneath the known laws of physics there may be a system at play analogous to the progression of a cellular automaton, perhaps one that could reopen the quantum debates around hidden variable theories. To be clear, this underlying system was not expected to be as straightforward as the elementary cellular automata discussed elsewhere in the book, for their rigid grid-like structure does not seem a likely fit for what we know about the natural laws. The book hypothesized in their place that a possible candidate could be a more fluid structure of graph nodes whose geography is determined by (potentially temporary) links between nodes, as opposed to the rigid adjacency structure of cellular automata. We'll refer to these graph node networks as hypergraphs. Of course A New Kind of Science only offered this as a broad hypothesis and introduction, coupled with suggestions on how further research could be conducted by way of a search through the space of possible rules for those found to generate patterns consistent with the known laws of physics, a search requiring massive simulations, as computational irreducibility dictates. These proposed investigations by way of experiments in rule simulations are the "new kind of science" alluded to in the title of the book.

Life is short, and time is a finite resource, as is the attention of a researcher. It was thus that after publishing this book the author, Stephen Wolfram, returned his primary focus to his first responsibility: running the software enterprise he had founded in 1987, Wolfram Research, which has been the home of many flagship products bridging the domains of computer languages, mathematics, natural language, and more, such as Mathematica, Wolfram Language, and Wolfram Alpha, as well as the foundation of the voice assistant now known as Siri. In fact, if it weren't for the encouragement of two young researchers attending a Wolfram Research summer school, now collaborators Jonathan Gorard and Max Piskunov, it might have taken another generation or two for serious inquiries into hypergraph physics. Fortunately, the 18 years between A New Kind of Science and the current rapid progress on the Wolfram Model were not a total loss, for many of the software innovations that make efficient simulations and investigations into hypergraphs possible were incorporated into Mathematica in the time since. And the pace of progress in the investigation of hypergraph evolutions as an underlying foundation for modern physics over the last two years of serious inquiry has been considerable, culminating now in the publication of an authoritative book, "A Project to Find the Fundamental Theory of Physics".

A Project to Find the Fundamental Theory of Physics

In an attempt to convey the foundations of the Wolfram Model and its relations to physics, I'll offer here some highlights based on my reading of the text. I'll note first that for more comprehensive and concise discussions there have been several introductory write-ups offered by the Wolfram Physics Project community, including some authored by Stephen Wolfram himself; please consider those channels more authoritative than this essay.

As a restatement of the terminology, a hypergraph comprises a set of geographically neutral point nodes that are related to each other by the links between them. In other words, the "distance" between two nodes is a function of the number of link traversals required to connect them. The links between nodes are themselves of interest: links may connect two nodes, three nodes, or more, and in some cases a link may even run from a single node to itself. An important point is that the links between nodes need not be static. In fact, the progression of a hypergraph through time is realized in this theory by applying one or more update rules as a function of existing node links. As an arbitrary example, if a hypergraph were set up with three types of constituent nodes we'll refer to as A, B, C, one possible update rule could be that wherever a link is found between node types A and B, that link is replaced in an update step with a new set of nodes and associated links as A-C-B. This is just an arbitrary example. The point is that these kinds of node link update rules are analogous to the pixel proximity rules applied in the progression of elementary cellular automata patterns, and thus through their progressions it becomes possible to realize hypergraph node link patterns of increasing complexity and sophistication.

Demonstration of hypergraph progressions through node link update rule applications.
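
To make the A-B to A-C-B example concrete, here is a minimal sketch in Python (my own illustration, not the project's implementation) that represents a hypergraph as a list of node tuples and repeatedly applies that single replacement rule, inserting a fresh intermediate node into every matching binary link at each update step.

```python
# Minimal sketch of a hypergraph update rule, illustrative only.
# The hypergraph is a list of hyperedges (tuples of node ids); the rule
# replaces any two-node edge (a, b) with edges (a, c) and (c, b), where c
# is a freshly created node, mirroring the A-B -> A-C-B example above.
from itertools import count

fresh = count(100)  # source of new node ids

def apply_rule(edges):
    updated = []
    for edge in edges:
        if len(edge) == 2:            # rule matches any binary edge
            a, b = edge
            c = next(fresh)           # create the new intermediate node
            updated.extend([(a, c), (c, b)])
        else:                         # edges the rule doesn't match pass through
            updated.append(edge)
    return updated

state = [(1, 2), (2, 3)]              # seed hypergraph
for step in range(3):
    state = apply_rule(state)
    print(f"step {step + 1}: {state}")
```

Applying the rule to every matching link in the same step is itself a modeling choice; which links get updated, and in what order, is exactly the kind of freedom that the multiway causal graph discussed below is meant to account for.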

Another way to think about this is that for our library of particles, particular node and link patterns could coincide with particular subatomic particles, and further, motions of particles in physical space could be the result of node and link patterns progressing through rule applications to changed link-traversal distances from other node link patterns. If you've never seen a demonstration of Conway's "Game of Life" cellular automaton, look it up; it might help you better intuit what it means for these types of patterns to travel through a state space as a result of rule applications.
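
For readers who would rather not leave the page, here is a minimal Game of Life sketch in Python (my own, purely illustrative) in which a glider pattern translates diagonally across the grid purely through repeated local rule applications, the same sense in which a hypergraph pattern could "move" under update rules.

```python
# Minimal Conway's Game of Life sketch showing a glider "particle" that
# travels across the grid purely through local rule applications.
def step(cells, size=12):
    nbrs = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
    candidates = cells | {((x + dx) % size, (y + dy) % size) for x, y in cells for dx, dy in nbrs}
    new = set()
    for x, y in candidates:
        n = sum(((x + dx) % size, (y + dy) % size) in cells for dx, dy in nbrs)
        if n == 3 or (n == 2 and (x, y) in cells):  # standard Life birth/survival rule
            new.add((x, y))
    return new

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}  # classic glider pattern
state = glider
for t in range(8):
    state = step(state)
print(sorted(state))  # same shape as the seed, shifted two cells diagonally
```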

The progression of hypergraph node link patterns to traverse physical space is not the only type of update considered by Wolfram's hypergraph theory. Consider that in many cases a particular node/link pattern may be the result of more than one sequence of intermediate rule applications — for example the node link pattern A-C-B could result from applying the A-B -> A-C-B replacement noted above, or it could be the result of some different sequence of rule progression steps. The point is that the realized physical manifestation of a distinct node link pattern does not depend on the rule progression steps required to reach it; all that matters is what an observer sees as the current state (getting a little ahead of myself, this observer business touches on questions of quantum dynamics). If we want to distinguish between the different rule progressions that realize some state, the solution offered in this theory is what is known as a multiway causal graph. This isn't actually a separate hypergraph; it's more of a layer on top of the physical space graph relating different parent patterns that lead to common child patterns and vice versa. One way to think about these causal graphs is that a quantum measurement by an observer erects a border in the multiway causal graph between the state occupied by the observer and those alternate causal scenarios that may have been realized by other sequences of rule progressions — the Wolfram Model equivalent of the collapse of a quantum superposition.

Multiway causal graph demonstrating alternate update progression paths to reach common distinct states.
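
As a rough illustration of the multiway idea, here is a sketch in Python over strings rather than hypergraphs (a simplification of my own, not the project's code): a single replacement rule is applied at every possible position, so each state can branch into several children, and the recorded parent-to-child edges form a small multiway graph.

```python
# Minimal multiway evolution sketch over strings (a simplification of the
# hypergraph case): apply the rule "AB" -> "ACB" at every possible position,
# so each state can branch into several children. Illustrative only.
def children(state, lhs="AB", rhs="ACB"):
    kids = set()
    i = state.find(lhs)
    while i != -1:
        kids.add(state[:i] + rhs + state[i + len(lhs):])
        i = state.find(lhs, i + 1)
    return kids

frontier = {"ABAB"}
edges = []                      # parent -> child edges of the multiway graph
for generation in range(2):
    next_frontier = set()
    for state in frontier:
        for child in children(state):
            edges.append((state, child))
            next_frontier.add(child)
    frontier = next_frontier

for parent, child in sorted(edges):
    print(parent, "->", child)
```

Note that in this tiny example the two branches from the seed reconverge on the same state after a second step, which is exactly the kind of path independence the causal invariance discussion below is about.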

These type of analogous hypergraph properties in relation to fundamental physics turn out to be a central premise of the book, in fact an extensive survey of such hypergraph and physics relations is aggregated as evidence for the model. Let’s walk through a few demonstrations to illustrate.

As a first example, I noted earlier that the node and link elements of a hypergraph do not on their own have a rigid geometric structure as was found in cellular automata; instead, physical space is an emergent property realized as a function of hypergraph links. Consider that we are used to experiencing our surroundings as three spatial dimensions; how then can such dimensional constraints emerge from a hypergraph? The answer is that we can pick an arbitrary node as a starting point, and then simply count the number of nodes that can be reached by following a given number of link traversals in each direction. For example, if our starting node is linked to two neighbors, then following the first generation of links from our starting point we can count that we have reached a total of three nodes. If we then follow a second generation of links from all of the nodes reached in the first generation, we can count all of the nodes reached cumulatively in these first two generations of links, and similarly through 3, 4, and more generations of links. Of course the number of nodes reached has the potential to climb quickly with each generation, but the hypothesis is that we can derive an effective dimension Dr by relating the number of link generations r to the associated cumulative node count V(r) as Dr = (log(V(r+1)) - log(V(r))) / (log(r+1) - log(r)). Although this derived value may not be constant for each generation count, for some types of rules it has the potential to reach an asymptote with increasing generations of links, such as may converge to the three spatial dimensions that we are used to.

Effective dimension as a function of link traversal steps — here each line represents a distinct node/link state which is growing in size through updating events.
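
Here is a minimal sketch in Python (my own, illustrative) that estimates this effective dimension for an ordinary 2D grid graph by counting V(r), the number of nodes within r link traversals of a starting node, and applying the log-ratio formula above; as r grows the estimate approaches the expected value of 2.

```python
# Estimate effective dimension from neighborhood growth, illustrative only.
# Build a simple 2D grid graph, count V(r) = nodes within r link traversals
# of a start node (breadth-first search), then apply
# D_r = (log V(r+1) - log V(r)) / (log(r+1) - log r).
import math
from collections import deque

def neighbors(node, size):
    x, y = node
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < size and 0 <= ny < size:
            yield (nx, ny)

def ball_sizes(start, size, max_r):
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if dist[node] >= max_r:
            continue
        for nbr in neighbors(node, size):
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    return [sum(d <= r for d in dist.values()) for r in range(max_r + 1)]

V = ball_sizes(start=(50, 50), size=101, max_r=20)
for r in range(1, 20):
    D = (math.log(V[r + 1]) - math.log(V[r])) / (math.log(r + 1) - math.log(r))
    print(r, round(D, 3))
```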

So we've tracked down how to recognize space; how about the other key dimension, time? The idea is that time is realized through the application of hypergraph rule updates: updating events that may change the links between nodes, create new linked nodes, consolidate linked nodes, and so on. This treatment of the time dimension may help to explain the second law of thermodynamics, as a trend of increasing entropy could just be a stochastic effect of the updating rules, and just as the second law is a probabilistic law with potential for short-term deviations at different scales, our hypergraph rule updates have a built-in stochasticity inherited from the variations in the node patterns upon which the rules are applied.

Here horizontal lines are space-like surfaces through node space, vertical lines are time-like surfaces through sequences of updating events.

Another useful consequence of this distinction between the space and time dimensions is the entirely natural emergence of a property analogous to relativity. Physicists have traditionally taken general and special relativity as fundamental, with no accepted underlying cause yet understood, so the natural emergence of a relativistic relation between the space and time dimensions of the Wolfram Model is an important result. More particularly, relativistic properties are realized by shifting the angle of our observation border through the causal graph, realizing a kind of light cone of influence from some starting state.

Demonstration of relativity (tbh this one is a little over my head).

An important property related to our multiway causal graph, one that Wolfram considers a prerequisite for candidate hypergraph rules, is "causal invariance". Causal invariance refers to the principle that the identity of some distinct realized state is independent of the various paths of hypergraph node link updates that may have been applied to reach that state. This ties in to relations between the model and quantum dynamics, where each fork in rule applications can be considered an alternate history that may yet reinforce or interfere with some distinct downstream state coalesced by the measurement of an observer, an observer who is himself a part of the same hypergraph as his surroundings.

Demonstration of quantum measurement superposition collapse.
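
Causal invariance is closely related to the notion of confluence in rewriting systems. As a rough illustration (again over strings rather than hypergraphs, and not the project's actual test), this sketch checks whether every pair of states branching from a common parent in one update step can be brought back to a common descendant within a few further steps.

```python
# Rough confluence check for a string rewrite rule, as a stand-in for
# causal invariance: branches from a common parent should reconverge.
# Illustrative only; restates the children() helper from the multiway
# sketch above so this block is self-contained.
def children(state, lhs="AB", rhs="ACB"):
    kids = set()
    i = state.find(lhs)
    while i != -1:
        kids.add(state[:i] + rhs + state[i + len(lhs):])
        i = state.find(lhs, i + 1)
    return kids

def reachable(state, depth):
    seen = {state}
    frontier = {state}
    for _ in range(depth):
        frontier = {c for s in frontier for c in children(s)} - seen
        seen |= frontier
    return seen

def branches_reconverge(parent, depth=4):
    kids = list(children(parent))
    # every pair of one-step branches should share a common descendant
    return all(reachable(a, depth) & reachable(b, depth)
               for i, a in enumerate(kids) for b in kids[i + 1:])

print(branches_reconverge("ABAB"))   # True for this rule and seed
```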

This leaves us with the question of subatomic particles and how we might relate their presence to hypergraphs. Although the derivations are not covered in detail, the book offers an estimate for a fundamental unit of length considerably below what is estimated for the Higgs boson, something like ~10^(-93) m as opposed to ~10^(-18) m for the Higgs. So if you've ever seen that YouTube video on the powers of ten, you can imagine what a significant difference that represents. Universes within universes within universes. In fact this fundamental discrete node distance is not the only property estimated. Wolfram also suggests that time itself may be discretized, with each hypergraph link updating event corresponding to an elementary time scale on the order of ~10^(-101) s. Feynman was right: there is plenty of room at the bottom.

Including this image as kind of an abstract allusion to subatomic particles derived from hypergraph patterns.

One item I found insufficiently covered by the book's survey was the correlation between hypergraph properties and gravity. I'd like to offer here a vague hypothesis as an alternative to what was discussed in the book: perhaps gravity could be found as a result of deviations in the rate of updating events as a function of connection density, more specifically, deviations in the directional distribution of updating events such as to favor updates directed toward higher-density regions of the hypergraph. I'm not sure whether such directional distributions are compatible with a constant speed of light, unless perhaps there were an equivalent offset in the converse direction's updating event distribution; really this is just brainstorming. Anyhoo.

This has only been an abbreviated survey of the catalog of correlations between hypergraphs and known properties of physics. There are many others, such as energy, momentum, mass, charge, spin, etc. — it's all there in one form or another. But the amazing thing is that these properties are not being demonstrated by definitions and equations; it's all emergent, potentially from progressions of very simple node link update rules applied to hypergraphs, just as he wrote about for cellular automata in A New Kind of Science in 2002. It looks increasingly like Stephen Wolfram may have been right this entire time.

Images copyright by and used with the permission of The Wolfram Physics Project

Books that were referenced here or otherwise inspired this post:

A New Kind of Science — Stephen Wolfram

A Project to Find the Fundamental Theory of Physics — Stephen Wolfram

(As an Amazon Associate I earn from qualifying purchases.)
