Stephen Wolfram recently announced his new Physics Project, an attempt to rethink how we do physics in terms of simple operations on abstract structures. And already there are people in the physics community expressing opinions about it ranging from disapproval to indifference.
As non-physicists, what should be our take? Awe? An indifferent shrug? An eye roll? Is it really worth our time to contribute to his crowd-sourced investigation of a new way of looking at reality? I say yes. A thousand times yes, even though I don’t agree with everything in Wolfram’s approach. If you take the long view, it may be the most valuable thing you ever do with your free time.
Why should you be interested in my opinion? Because I’ve conducted over twenty years of research into the intersection between foundational physics and computation. I’ve worked with both physicists and computer scientists. I’ve listened to concerns and passions on both sides of this quiet but persistent debate and I’m sympathetic to all of them.
Here’s why you should care: fundamental physics has a problem and it affects all of us. Brave members of the academic community have already stood up to admit it: people like Lee Smolin and Sabine Hossenfelder. They’ve written at length about their concerns over their field drifting ever further into abstruse mathematics and ever further from experimental testability. And without advances in physics, our development as a species stalls. We remain technologically trapped while the climate worsens and our resources dwindle. A lack of progress in physics may eventually kill us.
Nobody seems to know where to look for the breakthrough that’s needed to push the field forward. The leading candidate theory of recent decades — string theory — has failed to deliver testable predictions, and its credibility is steadily eroding.
There are, of course, clues about what might replace it. From dimensional analysis, there are strong reasons to believe that space is not smooth, but somehow granular. Contributions from condensed matter physics suggest that the best way to describe space may be a mesh of entangled elements. And approaches like the Causal Set Program, which model the universe as a directed graph of connected event-nodes, have already shown predictive strength. More and more it seems likely that space itself is an emergent property of some deeper non-local structure.
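To make the causal-set picture concrete, here is a toy sketch of my own (purely illustrative, not drawn from the actual Causal Set Program literature): a causal set is a locally finite collection of events under a "causally precedes" partial order, which we can represent as a directed acyclic graph.

```python
from itertools import product

# A toy causal set: events are nodes, edges mean "causally precedes".
# Real causal-set research generates such orders by stochastically
# "sprinkling" points into a spacetime region; this is hand-built.
events = ["a", "b", "c", "d", "e"]
links = {("a", "b"), ("a", "c"), ("b", "d"), ("c", "d"), ("d", "e")}

def closure(links):
    """Transitive closure: x precedes z if any causal chain x -> ... -> z exists."""
    reach = set(links)
    changed = True
    while changed:
        changed = False
        for (x, y), (y2, z) in product(list(reach), repeat=2):
            if y == y2 and (x, z) not in reach:
                reach.add((x, z))
                changed = True
    return reach

precedes = closure(links)

# Sanity check: a strict partial order means no event precedes itself.
assert all((x, x) not in precedes for x in events)

# "b" and "c" are spacelike separated: neither causally precedes the other.
print(("b", "c") in precedes, ("c", "b") in precedes)  # False False
```

Everything observable about such a universe — including, on the causal-set view, spatial geometry itself — would have to be recovered from the order relation alone.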
Ironically, these are exactly the properties of the universe that Wolfram’s approach suggests, yet nobody seems to know what to do with his work. Instead, one hears that Wolfram is a wealthy interloper in the field who’s more interested in marketing his own notions than listening to the community. Again. Because this also happened to him in 2002.
It likewise happened to Ed Fredkin, one of the pioneers of cellular automata, even after he managed to influence the thinking of Richard Feynman and Nobel Prize-winning physicist Gerard ’t Hooft. And it even happened, to some extent, to AI luminary Jürgen Schmidhuber, who now mostly concentrates on his contributions to machine learning.
These are the cases you hear about but there are plenty more. There is a quiet legion of researchers working, mostly in isolation, on the intersection of computer science and physics. They are utterly tireless and, in my opinion, utterly admirable. And the respect they get is close to zero. These are wonderful people like Dan Miller and Tommaso Bolognesi: the dogged, unsung heroes of a barely-noticed field. You don’t hear about them because it’s impossible for them to get traction for their work, even though they often make significant contributions to other disciplines.
That’s because researching what’s called digital physics self-selects for champions who are self-funded, self-motivated, and loud, because nobody else can cut through the blockade that the physics establishment presents.
Note that I say physics establishment, not all physicists, because there are plenty of open-minded ones out there. My own research in this field has been utterly dependent on their support. But it’s effectively impossible for someone in academic quantum gravity research to work directly on digital physics because the field is still in its infancy. Useful results might be decades away and generating them looks to be challenging. That makes touching it career suicide.
That’s why there’s a blockade, of course. It’s not malice or stupidity on the part of the physicists. Far from it. It’s just that nothing immediately useful for folks working in that field comes out of this discipline, so why attend to it? Even glancing at it means risking career credibility in a field where careers are often painfully precarious. As a result, the shield of skeptical indifference is never punctured.
That’s a deep shame because (and I’ll go out on a limb here), the potential of the tools that digital physics employs far exceeds that of those currently in common use. Why do I have such a deep belief in the potential of this field if meaningful results are so far away? How can I? To answer that, we have to talk about what physics is supposed to be about, and more broadly, what science is for.
Science is an attempt to get as far away from an attitude of ‘it’s just magic and we’ll never understand’, as possible. It’s an attempt to bring everything in nature under the scrutiny of reason for the betterment of humankind. It’s driven by the passionate insistence that a logical explanation can always be found if we look hard enough.
During the scientific revolution, that was easy to do. But then, a hundred years ago, two incredibly powerful yet counter-intuitive theories hit the scene in rapid succession: quantum mechanics and general relativity. Both worked in their respective domains more effectively than any theory that had ever come before. Unfortunately, they didn’t work together. They made radically different assumptions about the universe. The last century of work in fundamental physics has essentially been an attempt to reconcile them, and for the most part it has been a failure. The last piece of solid progress was the development of the Standard Model in the late 1960s and early 1970s.
The usual method for solving this problem has been to try to merge the two theories while abandoning as little established math as possible. Where the implications of this approach deviate from our intuition, or from what we can physically observe, we are encouraged to accept that the universe is stranger than we know and trust in science.
But something vital is lost in this approach. As soon as we abandon attempts to intuitively understand nature and simply ‘shut up and calculate’, the potential for forward movement in the field slows to a crawl. That’s because using intuitive reasoning to generate ideas or check results becomes impossible. Nobody really understands what’s going on any more. Just look at the surveys of physicists’ opinions on which interpretation of quantum mechanics is ‘correct’. The results are all over the place.
The people who lead physics under these conditions become those with the fanciest math, regardless of whether it predicts anything, or is comprehensible to those who’ve worked on slightly different approaches. And so, having swallowed ideas like fundamental randomness and probability amplitudes, now we’re asked to swallow multiverses and compact dimensions, yet we’re no wiser as to how things work.
Digital physics proposes that quantum mechanics and general relativity can be reconciled by showing that they’re both emergent properties of a more fundamental system composed of a finite number of discrete elements interacting via simple rules. It doesn’t claim to know what those elements are or what rules drive them. It just proposes that we concentrate our efforts on mapping how that might work.
Why discrete elements? Because the math we have for understanding finite systems is robust and well-understood. As soon as you introduce infinities, you lose some amount of understanding and some amount of precision. We’ve historically used smooth descriptions of nature because it makes things easy to understand. Calculus is a wonderful shortcut that makes otherwise intractable problems tractable. But it’s exactly that: a shortcut. It quite literally says ‘don’t sweat the small stuff’. As such, it works brilliantly for describing systems of very large numbers of tiny elements that all behave similarly. This is just as true in economics or biology, but nobody expects to see fractional rabbits in a field or a price tag of π dollars. Things don’t have to be literally smooth for calculus to be useful.
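You can see the ‘shortcut’ point numerically in a few lines: continuous growth is just the limit of many small discrete steps, and for large step counts the discrete answer becomes indistinguishable from the smooth one calculus hands you directly. A throwaway sketch of my own:

```python
import math

# Discrete growth: apply n small multiplicative steps of size r/n.
def discrete_growth(r, n):
    x = 1.0
    for _ in range(n):
        x *= 1.0 + r / n
    return x

# Calculus gives the smooth limit exp(r) as the step count n -> infinity.
for n in (1, 10, 1000, 100_000):
    print(n, discrete_growth(1.0, n))
print("limit:", math.exp(1.0))
```

The discrete values creep up on e ≈ 2.71828 as n grows; the smooth description is the idealization, and the stepwise one is what’s actually being approximated.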
Calculus starts to struggle when the behavior of the smallest units impacts the behavior of the system they comprise, as in fluid dynamics. Once that happens, the math quickly becomes hard to handle. And this is why mathematical research in complex systems can lag decades behind what you can model with extremely simple computer models.
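Wolfram’s own favourite illustration of this gap is the elementary cellular automaton: a one-line update rule that is trivial to simulate, yet whose long-run behavior (Rule 30’s center column, for instance) has so far resisted closed-form mathematical analysis. A minimal sketch:

```python
# Elementary cellular automaton, Rule 30: each cell's next state depends
# only on itself and its two neighbours, yet the pattern it grows from a
# single live cell behaves pseudo-randomly.
RULE = 30

def step(cells):
    n = len(cells)
    # Encode each 3-cell neighbourhood (wrapping at the edges) as a number
    # 0-7 and look up the corresponding bit of the rule.
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single live cell and print a few generations.
cells = [0] * 31
cells[15] = 1
for _ in range(12):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```

Simulating a thousand generations takes milliseconds; proving general theorems about what those generations do remains an open problem.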
The mainstream approaches to physics we’ve focused on over the last century have mostly assumed that the smallest scales of the universe don’t matter. There is no smallest scale worth worrying about and there’s zero cost associated with incorporating infinities into our models. That would be great but for the fact that physics itself seems to suggest that this assertion is not true. Those tiny scales become extremely important precisely when trying to reconcile relativity with quantum mechanics.
Digital physics proposes trying to augment (not replace) existing physical theories by trying to build upward from those smallest elements toward something that might eventually be usefully predictive.
And why use simple rules? Because we’d be crazy to insist on complicated ones. We live in an orderly universe. The rules that most likely reflect nature are going to be the ones rich enough to encode natural processes and no richer—that’s just Ockham’s Razor.
Digital physics is not like the simulation hypothesis. Most advocates are not proposing that the universe is running on some giant machine in hyperspace. Or that the universe runs on a rigid rectangular lattice. Or that the features of well-understood physical theories, like spatial distortion, do not exist. Rather, it says that given that we can never prove that anything more than discrete elements is necessary to describe the universe, why rely on models that require something else? Why not try to make physics as scientific as possible by exploring the idea that it’s deterministic at some level, even if that level is inaccessible to us? No proof can exist that rules determinism out, and the attempt may yield a theory with greater predictive power than the ones we have. Why not insist on a Turing Machine-equivalent universe until there is reason to believe that such an approach cannot succeed? Scientists are surely duty-bound to push the limits of reason, not simply choose the tools they prefer out of familiarity.
Furthermore, if it turns out that minimal-scale behavior is important, then only a digital physics approach will find it. And if it turns out that the minimal scale is not important, there will almost inevitably be a digital physics interpretation to which the right theory is equivalent. There is literally nothing to lose by trying and everything to gain.
A further massive benefit that the digital physics approach has over others is that it constrains the set of possible physical models. If you insist on a simulatable universe, you raise the bar for the completeness of a theory, and that gets you to experimentally testable ideas faster. Whole rafts of ideas slide to the back of the line because they depend on hypothetical properties of the universe that we can’t test or reason about.
My own admittedly amateur experiments in this regard have yielded me a surprising amount of insight. I have been forced to find ways to reconcile quantum mechanics and relativity that I otherwise would never have thought of, such as fusing time and mass together, instead of fusing time and space. This has the unexpected side-benefit of unifying two descriptions of mass that most theories have to carry around separately. The digital constraint acts as a spur for theory generation of remarkable potency.
This, I believe, is how we might get physics out of its rut. Even if it later turns out that a purely discrete model is inadequate, then we will know why, and surely that would help too. And this is why projects like Wolfram’s deserve attention and respect, regardless of how you feel about the man himself. (And whatever you do, I implore you to respect the courage and integrity of his collaborators.)
There is much work for us to do. My guess is that Wolfram’s project won’t rapidly yield a theory of everything as he hopes. But unless we engage and try, the bridge between discrete math and physics will remain unbuilt, and, in all likelihood, fundamental physics will stay stuck.
If we want a future with any chance of starships and teleporters and wonder, we have to build one. All of us. By rolling up our sleeves and learning and participating in science as deeply as we’re able. Regardless of what comes of it, projects like Wolfram’s make engagement easy and fun. So if you love science and believe it has a place in our society, what are you waiting for? Your chance to contribute starts now.