This is a slightly modified version of a story I published earlier in the year as an entry in the FQXi essay contest. The intention was not to win the contest per se, but to give seasoned researchers in foundational questions a series of clues that would lead them to the prize they have been seeking now for more than a hundred years. However, rather than kicking off a race to complete the puzzle, it left them clueless. So in the coming weeks I will be going over the ideas involved for the benefit of ordinary people like you and me — I only know this stuff because I have needed to know it in my role, not because I have had the intellect to discover any of it myself — we are all of us standing on the shoulders of giants.
When a family settles down after Christmas lunch to solve a 1000-piece jigsaw puzzle of an abstract work like Jackson Pollock’s Convergence, we first establish the boundaries, and then each of us begins assembling individual pieces into fragments.
Occasionally, when leaning back to scan the entire field, someone will glimpse a connection between two fragments and reach across everyone else at the table to merge those fragments, to the delight (or occasionally the annoyance) of those who previously had ownership of the problem.
Steven Weinberg wrote a short paper in 1967 proposing the unification of electromagnetism and the weak nuclear force, advancing the Standard Model of particle physics in just such a leap. With the recent detection, at high confidence, of the Higgs boson, the progenitors of the Standard Model are rightfully congratulated on their achievement.
I do not profess to be a mathematician or a physicist, but I do enjoy the stories told by those working in these fields, and enjoy looking, without the prejudice of deep understanding, for patterns in what they have to report. Great though Convergence may be, it was not Pollock’s only masterpiece; indeed I would argue that Blue Poles advances the art by superimposing structure upon the abstract.
Every age establishes a paradigm informed by the dominant technology of its era: Isaac Newton’s clockwork universe has become today’s computational universe.
Weinberg, commenting in 2002 on one such computational model of the universe, suggested that those who study the workings of computers, day in, day out, would perhaps be inclined to start thinking that the universe was itself a computer — “So might a carpenter, looking at the moon, suppose that it is made out of wood” (it sounds like something out of Monty Python). But of course many a foundational thinker (including Weinberg himself) is just so inclined — for example, in 2008 Max Tegmark, a mathematical physicist, argued that the universe is literally ‘made out of mathematics.’
In 1936, Alan Turing demonstrated that all decidable mathematics (encompassing the mathematics with which we model the universe) could be computed by machine. Computation, or more formally the lambda calculus, developed in parallel with Turing’s work by Alonzo Church, has since been considered more foundational than mathematics itself. At the deep basis of reality, we should be looking for the most primitive computation, rather than the most primitive equation, to emblazon our T-shirts.
Stephen Wolfram should be credited with having perhaps elucidated this entity, a 2-state 3-symbol Turing machine, and with having enticed Alex Smith to prove its universality.
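Wolfram’s machine is small enough to write down in full. Here is a minimal Python sketch of it; the rule table is my own transcription of the commonly published (state, symbol) → (new state, write, move) rules, so treat it as an assumption to be checked against Wolfram’s own sources rather than a definitive statement of the machine.

```python
from collections import defaultdict

# Wolfram's 2,3 Turing machine: 2 head states, 3 tape symbols.
# Rule table as commonly presented (an assumption; verify against the source):
# (state, symbol) -> (new state, symbol to write, head movement)
RULES = {
    (1, 0): (2, 1, +1),
    (1, 1): (1, 2, -1),
    (1, 2): (1, 1, -1),
    (2, 0): (1, 2, -1),
    (2, 1): (2, 2, +1),
    (2, 2): (1, 0, +1),
}

def run(steps):
    """Run the machine from a blank tape for the given number of steps."""
    tape = defaultdict(int)   # unbounded tape of symbol 0 in both directions
    head, state = 0, 1
    for _ in range(steps):
        state, tape[head], move = RULES[(state, tape[head])]
        head += move
    return tape, head, state
```

Note there is no halting rule: like a cellular automaton, the machine simply evolves, which is part of what makes its proven universality so striking for something this small.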
Through various schemes, some speculative, some rigorously confirmed, we have the emergent chain of
(…) → Computation → Mathematics → Physics → Consciousness → Artificial Computation → Artificial Mathematics → Artificial Physics → Artificial Consciousness → (…)
and this garden path (were it to be established in fact) could lead us on for ever and ever.
In 1990, John A. Wheeler saw a clear opportunity to break this cycle, mounting an argument that the world consists entirely of information enacting the laws of physics — delivering ‘it’ from ‘bit’ — and that our consciousness creates the very reality from which it has emerged, in a self-referential loop. That last bit still has most of us scratching our heads. Indeed many schemes (not least Tegmark’s Ultimate Ensemble) have employed the (ancient) notion of self-reference to avoid resting upon an infinite tower of turtles. The latest victim of this (equally ancient) malaise of infinite regression is of course the ‘multiverse’, a spectre beckoning from beyond and before the Big Bang.
Exquisitely beautiful as many mathematical models of reality may be, we suspect they are idealized approximations to a reality that is fundamentally discontinuous. The E8 Lie group employed by Garrett Lisi is a gorgeous creature, but the macroscopic fermions and bosons it models present composite behaviour that emanates from machinations some twenty orders of magnitude downstairs, at the Planck scale.
Solid modelling is the basis of ‘artificial’ reality, and three spatial dimensions are of elegant sufficiency to allow us all to have emerged out of flatland. The ideal modelling method is spatial occupancy enumeration, where each cell (voxel) of a regular spatial grid is individually calculated in relation to its twenty-six neighbouring (cubic) voxels. However, this method is rarely used in practical modelling, because it is computationally expensive, requiring a large number of calculations in each cycle for every point within the simulated space.
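To make that cost concrete, here is a minimal Python sketch of spatial occupancy enumeration: each voxel’s next state is computed from its own state plus reads of its twenty-six neighbours, so every cycle touches twenty-seven cells per voxel. The `update` rule here is a hypothetical placeholder, not any particular physics.

```python
from itertools import product

def neighbours(x, y, z):
    """The 26 cells sharing a face, edge, or corner with voxel (x, y, z)."""
    return [(x + dx, y + dy, z + dz)
            for dx, dy, dz in product((-1, 0, 1), repeat=3)
            if (dx, dy, dz) != (0, 0, 0)]

def step(grid, update):
    """One cycle of spatial occupancy enumeration.

    `grid` maps (x, y, z) -> state; cells absent from the dict are taken
    as state 0. `update` is a placeholder local rule: it receives a
    voxel's state and the list of its 26 neighbour states, and returns
    the voxel's next state.
    """
    return {cell: update(state, [grid.get(n, 0) for n in neighbours(*cell)])
            for cell, state in grid.items()}
```

The expense is plain: every cycle performs 26 neighbour reads for every voxel in the space, which is why practical modellers prefer boundary representations — and why the scheme only becomes attractive if each voxel has its own processor.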
Jürgen Schmidhuber has suggested that a simple Turing machine, executing a compressed algorithm, could compute the histories of all possible universes, subject to all possible computable laws. Julian Barbour has argued that time is not a fundamental concept, but emerges from the process of change. Thus it does not matter how many steps are required to complete this ‘ultimate’ computation — execution in its entirety could manifest as just one ‘instant’ of time as we know it, and the computation of the universe could be executed all over again within each subsequent instant.
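The standard trick behind computing ‘all possible universes’ on a single machine is dovetailing: interleave the steps of ever more programs so that every step of every program is eventually executed, even though there are infinitely many programs. A toy sketch, with ‘universes’ standing in as plain Python generators (my own illustrative framing, not Schmidhuber’s notation):

```python
from itertools import count

def dovetail(programs):
    """Interleave the execution of an (unbounded) sequence of programs.

    In phase k, one new program is admitted and every admitted program
    is advanced by one step. Programs that halt are dropped. Every step
    of every program is eventually reached.
    """
    running = []
    source = iter(programs)
    for k in count(1):
        nxt = next(source, None)
        if nxt is not None:
            running.append(nxt)
        for prog in running[:]:          # copy: we may remove while iterating
            try:
                yield next(prog)         # one step of one 'universe'
            except StopIteration:
                running.remove(prog)     # this universe's history is complete
        if not running and nxt is None:
            return
```

Barbour’s point slots in neatly here: nothing in the scheme cares how many phases the schedule takes, because ‘how long it takes’ is only meaningful inside a history, not outside it.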
The observable universe would contain about 8 × 10^184 voxels if it were a simulation at the Planck scale (based on a universe radius of 4.4 × 10^26 metres), and spatial occupancy enumeration would become a practical method of rendering this universe if a computational core were assigned to each individual voxel (the number of voxels is, despite being a very large number, a finite number that is just as distant from infinity as one is). Each core would only need to reference its immediate milieu, and Wolfram’s 2,3 machine, a reduced instruction set computer, would be an ideal candidate for the job.
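The arithmetic behind that figure is easy to check: divide the volume of a sphere of the quoted radius by the cube of the Planck length (taking the Planck length as approximately 1.616 × 10^-35 m).

```python
from math import pi

R = 4.4e26        # radius of the observable universe, metres (as quoted above)
l_p = 1.616e-35   # Planck length, metres (CODATA value, rounded)

universe_volume = (4 / 3) * pi * R**3   # volume of the observable universe, m^3
planck_volume = l_p**3                  # volume of one Planck-scale voxel, m^3

voxels = universe_volume / planck_volume
print(f"{voxels:.2e} voxels")   # consistent with the ~8 x 10^184 quoted above
```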
As a systems engineer, I work with virtual computers day in, day out, and not surprisingly I sometimes get to thinking that the universe might itself be a virtual machine, just like our carpenter observing the moon. In the practical world of systems engineering, we of course understand that there is ultimately some real hardware behind all this virtualization, indeed that our virtual machines merely hover, precariously, above hypervisors running on the foundational hardware. But occasionally we will mount a virtual machine upon a host that is itself already virtualized. In doing so we get
Real → Virtual → Virtual
Such machines, embedded within other machines like Russian dolls, don’t run very efficiently, because the real bit holding everything up is subject to the laws of physics, and gets rather hot. Yet despite all the hyperbole about Turing having invented the ‘computer’, Alan never intended his gadget to be made into a physical reality — he invented it as an abstract device for systematically generating mathematical statements (albeit not all mathematics, as Gödel so elegantly demonstrated). As an abstraction, Turing’s machine is not subject to the laws of physics, indeed it isn’t physical at all.
Thus, (and this is not a sleight of hand), a pair of universal Turing machines could be arranged such that they simulate one another, neither of them existing, in a very fundamental sense, until simulated by the other. We thus introduce self-reference to the most primitive element of reality, and get
Virtual ↔ Virtual
where previously there was nothing — no space, no time, no nothing.
Gottfried Leibniz predicted the existence of this fundamental entity, calling it a ‘monad’ (Newton developed his ‘fluxions’ in parallel with Leibniz’s calculus). John von Neumann proposed how such machines, which he called ‘automata’, might replicate. And as discussed earlier, these monads might then enumerate each of the voxels of an ‘artificial’ reality, giving us
Virtual ↔ Virtual → Real
The expansion of this ‘real’ space, as each monad replicates, would be centred at each voxel, giving uniform expansion from every point in ‘space’. The replication of the monads could become exponential, giving us a space whose expansion accelerates. In 1969, Konrad Zuse described how these automata would engender a space that ‘calculates’. The limiting speed (of light) is intrinsic to this architecture. The phenomenon we know as ‘light’ cannot be passed from one voxel to the next, across this virtual space, any ‘faster’ than allowed by the computational capacity of each monad to enumerate its simulated voxel. The change in state of each voxel becomes a fundamental unit of measurement that manifests as ‘time’.
In generating this virtual space, the monads engender linear dimension where previously there was only abstraction. The vast bulk of mathematics is only possible after this linearity has arisen, starting with number theory on the one-dimensional number line, through planar and spatial geometry, and so on into higher-dimensional geometry.
If the monad is an abstraction, having no intrinsic dimension, then it is fair to suppose that all the numerous (but countable) monads generating this virtual reality ‘exist’ at a single dimensionless point. Albert Einstein described such a point as a singularity, a ‘place’ where all spatial dimensions cease to exist. However, it is just as valid to think of this point as a superposition of the monads, a place where one massively parallel computer, burgeoning in capacity, engenders the reality we inhabit.
When researching and developing ‘quantum computing’, we should bear in mind that we may be accessing precisely this superposition of the universe. Indeed, we may discover that the ‘edge’ of the universe is not ‘100 billion light years’ away, as it appears to be in the classical (spatially extended) estimation of astronomers, but rather that its entirety is ‘right here at our fingertips’. If the entire machinery of reality exists in one place, then the concept of concurrent action at a distance, which so upset poor old Albert, does not seem all that ‘spooky’ anymore.
Weinberg, S. (1967). “A Model of Leptons”. Physical Review Letters 19 (21): 1264–1266.
Weinberg, S. (2002). “Is the Universe a Computer?”. The New York Review of Books.
Tegmark, M. (2008). “The Mathematical Universe”. Foundations of Physics 38: 101–150.
Turing, A. M. (1937) [delivered to the Society November 1936]. “On Computable Numbers, with an Application to the Entscheidungsproblem”. Proceedings of the London Mathematical Society, Series 2, 42: 230–265.
Church, A. (1936). “An Unsolvable Problem of Elementary Number Theory”. American Journal of Mathematics 58 (2): 345–363.
Wolfram, S. (2002). A New Kind of Science. p. 709.
Wheeler, J. A. (1990). “Information, Physics, Quantum: The Search for Links”. In Zurek, W. H. (ed.), Complexity, Entropy, and the Physics of Information.
Steinhardt, P. (2014). “Big Bang Blunder Bursts the Multiverse Bubble”. Nature 510 (7503).
Lisi, A. G. (2007). “An Exceptionally Simple Theory of Everything”. arXiv:0711.0770.
Schmidhuber, J. (1997). “A Computer Scientist’s View of Life, the Universe, and Everything”. In Freksa, C. (ed.), Lecture Notes in Computer Science, pp. 201–208. Springer.
Barbour, J. (2008). “The Nature of Time”. Winner, FQXi essay contest, 2008.
Leibniz, G. W. (1695). “New System of the Nature of Substances and their Communication”. Tuttle, Morehouse & Taylor, p. 71.
Von Neumann, J.; Burks, A. W. (1966). Theory of Self-Reproducing Automata.
Zuse, K. (1969). Rechnender Raum (translated as Calculating Space).