Intelligent Space:

Putting substance back into emptiness

Space is traditionally viewed as an emptiness, a void within which all things ‘material’ have their existence. In the late 1960s, Konrad Zuse (who designed and built the world’s first programmable computer) proposed that space, far from being empty, consists of a regular lattice of minuscule machines known as ‘cellular automata’, one for every point in space, which ‘calculate’ the physical reality manifest at each of those points. I have revisited Zuse’s idea in the light of contemporary observation and theory, and call this new synthesis Intelligent Space. It seeks to explore more closely what it is that Zuse’s fundamental cellular automata are made of, and in so doing present a framework for merging relativity and quantum theory.

In the late 1980s, Stephen Hawking suggested that “a complete theory would be understandable in broad principle by everyone, not just a few scientists.” Intelligent Space, through its foundational simplicity, appeals from the outset to our common sense, and it may indeed be the underlying source of our own intelligence.

The Continuum: Length

We begin by considering the continuum of numerical values that stretch along the number line between the numbers zero and one. We simply cut a metre-long length of string into halves, then quarters, then eighths, and so on, keeping this up until we can no longer divide whatever it is that string is made of — that stuff which makes up reality. This will happen (empirically) after just one hundred and fifteen divisions, and at the conclusion of this slicing of the string into shorter and shorter lengths, we will have reached all the way down to what the ancients described as an ‘atom’, meaning quite literally something that can no longer be ‘cut’.

This true atom is not to be confused with what we commonly think of as an atom. The chemical ‘atom’ is split apart (cut into smaller pieces) by boffins in lab coats all the time, and we would reach the chemical atom after a mere thirty-four divisions of our metre-long piece of string.

Because our piece of string (conveniently) started out at one metre in length, we can ‘map’ each piece of cut string onto a fraction of the number ‘1’, just like the real features of the earth are represented by markings on a topographical map. The half metre length of string would map to the fraction ‘1/2’, the quarter metre length to the fraction ‘1/4’, and so on. However, if we keep dividing the number ‘1’ in half, over and over again, it is obvious (as noted by Zeno of Elea) that we can keep doing this forever, never reaching the ‘atomic’ end of the road like we do with our material piece of string. The number ‘1’ can be divided into an infinite number of infinitely diminishing fractions — what mathematicians think of as ‘entry level’ infinity (aleph naught).

Physicists use this mapping of physical reality onto ‘numbers’ when creating mathematical ‘models’ of reality, and these models are very useful in describing and predicting how reality behaves. But some of these models are idealized approximations to reality, for they smooth out the lumps and bumps of the individual atoms of reality onto an unbroken mathematical continuum (known somewhat ironically, for historical reasons, as the ‘real’ number line).

The Continuum: Time

The (experimental) physicist is occupied with measuring the length, breadth, depth and mass of the stuff that makes up reality, but more so with modelling (so as to predict) what happens to that stuff over the course of time. Imagine that we could press the ‘pause’ button and freeze the entire universe in time. If we were then to release the pause button for an instant before depressing it again, we can envisage that all the (true) atoms in the entire universe would have each advanced by just one ‘notch’ into a revised configuration, like a movie progressing from one frame to the next.

Just as we divided up our piece of string, we are also interested in knowing how many times we can divide up a ‘piece’ of time, say one second, before the stuff that time is made out of can also no longer be divided — before we reach the ‘atom’ of time. It turns out (again, empirically) that if we keep dividing a second in half just one hundred and forty-four times, we will reach a point where time can also no longer be divided.
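
These division counts are easy to check with a few lines of arithmetic. Below is a minimal sketch in Python, assuming rounded values for the Planck length (≈1.6 × 10⁻³⁵ m), the Planck time (≈5.4 × 10⁻⁴⁴ s) and the diameter of a chemical atom (≈10⁻¹⁰ m); with these approximations the length count lands at 115 or 116 divisions, depending on the precise constants used:

```python
import math

PLANCK_LENGTH = 1.616e-35   # metres (approximate)
PLANCK_TIME = 5.391e-44     # seconds (approximate)
ATOM_DIAMETER = 1e-10       # metres, a rough chemical-atom diameter

def halvings(whole, part):
    """How many times must `whole` be halved before reaching `part`?"""
    return math.ceil(math.log2(whole / part))

atom_cuts = halvings(1.0, ATOM_DIAMETER)     # ~34 cuts to a chemical atom
length_cuts = halvings(1.0, PLANCK_LENGTH)   # ~115-116 cuts to the Planck length
time_cuts = halvings(1.0, PLANCK_TIME)       # ~144 cuts to the Planck time
```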

The Seiko Spring Drive movement has no escapement. The second hand sweeps smoothly around the dial with no evidence of ‘ticking’, as if time were an unbroken stream, a continuum. In actual fact, the movement is regulated by a quartz crystal that ‘ticks’ some 32,768 times a second.

Max: The fundamental units of measurement.

These atoms of length and time are known as ‘natural’ units, because they are not arbitrary, but derived from the measurement of real phenomena, and they were first established by Max Planck at the turn of the 20th century. The ‘quanta’ of space and time, as they are more commonly named, are both extremely small (relative to the dimensions we encounter in everyday life), and are defined by the speed of light. Light travels exactly one quantum of length within the period of exactly one quantum of time, and this is the fastest speed possible, for reasons we will discover. Because space has three dimensions, a quantum of space is represented by a cube with sides of one quantum of length.

A representation of a region of space comprising 4,096 (16³) individual quanta of space. There are approximately 1,000,000,000,000,000,000,000,000,000,000 quanta of space spanning the diameter of a human hair: the quantum of space is tiny as!

The period of one quantum of time sees the playing out of that single frame of the ‘motion picture’ of our universe. There are many (26 orders of magnitude) more quanta of time in just one second than there have been seconds in the 13.8 billion year history of the universe. Don’t blink or you’ll miss it.

Clocks: What exactly is it, this thing we call time?

Anyone who has been mesmerized by the ticking of a metronome will appreciate that the enduring quality of time is its regularity.

Temple investigating time and space

Yet time is not something in itself. Dividing a second of time is qualitatively different to dividing a piece of string. Time is merely an artifact that emerges from the change in state of all the atoms in the universe from moment to moment, and for this to happen as it does — like clockwork — all the atoms in the universe need to agree on the time, and change moment to moment in lockstep with each other.

Finding agreement on the time of day is an ancient problem. In an ideal world, we would have access to perfect clocks. The timekeepers would synchronize an ‘escapement’ of clocks at a ceremony one day in Greenwich, then ship them off to every corner of the globe, and we would always know if any two events, at different places on the earth, happened at the same time, by simply referring to the local ‘UTC reference’ clock at each location.

Unfortunately, the clocks we have so far managed to construct are not so perfect. All of our clocks, even the most accurate ones, drift away from the true time. However, when the news comes on the telly each evening, we can adjust our less accurate timepieces to a considerably more accurate reference time, a standard that is broadcast at the speed of light from a central time authority.

The Seiko Astron is synchronized to the reference time of the Global Positioning System

This standard is quite handy for earthlings who need to agree on when they are meeting up for lunch, for a signal travelling at the speed of light can get everywhere in the world in almost no time at all. However, sending out a signal at the speed of light is not much use if you want to synchronize clocks at opposite ends of the universe. So then, how does the universe manage to keep itself in time?

Albert and Edward: Detecting the luminiferous aether

A groundbreaking experiment, conducted a hundred and twenty-eight years ago by Albert Michelson and Edward Morley, revealed a rather puzzling phenomenon. If we shoot a bullet from a gun, forward from a speeding train, we can reason intuitively that

The speed of that bullet relative to the ground

is simply the addition of

a) the speed of the train relative to the ground, and

b) the speed of the bullet relative to the gun.

We should avoid standing near the tracks ahead of this train.

Bullet. Train.

However, if we shine a laser pointer in any direction from the speeding train (that train driver should be given a ticket), then unlike the bullet, the measured speed of the beam of light emanating from the laser never increases, nor decreases, but always remains the same, relative to anything — the train, the ground, the solar system, the universe.

After some years of deliberation over this frankly bizarre and counterintuitive result, a somewhat mystical solution was proposed. All hope of understanding the mechanism that engendered this result was abandoned, and famously replaced with an article of faith, a postulate — that the observed speed of light is constant in any inertial reference frame.

This alluring proposal ushered us away from the theatre of understanding, and through a side door into the anteroom of instrumentalism — a place where theorists have since declared that “as long as a theory agrees with observation, and accurately predicts observed behaviour, it doesn’t really matter what is actually going on under the covers.”

Immanuel and Konrad: What’s the world actually made of?

Immanuel Kant famously asked if we could know what something was actually ‘in itself’, rather than just knowing its observable characteristics — its size, weight, colour and so on. We know the properties of the elemental atoms, but do we actually know what they are? The theory of Calculating Space, as envisaged by Konrad Zuse, sees space not as an empty nothingness through which material objects move, but as an active medium that is constantly ‘calculating’ the characteristics of its contents.

This idea effectively inverts our intuitive understanding of where we stand in relation to everything around us. As with the emergence of the idea that the sun, and not the earth, sits at the centre of the solar system, some four centuries ago, the broad acceptance of Calculating Space remains a work in progress.

Just Waving: What’s going on beneath the surface?

How is it that we don’t ‘see’ all the machinations of Zuse’s myriad cellular automata? In a ‘Mexican wave’, each one of us remains in our seat as we ‘pass’ the wave along the rows and around the stadium.

But an alternative realization of the wave would be for all the seats in all the rows to be empty, except for just one column of spectators stretching from the front to the rear, who then run sideways together along the rows, flailing their arms up and down as they pass by each seat. Strange as it might seem, this alternative scenario is actually how most people intuitively think the world works.

As we stroll down the street, we typically think of all the (chemical) atoms that go together to make up our body actually moving from one position to the next, perhaps brushing aside some molecules of air as we push forward through mostly empty space. It is now known that 98% of the atoms in our bodies get replaced with completely different ones every year. But at any given moment, we sincerely believe it is our very own personal collection of approximately seven billion billion billion atoms of mostly oxygen, carbon and hydrogen, that is strutting its stuff.

In the theory of Calculating Space, the ‘reality’ that consists of you, and me, and all that we perceive around us (in the entire universe), is analogous to the ‘wave’ in a Mexican wave, and the seated crowd in the stadium is analogous to the (hidden) ‘substrate’ of our reality. In this theory, all overt physical phenomena — everything we observe in the universe — can thus be thought of as ‘wave’ phenomena that propagate through the medium that is Calculating Space — a lattice of cellular automata.
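
The Mexican wave is, in miniature, a one-dimensional cellular automaton, and the analogy can be sketched in a few lines. This is a toy illustration only (the eight-seat ‘stadium’ and the copy-your-neighbour rule are invented for the example, not taken from Zuse): every spectator updates at the same instant, each copying the state of the neighbour to their left, so the ‘up’ state travels around the stadium while no spectator ever leaves their seat.

```python
def step(crowd):
    """One synchronous tick: your arms go up if your left neighbour's were up."""
    return [crowd[i - 1] for i in range(len(crowd))]  # index -1 wraps the stadium

crowd = [1, 0, 0, 0, 0, 0, 0, 0]   # one spectator starts the wave
for _ in range(3):
    crowd = step(crowd)
# the 'up' state has moved three seats along; no spectator has moved at all
```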

The Touchscreen: A microscopic Mexican wave

A more sophisticated execution of the Mexican wave takes place in a modern touchscreen display. The screen is made up of a rectangular array of picture elements (pixels) so small that our eyes cannot discern them.

Your touchscreen seen under a microscope

‘Behind’ each of those pixels is a computer, calculating which colour that pixel will display at any given instant, from a palette of more than 16 million made up from different combinations of intensities of red, green and blue light.

Equal intensities of the three primary colours produce white light

When we grab hold of an image on a touchscreen and ‘move’ it with our fingertip, we see that the screen itself is (obviously) not moving. What does move however, is a precise rendition of the image, transferred from one set of pixels to the next, according to how quickly and in what direction we move our fingertip.

Each spectator in the Mexican waving stadium can be thought of as the ‘computer’ that is calculating the value of each pixel (arms up or down). As the wave moves, each spectator ‘calculates’ when to put their arms up in the air, based on what the person next to them is doing. So too does each pixel on a touchscreen pass its ‘colour value’ on to the next pixel in the direction your finger is moving, and in turn receive an updated value from the immediately preceding pixel in the direction your finger came from.

It is essential to the functioning of a touchscreen that all the pixels decide what their next value will be, and change to that next value, at exactly the same time. And so each pixel refers to a very accurate reference clock within the computer. Indeed, all the pixels on your touchscreen are switched off, so all their values can be recalculated, and are then switched back on again, many times a second, more quickly than our eyes can discern.
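
The same pattern can be sketched in a few lines (a toy model, not an actual display driver): the whole frame is recalculated from the previous frame in one synchronous step, so no pixel ever reads a half-updated neighbour.

```python
def tick(frame, dx):
    """Recalculate every pixel at once, shifting the image dx pixels rightward."""
    width = len(frame)
    # each new value is read from the OLD frame (double buffering), so the
    # update is simultaneous and order-independent
    return [frame[(i - dx) % width] for i in range(width)]

frame = [0, 0, 255, 0, 0, 0]   # a single lit pixel on a one-row 'screen'
frame = tick(frame, 2)         # the finger has dragged two pixels to the right
```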

Of course, if you move your finger very quickly across a touchscreen, you can begin to see the computational limits of the system, limits that are likewise encountered by objects that have mass, for example protons, as they approach the limiting speed (of light) in a particle accelerator. A packet of information cannot propagate across the cellular automata of Calculating Space any faster than one quantum of length per quantum of time.

Moving on up from Representation to Reality

We are quite familiar with and accepting of still and motion images being displayed on flat, two dimensional screens. In general, those images provide us with a facsimile of the light that is reflected off objects in the real world and onwards into our eyes. However, when we move out of flatland and into three dimensional reality, we are dealing with more than just the physics of reflected light striking the red, green and blue receptors in our eyes.

In three dimensions, our actual reality (as distinct from a mere representation of it in either ‘2D’ or ‘3D’) is subject to the entire gamut of physical ‘law’, not just the law pertaining to the reflection of light, but the law pertaining to the properties of the reflecting materials themselves, and their interactions. Where in two dimensions we have ‘pixels’, which are squares on a flat surface, in three dimensions we have ‘voxels’, which are cubes in a volume of space.

In our touchscreen display, each pixel need only represent one specific hue at one specific brightness at any given moment in time. But in the real physical world, each voxel must represent everything about the reality at the specific location of that voxel — all the physical law that applies to that point in space, at any given moment in time.

René and Isaac: Playing 3D Battleships.

As a child, René Descartes was often laid up sick in bed, and tradition has it that he observed a fly ‘walking’ upside down on the ceiling of his room, prompting him to develop the concept (named after him) of cartesian spatial coordinates. In the theory of Intelligent Space, space is an absolute datum (as was favoured by Isaac Newton), a rigid framework in which every voxel has a cartesian ‘address’ that is a fixed number of space atoms (voxels) offset from a common origin in each of the x,y, and z axes.

Every point in space has an absolute address relative to the origin

Alfred and Bertrand, Alan and Kurt: Real worlds and Virtual worlds.

We all appreciate that there is a computer beavering away behind our touchscreens. So where then is the ‘computer’ whirring away behind reality? Beginning in 1999, the Matrix movies popularized the idea that we are living in a ‘simulated’ world, but sure enough, just ‘above’ the putative simulations presented in any of the Matrix movies, we soon discover that there is some real physical hardware driving it all. Most movie goers have realized that this layering of ‘reality’ offers little insight into our understanding of ‘ultimate’ reality, and have since become bored with the genre, despite recent efforts to revive it.

Alan Turing’s intention was never that his famous ‘machine’ become a mechanical reality. He only ever meant it as an imaginary device, an abstraction designed to generate and prove every possible mathematical theorem. The mathematical Platonists (so named after Plato) believed that the complete tea set of all mathematical theorems has always existed, and that over the course of millennia, mathematicians have been merely discovering the members of that set — the teapot, the milk jug, the cups and saucers, the strainer, the sugar bowl, the spoons, and so on.

The suggestion was that the set of all mathematical theorems could be catalogued in its entirety, and a group of mathematicians led by Bertrand Russell and Alfred Whitehead set out to compile the complete contents of this set. But Turing and Kurt Gödel rocked up and spoilt their afternoon tea party, proving that the set of all mathematical theorems is in fact an open set; proving that we can never complete our knowledge of the collection, for there exist mathematical truths which can never be proven.

Gödel’s proofs utilized the notion of self-reference, as in the statement “this statement is a lie”, which if true, would be false — and if false, would be true. As we shall see, this conundrum of self-reference is capable of ‘pulling the universe up by its own boot straps’.

John and Stephen: Universal machines.

A universal Turing machine can do everything that any Turing machine is capable of doing. It can mimic the functionality of every ‘specialized’ Turing machine, including the universal Turing machine itself. It is a master of all trades and professions.

Following Turing’s lead, John von Neumann proposed that we develop an analogous physical machine, which he called the universal constructor (nowadays better known as the universal assembler). It could do for manufacturing physical objects what the universal Turing machine had done for generating mathematics. The universal assembler is typically a microscopic machine, not unlike a bacterium, that at its most basic can manufacture exact copies of itself, and at its most complex, can manufacture exact copies of anything. It has a control centre just like the nucleus of a bacterium, containing the instructions for making replicas of itself (like the instructions in a bacterium’s DNA). And like a bacterium, it takes in raw materials from its surroundings to be incorporated into new assemblers, increasing the population of the colony exponentially — one machine becomes two, those two then become four, those four eight, and so on.

Your modern all purpose computer (whose architecture was also pioneered by von Neumann) is often presented as a ‘realized’ analogue of Turing’s universal machine, for it too can engender a ‘simulation’ of its own functionality (often referred to as a ‘virtual’ machine). Indeed, that equivalent virtual machine can go on to simulate another equivalent virtual machine, and so on, ad infinitum. Stephen Hawking famously presented this scenario as a universe that is perched atop an infinite tower of turtles that stretches ‘all the way down’.

A recursive nesting of virtual computers

Yet hasn’t every couple, at various stages in their relationship, scratched one another’s back, or kept each other warm, or sat on the grass facing away from each other, and held the other upright? If we return from von Neumann’s physical machines to Turing’s abstract machines, we can likewise imagine that a universal Turing machine could simulate another universal Turing machine, but that these two machines, being merely abstractions and thus not subject to physical law, could then proceed to simulate each other, in perpetuum.

A pair of universal Turing machines simulating each other’s existence, set against a backdrop of nothingness.

Like a Möbius strip, which appears to have two surfaces, but actually has just one, this pair of machines could ‘hold’ each other in existence, where neither of them actually has an existence independent of its simulation by the other.

Edwin and Fred, Arthur and Wilhelm: What was happening before anything ever happened?

Edwin Hubble’s study of stars known as ‘standard candles’ (because they have a fixed intrinsic brightness) indicated that our universe is expanding, the implication being that it was once much smaller. The idea left Fred Hoyle aghast, and he mockingly coined the term ‘big bang’ for the point in space and time where it all began. Though Hoyle himself never abandoned his rival ‘steady state’ model, he went on to help pioneer our understanding of the synthesis of the chemical elements in the first few minutes of the universe’s existence.

It is now well established (from a more recent study of much brighter standard candles) that the universe began about 13.8 billion years ago, with theorists turning their speculations to what lies beyond the big bang (either in space or in time). However, in one typical hypothesis called ‘eternal inflation’, the big bang once again rests precariously upon Stephen’s infinite tower of turtles.

Indeed, over the last hundred odd years, theoreticians have attempted to visualize the strange and impossible geometries, probabilities, and causalities that the elegant (for the most part) mathematics of their modelling suggest. For example, Arthur Eddington presented the universe as being like the surface of a balloon on which we have drawn galaxies, with a texta, that recede from each other, without any centre of expansion, as the balloon is inflated.

Balloon universe

Then came the analogy of “spacetime being like a stretched rubber mat that is curved by the presence of mass”.

A depression in spacetime

Or the recent favourite of dedicated followers of fashion, the idea that “the extra dimensions of string theory are ‘rolled up’ into each of the normal dimensions we are familiar with in everyday experience”.

Rolled up dimensions

Complex geometries are the bread and butter of modelling reality. One of the most elegant representations of the standard model of forces and particles is the 248 dimensional Lie group E8, classified in the late 1800s by Wilhelm Killing.

A Petrie projection of E8

Such attempts to visualize higher dimensional space, heroic though they might be, are nevertheless deeply unsatisfying, for even Blind Freddie knows that space is neither a point, nor a surface; it’s a volume. Even our Möbius strip presents a sleight of hand, for it takes a two dimensional surface, and proceeds to give it liberties in a dimensional realm above its station (the Klein bottle does the same). Higher dimensional mathematics are spectacularly successful in modelling a universe that is not static, but rather in constant flux. But those mathematics are merely maidservants to a reality that actually has three spatial dimensions — no more, and no less.

Indeed, if we open the door of the anteroom of instrumentalism, and step back for a moment into the grand ballroom of understanding, we discover that over the course of the last century, space and time have taken on a life of their own in our conceptions of them, becoming (when combined) a thing of ‘substance’ (spacetime), ‘machinery’ (Calculating Space) and most recently quintessence, being a substance not dissimilar to the ‘aether’ that was so glibly eschewed all those years ago.

Let’s start at the very beginning: Yes, and No…

So then, let’s look at how a quantum of Intelligent Space might evolve. We start with two Turing machines, consisting of strings of binary digits, neither of which exists without the other, so that we appear to have something, where in fact there is nothing. At this juncture we have a universe without any physical reality. But this self-supporting computational engine has the capacity to begin generating the mathematics that constitute the physical law of the universe.

The pair grow in computational capacity by drawing bits of binary out of the surrounding nothingness (be those bits represented by one and zero, yes and no, black and white, yin and yang, something and nothing — it matters not). Once physical ‘law’ has emerged from the mathematics that the pair generates, and that law has become ‘loaded’, as it were, in their ‘memory’, the pair can proceed to simulate the first quantum of physical space, a vacuum having a cartesian address of (1,1,1), and a ‘vacuum energy’ consisting in the machinations of its underlying Turing machines. The pair can thence go forth and multiply, continuing to draw in binary digits from the ‘surrounding’ nothingness.

Space proceeds to expand exponentially like a bacterial culture, where there is no single centre of expansion (as in an explosion), but rather each and every replicating quantum of space is itself a centre of the expansion.

Each newly created Turing machine pair, and the quantum of space they simulate, is assigned a unique cartesian address. It is not the stretching of a fabric (or a ‘rubber sheet’) that causes space to inflate. Instead, each quantum of space remains fixed in volume, and it is the increase in their number that results in the expansion of the total volume.
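
This kind of expansion, by replication rather than stretching, can be sketched in one dimension (a toy illustration; the ‘galaxy’ addresses are invented for the example). If every quantum of space spawns a copy of itself each tick, every address doubles, and the gap between any two markers grows in proportion to the gap itself, with no privileged centre of expansion:

```python
def expand(addresses):
    """One tick of replication: every existing cell spawns a neighbour,
    so every cartesian address doubles."""
    return [a * 2 for a in addresses]

galaxies = [0, 1, 3, 8]        # addresses measured in quanta of space
galaxies = expand(galaxies)    # separations 1, 2, 5 become 2, 4, 10
# every marker sees every other receding at a rate proportional to distance
```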

While the simulated physical universe (space) is thus expanding, the actual Turing machine pairs that operate behind that space, which in themselves are abstract and have no physical reality, remain in exactly the same ‘place’ (nowhere). This place (if it can be thought of as such a thing) is known as the ‘superposition’ — all the computational engines behind the quanta of space (which together constitute all reality) are in direct proximity to each other, and can directly communicate with each other, even though they simulate quanta of space whose assigned cartesian spatial coordinates may put them, logically, at opposite sides of the universe.

One heroic way of picturing the superposition, suggested by Warwick Grigg, is that every point in space is positioned one unit from the cartesian origin, in a dimension of its own, such that the superposition has as many dimensions as there are quanta of space.

It is thus that an instantaneous correlation can exist between all quanta of space, however vast their separation (in space), for their underlying Turing machine pairs are directly interfaced with each other at the (non-physical) superposition. Because the ‘engine’ of the entire universe is thus contained within the superposition, the universe has no difficulty in keeping its activities perfectly synchronized, for all of its clocks never actually ‘leave Greenwich’.

The quantum of time is the period that elapses between the pairs of Turing machines alternating their simulations of one another, such that all 10¹⁸⁵ Turing machine pairs that make up the observable universe go about their work in precise synchrony, just like the typing pool at Bletchley Park.
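
That 10¹⁸⁵ figure can be sanity-checked in a couple of lines (a rough estimate, assuming a Planck length of about 1.6 × 10⁻³⁵ m and an observable-universe radius of about 4.4 × 10²⁶ m):

```python
import math

PLANCK_LENGTH = 1.616e-35        # metres (approximate)
OBSERVABLE_RADIUS = 4.4e26       # metres, roughly 46.5 billion light years

planck_volume = PLANCK_LENGTH ** 3
universe_volume = (4 / 3) * math.pi * OBSERVABLE_RADIUS ** 3
voxels = universe_volume / planck_volume   # on the order of 10^185
```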

The Hubble Deep Field is a composite image of 342 exposures, taken over the course of 10 days, of an extremely dark region of the night sky, a patch so small that a tennis ball held at a distance of 100 metres would cover it, and which on this close inspection reveals some 3,000 objects, almost all of which are galaxies. Even the tiny bit of the universe that we can observe is big. Really big.

There is (obviously) no limit to the amount of space that the superposition can simulate, with there being ample evidence that the (simulated) universe is much bigger than the region we can observe through the detection of ancient sources of light.

Recalling then how an image moves across a touchscreen, so too, across the entire universe, is every ‘voxel’ of space ‘recalculated’ with every ‘tick’ of the superposition clock. Each one of us, immersed as we are within this space, has an intimate connection with the superposition.

Cosmology: What are we looking at out there?

To fathom the frankly mind-boggling magnitude of the computations taking place within the superposition, consider a quantum of light travelling from the outer reaches of the universe and finally plunging into the detectors of the Hubble Space Telescope. This quantum is not a particle (despite manifesting as a photon), but rather a packet of information containing a complete description of the reality it represents. It mostly traverses regions of the universe that are a vacuum (computationally idle), at a speed of one quantum of space per quantum of time (the speed of light).

To travel just one metre, the information has to hop across 10³⁵ ‘stepping stones’ of space, transferring the ‘information’ of its reality between each quantum of space in turn, like a bucket brigade moving water from a well to a fire (or indeed like the propagation of a Mexican wave). But the expansion of space is accelerating, and so as our quantum of light ‘information’ traverses each metre, it finds that additional stepping stones are being inserted into its path (by the replication of Turing machines at the superposition), causing the wavelength of its light to be (computationally) lengthened (shifted towards the red end of the light spectrum). Naturally, the further this photon has travelled, the more inserted space it will have encountered along the way (and the greater its redshift will have become).
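
A toy version of this computational redshift can make the accumulation concrete (the per-tick growth fraction h is an invented parameter for illustration, not a measured quantity): while the packet hops from stone to stone, the lattice it crosses grows by a fraction h per tick, and its wavelength is stretched by the same accumulated factor, so a longer journey means a larger redshift.

```python
def stretch_factor(ticks, h=0.01):
    """Accumulated wavelength stretch (1 + z) after `ticks` hops through a
    lattice into which new cells are inserted at a fraction h per tick."""
    return (1 + h) ** ticks

near = stretch_factor(10)    # a short journey: barely redshifted
far = stretch_factor(100)    # a long journey: wavelength stretched ~2.7x
```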

We can also imagine that on its journey, this quantum (of information) will traverse stepping stones (voxels) that are not representing a vacuum, but are actively engaged in transmitting a packet of information that ‘represents’ some other physical reality.

If we think of the phenomenon of gravity as emanating from massive objects at the speed of one quantum of length per quantum of time, then the ‘graviton’ (the name we give to a packet of gravitational information) will contain several data. Like the photon, it will have directional data, and as it passes each stepping stone, its trajectory will be reviewed and recalculated, such that it will remain on a constant heading, just so long as it doesn’t encounter and interact with any information that acts to alter its direction.

When the path of such a graviton intersects with that of our photon, each having quite possibly come from the farthest reaches of the universe, the graviton will ‘inform’ the calculation of the photon’s trajectory at their voxel of intersection, imparting one unit of ‘gravitational redirection’ towards the direction from whence the graviton approached.

Of course, the closer our photon comes to a gravitational source, the greater the frequency of such encounters, and their cumulative alteration of our photon’s path. Indeed, the intensity of any force transmitted through Intelligent Space diminishes as the inverse square of the distance from the source, according to simple geometry.

The intensity of photons and gravitons diminishes with the square of their distance from the source
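
The geometry behind the inverse square law can be checked directly (a sketch in arbitrary units): a fixed number of packets leaves the source, every spherical shell around the source intercepts all of them, and a shell’s area grows as r², so the intensity per unit area, and hence per voxel, falls as 1/r².

```python
import math

def intensity(packets, r):
    """Packets per unit area crossing a spherical shell of radius r."""
    return packets / (4 * math.pi * r ** 2)

# doubling the distance from the source quarters the intensity
i_near = intensity(1000, 1.0)
i_far = intensity(1000, 2.0)
```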

Eventually, our photon’s (long) journey comes to its end when the information (that defines it) interacts with the information contained in the voxels that represent the lenses, the light detectors, and finally the rendering engines, of the Hubble Space Telescope.

The fact that gravitons propagate at the speed of light (and not instantaneously, as Newton thought) is the underlying reason for the success of contemporary gravitational modelling. Indeed, the Global Positioning System would simply not work if its design did not account for the fact that an object will have changed its location, albeit by a minuscule amount, in the finite time any force must take in reaching it.

The components of the Hubble Space Telescope have stable (chemical) relationships with each other (as do the components of our bodies in holding us together), but each of those relationships must be constantly transmitted across the Intelligent Space through which we and Hubble are propagating. Hubble is orbiting the earth, the earth is orbiting the sun, the sun is orbiting the Milky Way, the Local Group, the Virgo Cluster, the Local Supercluster, Laniakea and beyond.

Who said we’re not at the centre of the universe?

Hubble (and indeed each one of us) has a resultant ‘universal’ gravitational trajectory (relative to the rigid framework of Intelligent Space). This trajectory is the sum of all the vectors, from all the gravitons, that at any given moment are interacting with any of the voxels that are representing our reality. One assumes an exclusion principle, preventing any voxel from processing more than one interaction per quantum of time.
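The resultant ‘universal’ trajectory described above is, computationally, just a vector sum. A minimal sketch (the graviton vectors below are invented purely for illustration):

```python
# Resultant trajectory as the sum of all graviton vectors interacting
# with a body's voxels in one quantum of time. The three vectors here
# are illustrative placeholders, not measured values.

def resultant(vectors):
    """Component-wise sum of a list of (x, y, z) graviton vectors."""
    return tuple(sum(component) for component in zip(*vectors))

gravitons = [
    (0.0, 0.0, -9.8),    # dominant nearby attractor (e.g. the earth)
    (0.3, 0.1, 0.0),     # a distant attractor, long since moved on
    (-0.1, 0.2, 0.0),    # another faint, far-away source
]
trajectory = resultant(gravitons)
# The nearby source dominates the sum, but every distant packet still
# contributes its single unit of redirection.
```

The assumed exclusion principle would simply mean that each voxel contributes at most one such vector per quantum of time to the running total.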

Obviously the most intensive interactions for Hubble are those with the gravitons emanating from the earth, which keep it whizzing around in low Earth orbit. If we get in a car and blast down the highway, then that motion will be the most intensive interaction that our bodies (and the car) have with the Intelligent Space into which we are pushing forward.

But in each moment there will likely be one voxel, from the approximately 10¹⁰³ voxels which at any moment are defining any individual’s existence, that will encounter a packet of gravitational information recently arrived from some attractor far away from us out there in the universe, an attractor that of course isn’t where it was, for it has long since moved on.

Computational physicists attempt to model reality in terms of these individual (quantum) interactions, but this modelling can only ever be a crude approximation to reality, for the most complete modelling is the computation taking place within reality itself.

Erwin, Rosalind, Francis, James and Craig: Seminal Ideas.

In 1935, Erwin Schrödinger shocked the world with his indifference towards the fate of a hapless moggy.

As if he hadn’t inflicted enough existential angst on cat lovers the world over, when he was in Ireland seeking sanctuary from the Nazis (and from those who might cast judgement on his lifestyle), he delivered a series of lectures simply entitled “What is Life?” He argued that deep within the structure of the biological cell, there must exist an ‘aperiodic crystal’ in which the morphology of life is encoded.

His extraordinarily prescient speculation eventually led Rosalind Franklin, Francis Crick and James Watson to the discovery of the structure of DNA. Craig Venter, who has more recently risen to prominence by sequencing the human genome, sees himself following in Schrödinger’s footsteps, hoping to inspire the next generation of geneticists and computer scientists with his own speculations on how we might manipulate the information contained in the genome.

Like those putative ‘aperiodic crystals’, Intelligent Space is a highly speculative, but nevertheless testable, hypothesis. It suggests that the information contained in the genome lies on the surface of a vast ocean of information: the information content of the entire cosmos, contained entirely within the superposition, an entity that is potentially at our fingertips (and likely to have long since come to the fingertips of much other sentience throughout the universe).

Albert: The Avant Garde.

There’s a natural tendency, for those of us who enjoy our being within a three-dimensional world, to imagine squeezing all those myriad pairs of Turing machines into a dimensionless point, just as General Relativity tries to collapse the universe into a singularity, as if that point were still immersed within a three-dimensional reality; it’s not. The superposition is not contained within the set of all things that have dimension. Relief arrives when we fathom that the material world is made out of information, and conversely, that information is not made out of material.

Albert Einstein established the two great pillars of modern physics. Over the ensuing century, a vast community of researchers has very nearly completed the entablature that unites them.

I gratefully acknowledge the assistance of Guy Cranswick in preparing this work for publication.