The Fabric Of Computational Non-Duality

Jacob Parece
17 min read · Jan 3, 2022


Wolfram Physics, “Theory Of Everything” — & Exploring Reducible Pockets Of Computations Within Blockchain & Iota Tangle Technology


THE THEORY OF EVERYTHING

Physicists are looking to create a grand unified theory that can bring all particles and the forces acting between them into a single model, one that connects the macro scale (Einstein's relativity) and the micro scale (quantum theory). How to build such a model is an open debate. In this article, we will explore the theories of Stephen Wolfram. Wolfram is a physicist, mathematician, and computational-language expert. In his book "A New Kind of Science," Wolfram lays out how to perceive the universe through computational modeling.

We will explore elementary definitions within physics, computation, and awareness to help refine our understanding. Our brains are used to compartmentalizing, but I want the reader to be aware that continuity and discrete events are both, paradoxically, at the heart of Wolfram's Theory of Everything. Ultimately, this story will lead us to the question: "Can the Theory of Everything help create an exponentially more efficient blockchain?"

THE SEARCH FOR A UNIFIED THEORY

The models which govern physics up until recently have always been expressed mathematically. Reasonably, as scientists have introduced new concepts, the previous understanding of the universe is negotiated against new thought, mainly in the form of mathematics.

Traditional Model of Physics = Complexity Modeled Theorems Fit To Unknown Variable Structures

Theory Of Everything = Simplicity Modeled Causal Structures Mapped Onto A Graph To Analyze How Complexity Forms

The Wolfram Model

The central principles of the Wolfram Physics Project are about finding a simple computational rule that generates our universe.

Wolfram’s different path emerged out of his work within the power of computational learning. The path which ended up becoming the standard of Wolfram Language which helps automate language into computation (a set of operations that begins with some initial conditions and gives an output which follows from a definite set of rules).

This “New Kind Of Science” inverts our perception to view the world as the interaction between computational elements.

For the purpose of this article, we will define a computation as an output produced by a direct causal relationship from an input. We can assume inputs are probabilistic and that their causal relation, the output state transition, is deterministic.

To know the answer to a computation, one cannot skip ahead. Wolfram puts forth a principle called computational irreducibility: in order to produce an output, the input must be processed first, or put another way, in order to know the effect, the cause must happen first.
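Computational irreducibility is easy to see in a toy system. The sketch below uses Rule 30, Wolfram's canonical cellular automaton (the implementation here is a minimal version of my own): the only way to learn what row n looks like is to compute every row before it.

```python
# A minimal sketch of computational irreducibility using Rule 30:
# there is no shortcut formula for row n, so each row must be
# computed from the row before it.

def rule30_step(row):
    """Apply Rule 30 once: new cell = left XOR (center OR right)."""
    padded = [0, 0] + row + [0, 0]          # treat cells beyond the edge as 0
    return [padded[i - 1] ^ (padded[i] | padded[i + 1])
            for i in range(1, len(padded) - 1)]

def rule30(steps):
    """Evolve from a single black cell, returning every intermediate row."""
    row = [1]
    history = [row]
    for _ in range(steps):
        row = rule30_step(row)              # no way to skip ahead
        history.append(row)
    return history

history = rule30(2)   # rows: [1], [1,1,1], [1,1,0,0,1]
```

Even for this eight-case rule, the cause (each prior row) must happen before the effect (the next row) can be known.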

Cause and effect create an idea that definite events happen. We experience this as a single thread of time and correlate that space being of the same matter.

In Wolfram’s words:

Wolfram challenges the assumption that space and time are a single-threaded substrate by using a Multiway graph to computationally plot the cause-and-effect histories between abstract elements. This process measures how their interactions change over time. Points in the graph can be viewed as bounded observations, while branches in the graph represent the continuum as a slice of time. Thus, the Multiway graph tracks histories and relations as bounded and unbounded observations existing simultaneously. Utilizing knowledge gained from running computational models, Wolfram's model theorizes that the branching and merging of a system can be analyzed on a graph to give us hints into the nature of reality.
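A multiway system can be sketched in a few lines. The rewrite rule below is a hypothetical example in the style of the Wolfram Physics Project's string-rewriting systems (not a rule from the project itself): every possible rewrite is applied at every position, so states branch, and branches that reach the same string merge.

```python
# A sketch of a multiway system over strings: each state produces all of
# its possible successors, giving the branching/merging graph of histories.

def successors(state, rules):
    """All strings reachable from `state` by one rewrite at any position."""
    out = set()
    for lhs, rhs in rules:
        start = state.find(lhs)
        while start != -1:
            out.add(state[:start] + rhs + state[start + len(lhs):])
            start = state.find(lhs, start + 1)
    return out

def multiway(initial, rules, steps):
    """Levels of the multiway graph plus its cause-and-effect edges."""
    levels = [{initial}]
    edges = set()
    for _ in range(steps):
        nxt = set()
        for s in levels[-1]:
            for t in successors(s, rules):
                edges.add((s, t))           # causal relation: s produced t
                nxt.add(t)
        levels.append(nxt)
    return levels, edges

levels, edges = multiway("A", [("A", "AB"), ("B", "A")], 2)
# levels[2] holds both branches: {"ABB", "AA"}
```

Each level is a "slice of time" across every branch at once, which is exactly the bounded/unbounded duality described above.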

Multiway Graph

DOUBLE-SLIT EXPERIMENT

In order to create a unifying model, physicists have to connect Einstein's theory of relativity with the quantum realm, in which observable variances occur.

In the double-slit experiment, two narrow slits are cut out of a piece of material, and behind it sits a screen. Light shone at the material passes through both slits, and the waves recombine on the far side, creating an interference pattern on the screen. Yet when the photons are measured directly with an instrument to determine which slit they entered, they react as A PARTICLE INSTEAD OF A WAVE.

The quantum experiment demonstrates "that light and matter can display characteristics of both classically defined waves and particles; moreover, it displays the fundamentally probabilistic nature of quantum mechanical phenomena." — Wolfram

Wolfram states that the double-slit experiment is the story of branchial space and physical space. The destructive interference is the result of the two possible paths of photons going through the slits but winding up on opposite ends of branchial space. This mechanism is why nothing is "measurable": the different branches cannot be merged together to be measured within physical space.

What is more interesting is that when you take this same experiment and "mark" the stream of photons in order to track them, the interference pattern is not seen. As soon as you "un-mark" the photons and delete the information, the experiment returns to its original state.

It appears that the updating of the universe doesn't always happen sequentially (ONE LINEAR THREAD). In order to take any computation into consideration, we must first acknowledge our own computational boundedness. Time is a function of how we observe, not just what we observe.

Time In The Wolfram Model:

The Wolfram Model has helped explain a way in which computation can be used to derive how elements in our universe interact with each other. We, as the bounded observers, play an integral and equivalent role. Out of observation arises quantum phenomena that shift our understanding of space and time as being just a single-threaded "thing".

Now we will explore a new paradigm called multi-computation which will give rise to new ways of perceiving computational structures.

Multi-Computation: A New Paradigm


The computational paradigm utilizes sequentialized time, in which one thread exists. Simple, time-constrained computational structures lack the ability to parallelize computation into multiple realities; put another way, single-threaded computational structures like blockchains utilize a synchronized function, in which time and computation are bound output states.

Multi-computational systems run many threads of time, which correspond to different causal histories. This can be thought of as multiple computations evaluated in asynchronous, parallel events. In each multi-computation, the update happens within its own time coordinate; put another way, updating happens in one branch of the Multiway graph while simultaneously happening on all other possible branches where updating events occur.

Within this network of relationships exists one function between "discrete things" and what is observed. We don't have the ability to stand outside of the system and know all the moving parts; we can only be updated by the events themselves.

The observer is non-dual in a multi-computational model, in the sense that decisions can be made that are both probabilistic and deterministic, paradoxically. Thus, non-duality exists in multi-computation systems when following a specific path of state changes, and duality exists when we observe the decisions of all possible paths.

Multi-computation within software programs equates to layers that are all dependent upon each other. The byproduct is an abstraction of the data structure, the updating, and the scope of observation. If we peered into any of these functions, we may not be able to determine the state of the system, unlike uniform computation, where time is the story of progress. In a multi-computational system, a model must be created for the observer to determine the state. It is no longer a linear input-output-update paradigm; more dynamic models are needed that take into consideration the status of the observer.

In the next section, we will evaluate blockchain technology and learn a little about how distributed consensus is achieved, before returning to how multi-computation can be utilized within blockchain technology.

BLOCKCHAINS ARE COMPUTATIONALLY CONFLICT-FREE REPLICATED DATA STRUCTURES


Blockchains are distributed computing systems where:

  • Economic Ledgers are booked & maintained

while

  • Consensus is performed (nodes run input-output logic)

in which

  • Immutable transactions are placed in blocks storing a sequential history chain

From a state of invariance to variance, a spectrum of replication can exist. In a traditional blockchain, nodes share information about each other's opinions in order to form a uniform consensus on the values within the ledger. In account-based ledgers, each node records the current account balances, whereas in Bitcoin's Unspent Transaction Output (UTXO) ledger, each node tracks the unspent outputs.

In UTXO, each time an output is spent, the remaining change returns as a deposit awaiting to be spent again.

We can think of this as breaking a dollar bill and receiving change back, but because this is happening digitally each value returns to a “container”. It is the responsibility of each node to be the accountant of the ledger’s state. Thus, an output can only be spent once. Open record books help build transparency and trust in a distributed model.
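The "breaking a dollar" mechanic can be sketched in a few lines. This is a toy model only (real UTXOs carry unique transaction ids, scripts, and signatures, all omitted here): inputs are consumed whole, and the leftover value returns to the sender as a new "change" output.

```python
# A toy UTXO spend: consume whole outputs, pay the recipient, and
# return the difference to the sender as a fresh "container".

from dataclasses import dataclass

@dataclass(frozen=True)
class Output:
    owner: str
    value: int

def spend(utxos, inputs, recipient, amount, sender):
    """Consume `inputs` from the UTXO set, pay `recipient`, return change."""
    assert all(o in utxos for o in inputs), "output unknown or already spent"
    total = sum(o.value for o in inputs)
    assert total >= amount, "insufficient funds"
    remaining = (utxos - set(inputs)) | {Output(recipient, amount)}
    if total > amount:                       # change goes into a fresh container
        remaining |= {Output(sender, total - amount)}
    return remaining

utxos = {Output("alice", 10)}
utxos = spend(utxos, [Output("alice", 10)], "bob", 7, "alice")
# Alice's 10-value output is gone; Bob holds 7 and Alice holds 3 in change.
```

Because an input must exist in the set before it can be consumed, each output can be spent exactly once, which is the conflict-free property the ledger depends on.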

POCKETS OF REDUCIBILITY

Note — Blockchain’s are computationally irreducible in the fact that one simply cannot skip ahead, and know how code will process without running it first. Consensus has been refined by replacing algorithms, searching for pockets of computational reducibility. Within these pockets, one could analyze if the code modifications created the desired effect.

HOW DO BLOCKCHAINS COME TO CONSENSUS ON THE LEDGER STATE?

Blockchain Consensus — Classical vs. Nakamoto vs. Hybrid

One of the biggest problems that blockchain and distributed computing face is network complexity. When a node (computer) transfers information to another node, it must process and react upon it. If a node must gather information by questioning other nodes, the more messages received the more processing that must take place.

Similar to physics, early classical-consensus blockchains do not account for relativity between nodes efficiently. In classical consensus, gossiped messages must traverse, time-consumingly, from leaders to secondary leaders and beyond throughout the network. This model is similar to a telephone chain in which each node answers a call from its neighbors and must gossip it to the next town, the next state, the next country, and the whole world. The further into the computation, the more complex this system becomes.

In Nakamoto’s consensus, utilized by Bitcoin, block creation is slow and only happens after a leader is selected. Nodes are bound by a “clock” of pointed state and state change in sequential fashion, block on top of the block. A single point of state at a specific time. Honest nodes are incentivized to not double spend when blocks are created. The ledger must be kept in a conflict-free state and blockchain does this by slowing down and choosing a leader — “truth-teller” node. Only the “truth-tellers” opinion survives, and the rest of the network reorganizes around it. From a decentralized race, to a centralized broadcast model, consensus becomes a single-thread bound observation. In future sections, we will explore why Nakamoto’s Consensus is a breakthrough that should not be forgotten.

In hybrid consensus, subsampling uses fixed neighbors to broadcast messages from one point to the next. Hybrid models try to create the best of both worlds by sampling a portion of honest nodes (reducing message complexity) and broadcasting, but fail to scale with large networks due to message complexity. The model is similar to how pollsters sub-sample votes at the polls to make judgments about the probability of who won. Like classical consensus, hybrid sub-sampling is bound by message complexity, which is why a fixed number of nodes is needed to determine consensus in a timely manner.

Classical Consensus — Byzantine Fault Tolerant

1. Transaction issued to (leader) node

2. The main (leader) node broadcasts the request to all the secondary (backup) nodes.

3. The nodes (primary and secondary) perform the service requested and then send a reply back to the user.

4. The request is served successfully when the user receives ‘n+1’ replies from different nodes in the network with the same result, where n is the maximum number of faulty nodes allowed.
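Step 4 can be sketched from the client's side. This is a simplified illustration (a real BFT client also verifies signatures and view numbers, omitted here); `f` below plays the role of the article's n, the maximum number of faulty nodes tolerated.

```python
# A sketch of the BFT client-side check: with at most f faulty nodes,
# f+1 matching replies guarantee at least one honest node agrees.

from collections import Counter

def accepted_result(replies, f):
    """Return the reply value once it has at least f+1 matching copies."""
    value, count = Counter(replies).most_common(1)[0]
    return value if count >= f + 1 else None

# 4 nodes tolerating f=1 fault: three matching replies outvote one faulty one
result = accepted_result(["0xabc", "0xabc", "0xabc", "0xbad"], f=1)
```

If no value reaches the quorum, the client cannot accept anything and must retry, which is where the message complexity of classical consensus comes from.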

Proof Of Work

1. Transaction issued

2. Leader selected to form Block (having solved a computationally expensive proof)

3. Nodes update their opinion
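The leader selection in step 2 can be sketched as a hash-grinding loop. This toy version uses a deliberately tiny 12-bit difficulty (about 4,000 attempts on average) so it runs instantly; real networks tune the target so a solution takes minutes.

```python
# A toy proof of work: grind nonces until the block hash falls below
# a target. Finding the nonce is expensive; verifying it is cheap.

import hashlib

def mine(block_data: bytes, difficulty_bits: int = 12):
    """Find a nonce so sha256(data || nonce) has `difficulty_bits` leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()       # anyone can re-verify in one hash
        nonce += 1

nonce, block_hash = mine(b"block: alice -> bob, 7")
```

The asymmetry between finding and verifying the nonce is what lets every node cheaply check the leader's claim in step 3.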

Hybrid — Sub-Sampled Gossip

1. Transaction issued

2. Node queries random subsampling

3. Broadcast spreads through network
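The steps above can be simulated. The parameters below are hypothetical (chosen only to make the sketch converge quickly, in the spirit of Avalanche-style sub-sampled voting): each round, every node polls k random peers and adopts their majority opinion, so the network drifts toward a single value.

```python
# A sketch of sub-sampled voting: repeated small random polls pull the
# whole network toward the initial majority opinion.

import random

def subsample_round(opinions, k=5):
    """One round: each node samples k peers and follows their majority."""
    nodes = list(opinions)
    return {node: 1 if sum(opinions[p] for p in random.sample(nodes, k)) * 2 > k
                  else 0
            for node in nodes}

random.seed(0)                                            # deterministic demo
opinions = {i: 1 if i < 60 else 0 for i in range(100)}    # start: 60/40 split
for _ in range(20):
    opinions = subsample_round(opinions)
# after enough rounds, nearly every node holds the initial majority value
```

Note that each node sends only k queries per round regardless of network size, which is the message-complexity reduction hybrid designs aim for.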

COMMUNICATIVE PROPERTIES — LIVENESS & SAFETY GUARANTEES

Early blockchains like Bitcoin would be considered part of this single-threaded computational structure. Nakamoto's longest-chain-wins consensus, which interlocks transactional histories into sequential blocks, is a prime example. By compressing all computational capability into blocks, it creates an easy-to-understand evolution.

The breakthrough of Bitcoin was the ability to maintain a conflict-free ledger without needing to trust a third party representative. Bitcoin achieves this by having each node individually race to solve a proof of work — cryptographic puzzle — and then broadcast the block update across the network.

Rather than trusting opinions, Bitcoin relies on trusting the process of computing asynchronously and on the cost it would take to change the ledger. To judge how hard it is to double-spend and create an inconsistent ledger value, the decentralization of consensus must be analyzed. How often a leader is selected in Bitcoin's proof of work is linked directly to a naturally centralized distribution.

The Nakamoto’s coefficient is the amount of nodes that must be compromised in order to invalidate blocks. In Bitcoin, only 8 mining farms need to be compromised to reach 51% and thus lose the ability to come to an agreement on the values that exist. To give another example — in Solana for instance the coefficient is 18.

Source: https://coinmarketcap.com

Nakamoto consensus relies on three principles: Safety, Liveness, and Communication.

Safety within the Nakamoto model equates to nodes committing the same sequence of blocks; no variation can exist, i.e., "the one reality".

Liveness within the Nakamoto model is the guarantee that all nodes will reach an eventual agreement on the values within the network.

Communication within Nakamoto happens neighbor to neighbor when a new block leader is selected to update the state.

Nakamoto consensus uses probabilistic finality, in which a transaction becomes harder to reverse the longer it has existed in an approved state. The benefit of this model is that the network doesn't automatically crash when out of sync.
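"Harder to reverse the longer it has existed" can be made quantitative with the attacker catch-up estimate from the Bitcoin whitepaper (the sketch below is a direct transcription of that calculation): the chance that an attacker controlling a fraction q of the hash power overtakes a transaction buried under z blocks shrinks roughly exponentially in z.

```python
# A sketch of probabilistic finality: probability that an attacker
# with hash-power share q catches up from z confirmations behind.

import math

def attacker_success(q, z):
    """Whitepaper estimate of reversing a z-confirmation transaction."""
    p = 1.0 - q                              # honest hash-power share
    lam = z * (q / p)                        # expected attacker progress
    prob = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam ** k / math.factorial(k)
        prob -= poisson * (1 - (q / p) ** (z - k))
    return prob

# a 10% attacker: ~20% chance against 1 confirmation, <0.1% against 5
risk = [attacker_success(0.1, z) for z in (1, 5)]
```

This is why finality is a grade rather than a switch: each additional block makes reversal less likely without ever making it strictly impossible.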

Nakamoto consensus doesn’t fight against the nature of reality, but introduces a way for nodes to ultimately agree.

Coupled with the Unspent Transaction Output (UTXO) model, in which funds can only be spent once, Nakamoto created a network that was flexible and limited the constraints needed to reach consensus.

While the UTXO model can operate in parallel and does not need “sequential ordering” to produce a state, the production process of block creation does.

Nakamoto’s consensus can be built upon, but can it become a multi-computational system and thus scale to a greater extent?

Wolfram believes multi-computation is the next evolution in how we design computational systems.

Wolfram’s Words:

In order for multi-computational blockchains to operate, we have to find a reducible way to interlink asynchronous updating and root the observer's perspective.

The observer is time-bound in blockchain architecture. Similar to tracking events in a Multiway graph, the observer only sees the scope of their own updating unless they communicate with their neighbors. In single-computational ledgers, this requires a block to be updated before the next one can be built upon. So even though asynchronous "proof of work" races are run in parallel, a node must wait to observe and then compute its next move. This creates a slower process because, as we learned earlier, the inputs must arrive before outputs can be produced, due to computational irreducibility.

In a sense, the difference between distributed nodes is trust. Without an all-seeing, "non-dual bookkeeper," we wouldn't have any way to infuse trust into the system, because it would be exceedingly hard to converge on one shared opinion. Imagine all nodes giving different opinions without a hierarchical trust structure, hashes, and blocks. Multi-computation in the realm of blockchain technology must solve problems in new and innovative ways.

The observer is the structure, and it must be able to decipher the truth and have differentiating functions to enable computational consistency.

What is interesting is that diffusion may bring to light a way to understand order in chaos.

Wolfram’s words — “It’s interesting to note that the emergence of something like diffusion depends on the presence of certain (identifiable) underlying constraints in the system — like conservation of the number of molecules. Without such constraints, the underlying computational irreducibility would lead to “pure randomness” — and no recognizable larger-scale structure. And in the end it’s the interplay of identifiable underlying constraints with identifiable features of the observer that leads to identifiable emergent computational reducibility.”

NOTE — Thus, nodes of a Multicomputational system must maintain an “observed” identity which is linked to an identifiable constraint (trustworthiness) which allows the unidentifiable “random” structure to converge towards computational reducibility or put another way — one eventually consistent replication.

Has there ever been a Multi-computational Blockchain with functions similar to the Theory Of Everything?

The Iota Tangle — Parallel Reality Ledger State

Directed Acyclic Graph

Iota is a permissionless distributed ledger technology that does not use blocks; each miner is a user who appends transactions to the ledger. Without blocks, a different type of data structure is utilized, called a DAG (directed acyclic graph), where transactions can be processed in parallel. Currently running on their developer network is a new consensus mechanism called On Tangle Voting.

Iota’s Parallel Reality Based Ledger is eerily similar to the mechanisms which govern physics, computation and the causal relationship between abstract values. Iota is the first Multi-computational ledger that embeds the observer into the function to create an eventual conflict-free ledger state.

Remember — Nodes of a Multicomputational system must maintain an “observed” identity which is linked to an identifiable constraint (trustworthiness) which allows the unidentifiable structure to converge towards one shared replica or put another way — computational reducibility.

Hans Moog, a software engineer and the originator of the multiverse consensus, has spoken clearly to why this breakthrough is revolutionary.

Multicomputational — Parallel Ledger States

Iota doesn’t constrain the structure, but places constraints on the observer through a layered computational model. What this means is that, replication is abstracted into two parts. The ledger state (strong eventual consistency), and the Unspent Transaction Output (UTXO) “Branch DAG” layer in which all possible realities are tracked. Similar to a Multiway graph, multiple conflicting realities exist simultaneously within the Iota ledger. It could be said that the replication of the ledger state is an act of being an invariant system. Branches (forks) become parallel reality variants, and the master branch continues on as a non-conflicting UTXO [Master Reality].

The power of Nakamoto's UTXO is that it does not need total ordering. Iota leverages the theorized fundamentals of the Wolfram Model and Nakamoto's UTXO to create a multi-layered consensus.

By utilizing the Iota ledger structure as a voting layer, the consensus becomes computationally reducible. Node and transaction become a singularity, a non-dual computation of the issuing node's trustworthiness. When a node issues a transaction, all conditions are met to start processing consensus without further communication, because we already know each node's opinion from observing and tracking the data structure. No further communication is needed unless a decision cannot be made. In those rare metastability-breaking circumstances, Iota will use Fast Probabilistic Consensus and query a subset of nodes to dispel the dispute.

Note — This is also made possible because Iota uses a different Sybil protection, linked to "Consensus Mana," which is generated by producing truthful transactions and is assigned to the node. The speed at which a transaction can be submitted uses "Access Mana," which is tied into the congestion-control algorithm. Remember, in Iota each miner is the user, so trust can be built differently.

Thus, each transaction submitted by a trusted node automatically approves all of its past causal history while simultaneously building approval weight in support of finality. Iota holds true to Nakamoto's vision of longest-chain-wins, but takes the concept and generalizes it into a multi-computational voting schema.
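"Approving the past causal history" can be sketched as a walk over a DAG's past cone. This is a simplified illustration with made-up issuer weights (real approval weight is derived from Consensus Mana, which this sketch does not model): every ancestor of a new transaction accumulates the issuer's weight toward finality.

```python
# A sketch of approval weight in a DAG: a transaction implicitly votes
# for its entire past cone, so older transactions accumulate weight.

def past_cone(tx, parents):
    """All ancestors reachable from `tx` through parent references."""
    seen, stack = set(), [tx]
    while stack:
        for p in parents.get(stack.pop(), []):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def approval_weight(parents, issuer_weight):
    """Each transaction's weight = summed weight of issuers approving it."""
    weight = {tx: 0.0 for tx in parents}
    for tx, w in issuer_weight.items():
        for ancestor in past_cone(tx, parents):
            weight[ancestor] += w
    return weight

parents = {"genesis": [], "a": ["genesis"], "b": ["genesis"], "c": ["a", "b"]}
weights = approval_weight(parents, {"a": 0.2, "b": 0.3, "c": 0.5})
# "genesis" sits in everyone's past cone, so its weight climbs to 1.0
```

This is the generalization of longest-chain-wins: instead of block depth, a transaction's security grows with the total weight of everything issued on top of it.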

In Iota, node observations are both deterministic and probabilistic, as the ledger state is emergent. The observer gets to choose the grade of finality, because observation plus computation is the same function of time in the system.

The observer is able to parse through transactional realities and always choose the conflict-free replica [Master Branch].

The UTXO graph holds properties equivalent to the quantum experiment where photons are marked and the information later deleted. This is exactly what happens when a node's state differs from its neighbors'. Each fork becomes a separately tracked output "container," and when the conflict has been reconciled, it is as though that reality never existed. The losing branch will never merge back into the physical ledger space, and it disappears from being tracked in the UTXO branchial space.

UTXO

Given that all distributed networks experience delay due to the nature of space-time, we will assume a standard delay; still, "On Tangle Voting" is computationally reduced compared to blockchain because other nodes don't need to be queried about truthful messages. This results in faster confirmation times, approaching real time.

The parallels to Wolfram’s idea that the universe is computational are apparent within the Iota — On Tangle Voting System.

Both structures utilize:

  1. Deterministic & Probabilistic (Bound & Unbound Structures)
  2. Discrete Time & Continuity (Embedded Observer)
  3. "Write And Read Off" Computational Causal History (Track Parallel Ledger States)
  4. Computational Irreducibility — Cause Before Effect — Construct & Compute Observations
  5. Pockets of Reduced Computation (Transactions Are A Vote)
  6. Multi-computation Displayed As Invariant Causal Graphs (Eventual Consistency)

TO NOTE —

The biggest potential in blockchain technology is to consider the power of the observable state. Due to their existing voting mechanisms, blockchains lack the awareness to unify node and vote into a non-dual observational state.

I believe Iota will revolutionize how we view distributed ledgers, because consensus cannot be reduced further when the observer is embedded in the system. This may also lead to an "identity-based sharding model," which would allow Iota to scale similarly to the structures of society.

Thus, if one can create the ledger state by observation, one can in the future create a system to "punish" those who consistently vote for false realities, or create new logic within the stream of Iota transactions.

Obviously, this is theoretical, but no other cryptocurrency allows conflicts to exist, be analyzed, and have higher-level security functions potentially enforced. Iota's generalized voting enables more complex functions to exist. I'm interested to see what changes the Iota Foundation and Hans Moog will create as meta-functions in the future.

JACOB PARECE
