Holochain White Paper and the Road to Sheafhood

To preexisting readers of my blog: I work for Holo now! This post is aimed at people totally unfamiliar with the content of my personal blog.

Scalability is the end. Compositionality is the means. Category theory is the means to the means. — https://twitter.com/_julesh_/status/986923696362311680

Almost every day we are asked about the status of the Holochain white paper. While Holochain development has been buzzing along, with the Scout alpha release in May, the Rust refactor progressing smoothly, and Holo itself under active development, the white paper has been temporarily lost in the stack as its authors Arthur Brock, Eric Harris-Braun, and Nicolas Luck work tirelessly on Holo and Holochain.

I am writing this to introduce myself as a new member of the team hired to tackle this task, update you on where it’s going, give a bird’s-eye view of some long-term research goals beyond the white paper, and offer a small dose of the heavy-duty mathematics that will be doing the lifting.

I am currently a Ph.D. candidate in Mathematics at Georgia Tech, located in Atlanta, GA. I study quantum computing and related mathematics, and in general am interested in determining what role quantum computing has to play in distributed computing technology. I have a blog at The Advancedness Project on quantum, distributed, and qualia computing. I found out about Holochain, Ceptr, and Metacurrency in December 2017, and they have been challenging my perspective on distributed computing and crypto-economics ever since.

I believe in the principle of open science — writing about science as you do it, for everyone to see, including all of the mistakes, warts and all. This exposes the truth of scientific endeavor — that it is very messy, never happens in a logical order, and frequently feels like it is going nowhere. This also shows how real progress is made — through endless trial and error, rather than strokes of genius and the illusion of peering through the veil of Maya and extracting Truth itself. You can look forward to occasional updates from me about what is happening behind the scenes in Holo research.

I have tried to write this article in the style of Quanta Magazine — with no equations, and striving for vivid descriptions of the math at play.


With that over with, let’s take the pulse of the Holochain white paper. The primary issues at present are that it has a few unfinished sections, and that the mathematics for distributed systems isn’t up to rigorous standards. The unfinished sections are on the to-do list, but first we need to upgrade our mathematics, since that becomes the substrate on which all other arguments are laid. This post will mostly be about what sort of mathematics we ought to use to describe Holochain, and consensus-oriented distributed systems more generally. I am assuming that the reader is not a mathematician, so I will also attempt to describe the mathematical thought process that created this plan, which for now I am dubbing The Road to Sheafhood.

The current draft has a definition for a generalized distributed system that will make sense to an average computer scientist. It gives a great general picture of what sort of distributed system we are actually talking about — a generalization of a blockchain that does not force global consensus on all transactions. Echoes of a blockchain remain, as every agent holds a hash chain, but we no longer require each agent’s hash chain to be identical with that of all others. A handful of things are not specified in detail but a typical computer scientist can fill in the blanks.

This definition is perfectly fine for describing what follows in the paper. Bitcoin, Ethereum, and Holochain all seem to roughly fit into the general model. Software itself is far too complicated to represent exactly in mathematics, so approximate models are usually the best we can hope for (though there are ways to exceed this limitation). What the definition cannot do very effectively is allow us to prove that the system behaves in the way we claim it ought to. This is partly because of unspecified behavior (which is useful for keeping options open but ruins any chance of proving anything), but even if all behavior were specified, the ways in which the various elements may relate to each other (hash chain, verification function, data elements, addresses vs. nodes, etc.) are so numerous that it is effectively impossible to prove anything about the whole. Sometimes in mathematics we call such constructions “wild”.

So we somehow need to figure out what exactly we would like to prove about generalized distributed systems, and then melt down the definition sufficiently that we end up with a very simple sort of object that has only the behavior necessary to prove the desired fact about the system. The wild object ought to be thought of as a composition of many very simple objects, and we must understand how any two mechanisms interact with each other and to what extent the behavior of one influences the other. This decomposition of the wild object into simple objects will ultimately allow us to prove things about the simple objects and automatically know facts about the wild object. This process has been used since the dawn of time in mathematics (consider prime numbers…), and this study of composability has ultimately materialized in a field known as category theory.

But category theory is a story for another time, and too broad for today’s narrow interest. Instead, we look to objects now considered to be a sort of subset of category theory — sheaves.


Sheaves, roughly speaking, give us a neatly bound mathematical bundle of information coming from many distributed agents that have overlapping perspectives but may disagree on the overlap. One simple example of a system with an effective sheaf-theoretic description is a room with several sensitive thermometers placed at different points. They will inevitably yield slightly different readings. What is the best way to mathematically organize this information? How do you pick one temperature to be the “right” temperature of the room? Sheaves have been singled out as the best structure for such situations. They tell us how to “glue” local information into global information, and if we can’t do that, they tell us what is obstructing us. From a data-centric blockchain perspective, you pick one thermometer at random and claim that its reading is the temperature of the whole room. From the agent-centric Holochain perspective, you record all of the readings and keep track of which groups of thermometers agree or disagree.

Some overlapping sensors. Sheaves tell us how to think about data that is organized like this and glue local data together into global data.

A sheaf associates to every region of some space some sort of data. You could imagine the region as being the sensor range of a radar, so the sheaf associates the radar reading to the region of space that the radar sweeps. If you have several radars with overlapping ranges, each plane flying in an overlap will be seen by two or more radars. If you watch each radar sweep individually, it can be hard to tell when two radars are looking at the same plane. But if we glue the data from the radars together, we can look at one complete map. Sheaves give us a precise prescription on how to do this gluing process in pretty much any possible context where gluing makes sense. Sheaves have a very general definition, which inspires confidence that in any randomly chosen context where a notion of gluing exists, a sheaf-theoretic description of that gluing exists.

f and g are different sensors, and this diagram describes how they overlap and glue. From A Category Theoretical Approach to the Concurrent Semantics of Rewriting, by Heindel.
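The gluing story above can be sketched in a few lines of code. Below is a minimal, illustrative model (not from any library; all names are mine): each sensor reports a “section”, a dictionary mapping the points it covers to the values it observed, and we either glue the sections into one global map or report the points where they disagree, i.e. the obstruction.

```python
def glue_sections(sections):
    """Try to glue local sections into one global section.

    Returns (global_section, None) on success, or (None, obstruction)
    where the obstruction lists the points on which sensors disagree.
    """
    glued = {}
    obstruction = []
    for section in sections:
        for point, value in section.items():
            if point in glued and glued[point] != value:
                obstruction.append(point)  # sensors disagree on the overlap
            else:
                glued[point] = value
    if obstruction:
        return None, sorted(set(obstruction))
    return glued, None

# Two radars with overlapping ranges that agree on the overlap...
radar_f = {"a": "plane1", "b": "plane2"}
radar_g = {"b": "plane2", "c": "plane3"}
print(glue_sections([radar_f, radar_g]))  # glues into one complete map

# ...and a third that disagrees at point "b", obstructing the gluing.
radar_h = {"b": "plane9"}
print(glue_sections([radar_f, radar_g, radar_h]))
```

The real sheaf condition is stricter and richer than this, of course, but the shape of the computation — check agreement on overlaps, then merge — is the same.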

I have spoken with several other mathematicians and physicists who have spontaneously aligned in thinking that sheaves seem like the right way to describe consensus protocols for distributed systems. This is one of the problems we will be focusing on at the Second Statebox Summit. I don’t know anybody who claims to have a good understanding yet of how to do this, just a collective feeling that it is the right way to do it. By the end of this article, I hope to have conveyed some sense of what sheaves are and what they are capable of, and to have drummed up support for the mission to A L I G N T H E S H E A V E S. Creating infrastructure-critical technology requires the utmost due diligence, and a little math goes a long way.

What is the ultimate point of this endeavor? I cannot claim an ultimate point, but I can at least divulge some very good reasons. The entire blockchain-and-friends space is lacking a ground-level mathematical foundation: one open arena in which you can pit all possible technologies against each other and compare them directly, apples to apples. For the most part, arguments in this space rest on hundred-year-old game-theoretic arguments, and there is no way to directly say that one thing is better at a given task than another. Sheaves, we predict, will ultimately give us a rigorous way to do one part of this task, namely comparing different consensus models (or more generally, local consensus models à la Holochain, which theoretically has Nakamoto consensus, among others, as a special case).

Knowing how to do this will ultimately allow us to build provably correct code. Anybody who has followed the cryptocurrency space for a while has seen stories of incorrectly written smart contracts causing people to lose millions of dollars. This is simply unacceptable, and wide-scale adoption can never happen if it remains a possibility. We must be able to prove that code performs as expected, and furthermore be able to do this with automated theorem provers. This is not an impossible goal, but to avoid making the article too long and speaking about that which I know little, I will not elaborate further.


Here is what I expect will actually happen once we reach the end of the Road to Sheafhood. We will arrive at a sheaf-theoretic description of Holochain, but it will be too wild to actually work with. The large number of moving parts in the definition will make precise reasoning about the sheaf difficult; it may not even be much easier to handle than the original set-theoretic definition. This should actually be fine, though. It should not obstruct our goal, just push it a little further away into something even more universally applicable.

Famous letter written by Alexander Grothendieck to Jean-Pierre Serre about the Riemann-Roch theorem.

Grothendieck was one of the most profound mathematicians of the 20th century. In the widest possible terms, he studied complicated mathematical objects (very often sheaves, in fact) by placing them in a category of objects of the same sort that shared some common features. Then, by proving general statements about entire swathes of the category (frequently by focusing on a simple object that “slotted in” to more complicated objects), the statement for the original case of interest becomes a specialization of the general result. We aim to do the same with consensus. Abstracting a problem in order to simplify it may seem extraordinarily backwards, but this is a widely utilized strategy in modern mathematics. The reason it works, essentially, is that category theory allows you to ignore all the parts that don’t matter and concentrate on the kernel of abstract properties that do give rise to what you care about. We will follow the path to understanding that Grothendieck illuminated.


The short description of the path to our first sheaf, which I will proceed to unpack, is: pseudocode -> labeled transition system -> sheaf.

In Actors and their Composition, Janneck begins by describing a system of independent and autonomous actors using pseudocode. This is called the actor model, which is a computing model that has as its primitives a set of actors that have an internal state, several functions to manipulate that internal state, and ways to send messages to other actors. In response to a message, an actor can send a finite number of messages to other actors, create a finite number of new actors in a predetermined state, and designate what sort of behavior it should respond to the next message with. The actors may live in some sort of “environment” — which could just be communication channels between the agents, but it could also include resources available to the agents, for example.
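To make the actor model concrete, here is a toy sketch in the spirit of the description above. All names (`Counter`, `Logger`, `System`) are my own illustrations, not Janneck’s notation: each actor holds private state, and in response to a message may update that state and send messages to other actors via a shared delivery queue.

```python
from collections import deque

class Counter:
    """An actor whose behavior is to count 'inc' messages."""
    def __init__(self):
        self.count = 0  # internal state, visible only to this actor

    def receive(self, msg, system):
        if msg == "inc":
            self.count += 1
        elif msg == "report":
            system.send("logger", ("count", self.count))

class Logger:
    """An actor that records every message it receives."""
    def __init__(self):
        self.log = []

    def receive(self, msg, system):
        self.log.append(msg)

class System:
    """The environment: delivers messages one at a time, in order."""
    def __init__(self):
        self.actors = {}
        self.queue = deque()

    def spawn(self, name, actor):
        self.actors[name] = actor

    def send(self, name, msg):
        self.queue.append((name, msg))

    def run(self):
        while self.queue:
            name, msg = self.queue.popleft()
            self.actors[name].receive(msg, self)

sys_ = System()
sys_.spawn("counter", Counter())
sys_.spawn("logger", Logger())
for _ in range(3):
    sys_.send("counter", "inc")
sys_.send("counter", "report")
sys_.run()
print(sys_.actors["logger"].log)  # [('count', 3)]
```

A real actor system would deliver messages concurrently and nondeterministically; this sequential queue is the simplest environment that still exhibits the send/receive/state-update structure.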

Janneck goes on to describe how this pseudocode may be translated into an isomorphic labeled transition system, a lightweight mathematical structure consisting of a set of possible states and possible ways to transition between them. If you think of a very large flow chart describing how a system can change as various actors make decisions, you aren’t far off.
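A labeled transition system really is that lightweight. Here is a two-state example of my own (hypothetical, for illustration): an actor that is either idle or busy, with transitions stored as a dictionary from (state, label) pairs to successor states.

```python
# Labeled transitions: (state, label) -> next state.
transitions = {
    ("idle", "receive"): "busy",
    ("busy", "finish"): "idle",
}

def run(start, labels):
    """Follow a sequence of labels from a start state; None if stuck."""
    state = start
    for label in labels:
        state = transitions.get((state, label))
        if state is None:
            return None  # no transition with this label: path not allowed
    return state

print(run("idle", ["receive", "finish"]))  # back to 'idle'
print(run("idle", ["finish"]))             # None: can't finish while idle
```

The allowed label sequences are exactly the paths through the flow chart, which is what makes the translation from pseudocode behavior to an LTS faithful.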

I invite you to draw a labeled transition system describing the Monty Hall problem to see why its famously counterintuitive conclusion (that switching doors wins) is correct!

In Sheaves, Objects, and Distributed Systems, Malcolm shows us how to take a labeled transition system describing a distributed system and build a sheaf out of it, essentially telling us how to glue together observations of different actors to form collective observations, and how to detect when those observations cannot be joined together.

This sequence of abstractions will ensure that the ultimate sheaf-theoretic description exactly reflects the pseudocode — and if the pseudocode reflects the reality of Holochain code, the wild sheaf will be a complicated mathematical construction to deal with, but it ought to fit into a larger category of sheaves where we will find a logical kernel representing the most general way to describe Holochain-style consensus.


Here is what we should roughly end up seeing once we have a Holochain sheaf. Thinking back to the sensor example, each agent can be thought of as a sensor for a neighborhood of the DHT near their own hashed public address that they watch over. They store and validate data in this neighborhood. Other nearby nodes (in DHT distance) will also store and validate data in their neighborhoods, which will partially overlap with those of other agents/sensors. The number of nodes that watch over a given entry in the DHT is specified by the programmer, but let’s say there are 20 for each entry for now. That means 20 different sensors are keeping track of an entry in the DHT and have validated it. If one of them decides to declare that a valid entry is actually invalid, this will break the consensus on this entry. In order to repair it, the other nodes watching that entry check whether the entry is actually invalid using the Holochain DNA, and if it is not, they will put out a warrant saying that there is a malicious node. Eventually the malicious node will be dropped from the network, and consensus on the DHT entry is restored.
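The neighborhood story above can be sketched under heavy simplifying assumptions: agents and entries live on a small ring of hash addresses, the k agents nearest an entry are its validators, and a lone dissenter against the majority verdict earns a warrant. Everything here (the ring size, `validators`, the voting step) is my own illustration, not Holochain’s actual algorithm.

```python
import hashlib

RING = 2**16  # toy address space

def addr(s):
    """Hash a string onto the ring of addresses."""
    return int(hashlib.sha256(s.encode()).hexdigest(), 16) % RING

def ring_distance(a, b):
    """Distance on the ring (wrapping around)."""
    d = abs(a - b)
    return min(d, RING - d)

def validators(entry, agents, k=3):
    """The k agents whose addresses are nearest the entry's hash."""
    return sorted(agents, key=lambda a: ring_distance(addr(a), addr(entry)))[:k]

agents = [f"agent{i}" for i in range(10)]
entry = "some DHT entry"
neighborhood = validators(entry, agents)
print(neighborhood)  # the entry's watchers

# One validator dissents; the others re-run validation and agree the
# entry is valid, so the dissenter gets warranted.
votes = {a: True for a in neighborhood}
votes[neighborhood[0]] = False
majority_valid = sum(votes.values()) > len(votes) / 2
warranted = [a for a, ok in votes.items() if ok != majority_valid]
print(warranted)
```

Note how each agent only ever sees (and votes on) its own neighborhood — the sheaf’s job would be to organize these overlapping local views into a coherent global picture.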

This story is more general than Holochain, though. Blockchains, though they force global consensus and we call them data-centric, can still be thought of and described as agent-centric. This should lead us to being able to describe Holochains, blockchains, and probably DAGs and other distributed consensus systems, all on equal footing. This ought to ultimately show that Holochain is a more general model of computation than blockchains, in that a Holochain can simulate a blockchain but the converse is not true.


Anyways, the ideas expressed here are just my current thoughts on the matter, developed over the course of the year. This is a small peek at how mathematical research works (which mostly feels like stumbling around in the dark), and it also constitutes my message to all interested parties about what is going on with the Holochain white paper. While it seems unlikely that sheaves will play no role in describing consensus, the exact path could end up giving us something too complicated to work with, since we are working from the top down, and it is sometimes easier to work from the bottom up. I think another very promising place to start is Petri nets, which could be thought of as a bottom-up approach to describing consensus models. Perhaps the two approaches will meet somewhere in the middle, converging at sheaves. We won’t know until we try!
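For the curious, a Petri net is also tiny to sketch: places hold tokens, and a transition fires only when every input place has a token, consuming those tokens and producing tokens in its output places. This example (a “handshake” between two agents) is purely illustrative.

```python
def fire(marking, transition):
    """Fire a transition (inputs, outputs) if enabled, else return None."""
    inputs, outputs = transition
    if any(marking.get(p, 0) < 1 for p in inputs):
        return None  # not enabled: some input place lacks a token
    new = dict(marking)
    for p in inputs:
        new[p] -= 1
    for p in outputs:
        new[p] = new.get(p, 0) + 1
    return new

# Two agents each hold a 'ready' token; a handshake consumes both
# and produces a single shared 'agreed' token.
handshake = (("alice_ready", "bob_ready"), ("agreed",))
m0 = {"alice_ready": 1, "bob_ready": 1}
m1 = fire(m0, handshake)
print(m1)  # {'alice_ready': 0, 'bob_ready': 0, 'agreed': 1}
print(fire(m1, handshake))  # None: the tokens are spent
```

The appeal for consensus modeling is that Petri nets make local resource exchange between agents primitive, and global behavior emerges from composing such exchanges, which is exactly the bottom-up direction described above.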

“More so, the specific computational horse-power of consciousness is phenomenal binding –the ontological union of disparate pieces of information by becoming part of a unitary conscious experience that synchronically embeds spaciotemporal structure.” https://qualiacomputing.com/about/