# The Asynchronous Computability Theorem

In this post I describe the Asynchronous Computability Theorem, which uses tools from Algebraic Topology to determine whether a task is solvable in a distributed system. The theorem yields quite readily to intuition; nevertheless, we will take care to make our statements as precise as possible without getting too caught up in the details. Loosely speaking, the theorem states that a task is solvable in a distributed system if its input space maps continuously into its output space. In particular, a continuous mapping fails to exist when the two spaces are incompatible with respect to the holes within them. The beauty of the theorem lies in that it provides a geometric interpretation of the execution of a distributed protocol, allowing us to draw on our innate intuition for space to tame the combinatorial explosion of states in concurrent computations. In what follows, we shall answer the following questions: What is a task? What does it mean for a task to be solvable with a distributed system? What are input and output spaces? What is a continuous map? Finally, we shall state the Asynchronous Computability Theorem and use it to demonstrate the impossibility of consensus.

**The Impossibility of Consensus**

To gain intuition for tasks and solvability, we start with a well known impossibility result — Impossibility of Distributed Consensus with One Faulty Process [2]. Known colloquially as the FLP result, it states that processes communicating over an asynchronous network cannot be guaranteed to agree on a proposal if even one of the processes may fail. An asynchronous network is one where communication delays can be arbitrarily long. The task in this case is consensus. Solvability corresponds to whether there exists a terminating algorithm which the participating processes can run to come to consensus. Since it is impossible to distinguish between a communication delay and a failed process, no consensus algorithm can guarantee termination if a process fails. More generally, k-set agreement refers to tasks where processes must agree on at most k distinct values. The consensus task in FLP is 1-set agreement. It has been shown that as long as **k < n**, where **n** is the number of processes, the k-set agreement task is unsolvable with a wait-free protocol [8]. That the algorithm is deterministically terminating is a key stipulation of this impossibility result. In the next section, we discuss solutions to consensus wherein the termination property is relaxed.

**The Alpha & Omega of Consensus**

In the analysis of distributed algorithms, we make assertions about the correctness of system states — safety properties, and the temporality of system states — liveness properties. Safety ensures that nothing bad happens, and liveness ensures that something eventually happens. The problem revealed by the FLP result is that of liveness — due to arbitrary timing delays, we can’t assert that a consensus algorithm will eventually terminate. Consensus protocols such as Paxos and Raft overcome this limitation through elaborate use of quorums and timeouts, or more generally, failure detectors [4]. Indeed, one can decompose the problem of consensus along safety and liveness properties — the alpha and omega of consensus [3]. Alpha captures the safety properties of consensus — agreement, validity, and integrity, whereas omega captures liveness properties — the eventual termination of the algorithm. It is this latter component of consensus that does not admit a wait-free protocol.

**Wait Freedom**

Consensus is solvable, but as demonstrated by FLP and the Asynchronous Computability Theorem, it is not *wait-free* solvable. Wait freedom is a particular type of progress condition:

No process can be prevented from completing an operation by undetected halting failures of other processes, or by arbitrary variations in their speed. [6]

In this way, wait-freedom gives us a type of fault-tolerance particularly important in a distributed system where components fail independently. This is a remarkable global reasoning property that ensures that each non-faulty process of a distributed protocol will make progress regardless of failures of other processes, and do so in a bounded number of steps. It should be noted that the wait freedom property is perhaps too strong — the absence of a wait-free protocol for a task does not mean it can't be solved with weaker progress guarantees. For example, an achievable progress property for consensus is a non-blocking guarantee, which states that eventually some process makes progress — albeit slowed arbitrarily by retries in the case of collisions.

**System Model**

The system model used in defining the Asynchronous Computability Theorem is based on I/O automata. An I/O automaton is defined as a set of states, events and a transition relation. Individual processes are represented as automata — Turing machines — with well-defined start and finish events, as well as events corresponding to interaction with a shared memory object. The shared memory object is a linearizable atomic read/write memory, represented as an automaton with events corresponding to read and write operations. Processes have exclusive write access to their own memory cell, but may read the contents of memory cells belonging to other processes. A *protocol* is defined as a set of processes and a shared memory object.

Interestingly, the methodology used to establish the result in this post has been used to unify the synchronous and asynchronous message passing models [11].

**Tasks**

Informally, a task is a problem in a distributed system. Formally, a task is a tuple **⟨I, O, Δ⟩** where **I** is a set of input vectors, **O** a set of output vectors, and **Δ** a specification relation mapping inputs to outputs. An input vector represents a possible assignment of input values to the processes of a distributed system. Similarly, an output vector represents process outputs at the end of execution. Components of input and output vectors correspond to processes, such that **I[i]** represents the input to the i-th process and **O[i]** its output. If **I[i] = ⊥** then process **i** does not participate in the execution. If a process **i** fails, then **O[i] = ⊥**. The task specification relation **Δ** maps input vectors to sets of allowable output vectors.

Take for example the binary consensus task. Each process starts with an input value of **0** or **1** and at the end of execution chooses **0** or **1** such that all processes agree and the chosen value was some process's input. A task specification for a two process binary consensus task is shown in Figure 1 below:
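This task tuple lends itself to a direct encoding. Below is a minimal Python sketch of the 2-process binary consensus specification — the names `I` and `delta` are illustrative, not from the paper, and `None` stands in for **⊥**:

```python
# Input vectors for 2-process binary consensus: each process proposes 0 or 1.
I = [(0, 0), (0, 1), (1, 0), (1, 1)]

def delta(inp):
    """Map an input vector to the set of allowable output vectors:
    all processes decide the same value, and that value was proposed
    by some process (None plays the role of a non-participant's bottom)."""
    proposed = {v for v in inp if v is not None}
    return {tuple([v] * len(inp)) for v in proposed}

# With mixed inputs, either unanimous decision is acceptable:
print(sorted(delta((0, 1))))  # [(0, 0), (1, 1)]
```

With identical inputs only one output vector remains, which matches the intuition that a process with no alternatives always decides its own input.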

**Solvability**

Recall that a protocol is a collection of processes interacting through shared memory to solve a task. A protocol *solves* a task if 1) no process takes an infinite number of steps to finish, and 2) the output value of each process is consistent with the task specification **Δ**. A protocol *wait-free solves* a task if it solves the task even if all but one of the processes fail.

**Topological Definitions**

Topology is a branch of mathematics which studies continuous transformations of space. It abstracts the concept of the familiar Euclidean space to spaces that are considerably more general. In particular, a topological space need not impose a notion of distance between its points, while still retaining a rigorous treatment of continuity and connectedness. The points in a topological space can represent points of a Euclidean space, but they can also represent functions, other topological spaces, and in our present discussion, configurations of a distributed system.

Connectedness refers to a space being “all in one piece”, and allows us to formally classify spaces according to the number of “holes” they contain. A familiar example is the correspondence of a bagel and a coffee mug — topologically speaking, these spaces are equivalent because one can be continuously transformed into the other. On the other hand, a bagel is incompatible with a solid ball because the former contains a hole. If we represent the state of a distributed system as a particular type of topological space, we can characterize computability as compatibility between spaces. In the next section, we will introduce the concept of a *simplex* — a member of a topological space known as a *simplicial complex*. We will then represent a configuration of a distributed system of n + 1 processes as an n-dimensional simplex, a set of possible configurations as a simplicial complex, and wait-free solvability as a continuous mapping between an input complex and an output complex. Despite the generality of topological spaces, we can still appeal to our geometric intuitions in order to visualize their properties. Indeed, every topological space **X** has a *geometric realization* |**X**| which regards the space as a subset of a high-dimensional Euclidean space.

**Simplexes**

A simplex is a generalization of a triangle to arbitrary dimensions. A 0-simplex is a point, a 1-simplex a line segment, a 2-simplex a triangle, a 3-simplex a tetrahedron, and so on. Simplices themselves are built from lower dimensional simplices called *(proper) faces* of the simplex. For example, a triangle can be formed by gluing together line segments at the appropriate ends. A line segment can be formed by gluing its two endpoints.

More precisely, a simplex consists of a set of vertices which are *affinely independent*. In essence, this means that no vertex lies in the affine span of the others; equivalently, the vectors from one vertex to each of the remaining vertices are linearly independent. In the case of a 2-simplex, this means that there doesn't exist a line on which all three vertices lie.
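This condition can be checked mechanically. The following sketch (a pure-Python rank computation; the function name is my own) tests affine independence by row-reducing the difference vectors:

```python
def affinely_independent(points):
    """Check affine independence by testing linear independence of the
    difference vectors p_i - p_0 via Gaussian elimination."""
    if len(points) < 2:
        return True
    v0 = points[0]
    rows = [[x - y for x, y in zip(p, v0)] for p in points[1:]]
    rank, cols = 0, len(rows[0])
    for c in range(cols):
        # find a pivot row for this column
        pivot = next((r for r in range(rank, len(rows))
                      if abs(rows[r][c]) > 1e-12), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for r in range(len(rows)):
            if r != rank:
                f = rows[r][c] / rows[rank][c]
                rows[r] = [a - f * b for a, b in zip(rows[r], rows[rank])]
        rank += 1
    return rank == len(rows)

# Three collinear points do not span a 2-simplex:
print(affinely_independent([(0, 0), (1, 1), (2, 2)]))  # False
print(affinely_independent([(0, 0), (1, 0), (0, 1)]))  # True
```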

In a distributed system, a simplex can be used to represent a configuration — an assignment of a state to each process. For example, a triangle can represent a system of three processes, each vertex corresponding to a particular state of one process. The dimension of a simplex corresponds to the notion of a process *participating* in a protocol. If a process fails, it ceases to participate, and the resulting simplex has a smaller dimension.

**Simplicial Complexes**

A simplicial complex is a collection of simplices **K**, such that every face of every simplex in **K** is also a simplex of **K**, and the intersection of any pair of simplices in **K** is either empty or a face of both. Due to the second property, the formation of a complex does not result in any new simplices. A simplicial complex thereby forms a topological space.

For our purposes, a simplicial complex corresponds to a set of possible configurations of a distributed system. We take individual configurations and connect them at their intersections. In this way, we obtain a geometric interpretation of similar system states — similar configurations are those which intersect at their boundaries. Moreover, unlike graph-theoretic models, we have the dimension of the intersection as the *degree* of similarity between states.

The following diagram depicts a 2 dimensional complex — a 2-complex — consisting of two 2-simplices — **⟨P0, Q1, R2⟩** and **⟨P3, Q1, R2⟩**. These in turn consist of 1-simplices such as **⟨P0, Q1⟩** and 0-simplices such as **⟨P0⟩**. The intersection of the 2-simplices is the 1-simplex **⟨Q1, R2⟩** and represents the fact that the two system states are similar — they only differ in the state of process **P**.

The simplex **⟨P0, Q1, R2⟩** can represent a configuration of three processes, **P**, **Q** and **R**, with local states **0**, **1** and **2** respectively. The simplex **⟨P3, Q1, R2⟩** represents a configuration different only in that **P** has local state **3**.
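The closure conditions above are easy to verify for a finite complex. A small sketch, assuming simplices are given as vertex sets (the function name is mine):

```python
from itertools import combinations

def is_simplicial_complex(simplices):
    """Verify the closure property: every nonempty proper face of every
    simplex is itself in the collection. Closure under faces also ensures
    that any nonempty intersection of two simplices is a common face."""
    K = {frozenset(s) for s in simplices}
    return all(frozenset(face) in K
               for s in K
               for k in range(1, len(s))
               for face in combinations(s, k))

# The 2-complex from the diagram: two triangles sharing the edge <Q1, R2>,
# listed together with all of their faces.
K = [("P0", "Q1", "R2"), ("P3", "Q1", "R2"),
     ("P0", "Q1"), ("P0", "R2"), ("Q1", "R2"), ("P3", "Q1"), ("P3", "R2"),
     ("P0",), ("Q1",), ("R2",), ("P3",)]
print(is_simplicial_complex(K))  # True
```

Listing only the two triangles without their edges and vertices would fail the check, since the faces would be missing from the collection.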

**Simplicial Maps**

A *simplicial map* between complexes **K** and **L** carries vertexes of **K** to vertexes of **L** and also simplexes of **K** to simplexes of **L**. In other words, if a set of vertexes spans a simplex in **K**, their image under a simplicial map will also span a simplex in **L**. A simplicial map can however *collapse* a simplex by mapping distinct vertexes to the same vertex, yielding a simplex of lower dimension. A simplicial map that does not collapse any simplices is called *non-collapsing*. We shall make use of simplicial maps both to formalize the notion of associating processes with vertexes and to represent transitions between system states.

**Chromatic Complexes**

A *coloring* of an n-dimensional complex **K** is a non-collapsing simplicial map from **K** to an n-dimensional simplex. The vertexes of the target simplex correspond to distinct colors, and a coloring is thus a labeling of the vertexes such that no two neighboring vertexes have the same color. Take note of the non-collapsing property of the simplicial map at play — if two neighboring vertexes in the complex were to map to the same vertex in the simplex — or in other words, share a color — the simplex they span would collapse. A complex **K** together with a coloring **χ** is called a *chromatic complex* (**K**, **χ**). A simplicial map is *color-preserving* if the color of a source vertex is the same as the color of the destination vertex. Coloring provides a mechanism to associate the vertexes of a complex with individual processes. Color-preserving maps represent state transitions wherein the processes maintain their identity.
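The properness of a coloring is a one-line check. A minimal sketch, assuming **χ** is given as a dict from vertex to color (the function name is mine):

```python
def is_proper_coloring(simplices, chi):
    """A coloring is proper (the induced simplicial map is non-collapsing)
    iff within every simplex all vertices receive distinct colors."""
    return all(len({chi[v] for v in s}) == len(set(s)) for s in simplices)

# Color each vertex by its process identity, as in the earlier diagram:
chi = {"P0": "P", "P3": "P", "Q1": "Q", "R2": "R"}
print(is_proper_coloring([("P0", "Q1", "R2"), ("P3", "Q1", "R2")], chi))  # True

# P0 and P3 share a color, so a simplex containing both would collapse:
print(is_proper_coloring([("P0", "P3")], chi))  # False
```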

We can now provide a topological rendering of the formal definition of tasks described above as triples **⟨I, O, Δ⟩**. We represent an input vector **i ∈ I** as a simplex **S(i)** whose vertices are pairs of processes and their input values. An output vector **o ∈ O** is represented as a simplex **S(o)** accordingly. The topological task specification is the set of all pairs **(S(i), S(o))** for which **Δ** holds.

**Subdivisions**

A *subdivision* of a complex **K** is a complex **σ(K)** such that:

- Each simplex of **σ(K)** is contained in a simplex of **K**
- Each simplex of **K** can be formed by combining simplices in **σ(K)**

A subdivision can be thought of as a triangulation of a complex. The *carrier* of a simplex **S** in a subdivision **σ(K)**, denoted *carrier*(**S**, **K**), is the unique smallest simplex **T** in the original complex **K** such that **S** is contained in **T**. A *chromatic subdivision* of a chromatic complex (**K**, **χ**) is a subdivision **σ(K)** such that for every **S** in **σ(K)**, its set of colors is a subset of the colors of its carrier. A chromatic subdivision is therefore a subdivision wherein all constituent simplices are assigned colors in accordance with the colors assigned to their carriers. In Figure 4, the 2-simplex shaded orange on the right hand side of the diagram is a simplex of a chromatic complex. The chromatic subdivision of this complex is shown on the left, along with a 2-simplex in the subdivision. The shaded 2-simplex on the right is its carrier. Moreover, the colors of the vertexes of the 2-simplex in the subdivision are taken from the colors of its carrier.
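One convenient way to compute carriers: if each vertex of the subdivision is tagged with the face of the original complex it lies in, the carrier of a simplex is spanned by the union of its vertex tags. The sketch below uses this representation; all labels are hypothetical:

```python
def carrier(S, vertex_carrier):
    """Carrier of a simplex S of a subdivision: the smallest face of the
    original complex containing S, spanned here by the union of the faces
    (tags) that the vertices of S lie in."""
    c = frozenset()
    for v in S:
        c |= vertex_carrier[v]
    return c

# Subdividing the edge <P, Q> at an interior vertex m (labels are mine):
# the endpoints lie on the original vertices, m lies in the edge itself.
vc = {"p": frozenset({"P"}), "q": frozenset({"Q"}), "m": frozenset({"P", "Q"})}
print(sorted(carrier(("p", "m"), vc)))  # ['P', 'Q'] (carried by the edge)
print(sorted(carrier(("p",), vc)))      # ['P'] (carried by the vertex P)
```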

Chromatic subdivisions model the *execution* of a protocol. If a protocol starts with a given input simplex, a round of execution subdivides the simplex, such that the subdivision represents the reachable states of the system after the round. These reachable states form the *protocol complex* which reflects the structure of the protocol.

To illustrate the unfolding of a protocol complex, consider the following 2-process task. The two processes, **P** and **Q** have inputs **p** and **q** respectively, represented by the input simplex in Figure 5. The processes interact using a shared memory object with two cells — one for each process. Each process first writes its input to shared memory and then scans the entire memory object. The processes run concurrently, but there are only three possible executions:

- If **P** reads before **Q** writes, then **P**'s view is **(p,⊥)** and **Q**'s is **(p,q)**.
- If both **P** and **Q** read after both have written, then both have view **(p,q)**.
- If **Q** reads before **P** writes, then **Q**'s view is **(⊥,q)** and **P**'s is **(p,q)**.

Each execution is represented as a simplex in the diagram below:

Execution 1 is the 1-simplex **⟨P(p,⊥), Q(p,q)⟩** at the bottom of Figure 5, execution 2 is **⟨P(p,q), Q(p,q)⟩** and execution 3 is **⟨P(p,q), Q(⊥,q)⟩**. These three simplexes form the *protocol complex*. If we consider the original input simplex, then the protocol complex is a subdivision, dividing the input simplex into three pieces. Note also how the vertex **P(p,q)** is an intersection of two 1-simplexes. This is a geometric representation of the fact that in this state, **P** cannot distinguish between executions 2 and 3.
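The three executions can also be enumerated mechanically. A minimal simulation sketch (function name and state encoding are mine; `None` stands for ⊥, and scans are modeled as atomic snapshots):

```python
from itertools import permutations

def one_round_views(p_in, q_in):
    """Enumerate the distinct (P view, Q view) pairs after one round in
    which P and Q each write their input to their own cell and then scan
    both cells atomically."""
    views = set()
    steps = [("P", "w"), ("P", "s"), ("Q", "w"), ("Q", "s")]
    for order in permutations(range(4)):
        seq = [steps[i] for i in order]
        # A process must write before it scans.
        if seq.index(("P", "w")) > seq.index(("P", "s")):
            continue
        if seq.index(("Q", "w")) > seq.index(("Q", "s")):
            continue
        mem, view = [None, None], {}
        for who, op in seq:
            if op == "w":
                mem[0 if who == "P" else 1] = p_in if who == "P" else q_in
            else:
                view[who] = tuple(mem)
        views.add((view["P"], view["Q"]))
    return views

views = one_round_views("p", "q")
print(len(views))  # 3 distinct executions
print((("p", None), ("p", "q")) in views)  # True: P scans before Q writes
```

Despite the many interleavings of the four steps, only three distinct view pairs emerge, matching the three simplexes of the protocol complex.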

**Connectivity**

A graph is connected if there is a path between every pair of vertices. Similarly, a simplicial complex is 0-connected if there is a path between every pair of vertices. More generally, a space is 1-connected if any loop within it can be continuously contracted to a point, 2-connected if the same holds for any sphere, and so on. This generalization can be visualized with the following diagram:

The loop at the top of the figure above can indeed be contracted to a point. On the other hand, the loop at the bottom cannot be contracted to a point because of the “hole” in the way. Therefore, while the shape is 0-connected, it isn't 1-connected. We can also view connectivity from a combinatorial perspective — if **K** and **L** are n-connected complexes such that their intersection **K ∩ L** is (n-1)-connected, then their union **K ∪ L** is n-connected. Thus we can infer the connectivity of a composite complex via the connectivity of its constituents.

Connectivity has the pleasant property that if a complex is n-connected, the complex with a dimension removed — a faulty process — is (n-1)-connected, thereby extending all assertions to configurations where some processes have faulted. A complex is disconnected if it fails to be connected. For example, a complex representing the final states of a system can be disconnected if certain configurations are not admitted under the task specification, creating “holes” in the complex. These holes, in turn, present a challenge to constructing a simplicial map to this complex. In the context of distributed systems, connectivity maps to the notion of *reachability* of system states. A particularly notable property of a wait-free protocol is that the protocol complex for any input n-simplex is n-connected [1].
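For a finite complex, 0-connectivity is straightforward to test. A sketch (function name mine) that walks the 1-skeleton with a depth-first search:

```python
def is_zero_connected(simplices):
    """0-connectivity: every pair of vertices is joined by a path of edges.
    We build adjacency from the simplices and walk the 1-skeleton."""
    verts = {v for s in simplices for v in s}
    if not verts:
        return True
    adj = {v: set() for v in verts}
    for s in simplices:
        for a in s:
            adj[a].update(b for b in s if b != a)
    seen, stack = set(), [next(iter(verts))]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(adj[v] - seen)
    return seen == verts

# The two-triangle complex from earlier is all in one piece:
print(is_zero_connected([("P0", "Q1", "R2"), ("P3", "Q1", "R2")]))  # True
# Two disjoint edges are not:
print(is_zero_connected([("a", "b"), ("c", "d")]))  # False
```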

**The Asynchronous Computability Theorem**

The definitions provided so far should suffice to state the Asynchronous Computability Theorem:

A task **⟨I, O, Δ⟩** has a wait-free protocol if and only if there exists a chromatic subdivision **σ** of **I** and a color-preserving simplicial map

**μ : σ(I) → O**

such that for each vertex **s ∈ σ(I)**, **μ(s) ∈ Δ(carrier(s, I))**.

The task tuple consists of an input complex, an output complex and a task specification determining valid outputs. The subdivision reflects the execution of the protocol. The existence of the map — specifically, a simplicial map — is what governs the possibility of a wait-free solution. The beauty of this theorem lies in that it gives a purely static, topological approach to problems which are typically modeled operationally as computations unfolding in time.

The theorem can be visualized using Figure 7 below:

On the left is the input complex — a 3-simplex (a tetrahedron). We have a subdivision of the input complex, and one simplex in the subdivision is shaded orange. We also have an output complex — a torus (this actually represents the output complex for a renaming task). The simplicial map **μ** takes a simplex in the subdivision of the input complex to a simplex in the output complex. Moreover, this simplex in the output complex is a member of the sub-complex (colored white) which represents the acceptable outputs of the task under **Δ**, given the input simplex whose subdivision contains the orange simplex.

# Impossibility of Consensus, Topologically

We can now demonstrate the impossibility of consensus topologically, using the Asynchronous Computability Theorem. We begin by illustrating the challenges presented by consensus in a geometric way. Suppose we have two processes **P** and **Q** which are given inputs **0** or **1**. A state of a system where process **P** has input **0** and only process **P** executes, is represented as a vertex labeled **P0**. A configuration wherein both **P** and **Q** participate, starting with inputs both equal to **0** is represented as a 1-simplex **⟨P0,Q0⟩**. The decision of the protocol in both of these configurations is simple — it always decides **0 **since there are no alternatives to choose from. The situation gets more complicated when the processes have distinct inputs. According to the task, they must both decide one value or the other. The following diagram shows the input and output complexes for this example:

The input complex for the 2-process binary consensus task consists of four vertexes, corresponding to the pairs of process and input value. The four edges correspond to the four possible 2-process configurations. The output complex captures the fact that the processes must decide on the same value. The output complex is disconnected and therefore incompatible with the input complex, which is connected. Since subdivisions and simplicial maps preserve connectivity, invoking the Asynchronous Computability Theorem leads to a contradiction — if we suppose such a map **μ** exists, it cannot be simplicial.
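This incompatibility can be made concrete by counting connected components of the two complexes. A small union-find sketch (names and vertex labels are mine; `P0` means process **P** with value **0**):

```python
def components(edges, verts):
    """Count connected components of a 1-dimensional complex (union-find
    with path halving)."""
    parent = {v: v for v in verts}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v
    for a, b in edges:
        parent[find(a)] = find(b)
    return len({find(v) for v in verts})

# Input complex: all four combinations of inputs are possible configurations.
in_verts = ["P0", "P1", "Q0", "Q1"]
in_edges = [("P0", "Q0"), ("P0", "Q1"), ("P1", "Q0"), ("P1", "Q1")]

# Output complex: processes must agree, so mixed-decision edges are absent.
out_verts = ["P0", "P1", "Q0", "Q1"]
out_edges = [("P0", "Q0"), ("P1", "Q1")]

print(components(in_edges, in_verts))    # 1: connected
print(components(out_edges, out_verts))  # 2: disconnected
```

No connectivity-preserving map can carry the one-piece input complex onto both pieces of the output complex, which is the geometric content of the contradiction above.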

Given this intuition, we can demonstrate the impossibility of consensus in a more rigorous manner. In fact, we shall make a stronger statement by showing the impossibility of the k-set agreement task described above. Recall that k-set agreement is a task wherein the participating processes are allowed to select up to k distinct values. If k is less than the number of processes, then this task is impossible to solve wait-free. Consensus is 1-set agreement. To express this result using the Asynchronous Computability Theorem, we make use of a standard result from algebraic topology known as *Sperner's Lemma*.
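The k-set agreement specification itself is compact. A sketch (function name mine) enumerating the allowable output vectors when every process participates:

```python
from itertools import product

def kset_outputs(inputs, k):
    """Allowable output vectors for k-set agreement: each process decides
    some proposed value, and at most k distinct values are decided overall."""
    vals = set(inputs)
    return {o for o in product(vals, repeat=len(inputs)) if len(set(o)) <= k}

# Three processes, k = 2: all 27 decision vectors except the 6 that use
# three distinct values.
print(len(kset_outputs((0, 1, 2), 2)))  # 21
# k = 1 is consensus: only the unanimous vectors remain.
print(len(kset_outputs((0, 1, 2), 1)))  # 3
```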

## Sperner’s Lemma

Sperner’s Lemma is a combinatorial analog of the Brouwer Fixed Point Theorem:

Let **σ(S)** be a subdivision of an n-simplex **S**. If **f : σ(S) → S** is a map sending each vertex of **σ(S)** to a vertex in its carrier, then there is at least one n-simplex **T** in **σ(S)** such that for each vertex **v** of **T**, the **f(v)** are distinct.

A good way to think of Sperner's lemma is in terms of subdivisions as discussed earlier. Take the chromatic subdivision shown on the left of Figure 4. Sperner's lemma states that there always exists a simplex in this subdivision whose vertexes all have distinct colors. This simplex with distinctly colored vertexes is the “fixed point” of the subdivision, and the lemma guarantees that it always exists. The following diagram is an example:

The vertexes of the outer 2-simplex are colored red, green and blue. The 2-simplexes in a subdivision are colored with colors in this set, and the 2-simplexes shaded gray make use of all three colors. Sperner’s lemma guarantees that at least one such simplex exists in the subdivision.
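We can verify the lemma on a small instance. Below, an outer triangle with corners `A`, `B`, `C` is subdivided at its edge midpoints, and each midpoint is given the color of one of its edge's endpoints, a valid Sperner coloring (labels and function name are mine):

```python
def panchromatic(triangles, color):
    """Return the triangles whose vertices carry three distinct colors;
    Sperner's lemma guarantees at least one (in fact an odd number)."""
    return [t for t in triangles if len({color[v] for v in t}) == 3]

color = {"A": "r", "B": "g", "C": "b",        # corners
         "mAB": "r", "mBC": "g", "mCA": "b"}  # midpoints inherit a corner color
triangles = [("A", "mAB", "mCA"), ("mAB", "B", "mBC"),
             ("mCA", "mBC", "C"), ("mAB", "mBC", "mCA")]
print(panchromatic(triangles, color))  # [('mAB', 'mBC', 'mCA')]
```

Here only the central triangle uses all three colors; the lemma asserts that however we subdivide and however the midpoints inherit colors, at least one such triangle must appear.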

Sperner's lemma applies to the impossibility of consensus in the following way. Consider a configuration with three processes, each with a distinct input value. Suppose also that k is 2 — we allow up to two distinct output values. We can represent this configuration as a 2-simplex **I** colored with three distinct colors and labeled with three distinct values. The output complex **O** consists of simplexes labeled with at most two distinct values. Assume by way of contradiction that a protocol does exist. This means there is a subdivision **σ(I)** and a simplicial map **μ : σ(I) → O**. This map satisfies the conditions of Sperner's lemma, and therefore carries some simplex in the subdivision to an output simplex labeled with all three values. However, the output complex has no such simplex — we would thus map to a “hole” in the output complex. Note that when using Sperner's lemma we considered the labeling of vertexes not only with the process each represents, but also with the local state of the process.

**Proof Sketch**

For the complete details of the proof, refer to [1] where Herlihy and Shavit prove both directions — that the conditions of the theorem are necessary and sufficient.

Necessity states that a decision task has a wait-free protocol *only if* there is a subdivision and simplicial map as required by the theorem. Informally, the proof of necessity is based on the notion of a protocol complex described earlier. Recall that a protocol complex represents the possible executions of a protocol. It is shown that a protocol complex **P(I)** corresponds to a subdivision of the input complex, and that the simplicial map is induced by a final decision transition in the protocol called the *decision map*. The simplicial map **μ** is then expressed as the composition of **φ : σ(I) → P(I)**, the map defining the protocol complex, and **δ : P(I) → O**, the decision map. The proof proceeds by stating and proving lemmas about connectivity, which as we discussed earlier corresponds to reachability in a distributed system.

Sufficiency states that *if* we have such a subdivision and simplicial map, *then* the protocol wait-free solves the task. This is done by providing a construction of an algorithm for any task satisfying the conditions of the theorem. They proceed by reducing the problem to approximate agreement which is solved by the participating set protocol [10] and invoking the Simplicial Approximation Theorem, which essentially states that a solution can be attained after a sufficient number of execution rounds. Interestingly, this line of reasoning leads to results about the complexity of asynchronous computations [12].

**Related Work**

- The CALM Principle states that if a program is expressed using monotonic logic — or loosely speaking logic where you can’t revoke past statements — then it can be made eventually consistent in a distributed system in a way that does not require coordination. So, a protocol refuting an earlier claim necessitates coordination amongst the involved processes. With the Asynchronous Computability Theorem in mind, one would expect that protocols expressed using monotonic logic are able to solve their tasks in a wait-free manner.
- Coordination Avoidance in Database Systems presents necessary and sufficient conditions for safe, coordination-free execution. Similarly, Highly Available Transactions: Virtues and Limitations discusses consistency guarantees which can be provided in a highly-available manner. If we express an application consistency requirement as a task using the framework presented herein, it should be possible to assess wait-free solvability of this task using the Asynchronous Computability Theorem. If the task is wait-free solvable, it should be possible to implement it without coordination.
- In our discussion of connectivity, the combinatorial connectivity theorem had the property that allowed us to infer connectivity of complexes based on their constituent complexes. This notion of crafting global observations from local ones is captured by the gluing axiom which forms the foundation of mathematical structures called sheaves. Using sheaves to provide a semantics for distributed systems is attempted in Sheaves, Objects, and Distributed Systems.
- Geometric renderings of distributed systems have been around for quite a while. Indeed one can regard Minkowski space — the mathematical formulation of Special Relativity — as a model for a distributed system. See also: Modeling Concurrency with Geometry, Homotopy Theory and Concurrency, and Interaction Categories. All make use of tools for understanding space as a means for understanding concurrency. Perhaps if we look at Homotopy Type Theory as the logic of space, we can inch closer toward semantic foundations for concurrency and distributed systems.
- Perhaps by computing the Fundamental Group of a topological space we can automate the reasoning required to determine whether a task is solvable, and allow tasks to adapt dynamically to their environment.

# Acknowledgements

Most of the theoretical content and diagrams in this post are taken from The Topological Structure of Asynchronous Computability by M. Herlihy and N. Shavit.

**References**

[1] The Topological Structure of Asynchronous Computability

[2] Impossibility of Distributed Consensus with One Faulty Process

[3] The Alpha of Indulgent Consensus

[4] The weakest failure detector for solving consensus

[5] Linearizability: A Correctness Condition for Concurrent Objects

[7] Concurrent Programming: Algorithms, Principles, and Foundations

[8] Wait-free k-set agreement is impossible: the topology of public knowledge

[9] Sheaves, Objects, and Distributed Systems

[10] Immediate atomic snapshots and fast renaming

[11] Unifying synchronous and asynchronous message-passing models

[12] Towards a Topological Characterization of Asynchronous Complexity

[13] Algebraic Topology