Hashgraph: A Whitepaper Review

Michael Graczyk
OpenToken
Feb 1, 2018

Note: This is the first in a series of whitepaper reviews on blockchain and consensus, published by OpenToken. Next time, we’ll look at DFINITY.

We’ve been getting a lot of questions about Hashgraph recently. Although you can find plenty of great documentation on Hashgraph’s website, only a few external resources cover it in detail. With this post, we hope to provide a useful overview and evaluation for the whole community, while still covering enough technical nitty-gritty to entertain even the most die-hard distributed systems folks.

We’re going to focus on the technology here, but just to answer the easy questions first: We do not have details about any potential Hashgraph token sale. Hashgraph’s official Telegram, Twitter, and other social media make it very clear that Hashgraph does not currently have any plans to offer a token sale. The company raised a $3M traditional seed round in September 2017.

Our main references are the whitepaper, supplemental documents, and conversations with the Hashgraph team on the @HashgraphDev Telegram channel.

What is Hashgraph?

According to its website, Hashgraph is “a data structure and consensus algorithm”. Notice I didn’t say “cryptocurrency”, “blockchain”, or “protocol”. Hashgraph is more like a low-level building block than a complete system. The whitepaper and available documentation describe a consensus protocol, but do not specifically describe any applications or use cases. The Hashgraph SDK includes an example “cryptocurrency” system, but this example is more like a tutorial or game than a functioning system such as Bitcoin.

It’s important to put the dearth of information in context. Hashgraph, unlike Bitcoin, is not an anonymous gift to mankind. It is a serious business, with proprietary, patented technology owned by Swirlds, Inc. Swirlds does not have any public plans to release an open cryptocurrency, or to my knowledge any public system at all. Instead, Swirlds has announced a partnership with CULedger to build distributed transaction processing software for credit unions. For those familiar with Ripple, this may ring some bells. Both companies (and many others) are building enterprise financial software using distributed systems and cryptography that weren’t available when many current transaction systems were designed.

Despite focusing on private, enterprise settings, Hashgraph has enjoyed plenty of public interest. Swirlds released an SDK that allows anybody to experiment with the (closed source) Hashgraph consensus library. Although Hashgraph core is written in Java and LISP, the SDK makes it possible to build applications with any JVM language (Java, Scala, etc). Community members have also used the whitepaper to develop their own implementations in Python, Go, and of course, JavaScript. Each of these libraries follows the whitepaper closely. Their simplicity and similarity hint at what many technical people love about Hashgraph: it is elegant, lends itself to fun visualizations, and its correctness is easy to prove.

As any starving artist will tell you, elegance alone is not enough for success in the real world. Hashgraph may be beautiful, but how does it compare in practice to its competitors? Put another way, if I’m a system designer considering Hashgraph for consensus, why should I use it instead of Stellar, or any other BFT system? Hashgraph promises that it is “Fast”, “Secure”, and “Fair”. Later we will evaluate each of these claims in more detail. First, let’s dive into the juicy technical details.

What does it do, and how does it work?

This section will be more technical than the rest of the post. If you aren’t interested in the details of Hashgraph consensus, you can skip ahead to the evaluation.

Hashgraph’s whitepaper describes it as an “algorithm for replicated state machines with guaranteed Byzantine fault tolerance”. In distributed systems terminology, Hashgraph is an “atomic broadcast” algorithm. Like all atomic broadcast algorithms, Hashgraph provides an ordering to an otherwise unordered set of transactions. Participants submit transactions to the consensus process any time they wish. All participants receive an identical ordered list of transactions as output, which eventually includes every submitted transaction. This ordering is called a “total order” because every transaction can be ordered with respect to every other in a chain.

In fact, a total ordering of transactions is sufficient for many interesting applications, including cryptocurrencies. Let’s build a concrete example. We can represent Hashgraph as two functions

submit_transaction(transaction)
get_transaction(index) -> transaction or null

In this example, transaction is some arbitrary object which may include a sender, receiver, id, amount, fee, etc. index is a counting number like 0, 1, 2, or 1000001, which identifies the position of the transaction in the total order. Clients call submit_transaction from anywhere at any time. Now here are the guarantees given by Hashgraph (or any atomic broadcast algorithm):

  • If any successful call submit_transaction(T1) is made for some transaction T1, then there is some value of index for which all calls to get_transaction(index) will eventually return T1.
  • If any call to get_transaction(index) returns transaction T2 (not null), then every call to get_transaction(index) returns either null or T2, and all eventually return T2 (not null).

These simple guarantees say that once Hashgraph has accepted a transaction, every client will see that transaction at the same index in the ordered output list. Crucially, the second guarantee prevents double spending attacks since nodes only ever see one non-null value at each index. We can use our Hashgraph abstraction to build a basic cryptocurrency with the following pseudocode.

global balances = mapping[Address, Number]

# Call on any client that wants to use Hashgraph.
function send_money(Address from, Address to, Number amount):
    transaction = {
        from: from,
        to: to,
        amount: amount
    }
    Hashgraph.submit_transaction(transaction)

# Call on all nodes which want to track balances.
function sync_forever():
    next_transaction_index = 0
    while true:
        transaction = Hashgraph.get_transaction(next_transaction_index)
        if transaction == null:
            # Not yet committed; wait and retry the same index.
            continue
        next_transaction_index += 1
        # Skip transactions that would make the sender's balance negative.
        if balances[transaction.from] >= transaction.amount:
            balances[transaction.from] -= transaction.amount
            balances[transaction.to] += transaction.amount

As you can see, we have implemented a ledger by explicitly tracking balances in a global array. The while loop processes transactions in order, updating the balances as it goes. When a transaction is invalid (would make the sender’s balance negative), it is skipped. Since every node sees the same transactions in the same order, they will skip the same set and update balances in the same way. We don’t have to worry about concurrency, communicating with other nodes, block hashes, or anything like that. With just a few lines of code, we have a basic digital currency. We could build additional rules to require fees, track transactions (uniquely identify transactions with a sender sequence number), or even build smart contract functionality. Hashgraph provides the consensus, we provide everything else.

You may have noticed that in order to use our toy cryptocurrency, clients must run the Hashgraph algorithm themselves. This is not just a fluke with our example. Clients must receive the entire Hashgraph data structure and run verification procedures on it to determine whether their transaction has been committed. Users of permissionless, decentralized systems such as Bitcoin or Ethereum must do similar verification, but this requirement is absent in PBFT, Byzantine Paxos, and Honeybadger, all examples of permissioned systems like Hashgraph. To be clear, just like Bitcoin users only need to download block headers and a small proof to validate a single transaction, Hashgraph users only need the graph data structure, not the entire contents of every transaction (which include signatures, smart contracts, etc); only events (vertices in the graph) and their signatures are required, which should be around 128 bytes. A back of the envelope calculation suggests that if confirmation times are on the order of “a few seconds” as Swirlds has claimed, and if there are N=50 nodes in the system, clients will use at least 120 kbps of bandwidth¹, which is high but not unreasonable.

The Algorithm

Now that we understand how Hashgraph fits into the ecosystem and how it can be used, we’ll describe in detail how the actual algorithm works. We base our description here on the May 31 whitepaper and associated materials. Swirlds and Dr. Baird have certainly made improvements in the 20 months since publishing, especially with regards to practical details that the whitepaper leaves open.

The system consists of N nodes connected to one another over an asynchronous network. We assume that fewer than N/3 of the nodes can behave arbitrarily, lying or breaking the rules of the algorithm. We also assume that the network can behave arbitrarily, dropping or delaying packets in any way, as long as one requirement is met: any two honest nodes can eventually communicate, potentially after some unknown but bounded amount of time. This is a typical “Byzantine” setting, and gives any hypothetical attacker about as much power as any algorithm could hope to withstand.

Each node manages a directed acyclic graph (DAG) data structure called a Hashgraph. Vertices in the Hashgraph are called “events” and consist of a set of transactions, the event’s parents, a timestamp, and a signature from the node that created the event. The timestamp indicates the real world time when a node claims to have created an event. This value will later be used to indirectly determine the event’s position in the final ordering. The event’s transactions and parents are encoded using a collision resistant hash function, so a single event certifies the entire gossip history up to the event’s creation. Any nodes that see an event see the same history for that event, including all of its ancestors and the edges between those ancestors. The entire consensus process is based on gossiping events and analyzing the local copy of the Hashgraph.

The Hashgraph data structure tracked by every node. Each circle is an event, created by a node when it receives gossip. (From the Hashgraph Whitepaper)
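
For concreteness, here is a minimal sketch in Python of what an event vertex might contain. The field and helper names are mine, not Swirlds’; the real library is closed source.

import hashlib
import json
import time

def sha256_hex(data):
    return hashlib.sha256(data).hexdigest()

class Event:
    # One vertex in the Hashgraph. Hashing the parents means each event
    # commits to its entire ancestry, so any two nodes holding an event
    # agree on its full history.
    def __init__(self, creator, transactions, self_parent, other_parent, signing_key):
        self.creator = creator              # identity of the creating node
        self.transactions = transactions    # payload submitted by the creator
        self.self_parent = self_parent      # hash of the creator's previous event, or None
        self.other_parent = other_parent    # hash of the event just received via gossip, or None
        self.timestamp = time.time()        # claimed real-world creation time
        body = json.dumps([creator, transactions, self_parent,
                           other_parent, self.timestamp]).encode()
        self.hash = sha256_hex(body)
        self.signature = signing_key.sign(body)  # assumed signing API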

Nodes constantly gossip with each other by randomly selecting a peer and sending every event that the peer does not know of, in topological order. When receiving gossip, nodes add to their graph any valid event (parent hashes match and the signature is valid) they had not yet seen. At the end of the sync, the receiving node creates and signs a new event that includes any transactions the receiving node intends to submit. This newly created event encodes and affirms the statement “I received gossip from this peer, and here is what I learned”.
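
Here is a sketch of one such sync from the receiver’s side, reusing the Event class above. The graph and peer helpers are my assumptions, not the real API.

def receive_gossip(peer, graph, my_key, pending_transactions):
    # Accept, in topological order, every valid event we had not yet seen.
    for event in peer.events_unknown_to(graph):
        if valid_signature(event) and graph.has_parents_of(event):
            graph.add(event)
    # Record the sync itself: a new event whose other-parent is the peer's
    # latest event, carrying any transactions we want to submit.
    new_event = Event(
        creator=my_key.public_identity,
        transactions=pending_transactions,
        self_parent=graph.latest_hash_by(my_key.public_identity),
        other_parent=graph.latest_hash_by(peer.identity),
        signing_key=my_key)
    graph.add(new_event)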

These gossip interactions grow the Hashgraph in a consistent way. Since the hash function is collision resistant, every node agrees on the history of every event. This gives Hashgraph a powerful tool: Anything that depends entirely on an event’s history will match across all nodes. Hashgraph uses this tool to give every event two important properties: A round number, which puts a particular node’s events in a monotonically increasing order; and a binary value called “witness” which is true whenever an event is the first created by a particular node in a round.
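
Paraphrasing the whitepaper’s definitions as code, with “strongly sees” left abstract (its definition fills a page of its own) and all helper names mine:

def round_of(event, graph):
    # Simplified: an event's round is the maximum of its parents' rounds,
    # advanced by one when the event strongly sees a supermajority of that
    # round's witnesses (see the whitepaper for "strongly sees").
    parents = graph.parents(event)
    if not parents:
        return 1
    r = max(round_of(p, graph) for p in parents)
    seen = sum(1 for w in graph.witnesses_in_round(r)
               if strongly_sees(event, w, graph))
    return r + 1 if seen > 2 * graph.num_nodes() / 3 else r

def is_witness(event, graph):
    # A witness is the first event its creator made in a given round.
    parent = graph.self_parent(event)
    return parent is None or round_of(parent, graph) < round_of(event, graph)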

The two properties above can be immediately determined once an event has been created. Most of the time in distributed systems, things are not so simple. Any binary decision that must be agreed on by multiple nodes can be in one of three states: “definitely yes”, “definitely no”, or “undecided”. When a decision is “definitely yes”, we mean that no honest node will ever output “no” for that decision. Until a node is certain that none of its peers will output “no”, it must keep the decision in an “undecided” state.

Each event has two other important properties which begin as “undecided” and receive “definite” values only after new events are added later in the graph. These properties are called “famous” and “consensusTimestamp”. An event is “famous” if it is a witness and was received by most members soon after being created. The algorithm in the paper guarantees that once an event has been assigned “famous” or “not famous”, every node will assign the same value regardless of any new events that are added to the Hashgraph in the future. The algorithm also guarantees that if over 2/3 of nodes continue gossiping forever, every event will eventually be assigned “famous” or “not famous” with probability 1. Put another way: For any event E and any probability ε, no matter how small, there is some number M so that P(E.famous = “undecided” after M gossips) < ε. This implies that the algorithm eventually makes progress, but the paper provides no bounds or analysis of how long that should typically take, even under ideal conditions. When conditions are poor, for example during network partitions, random coin flips are required for progress. The probability of these coin flips “unsticking” the system decreases exponentially with the number of required flips², so it appears that in those cases latency could be very high.

The second potentially “undecided” property is the most important property in Hashgraph: the consensus timestamp. All events are sorted by their consensus timestamps to determine their final positions in the output total order. This means that once an event has been assigned a consensus timestamp, it is committed and can be externalized to clients. It also means that if E1.consensusTimestamp < E2.consensusTimestamp, then E1 comes before E2 in the total order. This ordering is important for almost any potential Hashgraph application. Cryptocurrencies like our toy example from earlier in this post would likely use the ordering to determine which of two committed, mutually incompatible transactions should be included in the ledger. If two nodes saw transactions in a different order, attacks like double spending would be trivial³.

Since they are so important, your next question might naturally be, “how are consensus timestamps decided”? Let’s consider some specific event E whose consensus timestamp is to be determined. There will eventually be some round after E’s round in which the “famous” property has been decided for all witnesses. We’ll label the earliest such round R. The nodes that created these famous witnesses all saw E quickly, so they get to decide E’s consensus timestamp. To protect against lying nodes, we need to ignore any nodes who have made a fork, defined as any pair of events that have the same creator but are not ancestors of one another. Let’s call the remaining set of nodes “deciders”. These are the nodes that created unique (non-forked) famous witnesses in R. The consensus timestamp of E is the median received timestamp of every event where a decider first learned about E. The mechanism ensures that once the timestamp is assigned, every node will agree on it and it will never change. For more details and a lengthier discussion, I encourage you to read section 4 of the whitepaper and this supplementary doc. The key aspect you should remember is that the consensus timestamp is the time at which an active, honest node claims to have first heard of E⁴.
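
A condensed sketch of that procedure, with helper names that are my own invention (section 4 of the whitepaper has the real definitions):

from statistics import median

def consensus_timestamp(event, graph):
    # R: the earliest round following event's own in which "famous" has
    # been decided for every witness.
    r = graph.first_round_with_all_fame_decided(after=round_of(event, graph))
    # Deciders: creators of unique (non-forked) famous witnesses in R.
    deciders = [w.creator for w in graph.famous_witnesses(r)
                if not graph.has_forked(w.creator)]
    # Each decider contributes the timestamp of the event by which its
    # creator first learned of `event`.
    received = [graph.first_event_learning_of(event, creator).timestamp
                for creator in deciders]
    return median(received)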

That’s pretty much it. The bulk of the paper works through these definitions and proves each of the statements made above. Once we accept that consensus timestamps are eventually decided and that all nodes agree on the decision, the entire total order comes for free. Since all “voting” is done virtually via computations on the Hashgraph, connection failures and crashes do not affect the correctness or safety of the algorithm.

One final note. I have left out any discussion of “fairness” in this section because the whitepaper contains neither a concrete definition nor any proof of such a property. As I discuss later, I do not believe that the algorithm satisfies any useful definition of fairness, which could explain the missing proof.

So where do we stand after all of this? We have an algorithm which allows a permissioned network of N nodes to come to agreement on a total order of transactions, as long as more than 2N/3 of the nodes are honest (meaning that they follow the algorithm in the whitepaper). As long as honest nodes can eventually get messages through to one another, agreement is eventually reached with probability 1. Most of the algorithm happens virtually on a data structure that is received only once by every node, so bandwidth usage is nearly as small as theoretically possible.

Evaluation

Is Hashgraph Secure?

The paper contains a convincing correctness proof of its central claim: Once an event has been included in the total order, all correct nodes will see the same event in the same position forever. Through the use of a collision-resistant hash function and digital signatures, transactions committed in Hashgraph are irrevocable and unforgeable. There is no 51% attack, no possibility to double spend, and no need to wait for an arbitrary number of block confirmations. The system is truly ABFT as claimed.

Is Hashgraph Fair?

According to the whitepaper’s abstract, in Hashgraph “it is difficult for an attacker to manipulate which of two transactions will be chosen to be first in the consensus order.” Later in section 6, the paper says fairness means that one transaction should come before another if it is propagated to the community quickly, and the members of the community who saw it first are actively participating in consensus. This definition is pretty vague, but I was able to have the point helpfully clarified by the Hashgraph team on the @HashgraphDev Telegram channel. First, we established that “fairness” only concerns transactions that are known to a supermajority of nodes. If a transaction has not yet been gossiped to more than 2/3 of the participants, then an attacker can reorder it with other transactions without violating fairness. I believe this weakens the fairness claim in the abstract, but I accept it as a reasonable requirement.

Next, I tried to get a concrete definition of “fairness” so that I could understand what the whitepaper actually claims. After some discussion, we agreed on the following definition: An algorithm is fair whenever “for any transactions A and B, if A is received before B by a supermajority of nodes, then A comes before B in final consensus”. This seems desirable, because it would imply that the only way to get your transactions processed earlier is to buy more bandwidth, improve your network connectivity, or participate more actively. It is impossible in practice to do any better than this without crippling the system’s speed and reliability (by accepting transactions round robin or requiring synchronized interactions).
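
Stated as a predicate over an execution trace (my formulation of the definition we agreed on, not Swirlds’ code):

def is_fair(first_received, consensus_order, num_nodes):
    # first_received[node][tx] = time at which `node` first saw `tx`.
    position = {tx: i for i, tx in enumerate(consensus_order)}
    supermajority = 2 * num_nodes // 3 + 1
    for a in consensus_order:
        for b in consensus_order:
            saw_a_first = sum(1 for node in first_received
                              if first_received[node][a] < first_received[node][b])
            # If a supermajority received A before B, fairness demands
            # that A precede B in the final consensus order.
            if saw_a_first >= supermajority and position[a] > position[b]:
                return False
    return True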

I came up with a counterexample⁵ for the above definition fairly quickly, so Hashgraph is not fair in this way. The Hashgraph team agreed that the counterexample invalidates the definition of fairness given in the previous paragraph, but they followed up by saying that Hashgraph‘s fairness requires that members “gossip continuously”. This response is problematic in many, many ways⁶, and it also appears insufficient. Hashgraph remains unfair even if all nodes are honest, constantly gossiping all the time, and there are no network partitions⁷. The figure below demonstrates one such example. Even more worrisome, malicious nodes could manipulate the order of transactions by carefully choosing gossip partners and slowing their gossip transfers at just the right times. Only sophisticated statistical analysis could detect such an attack. I asked the Hashgraph team for a definition of fairness that is actually attained, and eagerly await their response.

Hashgraph fails to fairly order events even when all nodes are honest and actively gossiping

Is Hashgraph Fast?

This one is complicated. Hashgraph’s gossip protocol spreads events through the network quickly. Since the graph represents gossip-about-gossip, information should also be spread efficiently, in that nodes will almost never receive the same bytes more than once. Also, the use of virtual voting for consensus should allow transactions to be pipelined efficiently, even when the network fails to make progress because of temporary partitions.

On the other hand, consensus requires that every node receive the entire Hashgraph. As the number of nodes increases, the inbound bandwidth required to participate increases proportionately. Consensus also requires potentially expensive computation by all nodes and clients. It’s unclear whether the computation would be enormous, or dwarfed entirely by time spent waiting for the network.

Swirlds has not released any metrics or concrete numbers on Hashgraph’s performance, so for now we cannot resolve any of the confusion mentioned here. The landing page lists “Fast” as Hashgraph’s first quality and says it has “high throughput and low consensus latency”. These are great features to have, but until we see real world performance metrics with throughput, latency, computation and storage requirements, we can only guess as to whether Hashgraph lives up to its hype.

What did we learn?

Hashgraph is a consensus system owned by an enterprise software company (via US patents). Its simplicity and new ideas have made it popular with a devoted group of developers and crypto enthusiasts. Hashgraph is secure by design, and easy to validate. Its main promise of high performance and low latency has yet to be tested publicly. The algorithm also promises fairness, but is not fair under any definition that differentiates Hashgraph from other permissioned systems.

Swirlds could enjoy commercial success by building enterprise applications on top of Hashgraph. They may eventually release a public platform or currency, but the technical details of such a system have either not been determined or are not public. Swirlds tells us that more detailed performance metrics are forthcoming, so we will have to wait to adequately compare Hashgraph to similar systems. Regardless of the fate of the company, Hashgraph itself is an intriguing and elegant technology that will leave a lasting influence on the cryptocurrency world.

Thanks to Kevin, Liz, Dan, Andrew, and Bryan for your helpful comments and feedback. Credit to Swirlds and Paul Rodecker for images.

Footnotes

[1] Each gossip event creates one new event, and the client receives the entire Hashgraph structure, so we can find average client bandwidth by simply computing the growth rate of the Hashgraph. Propagating an event to 50*2/3 nodes via the gossip protocol described in the paper takes ~6.5 sequential gossip syncs. To be assigned a consensus timestamp and hence “confirmed”, a transaction must be an ancestor of every unique famous witness in a round greater than or equal to its own. The event must have been gossiped to the signers of those witnesses, so 7 gossip syncs is a lower bound on the confirmation time. If latency is 3s, we’re fitting at least 7 gossip trips into 3s. 8 bits/byte * 128 bytes * 7 gossips/3s * N ≈ 120 kbps.

This seems like a fairly loose lower bound because it assumes 2/3 of events are unique famous witnesses. In reality, the bandwidth requirement could be many times as high. Practical clients will likely trust a single endpoint and simply not receive the Hashgraph, similar to how users choose to trust blockchain.info and etherscan.io instead of validating block headers themselves.
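
The arithmetic, spelled out with the numbers assumed above:

bits_per_event = 8 * 128       # ~128 bytes per event (hashes, timestamp, signature)
nodes = 50                     # N
gossips = 7                    # lower bound on gossip trips per confirmation
seconds = 3                    # claimed confirmation latency
bandwidth_kbps = bits_per_event * gossips * nodes / seconds / 1000
print(bandwidth_kbps)          # ~119.5, i.e. roughly 120 kbps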

[2] Using the notation from Figure 6, let m = 2n/3 − t. Then we require at least m coin flips to go the same way, which occurs with probability 2^-(m-1).

[3] If transactions were seen in a different order by different nodes, an attacker could double spend by sending the same funds to both nodes. Suppose there are two nodes A and B, and an attacker starts with balance 1. He sends two transactions, T1 which sends 1 to A, and T2 which sends 1 to B. Further suppose that A sees T1<T2 but B sees T2<T1. The attacker could take real world goods worth 1 crypto unit from both A and B, since each node sees the transaction paying it first and believes it is the valid one.

[4] Or the average timestamp of two honest nodes.

[5] We have 4 nodes, A, B, C, and D, all of which are honest in this example, and which start out with events A0, B0, C0, D0. I’ll include timestamps for events like this: A0[t]. So we start with A0[0], B0[0], C0[0], D0[0].

A gossips, causing these events to be created: B1[3], C1[1], D1[4].

D also gossips, causing these events to be created: B2[4], C2[2]

Now A fails to receive or send gossip for a while, and BCD all gossip amongst themselves a lot. Eventually BCD create famous witnesses in a round, causing consensus timestamps to be assigned to A0 and D0. Since BCD created the famous witnesses, the consensus timestamps are:
A0 -> median(3, 1, 4) = 3
D0 -> median(4, 2, 0) = 2

But notice that A0 was seen before D0 by a supermajority of nodes, namely {A, B, C}

So to sum it up:
We have two events A0 and D0.
A0 was seen before D0 by a supermajority of nodes.
D0 was assigned an earlier consensus timestamp than A0.
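
The medians can be checked mechanically:

from statistics import median

# Times at which the deciders B, C, D first learned of each event.
first_learned = {
    "A0": [3, 1, 4],  # via B1[3], C1[1], D1[4]
    "D0": [4, 2, 0],  # via B2[4], C2[2], and D created D0 at time 0
}
assert median(first_learned["A0"]) == 3
assert median(first_learned["D0"]) == 2
# D0 receives the earlier consensus timestamp, although a supermajority
# {A, B, C} saw A0 first.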

[6] The issue exists as long as A fails to receive gossip for around 6 seconds. In a world with DDOS-as-a-service and software defined networking, transient disconnects of 6 seconds should be expected regularly.

[7] In the counterexample, the failure is preserved if B and C constantly gossip with one another, and A and D constantly gossip with one another. This can happen through a network partition or, with no nodes disconnected at all, through simple bad luck in peer selection.

Epilogue: General Comments about Consensus

Warning: This section is more technical and opinionated than the rest of the post.

Designing fault-tolerant consensus algorithms is notoriously hard, especially when the system is meant to operate in a Byzantine setting. It’s not always easy to imagine the nefarious ways your network and 1/3 of your nodes might misbehave in such a setting. Given such a challenge, it’s easy to concede defeat. We settle for algorithms that are slow and complicated, but battle-tested and known to be safe in Byzantine settings. Fortunately, we won’t remain defeated for long. The popularity of cryptocurrency seems to have brought new life to BFT consensus research. Novel algorithms like HoneyBadger, Algorand, Stellar, and Hashgraph are being introduced nearly every month.

On the one hand, I’m fired up. All these new algorithms give me something interesting to read and write about, and also suggest that something good will come out of all the money being poured into crypto. On the other hand, it’s important for researchers to recognize that BFT consensus did not begin with Bitcoin, and cryptography alone is not enough to make distributed systems behave well. Crypto whitepapers I read often fail to mention any related work, and rarely seem to mention many of the issues and solutions that make up the bulk of academic publications.

Let me be specific. One could reasonably claim that the primary feature any digital currency should have is a mechanism for users to determine when a transaction is “final”, with certainty or very high probability. For example, if you’re an ATM waiting to spit out cash to a customer, you’d like to be sure that the customer’s account has been debited before irrevocably dispensing bills. Or perhaps you’re an IOT automated turnstile in a subway station, waiting to let a passenger through just as soon as you’re sure their IOTA powered fare has been paid to the station operators. Unfortunately, IOTA’s whitepaper offers a frighteningly incomplete solution:

The main rule that the nodes use for deciding between two conflicting transactions is the following: a node runs the tip selection algorithm (cf. Section 4.1) many times, and sees which of the two transactions is more likely to be indirectly approved by the selected tip. For example, if a transaction was selected 97 times during 100 runs of the tip selection algorithm, we say that it is confirmed with 97% confidence.

Does this mean that 3% of passengers will be let through with no fare? What percentage of honest passengers will be restricted from entry because the IOTA node has failed to collect enough “tips” to build up confidence that the transaction is valid? Can an attacker influence that probability by controlling the timing of received messages? The whitepaper offers no guidance on how such a simple, practical system should operate.

Compare to a similar explanation from the PBFT paper.

The client waits for f+1 replies with valid signatures from different replicas, and with the same [timestamp] and [result], before accepting the result. This ensures that the result is valid, since at most f replicas can be faulty

Once the turnstile receives f+1 signed replies, it can be completely certain that the passenger’s fare has been paid. IOTA’s loftier goals do not excuse its comparative lack of clarity.
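
As a sketch, the turnstile’s check is just a few lines (names are illustrative, not the PBFT authors’ code; valid_signature is an assumed helper):

def pbft_commit_check(replies, f):
    # Group valid signed replies by (timestamp, result); accept once any
    # group holds f + 1 replies from distinct replicas, since at most f
    # replicas can be faulty.
    matching = {}
    for reply in replies:
        if not valid_signature(reply):
            continue
        key = (reply.timestamp, reply.result)
        matching.setdefault(key, set()).add(reply.replica_id)
        if len(matching[key]) >= f + 1:
            return reply.result  # final: some honest replica vouches for it
    return None                  # keep waiting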

“Tangle” or “Tangled Mess”?

Getting back to Hashgraph, this is an area where Hashgraph gets a lot of things right. The paper discusses several similar BFT systems, including almost every system I have mentioned in this post. It also gives as clear an explanation of finality as could be expected.

Therefore, once an event is assigned a place in the total order, it will never change its position, neither by swapping with another known event, nor by new events being discovered later and being inserted before it.

Once an event (and all the transactions it includes) has been assigned a position via its consensusTimestamp, that event is confirmed/committed/final.

Hashgraph deserves high praise due to its overall clarity. The correctness proof is easy to follow, and the later discussions of enhancements are relevant and interesting. The paper is also explicit about potential failures of liveness, saying “If an attacker completely controls the internet, they can cause this to drag on for exponentially many rounds”. It seems Dr. Baird has spent at least some time trying to poke holes in his own algorithm, which is more than can be said of many whitepaper authors.

On the other hand, certain explanations of Hashgraph given by the creator trigger my “snake oil consensus” alarm, similar to how I felt reading the IOTA and Ripple papers. In this interview, Dr. Baird claims

In hashgraph, the “block” (event) definitely becomes part of the permanent record as soon as you gossip it. Every transaction in it definitely becomes part of the permanent record. It may take some number of seconds before you know exactly what position it will have in history. But you **immediately** know that it will be part of history. Guaranteed.

This claim is not true. Consider a Hashgraph network with N=7 nodes so that 2 nodes are allowed to be faulty. Let’s say Alice has a transaction T that she would like to submit to the network. She receives gossip from Bob and includes T in a newly created event A1, which has as parents Alice’s event A0 and Bob’s event B0. Alice then gossips with Bob, who creates B1 with parents A1 and B0. At this point, T has been “gossiped” to Bob and is referenced in both Alice and Bob’s local Hashgraphs. Now suppose Alice and Bob both crash before gossiping with anybody else, losing all history of B1, A1, and T. Regardless of how Alice and Bob re-sync with the network after restarting, T is lost forever. Neither of the nodes acted dishonestly. Plain ol’ crashes are enough to invalidate the claim.

This may seem like a minor point, but it’s important that experts make precise claims when selling new algorithms to non-experts. It will never be possible for a consensus algorithm to guarantee that a transaction commits after communicating with one peer. Dr. Baird would have been correct to say “an event definitely becomes part of the record once it has been gossiped to more than 1/3 of nodes”, but he made a stronger, incorrect claim instead. Hashgraph presents enough novelty to be interesting without making unfulfillable promises.
