Wavelet: a metastable, Sybil-resistant ledger

Improving upon the shortcomings of Avalanche consensus.

Kenta Iwasaki
PERL.eco
17 min read · Sep 27, 2018


NOTE: This article is now outdated, however we are keeping this article here for archival purposes. For the latest update to Wavelet, check out the article here: https://medium.com/perlin-network/wavelet-beta-df9c738747e2

The recent introduction of metastable consensus protocols from the paper Snowflake to Avalanche has intrigued researchers and engineers across the cryptocurrency space, and has been the basis for much of the exciting research and development efforts we have undertaken here at Perlin. However, misnomers, edge cases and minor discrepancies around this new family of protocols have yet to be covered publicly.

In this blog post, I would like to provide a brief overview of the Avalanche consensus protocol, alongside some of the research and engineering work we have done to piece together a novel, practical high-throughput ledger: Wavelet.

At Perlin, we not only wish to explore the theoretical aspects of consensus protocols, but also how they may be systematically implemented.

This enables us to release an actual working product that seamlessly aids developers, researchers, startups and enterprises in reaping the benefits of incorporating a decentralized system into their tech stack.

Avalanche

Three key system parameters define the communication complexity and security of Avalanche: K, β₁, and β₂ (we’ll cover what each parameter is for throughout the article). Using the average-case derivations and experiments from the Avalanche white paper, we can take the default parameters to be K = 10, β₁ = 10, and β₂ = 150.

Much like other open, permissionless distributed ledgers, Avalanche enables a network of decentralized nodes to agree upon the order, validity, and acceptance of transactions.

Now, let’s say I wish to broadcast a virtuous transaction.

The first step is to get our transaction precommitted into the network. We broadcast our transaction to K peers to verify whether or not our transaction is presumably non-conflicting in the eyes of our peers.

But first, what is a conflict?

Conflicts

There are several ways to determine whether a transaction is non-conflicting. Bitcoin makes use of UTXOs (unspent transaction outputs), and Ethereum makes use of the account nonce system.

The account nonce system is much simpler to explain and is used in Perlin. In this system, every account (which is identified by a user’s public key) has a counter called a nonce.

The nonce increments every single time a transaction is sent out from the account. In addition, the value of the nonce before being incremented is marked on every single transaction sent out from said account.

We consider two or more transactions to conflict should they originate from the same sender yet be marked with the same nonce.

This implies that conflicts are determined on a per-account basis. Should conflicts occur, it would suggest that a user attempted to double-spend or confuse nodes in the network.
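
To make this concrete, here is a minimal sketch of the account-nonce conflict check in Go. The Transaction type and its fields are hypothetical simplifications for illustration, not Wavelet’s actual structures:

```go
package main

import "fmt"

// Transaction is a simplified, hypothetical representation; in the
// account-nonce scheme, conflict detection depends only on the sender
// and the nonce the transaction was marked with.
type Transaction struct {
	Sender string // sender's public key, hex-encoded for illustration
	Nonce  uint64 // account nonce the transaction was marked with
	Body   string // payload; irrelevant to conflict detection
}

// Conflicts reports whether two distinct transactions conflict: they
// originate from the same sender yet are marked with the same nonce.
func Conflicts(a, b Transaction) bool {
	return a.Sender == b.Sender && a.Nonce == b.Nonce && a.Body != b.Body
}

func main() {
	a := Transaction{Sender: "alice", Nonce: 7, Body: "send bob 10 PERLs"}
	b := Transaction{Sender: "alice", Nonce: 7, Body: "send carol 10 PERLs"}
	fmt.Println(Conflicts(a, b)) // true: a double-spend attempt at nonce 7
}
```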

Precommitment

Now that we have defined what a conflict entails, let’s get back to how we go about having our transaction precommitted to the network.

We consider a transaction to be precommitted if ≥ 80% of our K peers (note: this threshold may be set arbitrarily; the paper denotes this parameter as α) notify us that they strongly insist that our transaction, out of all transactions that may conflict with it, is the one most likely to be accepted by the entire network.

For a peer to respond this way, it must acknowledge that our transaction, with respect to the other transactions that conflict with it, has the maximum likelihood of being accepted by the entire network.

This likelihood is a quantitative measure that is locally computed across all K peers, expressing each one of our K peers’ confidence in strongly insisting that our transaction should be accepted by the network.
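
As a rough sketch of the threshold check itself (query, Peer, and the constants are all hypothetical stand-ins; Transaction is as in the earlier sketch):

```go
const (
	k     = 10  // K: number of peers sampled per query
	alpha = 0.8 // α: fraction of positive responses required
)

// Peer is a hypothetical handle to a sampled peer; query asks a peer
// whether it strongly insists that tx be precommitted.
type Peer struct{ Addr string }

// isPrecommitted reports whether at least α · K of the K sampled peers
// responded positively for tx. len(peers) is assumed to equal k.
func isPrecommitted(tx Transaction, peers []Peer, query func(Peer, Transaction) bool) bool {
	positive := 0
	for _, p := range peers {
		if query(p, tx) {
			positive++
		}
	}
	return float64(positive) >= alpha*float64(len(peers))
}
```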

There are many ways we can have each peer quantitatively compute and express a confidence score that any transaction is likely to be accepted by the network.

In the case of Avalanche, each of our peers keeps a directed acyclic graph (DAG), webbing together all transactions it has acknowledged and received.

Through this DAG, each peer computes the confidence score to be the cardinality of the set of precommitted progeny for a given transaction.

In other words, the confidence score is the number of precommitted transactions that proceed from/build on top of a given transaction.

With this metric, a peer strongly insists that a transaction has the maximum likelihood of being accepted should it and its ancestors have the highest confidence scores among all transactions that potentially conflict with our transaction’s ancestry.

This methodology for assessing a node’s confidence that a transaction will be accepted by the entire network is sound, as nodes are then able to locally compute other nodes’ beliefs about the precommitment/acceptance of any given transaction.
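
A naive version of this computation might look like the following, assuming a hypothetical DAG representation in which each vertex tracks its children; the extra Conflicting and Depth fields are included here only because later sketches build on them:

```go
// Node is a hypothetical DAG vertex: a transaction plus edges to the
// transactions that reference it as a parent (its children).
type Node struct {
	Tx           Transaction
	Children     []*Node // transactions referencing this one as a parent
	Precommitted bool    // whether this transaction is precommitted
	Conflicting  bool    // whether it belongs to a conflict set
	Depth        int     // depth in the graph (see the total-ordering section)
}

// Confidence returns the number of precommitted transactions in the
// progeny of root, via a breadth-first walk over the DAG. This is the
// naive, unbounded version; see the early-stopping section below.
func Confidence(root *Node) int {
	seen := map[*Node]bool{root: true}
	queue := []*Node{root}
	count := 0
	for len(queue) > 0 {
		n := queue[0]
		queue = queue[1:]
		for _, c := range n.Children {
			if seen[c] {
				continue
			}
			seen[c] = true
			if c.Precommitted {
				count++
			}
			queue = append(queue, c)
		}
	}
	return count
}
```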

Once any of our K peers strongly insists that our transaction be precommitted, it repeats the process of spreading the transaction around the network by recursively querying K of its own closest peers with our transaction.

After our transaction gets precommitted, there is just one more requirement before our transaction may be safely applied to the state of our ledger.

If we considered all precommitted transactions to be virtuous, a number of errors could potentially occur.

In the real world, it takes time for information to propagate throughout a network; nodes can become completely out of sync with one another in terms of what transactions they are aware of at a given time.

Just because our K peers strongly prefer that our transaction be the resolution to a conflicting set of transactions does not mean that the entire network thinks the same way.

This brings us to introducing the final step necessary to allow a group of nodes to safely reach consensus: how can we confidently know that a significant portion of our network acknowledges that a transaction is fully committed (i.e., a transaction will be accepted by the entire network)?

Acceptance

Avalanche provides a simple way for us to be well-assured that our transaction has been fully committed, if:

[1] Our transaction is non-conflicting, and β₁ precommitted transactions build on top of our transaction.

OR

[2] Our transaction is conflicting, and β₂ precommitted transactions consecutively build on top of our transaction.

These requirements ensure that our transaction is considered fully committed/accepted and hence safe to be applied to our ledger state.
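
Putting the two conditions together, here is a sketch of the acceptance check (reusing the hypothetical Node type and Confidence walk from earlier; the bookkeeping behind the consecutive counter is elided):

```go
const (
	beta1 = 10  // β₁: confidence required for a non-conflicting transaction
	beta2 = 150 // β₂: consecutive successes required for a conflicting one
)

// Accepted reports whether a transaction is fully committed and may be
// safely applied to the ledger state. consecutive is how many times in
// a row precommitted transactions have built on top of it while it
// remained the preferred member of its conflict set.
func Accepted(n *Node, conflicting bool, consecutive int) bool {
	if !conflicting {
		return Confidence(n) >= beta1 // condition 1: safe early commit
	}
	return consecutive >= beta2 // condition 2: consecutive counter
}
```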

Taking a step back and rethinking the DAG structure introduced in the precommitment phase, it is easy to see that the way in which vertices (transactions) are interconnected with one another through their edges has a significant influence on whether transactions we make can be fully accepted by all nodes within our network.

To construct a transaction that has the highest likelihood of being accepted into the network, we need it to reference as parents (i.e., pick and construct edges to parent vertices) transactions that themselves have the highest likelihood of being accepted into the network.

If we were to pick transactions that may potentially get rejected by the network as parents for a new transaction, it is highly likely that our new transaction will not be accepted.

This causal relationship between a transaction and its ancestry allows us to derive a way to know that it is highly likely that not only will our K peers accept our transaction but also that there exist other nodes in the network that will accept our transaction as well.

Hence, nodes must be particular regarding which transactions they reference as parents for new transactions that they broadcast themselves.

Even if two transactions have no association, transactions inevitably rely on one another to garner acceptance into the network.

Given these relationships we can derive out of the DAG, we can look back and think about why the conditions for considering a transaction to be fully committed/accepted make logical sense.

Condition 1: Say for example we have a precommitted, non-conflicting transaction X.

[Figure: Condition 1 (safe early commit).]

If other nodes have referenced X as an ancestor for transactions they have broadcast out to the network, they are betting with high confidence that X is going to be accepted into the network.

Condition 2: Now, say we have two conflicting transactions X₁ and X₂.

[Figure: Condition 2 (consecutive counter).]

In a sense, when we have two conflicting transactions, we can analogously consider them to represent two different forks of a blockchain.

We can think of condition 2 as the longest-chain rule in Bitcoin, in which we pick the fork with the longest chain built upon by nodes in the network to be the ground-truth chain of blocks for our ledger.

If other nodes have referenced X₁ as an ancestor for transactions they have broadcast out to the network β₂ consecutive times, we can safely assume that the majority of the network believes X₁ should be accepted over X₂.

If X₁ was referenced as an ancestor β₂ − 1 consecutive times, and suddenly X₂ gets referenced as an ancestor once, neither X₁ nor X₂ is considered accepted by the entire network (X₁’s consecutive counter resets).

However, as nodes in the network are driven with the motive to get their own transactions accepted, they can easily work with one another to ensure that the conflict between X₁ and X₂ is resolved as soon as possible.

Nodes must solve the conflicts of other nodes one way or another as the fate of transactions which a node broadcasts inevitably intertwines with whether X₁ or X₂ is accepted.

For example, should such a scenario occur in which X₂ is about to get referenced as an ancestor by a transaction X₃, all nodes in the network that believe X₁ should be accepted over X₂ may reject X₃ from even being precommitted into the network.

If this occurs, the node which broadcasts X₃ may then re-broadcast X₃ with new parents in the hope that X₃ will be accepted by the entire network.

This is the essence of Avalanche. To summarize, a transaction at any time undergoes two phases:

  1. Being precommitted by broadcasting and querying our transaction to K peers
  2. Being fully committed by being built on top of transactions from other peers in the network

The Issues with Avalanche Consensus

The Avalanche paper itself is relatively simple in theory; however, it fails to cover many concerns that arise when it is applied to constructing an open, permissionless ledger.

Some of these concerns include:

  1. How do we securely pick K peers to query?
  2. As a new member of the network, how do I securely bootstrap onto other peers?
  3. How and when exactly do I apply accepted transactions to my ledger state?
  4. To determine the confidence score of transactions in the DAG, one would have to perform a BFS/DFS to walk through an entire transaction’s progeny. Will that not be too expensive if we had millions of transactions and edges in our graph?
  5. What is preventing an individual from performing a Sybil attack over the network by spawning enough adversarial nodes such that there are more adversaries than correct nodes in the network?
  6. Smart contracts need a total ordering over transactions or else the order in which smart contracts are executed may be completely out of sync from the perspective of different nodes. Is there a total ordering of transactions which is constructed over all transactions in Avalanche?
  7. How do we synchronize transactions across different peers?

Wavelet

To address the shortcomings of Avalanche outlined above, we have devised a new open, permissionless ledger that draws on the research and development work we have done at Perlin to produce an improved variant of Avalanche: Wavelet.

These enhancements tackle a large number of practicality/safety concerns which come from translating Avalanche from a purely academic work to a real-world solution that is usable by developers.

Overlay Routing

Being securely bootstrapped to a set of honest nodes and securely sampling for honest peers are problems that have been tackled for a long time by research in overlay networking.

Overlay networks such as S/Kademlia are extremely effective in constructing a secure network topology in the presence of Byzantine nodes, where being bootstrapped to one correct node out of any number of Byzantine nodes bootstraps you to the rest of all correct nodes in the network.

In terms of securely sampling for K peers during Avalanche’s precommitment phase, we can perform multiple parallel disjoint lookups over our closest peers to determine which K peers we should broadcast our transaction to within the network.

Should at least one of the K peers we broadcast our virtuous transaction to within our lookup be honest, we can be assured that our transaction will be epidemically spread across all correct nodes within the network.

Even if the K − 1 other peers we sampled during the lookup process lead us to think that our transaction has failed to be precommitted, correct nodes in the network will eventually propagate our own transaction back to us.

This means that despite being in the presence of Byzantine adversaries, there is a very high likelihood that our transaction will still remain precommitted in the eyes of correct nodes.

In the worst case, we either fail to sample any single honest node (which implies that we should bootstrap with a more reliable set of bootstrap nodes), or have the precommitment of our transaction be delayed due to the presence of Byzantine nodes.

Sybil Resistance: Proof-of-Stake

One of the biggest concerns of Avalanche is the lack of a Sybil resistance mechanism.

Bitcoin addresses this through Proof-of-Work, and Ethereum is slowly shifting over to Proof-of-Stake as a means for achieving Sybil resistance.

Avalanche’s modularity allows us to bolt additional processes onto the protocol and statistically verify their effects.

Changes can easily be made to how K peers are queried, how their responses are taken into account, and how confidence scores are computed on every single node in the network.

In particular, a common trait of a Sybil resistance mechanism is that in order to prevent Sybil attacks, we need a way to make it costly for adversaries to spawn malicious nodes with influence over our network’s consensus mechanism.

The rule of thumb is that, at the very least, the network should remain secure so long as adversaries plaguing our network hold no more than 50% of some staked value.

Let’s define a random variable H backed by a hypergeometric distribution, yielding values in the range [0, K] where K is the number of peers we query in the precommitment phase of Avalanche.

Of course, knowing whether or not ≥ 80% * K of the peers we query/sample believe our transaction should be precommitted may be done by summing the probability mass function of H over [80% * K, K].
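
Since H is discrete, the “integration” is just a tail sum of the PMF. Here is a small sketch of that computation, assuming a network of N nodes of which C would answer positively and from which K are sampled without replacement:

```go
import "math"

// lnChoose returns ln(n choose r) via the log-gamma function, keeping
// the binomial coefficients from overflowing.
func lnChoose(n, r float64) float64 {
	a, _ := math.Lgamma(n + 1)
	b, _ := math.Lgamma(r + 1)
	c, _ := math.Lgamma(n - r + 1)
	return a - b - c
}

// tailProb returns P(H >= threshold) for H ~ Hypergeometric(N, C, K):
// the chance that at least `threshold` of our K sampled peers come
// from the C positive respondents among N total nodes.
func tailProb(N, C, K, threshold int) float64 {
	p := 0.0
	for h := threshold; h <= K && h <= C; h++ {
		if K-h > N-C { // not enough remaining nodes to fill the sample
			continue
		}
		p += math.Exp(lnChoose(float64(C), float64(h)) +
			lnChoose(float64(N-C), float64(K-h)) -
			lnChoose(float64(N), float64(K)))
	}
	return p
}

// Example: tailProb(1000, 800, 10, 8) estimates the probability that
// ≥ 80% of a 10-peer sample responds positively when 80% of the
// network would.
```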

A very interesting property follows from viewing the precommitment phase in this manner: we can join the random variable H with any other random variable, backed by whatever distinct distribution we may have in mind.

Wavelet utilizes this fact to create a whole new form of proof-of-stake which remains leaderless, encourages low communication complexity, and discourages centralization by effectively allowing any node to be a validator at a sufficiently minimal cost.

Let’s assume that anyone in your network has the ability to stake some amount of PERLs (the tokens operating within Perlin’s network). They lock up a stake of PERLs that can only be unlocked after some predetermined period of time.

Let’s define a random variable S backed by a weighted discrete distribution whose probability mass function assigns to each of our K peers the mass given by the following formula:

(peer’s stake) / (sum of all K peers’ stakes)

We derive a random variable H’ backed by the joint distribution over random variables H and S, and model our transaction’s precommitment phase to be with respect to the value of H’.

As a result, the answers of the K peers we query when we precommit our transaction are now weighted by each peer’s respective stake.

[Figure: Simple stake weighting with min/max thresholds.]
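
A minimal sketch of this stake weighting, with clamping standing in for the min/max thresholds shown in the figure; all names and values here are illustrative rather than Wavelet’s actual parameters:

```go
const (
	minStake = 100   // floor on a peer's effective stake (illustrative)
	maxStake = 10000 // ceiling on a peer's effective stake (illustrative)
	alphaW   = 0.8   // α, as in the unweighted precommitment check
)

// clamp bounds a peer's effective stake to [minStake, maxStake] so
// that no single peer's vote dominates and every validator's vote
// still counts for something.
func clamp(stake uint64) uint64 {
	if stake < minStake {
		return minStake
	}
	if stake > maxStake {
		return maxStake
	}
	return stake
}

// stakeWeightedPrecommit tallies the K peers' responses weighted by
// their clamped stakes rather than one-peer-one-vote: votes[i] is
// whether peer i strongly insists on our transaction, and stakes[i]
// is that peer's locked PERLs.
func stakeWeightedPrecommit(stakes []uint64, votes []bool) bool {
	var total, positive uint64
	for i, s := range stakes {
		w := clamp(s)
		total += w
		if votes[i] {
			positive += w
		}
	}
	return float64(positive) >= alphaW*float64(total)
}
```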

With the new staking mechanism in Wavelet, a Sybil attack may only be carried out if a person has greater than 50% of the total stake in the network given how transactions garner acceptance within the network.

This methodology of creating a joint distribution for assessing K peers’ responses amidst the precommitment phase allows us to embed other forms of Sybil resistance mechanisms such as proof-of-work into Wavelet as well.

Thus, there is no committee selection process, no verifiable randomness needed in choosing validators, and no requirement for validators to always be online; this effectively allows us to construct a novel, leaderless proof-of-stake Sybil resistance mechanism on top of Wavelet.

Penalties and Rewards

There are plenty of ways we can incentivize validators to behave correctly, such as relaying rewards for supporting honest behavior throughout the network.

Wavelet implements the following mechanisms to ensure rewards for honest nodes and penalties for misbehaving ones.

For every transaction a validator successfully validates and helps bring to full commitment, the validator is paid a sum of transaction fees.

In return, the cost for validators to acquire said rewards is that they are required to be full nodes, which are consistently online and in-sync with the network.

There also exists a simple process to penalize validators who might try to take advantage of the fact that there may be nothing at stake in attempting to get every transaction they receive accepted into the network.

Recall how the network selects one of two conflicting transactions to be the correct fork: eventually, one transaction is accepted and the other transaction is rejected.

We can look at the depth difference between the transaction with the highest confidence score in a set of conflicting transactions and the remaining conflicting transactions, and determine whether a validator is intentionally attempting to bring forks into the system.

Out of two conflicting transactions, if the accepted transaction’s depth minus the rejected transaction’s depth is ≥ β₂ and both transactions were propagated by the same validator, we penalize the validator by removing their entire stake for attempting to double-spend/bring forks into the network. The same applies should there be more than two conflicting transactions issued by the same validator.
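
A sketch of this penalty rule, using the hypothetical Node type from earlier (its Depth field is explained in the next section) and an assumed slash callback that burns a validator’s stake:

```go
// slashIfForking applies the penalty: once the conflict between two
// transactions from the same validator resolves with a depth gap of
// at least β₂, the validator's entire stake is removed.
func slashIfForking(accepted, rejected *Node, slash func(validator string)) {
	sameValidator := accepted.Tx.Sender == rejected.Tx.Sender
	if sameValidator && accepted.Depth-rejected.Depth >= beta2 {
		slash(accepted.Tx.Sender)
	}
}
```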

Total Ordering of Transactions

There are many kinds of topological orderings in a DAG. Maintaining a topological order consistently for a DAG comprised of hundreds of thousands of vertices and edges is extremely expensive.

Research into online topological sorting algorithms has, at best, yielded quadratic time complexity.

There is, however, a way to realistically maintain a consistent order in the eyes of every single node in our network for Wavelet’s use case.

Wavelet achieves a consistent ordering by recording the depth of every single transaction inside the graph. When inserting a new transaction into the graph, we derive its depth by taking the maximum depth among its parents and incrementing it by one.

Once that is done, we then walk through the progeny of our transaction and update their respective depth counts accordingly.

An index of transactions’ depth counts may then be constructed inside any embedded key-value store whose underlying data structure is sorted, such as LevelDB.

In doing so, our transactions with respect to this newly constructed index will then be lexicographically-least topologically sorted, allowing for every node in the network to have a consistent total ordering of all of the transactions in our DAG.
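
Under the same hypothetical Node representation, a sketch of the depth assignment and of an index key that yields this ordering in a byte-sorted store might be:

```go
import "encoding/binary"

// depthOf derives a new transaction's depth: one more than the
// maximum depth among its parents, with the root of the graph at 0.
func depthOf(parents []*Node) int {
	if len(parents) == 0 {
		return 0 // the root transaction
	}
	max := 0
	for _, p := range parents {
		if p.Depth > max {
			max = p.Depth
		}
	}
	return max + 1
}

// indexKey prefixes a transaction ID with its big-endian depth, so a
// byte-sorted store such as LevelDB iterates transactions in a
// lexicographically-least topological order.
func indexKey(depth int, txID []byte) []byte {
	key := make([]byte, 8+len(txID))
	binary.BigEndian.PutUint64(key[:8], uint64(depth))
	copy(key[8:], txID)
	return key
}
```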

This allows us to construct features on top of Wavelet where an atomic, eventually consistent, and total ordering of operations is mandatory, such as in smart contracts.

Early-Stopping Criterion for Computing Confidence Scores

Confidence scores in Avalanche are most important in resolving conflicting transactions, and determining whether a virtuous transaction is fully committed/accepted by the network.

To minimize the depth to which we walk through a given transaction’s progeny to compute the confidence score, Wavelet limits the depth of this walk to the system parameter β₂.

Early stopping in said walk provides a nice upper bound on the time complexity of both the precommit and acceptance phases of any given transaction.

This is viable given that every module associated with the precommitment and acceptance of any transaction over the network only needs to be concerned with a confidence score upper-bounded by β₂.
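
The early-stopped variant of the earlier Confidence walk then simply refuses to descend more than β₂ levels (again a sketch over the hypothetical Node type):

```go
// boundedConfidence walks root's progeny breadth-first but never
// descends more than beta2 levels, upper-bounding both the returned
// score and the cost of computing it.
func boundedConfidence(root *Node) int {
	type item struct {
		n    *Node
		dist int // levels below root
	}
	seen := map[*Node]bool{root: true}
	queue := []item{{root, 0}}
	count := 0
	for len(queue) > 0 {
		it := queue[0]
		queue = queue[1:]
		if it.dist == beta2 { // early stop: deeper progeny cannot matter
			continue
		}
		for _, c := range it.n.Children {
			if seen[c] {
				continue
			}
			seen[c] = true
			if c.Precommitted {
				count++
			}
			queue = append(queue, item{c, it.dist + 1})
		}
	}
	return count
}
```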

[Figure: The number of precommitted descendants of a particular vertex in the graph is its confidence score.]

To quote a bit from the white paper, we can represent the probability that 80% of our K peers believe our transaction should be precommitted as P(H ≥ 80% * K), where H is a random variable backed by a hypergeometric distribution ranging over [0, K].

Let’s regard G(P(H ≥ 80% * K)) as a random variable counting the number of consecutive times one transaction in a set of conflicting transactions gets built on top of by other transactions.

The white paper provides a nice default for the system parameter β₂ such that the probability P(G(P(H ≥ 80% * K)) ≥ β₂) is negligible (of probability ≤ ε) for a transaction that should not win its conflict set, thus making it extremely difficult to reverse a transaction that was accepted as a result of Condition 2.

Given that Condition 2 presumes there is negligible probability that said accepted transaction may ever be reversed, we may simply have all nodes irreversibly accept a transaction under Condition 2 and cease walking through any paths of the accepted transaction’s descendants.

Adaptive Parent Selection Search

We can make use of the depth index in Wavelet’s database to walk through all precommitted transactions from the highest depth to lowest in finding ideal transactions that may be eligible parents for any new transaction we broadcast.

By keeping track of the tips in the frontier of our graph, we simply check which transactions are precommitted and either have a confidence score > 0 or are non-conflicting, and use them as parents for new transactions we may choose to broadcast.

This approach, compared to walking through all strongly preferred transactions, significantly reduces the number of vertices and edges that need to be traversed.

[Figure: Select candidate parents from the frontier of the graph and filter eligible parents from them.]

There is also a guarantee that such a search for eligible parent transactions terminates quickly, as we are simply looking for eligible parents along the tips just a few depths below our graph’s frontier.
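
A sketch of the eligibility filter over the frontier’s tips, reusing the earlier hypothetical Node type and the bounded confidence walk:

```go
// eligibleParents scans candidate tips from the graph's frontier and
// keeps those safe to reference as parents: precommitted, and either
// non-conflicting or currently winning their conflict set.
func eligibleParents(tips []*Node, want int) []*Node {
	var parents []*Node
	for _, t := range tips {
		if !t.Precommitted {
			continue
		}
		if !t.Conflicting || boundedConfidence(t) > 0 {
			parents = append(parents, t)
			if len(parents) == want {
				break
			}
		}
	}
	return parents
}
```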

Synchronizing Graphs between Peers

Some nodes may lose track or fail to receive some transaction data necessary to safely precommit/accept other transactions.

This is inevitable should the network be accepting tens of thousands of transactions per second.

Therefore, there needs to be a mechanism for a peer to synchronize its ledger state to be consistent with the ledger state of the network.

Wavelet accomplishes syncing by making use of an invertible Bloom lookup table (IBLT).

When synchronizing transactions from peers, we can receive and ingest transactions from peers in any order.

Periodically, we determine which transactions we are missing and want from our peers by making use of an IBLT: we take the set difference of our transaction IBLT with our peers’ IBLTs.

We then send out requests to our peers in a separate lightweight thread for the transactions we are missing so that we may sync up with the network.
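
This is only a sketch of the reconciliation step over an assumed IBLT interface, not Wavelet’s actual wire protocol; any invertible Bloom lookup table with subtract/decode semantics would fit:

```go
// IBLT is a hypothetical interface over an invertible Bloom lookup
// table: Subtract takes the cell-wise difference of two tables built
// over two sets, and Decode recovers the IDs in that difference when
// the symmetric difference is small enough.
type IBLT interface {
	Subtract(other IBLT) IBLT
	Decode() (missing [][]byte, ok bool)
}

// missingTxIDs returns the IDs of transactions a peer has that we do
// not: we subtract our table from theirs and decode the difference,
// then request those transactions in a separate lightweight thread.
func missingTxIDs(ours, theirs IBLT) [][]byte {
	ids, ok := theirs.Subtract(ours).Decode()
	if !ok {
		return nil // difference too large to decode; fall back to a fuller sync
	}
	return ids
}
```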

As we ingest these transactions in any order, we incrementally walk from the root (at depth 0) to the frontier as our graph gets synced, making use of our lexicographically-least sorted depth index, eventually reaching the point where our DAG is structurally consistent and complete with respect to other nodes in the network.

To verify the validity of our graph, for every N (say, 100) depths of transactions we’ve synced and walked through, we can query K peers about the validity of the transaction tips at that depth to efficiently validate our graph.

In a sense, this method of verifying by walking through every N depths of transactions is similar to an exponential search for (potentially) invalid transactions we receive from a peer that should be discarded.

Closing Thoughts

The original purpose of Avalanche was to create a consensus protocol for a new cryptocurrency that was sufficiently scalable to be a store of value; however, several critical issues with Avalanche have yet to be explored, and many of the use cases claimed for Avalanche have yet to be realized.

We at Perlin understand, however, that Avalanche requires further development in terms of both security and implementation.

With the introduction of Wavelet, we aim to solve these critical issues while allowing developers to implement efficient leaderless proof-of-stake protocols, smart contracts, and economic incentives on top of these exciting new metastable consensus protocols.

There is so much more to cover, and we’ll be releasing more updates to address these very points in the coming months.

We are dedicated to bringing decentralized systems closer to mainstream adoption, and the team is working day-by-day with a ton of exciting results which we just can’t wait to share with the community.

Oh, and to leave off on a small note; for all developers out there:

Stay tuned, because Perlin’s smart contract SDK is arriving soon.

| Website | Github | Discord | Medium | Telegram | Twitter | Facebook | Reddit |

’Til Next Time,

Kenta Iwasaki

Many thanks to my teammate Jack Li for helping to edit this piece!
