What Avalanche paper is not

Alexander Skidanov
May 21, 2019


This blog post presents the results of our analysis of Snowball and Avalanche from last year.

We discussed a draft of this blog post with a researcher from Ava Labs and had some disagreements, so this post should be considered an opinion piece. Make sure to conduct further research after reading this post if you are interested in Avalanche.

The Avalanche paper presents a very interesting consensus algorithm that, given a stream of transactions, can efficiently decide on one transaction in each conflicting set (e.g. in a set of transactions that spend the same UTXO). A unique property of Avalanche is that the network cost of the consensus is just a constant number of network requests per transaction. However, many people who read the paper end up believing that Avalanche is more than what it is. In this write-up I want to cover two major ways in which the Avalanche paper is often misunderstood, specifically that:

  1. Snowball as is is a ready-to-use binary consensus; and
  2. Avalanche, as presented, can be used in practice without major additions.

Snowball as is is not a ready-to-use binary consensus

The paper starts by presenting a very simple binary consensus called Slush, which can stall in the presence of malicious nodes. The core idea of the consensus is to constantly sample the present votes of other participants and adjust one’s own vote based on those samples. The authors then improve Slush in two iterations, first to Snowflake and then to Snowball, to make it resilient to malicious actors. A reader might be left under the impression that below a certain threshold of malicious actors Snowball can quickly and efficiently reach a binary consensus.
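To make the sampling dynamic concrete, here is a minimal Slush-style simulation sketch (the parameters and structure are my own illustration, not code from the paper): every node repeatedly samples k peers and adopts a color whenever at least alpha·k of the sampled answers show it. With only honest nodes, even a near-50/50 split snaps to one color quickly.

```python
import random

def slush_round(colors, k=10, alpha=0.8):
    """One synchronous Slush round: every node samples k peers and
    adopts a color if at least alpha*k of the sampled answers show it."""
    n = len(colors)
    new_colors = list(colors)
    for i in range(n):
        sample = [colors[random.randrange(n)] for _ in range(k)]
        ones = sum(sample)
        if ones >= alpha * k:
            new_colors[i] = 1
        elif k - ones >= alpha * k:
            new_colors[i] = 0
        # otherwise the node keeps its current color
    return new_colors

random.seed(7)
colors = [random.randrange(2) for _ in range(1000)]  # start near a 50/50 split
for step in range(500):
    colors = slush_round(colors)
    if len(set(colors)) == 1:  # unanimity reached
        break
print(f"unanimous on color {colors[0]} after {step + 1} rounds")
```

The 50/50 state is an unstable equilibrium: random sampling noise creates a small imbalance, and the threshold rule amplifies it until everyone agrees, which is exactly why the malicious strategy below works by actively holding the network at the balance point.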

In practice, it cannot. It is designed for a very specific use later in Avalanche, and it doesn’t work that well beyond that use case. With the parametrization in the paper, specifically with alpha=0.8, Snowball can stall (loosely defined as still being in a state of unstable equilibrium after 500 iterations) with just 17% malicious actors. The only action the malicious actors need to take is to split into two equal groups and always respond with the same color within each group (see the simulation here).

With lower alphas the percentage of malicious actors needed to stall is higher, but the number of steps to converge under normal circumstances also grows rapidly.
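For readers who want to experiment with this attack, here is a simplified Snowball-style sketch (my own illustration: it keeps only the per-color success counters and omits the paper’s consecutive-success finality counter, so it is not the full protocol). The adversaries split into two fixed groups that always answer 0 and 1 respectively, exactly as described above:

```python
import random

def snowball_stall_demo(n=100, adversarial=0.17, k=10, alpha=0.8, rounds=500):
    """Honest nodes keep per-color success counts d[0], d[1] and prefer
    the color with the larger count. Adversaries never change their answer."""
    random.seed(0)
    n_adv = round(n * adversarial)
    # Two fixed adversarial groups: one always answers 0, the other always 1.
    adv_answers = [0] * (n_adv // 2) + [1] * (n_adv - n_adv // 2)
    honest = n - n_adv
    color = [i % 2 for i in range(honest)]   # honest nodes start split 50/50
    d = [[0, 0] for _ in range(honest)]      # per-color successful-query counters

    def answer(j):
        # Nodes honest..n-1 are adversarial and answer their group's fixed color.
        return adv_answers[j - honest] if j >= honest else color[j]

    for _ in range(rounds):
        for i in range(honest):
            sample = [answer(random.randrange(n)) for _ in range(k)]
            for c in (0, 1):
                if sample.count(c) >= alpha * k:     # successful query for c
                    d[i][c] += 1
                    if d[i][c] > d[i][color[i]]:
                        color[i] = c                 # preference follows the larger counter

    ones = sum(color)
    return ones, honest - ones

ones, zeros = snowball_stall_demo()
print("honest nodes after 500 rounds:", zeros, "prefer 0,", ones, "prefer 1")
```

This is only a sketch for experimentation; the stalling result quoted above comes from the authors’ own, more careful simulation.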

When later analyzing liveness in section 3.6, the authors make an important assumption: the honest actors must all initially vote for the same color. It is a reasonable assumption in the context in which Avalanche uses Snowball, but such an assumption makes it completely impractical for most other purposes: it is rare that you need to reach a binary consensus among actors in a setup in which the honest actors all initially vote for the same value.

Avalanche, as presented, cannot be used without major additions

Once the paper presents Snowball, it moves on to Avalanche. The core idea of Avalanche is to assemble transactions into a DAG and then use Snowball for each conflicting set of transactions. The idea that makes Avalanche beautiful is that in practice, for each transaction t, the Snowball query that samples the current opinion on the transaction is only done once. Each such sample affects not just the transaction the query was made for, but also all the transactions reachable from it in the DAG (collectively referred to as t’s Ancestry). Thus, once new transactions are published that have t reachable from them (collectively referred to as t’s Progeny), their Snowball samples are also applied to t. The total number of samples is therefore ultimately equal to the number of transactions, which promises very high throughput.
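The chit-counting idea can be sketched as follows (a hypothetical illustration; the class and function names are mine, not from the paper or Ava Labs’ code). A transaction’s confidence is its own chit plus the chits of everything in its Progeny, so a single successful query on a new transaction strengthens its whole Ancestry:

```python
class Tx:
    def __init__(self, parents):
        self.parents = parents  # DAG edges to earlier transactions
        self.chit = 0           # 1 if this transaction's one-time query succeeded

def ancestry(tx):
    """All transactions reachable from tx through parent edges (tx's Ancestry)."""
    seen, stack = set(), list(tx.parents)
    while stack:
        t = stack.pop()
        if t not in seen:
            seen.add(t)
            stack.extend(t.parents)
    return seen

def confidence(tx, dag):
    """tx's own chit plus one chit for every transaction in tx's Progeny,
    i.e. every transaction in the dag whose Ancestry contains tx."""
    return tx.chit + sum(t.chit for t in dag if tx in ancestry(t))

# A chain genesis <- a <- b: b's successful query also counts toward
# a and genesis, because they are in b's Ancestry.
genesis = Tx([]); a = Tx([genesis]); b = Tx([a])
for t in (genesis, a, b):
    t.chit = 1
print([confidence(t, [genesis, a, b]) for t in (genesis, a, b)])  # → [3, 2, 1]
```

Confidence grows toward the root of the DAG: each transaction was queried once, yet the genesis has accumulated three chits’ worth of confidence, which is the property that keeps the per-transaction network cost constant.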

The transaction itself, however, is only considered approved once all the transactions in its Ancestry are approved. Thus, the transaction poster has a strong incentive to have as few transactions in the Ancestry as possible, in the extreme case referencing only the genesis. If they reference only the genesis and don’t post any double-spends of their own, their transaction is guaranteed to be accepted, while if they reference someone else’s transaction, there’s a chance their transaction will be rejected because of the referenced transactions. The more transactions in the ancestry, the higher the chance of rejection. Thus, without some other incentive, all rational actors will always reference only the genesis in their transactions, creating a very wide and very shallow graph.
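The incentive gradient can be made concrete with a toy calculation, assuming (purely for illustration; the number p is hypothetical, not from the paper or any measurement) that each transaction in the ancestry is independently rejected with probability p:

```python
# If each of the s transactions in the ancestry were rejected independently
# with probability p, the chance that the whole ancestry survives -- and the
# poster's transaction with it -- is (1 - p) ** s.
p = 0.01  # assumed per-ancestor rejection probability, for illustration only
for s in (0, 1, 10, 100):
    print(f"ancestry size {s:3d}: survives with probability {(1 - p) ** s:.3f}")
```

Even a small per-ancestor risk compounds exponentially with ancestry size, while an ancestry of zero (referencing only the genesis) carries no such risk at all, which is exactly the degenerate strategy described above.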

Since no transaction has any Ancestry, naturally no transaction has any Progeny either, and the natural consequence of this problem is that no transaction ever gets a sufficient number of Snowball samples. The paper addresses this particular problem in the “sufficient chits” subsection of section 3.6 by asking honest participants to post empty transactions on top of transactions that don’t have enough Snowball samples. If no transaction has any progeny, the number of such empty transactions per meaningful transaction will need to be on the order of tens, slowing down such a naive Avalanche significantly.

Some crypto-economic incentive is required to encourage people to post transactions that actually build the DAG. Designing such crypto-economic incentives so that they cannot be manipulated is itself a complex task, and is an important addition that needs to be built before Avalanche can be used in practice. Thus, Avalanche, as presented, is not a complete working solution for building a protocol, as a reader of the paper might assume, but rather a core algorithm with some interesting properties. The surrounding incentives need to be properly designed for Avalanche to be usable in practice.

An example of such a crypto-economic incentive is the IOTA Tangle. The sole purpose of the Tangle is to incentivize people to build a deep DAG. However, the Tangle has some major game-theoretic issues as well. It is not yet clear what particular incentives Ava Labs uses in their approach.

Outro

Most of the work of preparing and running the Snowball simulations above was done by Max Zavershynskyi and Marcelo Fornet.

NEAR Protocol runs the Whiteboard Series, a video podcast in which we dive deep into other blockchain technologies with their core developers and researchers, with episodes covering Ethereum, Polkadot, Cosmos, and many other protocols.

If you are an entrepreneur in the web3 space, consider joining our accelerator with mentors from tier-1 companies and investment firms in the space: http://openwebcollective.com/. We help with all aspects of building the business, including identifying the market, BD, marketing, and fundraising.

I’m on Twitter: @AlexSkidanov

