A Simple Framework for Evaluating (Security Assumptions of) Consensus Algorithms

What we talk about when we talk about consensus algorithms

Toya B
Jun 26 · 8 min read

Original article by Jan Xie

The usual way of talking about so-called ‘PoW consensus’ and ‘PoS consensus’ is very vague. PoW and PoS are merely block selection mechanisms; they do not describe the process by which consensus is reached.

Discussions about consensus algorithms cover a broad range of topics.

Prior to the emergence of blockchain, there was already a substantial body of research on consensus algorithms in the areas of distributed systems and databases. However, blockchain consensus poses significantly different requirements, and one needs to exercise caution to avoid the pitfalls of applying traditional consensus research directly. In fact, not only does blockchain put unique constraints on consensus algorithms; there is also a significant difference in emphasis between research presented at database conferences and research presented at distributed-systems conferences. This is easy to understand: different use cases naturally call for different designs.

This article proposes a simple framework for analyzing blockchain consensus algorithms and enabling comparisons between them.

Here is what the framework looks like at a glance: entry method, block selection, consensus algorithm, and exit method (with incentive analysis added on top).

BASIC REQUIREMENTS

A very basic metric for measuring a consensus algorithm consists of two parts: correctness and performance.

Simply speaking, correctness includes:

  • Consistency — nodes eventually see the same local state
  • Liveness — the network keeps making progress, i.e. new consensus results continue to be produced

Correctness is the most fundamental requirement of all, and most blockchains are able to achieve it. Maintaining both consistency and liveness at all times in an asynchronous network is a very challenging task, so tradeoffs between the two are often made under particular circumstances. For instance, Bitcoin’s Nakamoto consensus prioritizes liveness, while BFT consensus prioritizes consistency.

The second part of the metric, performance, includes:

  • Throughput — the number of transactions processed per unit of time
  • Latency — the time from when a transaction is submitted until it is confirmed

Many factors affect throughput and latency. To name a few: the number of consensus nodes, the complexity of consensus information, the bandwidth available for consensus formation, design preferences, and so on.

Generally speaking, it is difficult to optimize for throughput and latency at the same time, because there is a lower bound on consensus information complexity: participating nodes need to receive the consensus information at least once each time new consensus is formed (otherwise they would not know what they are agreeing on). If low latency is preferred, nodes need to come to agreement as quickly as possible, which means the consensus overhead attributed to a single request/transaction is high. If high throughput is desired, requests/transactions need to be processed in batches as much as possible to reduce the per-request overhead, but batching leads to higher latency.
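The batching tradeoff can be made concrete with a toy model. This is my own illustration, not a formula from any protocol; all the numbers (overhead bytes, transaction size, bandwidth) are assumed for the sake of the example:

```python
# Toy model of the batching tradeoff: one consensus round carries a fixed
# overhead plus the transactions it commits, and everything must traverse
# the available bandwidth. All parameter values below are illustrative.

def round_metrics(batch_size, per_round_overhead_bytes=10_000,
                  per_tx_bytes=250, bandwidth_bytes_per_s=1_000_000):
    """Return (throughput tx/s, latency s) for one round committing batch_size txs."""
    round_bytes = per_round_overhead_bytes + batch_size * per_tx_bytes
    round_time = round_bytes / bandwidth_bytes_per_s  # seconds per round
    throughput = batch_size / round_time              # transactions per second
    latency = round_time                              # best case: one full round
    return throughput, latency

small_tps, small_lat = round_metrics(batch_size=10)
big_tps, big_lat = round_metrics(batch_size=4000)
# Larger batches amortize the fixed overhead: higher TPS, but higher latency.
assert big_tps > small_tps and big_lat > small_lat
```

The fixed per-round overhead is what creates the tension: small batches pay it per few transactions (fast rounds, low throughput), large batches amortize it (high throughput, slow rounds).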

Ren Zhang of the Nervos research team proposed bandwidth utilization rate as a parameter for measuring the performance of a consensus algorithm: given the same total bandwidth, the less bandwidth the consensus algorithm itself consumes, the higher the achievable TPS.
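As a back-of-the-envelope sketch of that idea (my own simplification, with assumed sizes): whatever fraction of bandwidth the consensus protocol burns on its own coordination is unavailable for carrying transaction data, which caps TPS:

```python
# Simple TPS ceiling under a fixed link: bandwidth not consumed by consensus
# overhead is what carries transactions. Numbers are illustrative only.

def tps_upper_bound(total_bandwidth_Bps, consensus_overhead_fraction, tx_bytes):
    """Upper bound on transactions/second given consensus bandwidth overhead."""
    usable = total_bandwidth_Bps * (1 - consensus_overhead_fraction)
    return usable / tx_bytes

# Same 1 MB/s link, 250-byte transactions: lower overhead -> higher ceiling.
assert tps_upper_bound(1_000_000, 0.10, 250) > tps_upper_bound(1_000_000, 0.50, 250)
```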

(Here is why TPS comparison does not mean anything: Forget about the TPS Competition)

PROPERTIES OF BLOCKCHAIN CONSENSUS

A DYNAMIC PARTICIPANT SET

Whether permissionless or permissioned, the most important property of a blockchain is that it is an open system that operates on a long-term basis. Taken together, these two traits mean that the set of participants is always changing. Every once in a while, old nodes leave and new nodes join the network, resulting in a dynamic set of participants. How to deal with this dynamic change is a fundamental problem for blockchain consensus.

Unlike blockchain consensus, traditional consensus research often assumes a fixed set of participants and then studies how to best form consensus within this fixed group. Only occasionally does change in the consensus group become a topic of discussion, and only rarely is any attention paid to the entry requirements for consensus participation. Most traditional research focuses on ways to guarantee correctness (i.e. consistency and liveness); how the consensus group is formed is a secondary research topic at best. This emphasis is a natural outcome of the traditional use cases, which mostly involve centralized networks, where any servers added or removed usually belong to one central party.

VAST POOL OF PARTICIPANTS

Decentralization is a key requirement for a permissionless blockchain. Generally, we can measure the degree of decentralization by assessing the protocol’s barrier to entry for consensus formation: the lower the barrier, the more decentralized a network can potentially be. A low barrier to entry also means the set of possible consensus participants is very large. This must be taken into consideration when designing a consensus algorithm, to guarantee that performance does not degrade as the set of participants grows.

A MINIMUM TRUST MODEL
The goal of executing a consensus algorithm is to generate a consistent computational result for a computational request. This process involves request-initiating nodes and request-processing nodes. In traditional consensus algorithms, execution happens entirely within the request-processing nodes in some designs, while in others the request-initiating and request-processing nodes handle execution together. In both cases, the trust boundary lies between request-initiating nodes and request-processing nodes: initiating nodes submit requests to processing nodes, processing nodes return execution results, and initiating nodes then simply trust the returned results. (Processing nodes, on the other hand, do not trust initiating nodes if the latter participate in consensus.)

The trust model for blockchains is different: the requirement for trust is much reduced. In a permissionless blockchain network, the same node can both initiate requests and participate in consensus. Responsibility for verifying computational results is shared, as opposed to simply trusting the consensus outputs of other nodes. Take Bitcoin as an example: if a full node takes part in mining, it is a transaction-submitting node and a transaction-processing node at the same time. Even if it does not take part in mining, it verifies all computations by itself and does not trust results provided by other nodes. Such a full node will only choose to follow the valid chain with the most accumulated hash power; it does not trust results merely because a majority of hash power produced them. This is a minimum trust model.
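The full node’s rule can be sketched in a few lines. This is a minimal illustration of the principle (not Bitcoin’s actual code): self-validate everything first, and only then pick the heaviest of the surviving chains:

```python
# Minimal sketch of minimum-trust chain selection: validate every block
# yourself, then follow the *valid* chain with the most accumulated work.
# Data structures here are toy stand-ins, not real Bitcoin types.

def best_chain(candidate_chains, validate_block, chain_work):
    """Keep only fully self-validated chains, then pick the heaviest one."""
    valid = [c for c in candidate_chains if all(validate_block(b) for b in c)]
    return max(valid, key=chain_work, default=None)

# Toy data: each "block" is a (is_valid, work) pair.
chains = [
    [(True, 10), (True, 10)],   # valid chain, total work 20
    [(True, 10), (False, 50)],  # heavier chain, but contains an invalid block
]
winner = best_chain(chains,
                    validate_block=lambda b: b[0],
                    chain_work=lambda c: sum(b[1] for b in c))
assert winner == chains[0]  # the invalid chain is rejected despite more work
```

Note the order of operations: validation comes before chain selection, which is exactly why majority hash power cannot force an invalid result on a full node.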

An SPV/light node uses a stronger trust model (meaning more trust assumptions): it does not verify the execution of transactions, but only verifies the validity of block headers. How to verify the validity of block headers is another key issue in consensus algorithm design. If validity is verified only through asymmetric signatures on the block headers, then this trust model is largely equivalent to the trust model of traditional consensus, since in traditional consensus signatures can also be attached by the request-processing nodes. This model runs into a number of problems when the set of participants is dynamic; a well-known example is the long-range attack. If block header validity is verified via PoW, there is no such issue. The majority of research in this area focuses on improving the verification process further.
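The core of PoW-based header verification is a single, self-certifying check. The sketch below is simplified (real Bitcoin hashes an 80-byte serialized header twice with SHA-256 and decodes the target from a compact `nBits` field), but it shows why forging a header requires redoing the work rather than stealing a key:

```python
import hashlib

# Simplified PoW header check: the double-SHA-256 of the header, read as a
# 256-bit integer, must not exceed the difficulty target. Unlike a signature,
# this proof cannot be produced without actually spending the hash power.

def header_meets_target(header_bytes: bytes, target: int) -> bool:
    digest = hashlib.sha256(hashlib.sha256(header_bytes).digest()).digest()
    return int.from_bytes(digest, "little") <= target

# The maximal target accepts any header; a target of 0 accepts (practically) none.
assert header_meets_target(b"example header", 2**256 - 1)
assert not header_meets_target(b"example header", 0)
```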

A SIMPLE EVALUATION FRAMEWORK

Based on the analysis above, we arrive at a simple evaluation framework for comparing blockchain consensus:

  • Entry requirement — obtaining hash power/staking tokens/…
  • Block selection — PoW/PoS/VRF election/…
  • Consensus algorithm — Nakamoto Consensus/BFT/…
  • Exit mechanism — stopping mining/unstaking/…

FOR EXAMPLE:

Evaluating the Consensus Algorithms of Some Popular Blockchains

For any blockchain protocol, the first four parameters (namely entry method, block selection, consensus algorithm, and exit method) basically determine what the rest will look like. As the framework shows, the usual way of talking about so-called ‘PoW consensus’ and ‘PoS consensus’ is very vague: PoW and PoS are merely block selection mechanisms, and they do not describe the process by which consensus is reached. We could even design novel consensus protocols in which participants enter by staking, block producers are selected via PoW, and the longest chain is secured by Nakamoto Consensus (and name it StakingPoW to catch the hottest trend); or we could design a chain where participants enter via PoW (one must hold a certain amount of hash power to qualify), block-producing nodes are selected by VRF, and consensus is reached via BFT (this one could be named PoW+VRF/BFT, a very scholarly-sounding name indeed).
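The mix-and-match nature of these four parameters is easy to show in code. The sketch below is my own illustration of the framework; the descriptions of each protocol are informal labels, not normative specifications:

```python
from dataclasses import dataclass

# Describing a protocol as a combination of the framework's four parameters
# makes vague labels like "PoW consensus" precise: PoW is only one slot.

@dataclass
class ConsensusDesign:
    entry: str            # how one becomes a consensus participant
    block_selection: str  # how the next block producer is chosen
    agreement: str        # how the canonical chain/result is agreed on
    exit: str             # how participants leave

bitcoin = ConsensusDesign(
    entry="obtain hash power (permissionless)",
    block_selection="PoW",
    agreement="Nakamoto Consensus (heaviest valid chain)",
    exit="stop mining",
)
tendermint_like = ConsensusDesign(
    entry="bond/stake tokens",
    block_selection="deterministic proposer rotation",
    agreement="BFT voting",
    exit="unbond",
)
# The article's hypothetical 'StakingPoW': enter by staking, select by PoW,
# agree via Nakamoto Consensus.
staking_pow = ConsensusDesign("stake tokens", "PoW", "Nakamoto Consensus", "unstake")
```

Comparing two protocols then means comparing them slot by slot, rather than arguing about a single overloaded label.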

Incentive Analysis

On top of the framework proposed above, we can add one more metric that does not exist in traditional consensus algorithms: incentives. Blockchain introduces economic incentives to consensus, using mechanism design to align Nash equilibria with system goals and incentivize participants to abide by the protocol. Participants’ utility functions can be measured simply by their economic gains (e.g. the return from participation minus the cost of participation). Consensus incentives are usually implemented through a blockchain’s native token, so the value of the reward is determined by the value of the token. In a correctly designed consensus protocol, participants should be rewarded according to their cost of participation. That cost can get fairly complex: referring to our framework, we would need to calculate the entry cost, the block production cost, the consensus cost, and the exit cost. Together these constitute the cost of participating in consensus. Consensus incentives affect both network security and decentralization, and are therefore a very important component in analyzing consensus algorithms.
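The utility function described above reduces to a simple calculation. This is a toy sketch with illustrative numbers; the four cost components mirror the framework’s parameters:

```python
# Toy utility function for a consensus participant: reward value minus the
# framework's four cost components. All figures below are illustrative.

def expected_profit(reward_tokens, token_price,
                    entry_cost, block_cost, consensus_cost, exit_cost):
    """Return from participation minus total cost of participation."""
    total_cost = entry_cost + block_cost + consensus_cost + exit_cost
    return reward_tokens * token_price - total_cost

# A rational participant keeps following the protocol only while this is
# non-negative; note how the reward's value hinges on the token price.
assert expected_profit(100, 2.0, 50, 40, 30, 20) == 60.0
assert expected_profit(100, 1.0, 50, 40, 30, 20) < 0
```

Because the reward side scales with the token price while the cost side is largely external (hardware, electricity, capital lockup), a falling token price can push honest participation below break-even, which is precisely why incentive design bears on security.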

Translated from:

