Defining Criteria for Consensus Algorithms
In search of better alternatives to PoW and PoS
Ever since proof of work (PoW) was introduced in Satoshi Nakamoto's white paper, there has been an ongoing search for a better consensus mechanism for distributed ledgers, and most of the effort has been spent on trying different proof-of-stake (PoS) variants. Along the way, the definition of a “better consensus” gradually shifted. Initially it meant higher energy efficiency; later this criterion was joined by higher transactional throughput (scalability). A number of solutions have been proposed, from the basic PoS found in the original NXT to more performant variants such as delegated PoS (dPoS), found in Graphene-based blockchains (BitShares, Steem and, arguably, EOS).
These two solved the issue of energy efficiency (and, in the case of dPoS, scalability), but they immediately introduced a number of issues not found in the original Bitcoin PoW: new types of attacks possible against plain PoS, and centralisation in dPoS. Thus, when talking about different consensus algorithms, we need to define the criteria used to evaluate them.
Criteria for consensus algorithm efficiency
First, a proper consensus must be secure, i.e. able to tolerate multiple Byzantine faults (deviations from the protocol by some nodes). Such faults should not lead to double spending, enable “nothing at stake” attacks (both long- and short-range), allow manipulation of the randomness used to select the block-creation leader, and so on. All of this, according to the groundbreaking work on formal verification of consensus, can be mathematically reduced to some measurable probability of blockchain forks and the ability of the consensus mechanism to control that probability.
Second, a proper consensus must be sustainable over time, meaning that participating nodes have sufficient incentives to keep operating the protocol.
Third, the consensus incentives must correspond to a Nash equilibrium, meaning there should be no deviation from the protocol that yields a higher economic advantage than following the protocol, even in the presence of coalitions in the system (the refined, coalition-proof Nash equilibrium). This raises the bar so high that even Bitcoin itself cannot reach this level of consensus efficiency.
That said, practically none of the existing PoS protocols passes these criteria. Even if we lower the bar for the third one to “just a Nash equilibrium” (similar to Bitcoin), we still filter out “simple PoS”, dPoS, and all consensuses inheriting from DLS and PBFT-based protocols (like Tendermint, Polkadot, Casper, etc.), which cannot resist more than 1/3 of Byzantine-faulty nodes (while Bitcoin can resist up to 1/2). The only one left is the recently introduced Ouroboros, a provably secure PoS protocol which also surpasses Bitcoin PoW in terms of energy efficiency and transactional scalability.
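To make the 1/3 vs. 1/2 difference concrete, here is a small sketch (the helper name and its arguments are ours, purely for illustration) computing the largest number of Byzantine-faulty nodes a network of a given size can tolerate under each threshold:

```python
def max_faulty(n, threshold_num, threshold_den):
    """Largest f such that f/n < threshold_num/threshold_den (strict).

    BFT-family protocols (PBFT, Tendermint, ...) require f/n < 1/3;
    Bitcoin's longest-chain rule tolerates up to f/n < 1/2.
    """
    return (n * threshold_num - 1) // threshold_den


# In a 100-node network, a PBFT-style protocol tolerates 33 faulty nodes,
# while a Bitcoin-style one tolerates 49.
bft_limit = max_faulty(100, 1, 3)
nakamoto_limit = max_faulty(100, 1, 2)
```

So with the same 100 nodes, moving from a PBFT-style to a Nakamoto-style tolerance raises the attacker budget a protocol must survive from 33 to 49 faulty nodes.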
Decentralisation to rule them all
But all of these criteria (efficiency on one side, security and incentive thresholds on the other) are still not the final barrier. To understand why, we need to look at what we believe is the main value Bitcoin introduced in practice: censorship resistance. Its source is not only consensus liveness and Byzantine fault resistance but also the decentralised nature of the protocol: censors are (arguably) rational agents able to act contrary to economic incentives (which are what Nash equilibria analyse), and they may correspond to a significant part of the network (even a majority of nodes or users). Thus, censorship resistance requires not only all of the criteria mentioned above but also decentralised qualities of the network that the consensus protocol has to maintain over time. So the last criterion is the ability of a consensus to keep the network decentralised and to resist centralisation over long periods. And while Bitcoin is criticised for mining centralisation, that is nothing compared to the level of centralisation that any existing PoS brings.
Stakes of Centralisation
Why is PoS more prone to centralisation over time than even PoW with ASICs?
In general, centralisation comes from the simple fact that those who have money earn more money (through inflation/block rewards and transaction fees), while those who do not have money do not earn. This will always eventually create an oligopoly. The common counterargument is “you can buy hashing power (for PoW) or stake (for PoS) and become a new independent mining/minting actor”, but the argument holds only while the cryptocurrency's market cap is low and liquidity is abundant. Try to buy a Dash masternode (masternodes require a stake) these days, and Dash is not even as popular as Bitcoin.
So both PoW and PoS tend to centralise, but PoS does so faster (again, look at Dash, which mines using a mixed PoW+PoS model). This happens mainly because PoW requires ongoing spending on hardware and electricity, while the spending needed to maintain PoS minting is much lower. Remember the “energy efficiency” praise of PoS? Here is the other side of that coin: faster centralisation. PoW miners have to keep updating hardware, spending most of their rewards, while PoS does not require that on a comparable scale.
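The rich-get-richer dynamic can be seen in a toy simulation (all parameters here, such as the reinvestment fractions and reward size, are made-up illustrations, not measurements of any real chain). Each round, one actor wins a block reward with probability proportional to its current stake; in the PoS-like case the whole reward compounds into stake, while in the PoW-like case most of it is burned on hardware and electricity:

```python
import random

def simulate(rounds, stakes, reinvest_fraction, reward=1.0, seed=0):
    """Each round a winner is drawn proportionally to stake;
    `reinvest_fraction` of the reward compounds into the winner's stake
    (the remainder models hardware/electricity costs that do not compound)."""
    rng = random.Random(seed)
    stakes = list(stakes)  # work on a copy
    for _ in range(rounds):
        winner = rng.choices(range(len(stakes)), weights=stakes)[0]
        stakes[winner] += reward * reinvest_fraction
    return stakes

def top_share(stakes):
    """Fraction of total stake held by the single largest holder."""
    return max(stakes) / sum(stakes)

start = [10.0] * 10                                       # ten equal actors
pos = simulate(20_000, start, reinvest_fraction=1.0)      # PoS: full compounding
pow_like = simulate(20_000, start, reinvest_fraction=0.2) # PoW: 80% spent on costs

print(round(top_share(pos), 3), round(top_share(pow_like), 3))
```

In this model both processes drift away from the initial equal split, but the compounding (PoS-like) case reinforces early winners more strongly per unit of reward, which is exactly the faster-centralisation argument above.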
Ouroboros? No. Hybrid.
So is there any ray of hope for the world of crypto? Or are we all going to end up with more and more centralised cryptocurrencies? Here it is: hybrid models.
At Pandora Foundation we have spent a lot of research effort on how to overcome the limitations of modern PoW and PoS consensuses. PoW has to spend energy to maintain a decentralised state, but spends it on useless work (which is still required to generate trustless randomness for selecting the block leader). PoS gives higher throughput, but lower Byzantine tolerance and a stronger trend towards centralisation of coins. What if we cross-bred them? Separating randomness generation from both stake and work. Rewarding not stake but useful work. Scaling through different layers of the blockchain architecture.
Divide and Conquer
This works. Meet Prometheus: the first hybrid consensus algorithm with two levels of consensus and three scalability layers:
- State channels (as in the Lightning Network) run parallel computation of useful (paid-for) jobs (such as AI workloads), with nodes putting up a stake to prove the correctness of their results. These nodes get mining rewards, but must maintain hardware and pay for electricity (so no centralisation). And since these nodes (miners) do not produce blocks, they impose no scaling limitations.
- Other nodes perform arbitration for failed jobs and earn reputation: an inalienable asset that cannot be sold or transferred (but can be lost).
- In the sidechains and the root chain, these high-reputation nodes sign blocks and earn block rewards. But these rewards are paid for reputation, not in any proportion to the stake/coins they own! Thus, no centralisation happens here either. This “proof of reputation” can be thought of as a form of PoS, but it lacks PoS's main drawback (stake centralisation) while keeping its positive sides (throughput, energy efficiency, speed, etc.). And it has much higher Byzantine fault tolerance, because most of the nodes monitor blocks and, according to the protocol, can prove faults of block producers, leaving the offenders without reputation and stakes and taking their place instead.
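Since the Prometheus white paper is not yet published, here is only a hypothetical sketch of what reputation-weighted block-signer selection could look like; every name, data shape, and the fault-handling rule below are our illustrative assumptions, not the actual protocol:

```python
import hashlib
import random

def select_leader(nodes, seed):
    """Pick a block signer with probability proportional to reputation
    (not stake), using a shared random seed so all honest nodes agree."""
    rng = random.Random(hashlib.sha256(seed).digest())
    names = sorted(nodes)  # deterministic ordering across nodes
    weights = [nodes[n]["reputation"] for n in names]
    return rng.choices(names, weights=weights)[0]

def report_fault(nodes, offender):
    """A proven fault wipes out the offender's reputation; since reputation
    cannot be bought or transferred, coins alone cannot win the seat back."""
    nodes[offender]["reputation"] = 0.0

# Hypothetical network: bob is the richest actor but has little reputation,
# so his coins give him almost no influence over block production.
nodes = {
    "alice": {"reputation": 50.0, "stake": 10.0},
    "bob":   {"reputation": 5.0,  "stake": 900.0},
    "carol": {"reputation": 45.0, "stake": 20.0},
}

leader = select_leader(nodes, b"block-42")
```

The key design point this sketch tries to capture is that the selection weight is decoupled from coin ownership, so accumulating stake does not translate into accumulating block rewards.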
So, by dividing and recombining, we get a new level of both decentralisation and scalability. A win-win!
P.S. In the near future the Prometheus white paper will be released to the public. Stay tuned!