Why We Love Nakamoto Consensus

Jane Wu
Published in Nervos Network
Feb 19, 2019

and how to break its throughput limit.

This article is part 1 of “The Making of a Consensus Protocol” series, which includes three articles:

1. Why we love Nakamoto Consensus (NC) and how to increase its throughput

2. Why we don’t choose alternative consensus protocols

3. A detailed look into the design of NC-Max, a variant of Nakamoto Consensus with higher throughput

Some background before you start:

This article is based on the transcript of Nervos researcher Ren Zhang’s talk at the “Scaling Bitcoin” meetup in San Francisco; check here for a video recap of his talk.

Ren is currently a researcher at Nervos (mainly working on the Nervos consensus protocol) and a PhD student of Bart Preneel (co-designer of RIPEMD-160, the hash function used to compute a Bitcoin address from a Bitcoin public key) at the COSIC research group at KU Leuven (the birthplace of AES, the Advanced Encryption Standard used in all of your cell phones). In 2017, after conducting research that uncovered design flaws in the scaling proposal Bitcoin Unlimited, Ren was invited to intern at Blockstream, where he worked with Pieter Wuille and Gregory Maxwell. Recently, Ren’s paper “Lay Down the Common Metrics: Evaluating Proof-of-Work Consensus Protocols’ Security” was accepted at IEEE S&P (Oakland).

Why do we love Bitcoin’s Nakamoto Consensus?

We like it for its performance optimization: it is optimized to save every bit of transmission and every cycle of computation. For example, it uses compact blocks to accelerate block propagation, and it will hopefully use minisketch in the future to save bandwidth for transaction propagation. Bitcoin developers have also proposed Graftroot, which you can think of as a way to compress Bitcoin’s smart contracts for better privacy and better performance. And hopefully, we will see signature aggregation adopted in Bitcoin as well.

We also like it for its generalizability. The UTXO model plus a global order of transactions supports all sharding and layer 2 solutions as well as complicated smart contracts. In contrast, Ethereum’s account model is difficult to shard, and if you don’t have a global order of transactions, as in many DAGs, it’s difficult to support complicated smart contracts.

We like it for its security. Empirically, it was launched 10 years ago and has survived many attacks. Formally, no known PoW protocol comprehensively outperforms NC (see Ren’s talk at the Stanford Blockchain Conference 2019 for more details, starting at 1:35:00).

How can we increase the throughput of NC without sacrificing security?

As much as we love NC, it is not perfect: NC has a natural throughput limit. There are two things you can do if you want to raise its throughput. The first is to increase the block size, as in Bitcoin Unlimited and Bitcoin Cash; the second is to lower the block interval. But the orphan rate will increase if you do so, and as the orphan rate increases, the security of the system goes down and the throughput goes down.
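To get a feel for the second option, here is a rough back-of-the-envelope sketch in Python (mine, not from the talk): if blocks take about d seconds to propagate to the rest of the network and the block interval is T, a common approximation puts the orphan rate around 1 - exp(-d/T). The propagation delay below is an illustrative assumption, not a measured value.

```python
import math

def orphan_rate(propagation_delay_s: float, block_interval_s: float) -> float:
    """Approximate fraction of blocks that end up orphaned."""
    return 1 - math.exp(-propagation_delay_s / block_interval_s)

delay = 10.0  # assumed average block propagation delay, in seconds
for interval in (600, 150, 60, 15):  # 10 min, 2.5 min, 1 min, 15 s
    print(f"block interval {interval:>3}s -> orphan rate ~{orphan_rate(delay, interval):.1%}")
```

With a 10-second delay, shrinking the interval from 10 minutes to 15 seconds pushes the approximate orphan rate from under 2% to nearly 50%, which is why simply lowering the interval is not a free lunch.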

Ren proposed two changes to NC for better throughput without sacrificing security:

Change 1: the “manual” throughput limit

The NC protocol prescribes a manual throughput limit of 4 MB every 10 minutes. Although the bandwidth of Bitcoin’s public nodes has increased significantly over the past several years, there is no way for the protocol to adjust itself to take advantage of that increased bandwidth.

As indicated in this study, Bitcoin’s IPv4 nodes, which were connected to the network with a median bandwidth of 33 Mbit/s in 2016, had a median bandwidth of 56 Mbit/s as of February 2017.
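To put those two numbers side by side, here is a quick calculation (my arithmetic, not Ren’s): 4 MB every 10 minutes works out to roughly 0.05 Mbit/s, which is about 0.1% of the 56 Mbit/s median bandwidth cited above.

```python
limit_mbit_per_s = 4 * 8 / 600        # 4 MB per 600 s, expressed in Mbit/s
median_bandwidth_mbit_per_s = 56      # February 2017 figure cited above

print(f"protocol limit  : {limit_mbit_per_s:.3f} Mbit/s")
print(f"median bandwidth: {median_bandwidth_mbit_per_s} Mbit/s")
print(f"utilization     : {limit_mbit_per_s / median_bandwidth_mbit_per_s:.2%}")
```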

A natural question would be: can the protocol itself dynamically adjust the throughput?

A bad attempt at this approach is Bitcoin Unlimited. It hoped to increase Bitcoin’s throughput dynamically, but it failed, introducing several new attacks that weaken the security of the protocol. In 2017, Ren conducted a study that found Bitcoin Unlimited’s design flaws.

Change 2: the incentive issue, namely the selfish mining attack

In this attack, the selfish miner withholds its newly discovered blocks, hoping to gain a larger lead over the public network. When another miner finds a block, the selfish miner broadcasts its secret block to the network, hoping that the secret block will reach most of the compliant miners before the competing block does. If the attacker is lucky and the next block is mined on top of the attacker’s block, the honest block is orphaned. If the attacker is lucky enough to find more than one block in a row, it can safely invalidate an honest miner’s block, because the attacker has a longer chain.
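As a rough sketch of the strategy just described, here is the attacker’s decision rule in Python-style pseudocode (my paraphrase; the `state` object and its methods are hypothetical, and the full strategy analyzed by Eyal and Sirer handles a few more cases, such as releasing a long private lead one block at a time):

```python
def on_attacker_block_found(state):
    # Withhold the new block to build a secret lead over the public chain.
    state.private_lead += 1

def on_honest_block_found(state):
    if state.private_lead == 0:
        # Nothing withheld: the honest block simply extends the chain.
        state.adopt_public_block()
    elif state.private_lead == 1:
        # Race: broadcast the secret block, hoping it reaches most compliant
        # miners before the competing honest block does.
        state.publish_private_chain()
        state.private_lead = 0
    else:
        # Two or more secret blocks: the private chain is strictly longer,
        # so publishing it safely orphans the honest miner's block.
        state.publish_private_chain()
        state.private_lead = 0
```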

Why is selfish mining profitable? Take a concrete example: the attacker has 30% of the mining power, and honest miners control 70%. Without the attack, the attacker finds 3 blocks in 10 and the honest miners find 7. With the attack, the attacker finds 3 blocks in 7 and the honest miners find only 4 in 7, because 3 of their blocks are orphaned, and the main chain grows more slowly.

In the next epoch, the next difficulty adjustment period, the difficulty drops, and the attacker gets more rewards with the same mining power. That is why selfish mining is profitable. Selfish mining is not profitable in the first epoch of the attack; it only becomes profitable after the difficulty adjustment, and the difficulty drops precisely because the main chain grew more slowly in the previous epoch.
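Here is the same 30% / 70% example as a small worked calculation (my framing, simplified: it assumes none of the attacker’s own blocks are orphaned, as in the example above):

```python
alpha = 0.30                    # attacker's share of mining power

# Without the attack: in the time it takes to find 10 blocks, all 10 reach
# the main chain, and the attacker earns 3 of them.
reward_rate_without_attack = 10 * alpha            # 3 blocks per "epoch-time"

# With the attack: 10 blocks are still found in that time, but 3 honest
# blocks are orphaned, so the main chain only grows by 7 blocks,
# and the attacker owns 3 of those 7.
attacker_chain_share = 3 / 7                       # ~42.9% of the main chain

# After the difficulty adjustment the main chain again grows at the target
# rate of 10 blocks per epoch-time, and the attacker keeps ~42.9% of them.
reward_rate_after_adjustment = attacker_chain_share * 10

print(f"attacker's main-chain share during the attack: {attacker_chain_share:.1%}")
print(f"reward rate without attack  : {reward_rate_without_attack:.2f} blocks per epoch-time")
print(f"reward rate after adjustment: {reward_rate_after_adjustment:.2f} blocks per epoch-time")
```

Before the adjustment the attacker still earns only 3 main-chain blocks per epoch-time, so the gain only appears once the difficulty drops.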

In the next articles, I will share more details on why we don’t choose alternative consensus protocols like PoS and DAGs, and on the changes Ren proposed to make to NC. Please stay tuned for “The Making of a Consensus Protocol” series (coming soon!):

Why we don’t choose alternative consensus protocols (which claim to have higher throughput)

NC-Max, a variant of Nakamoto Consensus with higher throughput
