Emergent Consensus Simulations

Bitcoin Unlimited proposes to solve the issue of consensus on the network rules using a market-driven, distributed approach that is fundamentally based on Nakamoto Consensus. We call this approach Emergent Consensus. If you are unfamiliar with the details, there are excellent articles about the specific mechanism here, and a presentation here.

People who feel nervous about Emergent Consensus often question whether it will converge to a single blockchain, or whether persistent blockchain forks will be created in a network that is fully running the Emergent Consensus algorithm.

Since Emergent Consensus is fundamentally based on Nakamoto Consensus, its convergence is proven in the Bitcoin white paper (section 11).

However, simulation of the algorithm is useful for investigating specific outcomes under specific input conditions. A simulation also makes it possible to experiment with slightly different Emergent Consensus algorithms, whose details can be difficult to capture in a mathematical analysis, in order to find the ones that converge quickly.

I have created a simple simulation, available here. The code essentially allows you to create groups of simulated miners running with different excessive block size (EB), accept depth (AD), and block generation size parameters. It then simulates the creation of the blockchain. It is possible to change the miners' settings, so we can see what happens when some percentage of the hash power chooses to start mining larger blocks.

The power of a simulation is that we can run thousands of separate instances of the transition between small and large blocks.

This remains a work in progress, in the sense that it will be interesting to simulate many more mining configurations. This article reports results for two groups: one that wants to keep a lower block size, and another that is attempting to bring the network to a higher block size.

In all of the cases reported, the EB is set to 1MB for the small-block miners and 2MB for the large-block miners. The AD is set to 4, and each miner's block generation size is set to the same value as its EB.
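To make the mechanics concrete, here is a minimal Python sketch of one such run. It is not the simulator linked above: it models only the post-fork period, treats each block as a weighted coin flip between the two groups, has the large-block group follow plain longest-chain, and reads "accept depth" as "the small-block group adopts the large-block branch once that branch is AD blocks long and is the longest chain" (the reading that matches the no-fork sanity check later in this article). Its exact counts will therefore differ from the results reported below.

```python
import random
from collections import Counter

# A minimal sketch of a two-group Emergent Consensus transition.
# NOT the simulator referenced in this article; see the caveats above.

def run_once(big_frac, ad=4, n_blocks=1670, rng=random):
    """One simplified run of the 1 MB -> 2 MB transition.

    big_frac -- share of hash power generating 2 MB ("excessive") blocks
    ad       -- accept depth: the small-block group adopts the big-block
                branch once it is `ad` blocks long and is the longest chain
    Returns (fork_count, deepest_fork, orphaned_blocks).
    """
    big = small = 0              # branch lengths since the last common block
    forks = deepest = orphans = 0
    for _ in range(n_blocks):
        if rng.random() < big_frac:
            big += 1             # the big-block group wins the next block
            if big >= ad and big > small:
                # The small-block group accepts the excessive chain; any
                # blocks on its own branch are orphaned, and after this
                # point everyone extends the same chain (no more forks).
                if small:
                    forks += 1
                    deepest = max(deepest, small)
                    orphans += small
                return forks, deepest, orphans
        else:
            if big == 0:
                continue         # no contested branch: this small block
                                 # simply extends the common chain
            small += 1           # the small-block group extends its branch
            if small > big:
                # The small-block branch is now the longest chain, so the
                # big-block miners hop back onto it and their branch is
                # orphaned: one failed fork attempt.
                forks += 1
                deepest = max(deepest, big)
                orphans += big
                big = small = 0
    return forks, deepest, orphans   # transition not finished within the run


def experiment(big_frac, runs=1000, seed=1):
    """Repeat the transition `runs` times and summarize, X: Y style."""
    rng = random.Random(seed)
    results = [run_once(big_frac, rng=rng) for _ in range(runs)]
    hist = Counter(f for f, _, _ in results)
    print(f"{big_frac:.2f}/{1 - big_frac:.2f} split over {runs} runs")
    print("  max fork depth:", max(d for _, d, _ in results))
    print("  max orphans:   ", max(o for _, _, o in results))
    print("  forks (X: Y):  ", dict(sorted(hist.items())))


if __name__ == "__main__":
    for split in (0.75, 0.60, 0.50):
        experiment(split)
```

The experiment helper simply repeats the same transition many times and prints a fork histogram in the same X: Y format used for the results below.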

The Typical Case: 75%/25% split

I think that a 75% majority is one threshold being seriously considered as sufficient to initiate larger blocks in today’s “blocksize debate”. I ran 1000 tests, each simulating 6000 pre-fork seconds and 1,000,000 post-fork seconds, so we’d expect about 1670 (1,006,000 / 600) blocks in a typical run.

The longest fork was 4 blocks, and the average number of orphaned blocks was just 2, with a worst case of 8. In 334 runs we had no fork at all (this can happen if the 75% majority mines AD blocks before the 25% minority mines a single block), and in 559 runs we had just 1 fork. In 81 runs we had 2 forks, and we had 3, 4, and 6 forks in just 22, 3, and 1 cases respectively. These multi-fork situations are cases where the 25% minority gets repeatedly lucky and is able to out-mine the 75% majority for a few blocks, causing the majority to pop back onto the small-block chain before attempting the fork again.

The Minimum Case: 60/40 split

In the 60/40 split (again, 1000 runs of 1,006,000 seconds each) the longest fork was again 4 blocks, and only 3 blocks were orphaned on average, with a worst case of 15 orphaned blocks out of roughly 1800. But as you might expect with the reduced majority, there is a “long tail” of failed fork attempts. I’ll just include the raw data. You can read it as follows: X: Y means that Y runs produced X forks.

So in the data below, there was just 1 fork in 511 runs, and 5 forks in 24 runs.

0: 139, 1: 511, 2: 182, 3: 84, 4: 44, 5: 24, 6: 7, 7: 8, 8: 1
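As a quick check on that reading, the dictionary above can be tallied directly. The numbers in this snippet are simply copied from the 60/40 line:

```python
# Tally the 60/40 fork histogram above ({forks: runs}).
hist = {0: 139, 1: 511, 2: 182, 3: 84, 4: 44, 5: 24, 6: 7, 7: 8, 8: 1}
total_runs = sum(hist.values())                               # 1000
at_most_one_fork = hist[0] + hist[1]                          # 650 runs
avg_forks = sum(f * n for f, n in hist.items()) / total_runs  # 1.529
print(total_runs, at_most_one_fork, avg_forks)
```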

A Surprising Result: What if the network is split 50/50?

People who do not understand the Emergent Consensus algorithm typically assume that a 50/50 split will prevent the network from converging. These are the results of 1000 runs of the transition from 1MB to 2MB blocks:

max fork depth: 4
max orphans: 34
average orphans: 5
X:Y where Y runs had X forks:
{0: 61, 1: 392, 2: 211, 3: 118, 4: 75, 5: 54, 6: 29, 7: 19, 8: 21, 9: 8, 10: 3, 11: 6, 13: 2, 18: 1}

Even with a 50/50 split, the network maintains consensus!

In runs where ~2000 blocks were created, the maximum fork length was 4 blocks, and we had a worst-case run of 34 orphans.

But in 61 runs there were no forks at all, in nearly half of the runs (453 of 1000) there was at most 1 fork, and in about two thirds of the runs there were 2 or fewer.

Incidentally, the 61 runs with no forks are a good sanity check of the correctness of the simulation, since this case is easily analyzed mathematically. No fork means that 4 large blocks needed to be created in a row, and at a 50/50 split that is equivalent to flipping a coin 4 times and having it land heads each time. The probability of that is (1/2)⁴, or 1/16, so over 1000 runs we’d expect to see it happen approximately 62.5 times.
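The same arithmetic in code, for anyone who wants to try other accept depths or splits (0.5 here is the large-block group's share of hash power):

```python
# Probability of AD large blocks in a row at a 50/50 split, and the
# expected number of no-fork runs out of 1000.
ad = 4
p_big = 0.5
p_no_fork = p_big ** ad              # (1/2)**4 = 0.0625 = 1/16
expected_no_fork = 1000 * p_no_fork  # 62.5 (we observed 61)
print(p_no_fork, expected_no_fork)
```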

Appendix: Result Data

You may want to cut-paste this data into a wider text editor:

SPLIT (block height, max blocks, avg blocks):  max fork depth, max orphans, avg orphans, { X:Y where Y runs had X forks }
0.500000/0.500000 (1790, 1797, 1674): 4, 34, 5, {0: 61, 1: 392, 2: 211, 3: 118, 4: 75, 5: 54, 6: 29, 7: 19, 8: 21, 9: 8, 10: 3, 11: 6, 13: 2, 18: 1}
0.600000/0.400000 (1816, 1820, 1673): 4, 15, 3, {0: 139, 1: 511, 2: 182, 3: 84, 4: 44, 5: 24, 6: 7, 7: 8, 8: 1}
0.667000/0.333000 (1790, 1792, 1669): 4, 13, 2, {0: 195, 1: 563, 2: 157, 3: 53, 4: 25, 5: 3, 6: 2, 7: 2}
0.750000/0.250000 (1800, 1800, 1670): 4, 8, 2, {0: 334, 1: 559, 2: 81, 3: 22, 4: 3, 6: 1}
0.800000/0.200000 (1799, 1799, 1667): 4, 11, 1, {0: 422, 1: 517, 2: 52, 3: 8, 6: 1}
0.900000/0.100000 (1797, 1797, 1667): 4, 4, 1, {0: 625, 1: 357, 2: 17, 3: 1}
0.950000/0.050000 (1789, 1789, 1669): 3, 3, 1, {0: 829, 1: 169, 2: 2}
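If you prefer to process these lines rather than eyeball them, a small parser along these lines works for the exact layout used above (the function and field names are just mine, not part of the simulator):

```python
import ast
import re

def parse_result_line(line):
    """Split one appendix line into the split ratios, the three block
    counts, the three fork/orphan statistics, and the {forks: runs} dict."""
    head, hist_text = line.split("{", 1)
    hist = ast.literal_eval("{" + hist_text)
    nums = re.findall(r"[\d.]+", head)
    split = (float(nums[0]), float(nums[1]))
    blocks = tuple(int(float(n)) for n in nums[2:5])
    max_fork_depth, max_orphans, avg_orphans = (int(float(n)) for n in nums[5:8])
    return split, blocks, max_fork_depth, max_orphans, avg_orphans, hist

example = ("0.750000/0.250000 (1800, 1800, 1670): 4, 8, 2, "
           "{0: 334, 1: 559, 2: 81, 3: 22, 4: 3, 6: 1}")
print(parse_result_line(example))
```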