Examining Bitfury’s Scaling Research

Jonald Fyookball
Published in Keeping Stock · Jul 14, 2017

The Bitfury Group published a paper in September 2015 called “Block Size Increase”.

That paper discusses the arguments for and against increasing the block size and concludes that "in order for the Bitcoin ecosystem to continue developing, the maximum block size needs to be increased."

However, despite this appearance of neutrality, there seems to be some bias in their analysis. I don't mean their conclusion is incorrect; of course we need to increase the blocks. I mean their tone is pessimistic toward on-chain scaling.

The big problem is this table on page 4:

Look at the last 2 rows.

Where did these "exclusions" come from? Only several pages later does the paper mention that those exclusion percentages are estimates. Is that misleading? I think so.

We also include an estimate of how many existing nodes would be excluded from the network in the next 6 months. While the immediate drop of the number of nodes is mainly related to RAM and CPU usage…

No thorough explanation is given for how those estimates were produced; they appear to be the authors' own subjective guesstimate.

But, if we interpret Bitfury’s statement to mean that RAM is the primary constraint, then I would agree. This is also in agreement with the Bitcoin wiki:

The primary limiting factor in Bitcoin’s performance is disk seeks once the unspent transaction output set stops fitting in memory.

…and is also in agreement with the analysis of Gavin Andresen.

RAM and the UTXO

The number of excluded nodes should be based primarily on the key bottleneck constraint (RAM), not on 10 different factors. By lumping all these factors together, the paper clouds the issues.

There is no explanation given for how the "RAM usage, GB" row is calculated. We know that the UTXO set grows over time (even with 1 MB blocks), but there is no reason why it would immediately increase at all (let alone to 32 GB) if we forked to 8 MB blocks today.

Based on that, I fail to see how increasing the blocks makes any nodes “immediately excluded”.
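
To make the point concrete, here is a minimal back-of-envelope sketch (my own, not from the paper) of how UTXO-set RAM usage could be estimated. Every number in it is an illustrative assumption; the takeaway is that the result depends on how many unspent outputs exist and how they are stored, not on the block size cap.

```python
# Back-of-envelope sketch: rough in-memory size of the UTXO set.
# The figures below are illustrative assumptions, not measured values.

def utxo_ram_gb(num_utxos: int, bytes_per_entry: int) -> float:
    """Approximate RAM needed to hold the UTXO set, in gigabytes."""
    return num_utxos * bytes_per_entry / 1e9

# Assumed: ~50 million unspent outputs, ~100 bytes of in-memory
# overhead per entry (keys, values, and data-structure overhead).
print(utxo_ram_gb(50_000_000, 100))  # roughly 5 GB
```

Raising the block size cap does not by itself add a single entry to this set; the set only grows as new outputs are actually created faster than old ones are spent.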

Genuine research into RAM and UTXO would examine the issues that Gavin brought to light:

“…if you don’t store the entire UTXO set in DRAM, then it will take you longer to validate blocks. How much longer depends on how much memory you’re dedicating to the UTXO cache, the speed of your SSD or hard drive, and how well a new block’s transactions match typical transaction patterns.”
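
As a rough illustration of the trade-off Gavin describes, here is a toy model (my own sketch, not his analysis) of the extra validation time caused by UTXO lookups that miss the in-memory cache and fall back to disk. Every parameter is an assumption chosen only to show how the cache hit rate and drive speed dominate the outcome.

```python
# Toy model: extra block-validation time from UTXO cache misses.
# All parameters are illustrative assumptions.

def extra_validation_seconds(inputs_per_block: int,
                             cache_hit_rate: float,
                             disk_lookup_seconds: float) -> float:
    """Added validation time from lookups that must go to disk."""
    misses = inputs_per_block * (1.0 - cache_hit_rate)
    return misses * disk_lookup_seconds

# Assumed: a large block whose transactions reference ~40,000 inputs.
scenarios = [
    ("big cache + SSD",   0.99, 0.0001),  # ~0.1 ms per lookup
    ("small cache + SSD", 0.90, 0.0001),
    ("small cache + HDD", 0.90, 0.010),   # ~10 ms per seek
]
for name, hit_rate, disk in scenarios:
    print(name, extra_validation_seconds(40_000, hit_rate, disk), "s")
```

Under these assumptions the penalty ranges from a fraction of a second with a warm cache and an SSD to tens of seconds with a cold cache on a spinning disk, which is exactly why the cache size, drive speed, and transaction patterns Gavin lists are the variables worth measuring.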

Such research would also investigate hardware developments and the price trends of that hardware. This would give a much more accurate assessment of what percentage of nodes would be excluded after 6 months than the guesstimates in the paper.

Lastly, why did the authors decide to include a lengthy five-page "Appendix A" about BIP100 voting, complete with impressive-looking calculations and graphs of quadratic functions?

Is this just another distraction from the real issues while attempting to look smart? Probably.
