Which is better for Bitcoin’s scalability — a Bitcoin Unlimited hard-fork or a Bitcoin Core soft-fork?

Llew Claasen
The Bitcoin Roundup
Mar 23, 2017

The wisdom of crowds inherent in prediction markets, and more specifically the relative price of the Bitfinex chain split tokens (“CSTs”), suggests that right now (March 23) the market gives Bitcoin Core (BTC) about a 70% chance of success in the event of a chain split (0.70 BTC vs 0.32999 BTC per token). It would be easy to conclude from this that the arguments made by dissident miners and the team behind Bitcoin Unlimited have no merit, and that they should all just take their altcoin and fork off (hat tip to Samson Mow for that gem). But simplifying things to that extent would be a wasted opportunity to learn from this experience and move Bitcoin forward.
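
For the curious, here is the arithmetic behind that 70% figure as a minimal sketch: treating the two token prices as a simple two-outcome prediction market, each token's share of the combined price approximates the market's implied probability that the corresponding chain prevails.

```python
# Implied chain-split odds from the Bitfinex CST prices quoted above.
core_token_btc = 0.70000  # price of the token tracking Bitcoin Core (BTC)
bu_token_btc = 0.32999    # price of the token tracking Bitcoin Unlimited (BTC)

total = core_token_btc + bu_token_btc
p_core = core_token_btc / total
p_bu = bu_token_btc / total

print(f"Implied P(Core prevails): {p_core:.0%}")  # ~68%, i.e. roughly 70%
print(f"Implied P(BU prevails):   {p_bu:.0%}")    # ~32%
```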

How should Bitcoin scale? Increase the block size or use Bitcoin as a settlement layer & move transactions to layer two networks like Lightning? Is the real issue even about scaling?

I enjoyed a recent talk by Gavin Wells, head of Europe at Digital Asset Holdings. He described the company building blockchain-based financial infrastructure capable of 100 tps, and said that this throughput is sufficient for most financial systems, excluding payment networks like Visa or Mastercard. Visa claims a network throughput of 24,000 tps.

Some back-of-the-envelope calculations follow. Current Bitcoin network throughput is about 3 tps (based on ~2,000 transactions per block). If Bitcoin is to function as a settlement layer at 100 tps, it needs a maximum block size of around 32MB (the raw figure is 30MB). If SegWit is activated and we simplify SegWit’s impact on transaction sizes to an effective halving, a maximum block size of 16MB would let Bitcoin perform adequately in the long term as a settlement layer. Conversely, if we want Bitcoin to handle 24,000 tps on-chain to compete head-on with Visa, we need a maximum block size of no less than 8GB. To put that into context, assuming full blocks for simplicity, the blockchain would grow by about 1.2TB per day and each full node would need to store an additional ~420TB every year (assuming no pruning). A completely decentralized network of amateur nodes validating Bitcoin transactions becomes impossible at that scale. Massive node and mining centralization would follow, because you’d need serious gear and bandwidth to run a full node.
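
The arithmetic is easy to check. Here is a minimal sketch in Python; the constants are the simplified figures used above, not precise network parameters.

```python
# Back-of-the-envelope scaling arithmetic from the paragraph above.
# Simplifying assumptions, as in the text: ~2,000 transactions per 1MB
# block, one block every 10 minutes, and SegWit treated as an effective
# halving of transaction sizes.

TX_PER_MB = 2_000
BLOCK_INTERVAL_S = 600
BLOCKS_PER_DAY = 144  # 24h / 10min

def tps(block_size_mb: float) -> float:
    """Approximate on-chain throughput for a given maximum block size."""
    return block_size_mb * TX_PER_MB / BLOCK_INTERVAL_S

print(f"Today (1MB):  {tps(1):.1f} tps")              # ~3.3 tps

# Settlement-layer target of 100 tps
settlement_mb = 100 * BLOCK_INTERVAL_S / TX_PER_MB
print(f"100 tps needs {settlement_mb:.0f}MB blocks")   # 30MB -> round to 32MB
print(f"...or {settlement_mb / 2:.0f}MB with SegWit")  # ~16MB

# Visa-scale target of 24,000 tps
visa_gb = 24_000 * BLOCK_INTERVAL_S / TX_PER_MB / 1_000
print(f"24,000 tps needs ~{visa_gb:.1f}GB blocks")     # ~7.2GB -> 8GB

# Storage growth at 8GB blocks, assuming full blocks and no pruning
daily_tb = 8 * BLOCKS_PER_DAY / 1_000
print(f"{daily_tb:.1f}TB/day, ~{daily_tb * 365:.0f}TB/year")  # 1.2TB, ~420TB
```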

So, if we use the edge cases to guide us in thinking about a distributed and decentralized architecture for Bitcoin, then it’s difficult not to conclude that:

  • Layer two solutions must exist for Bitcoin to be a viable payment network, and
  • The network needs a block size of no greater than 16MB (assuming SegWit is activated, given the need for layer two already highlighted).

So what would happen if we (safely) hard forked to a future-proof 16MB block size today? A March 2016 Cornell study found that a 4MB block size would pose a relatively low risk of network centralization today. Conversely, activating SegWit without a “real” block size increase moves current network throughput to no more than 7 tps. That is not a significant increase in scale, and it is unsuitable even for a settlement layer. And for the sake of completeness, an emergent BU block size is probably unnecessary if we already know that there is little point in increasing the block size beyond 16MB.

So what we would want is a safe way to increase the block size from the current 1MB to 2MB, then, when average block usage reaches, say, 80% of 2MB, to safely increase it to 4MB, and so on. The tech behind that upgrade path doesn’t sound very complicated, even if the initial hard fork is risky.
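
A minimal sketch of that schedule, assuming the 80% trigger and doubling steps described above, capped at the 16MB ceiling argued for earlier (the names and trigger rule are illustrative, not a concrete proposal):

```python
# A staged block size schedule, as a sketch: double the limit whenever
# sustained average usage crosses 80% of the current limit, capped at 16MB.

CEILING_MB = 16
TRIGGER = 0.80  # fraction of the current limit that triggers the next step

def next_limit(current_limit_mb: int, avg_usage_mb: float) -> int:
    """Return the new block size limit after evaluating recent usage."""
    if avg_usage_mb >= TRIGGER * current_limit_mb:
        return min(current_limit_mb * 2, CEILING_MB)
    return current_limit_mb

limit = 1
for usage in [0.9, 1.7, 2.1, 3.3, 3.5]:  # hypothetical average block sizes, MB
    limit = next_limit(limit, usage)
    print(f"avg usage {usage}MB -> limit {limit}MB")
# The limit doubles to 2MB, then 4MB, then 8MB as usage crosses each 80% mark.
```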

The high-level technology requirement for Bitcoin to scale actually appears to be quite simple and clear. In fact, it’s so clear that it has hardly changed since December 2015, when a large number of Core contributors signed a statement on “Capacity increases for the Bitcoin system”. As it turns out, our scaling problem is not one of technology limitations, but one of mining centralization and communication.

The current issues with agreeing on how best to scale Bitcoin are not new. Frankly they’re tiresome, but let’s have a quick recap anyway.

Our story starts in 2014, when long-time Bitcoin code maintainer Gavin Andresen handed over Bitcoin Core maintainership to Wladimir van der Laan, saying that he wanted to write and review more code and get into other technical areas of Core. Then, in January 2016, after months of community wrangling over how to scale Bitcoin, Gavin submitted BIP109, ostensibly so that Bitcoin would proactively increase the block size to 2MB to meet upcoming network throughput requirements while a more technically appropriate long-term solution was found. SegWit was one of those long-term candidates, first proposed by Pieter Wuille at Scaling Bitcoin Hong Kong in December 2015. BIP109 was met with strong resistance from many in the community, including the Core team, and Gavin responded to the rejection by backing a new chain fork, Bitcoin Classic.

Shortly thereafter, in February 2016, a few Core contributors and miners got together in Hong Kong to discuss how to scale Bitcoin in the short term. At this meeting the “Bitcoin Roundtable Consensus” proposal was developed. Those present agreed to propose a hard-fork to reallocate SegWit’s maximum 4MB block size, in anticipation of Schnorr signatures reducing signature sizes, among other things. While the proposal did not represent everyone in the Bitcoin community, it was hoped that it would attract broader support. In particular, it was not an agreement “by Core”, because there is no such thing, only individual contributors to Core.

Finally, Gavin’s commit access was revoked in May 2016, immediately after he backed Craig Wright’s claim to be Satoshi Nakamoto, amid concerns that his account might have been compromised (as had happened to Jeff Garzik). F2Pool broke the Hong Kong agreement soon after by mining Classic blocks. Luke-Jr proposed code for the agreed 4MB block size hard-fork, but received little feedback from the mining community on whether to proceed, and work on the hard-fork stopped. No further Core contributors signed on to the Hong Kong agreement. For all intents and purposes, it was moot. Most Bitcoin Core contributors spent the rest of 2016 focusing instead on delivering the SegWit soft-fork and the effective ~2MB block size increase that it enables.

And that’s how the cookie crumbled.

Some miners claim that Core didn’t deliver the agreed block size increase hard-fork, delivering only the SegWit soft-fork instead, so they won’t signal for SegWit activation. The Core contributors are a group of very smart people, so this was unlikely to be an oversight. Was it a misunderstanding? Something else?

I’ve seen a lot of talk on Reddit and Bitcointalk suggesting that it makes absolutely no sense to increase the block size to 2MB and beyond. As I’ve shown above, maintaining the current block size would make no sense if we could assume no mining centralization. Conversely, many big blockers publicly advocate for all transactions taking place on-chain, claiming that layer two solutions are unnecessary for Bitcoin to scale. That would be just as irrational.

But we can’t assume no mining centralization.

The current scaling impasse is not a search for the optimal scaling architecture. It’s an issue of control and ultimately of economic interests.

The two biggest Bitcoin mining facilities announced in 2016 appear to be operating on the same hardware: a large Bitmain facility in Xinjiang, China, and a large MGT facility in Washington state, US. Considering that the current network hashrate is approximately 4EH, these two facilities running Bitmain Antminers would be capable of roughly 1EH and 10PH respectively in Q1 2017, together more than 25% of the current total Bitcoin network hashrate. This would likely be in addition to the ~27% of the network hashrate already contributed to AntPool (Bitmain’s official mining pool) and ViaBTC (a mining pool rumored to be backed by Bitmain). So that’s already more than 50% of the network hashrate, based on publicly available information.
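
To check the concentration claim, a quick sketch using the figures above (all of which are estimates from public information, not measured values):

```python
# Rough hashrate concentration arithmetic (1EH = 1,000PH).
network_eh = 4.0            # total network hashrate, ~4EH
xinjiang_eh = 1.0           # estimated Bitmain facility, Xinjiang, ~1EH
washington_eh = 10 / 1_000  # estimated MGT facility, Washington, ~10PH
pool_share = 0.27           # AntPool + ViaBTC share of network hashrate

facility_share = (xinjiang_eh + washington_eh) / network_eh
print(f"Two facilities alone: {facility_share:.1%}")            # ~25%
print(f"Plus the two pools:   {facility_share + pool_share:.1%}")  # >50%
```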

Yes, we have massive centralization in mining, both in hardware manufacturing (only three manufacturers remain: Bitmain, Canaan and Bitfury) and further along the value chain in the mining activity itself. A large portion of the network hashrate will soon be located in just two data centers, and the full capacity of those data centers is not even known or online yet. Is this perhaps the concentrated firepower that could be used to push a network hard-fork to BU?

Large-scale miners care a lot about transaction fees and especially, as Vinny Lingham recently reminded us, about how they can keep transaction fees relatively high. Right now, transaction fees per block are ~$1,650, almost 10% of the total miner reward. More importantly, over the longer term, transaction fees will make up a growing share of the miner reward as fewer bitcoins remain to be mined. Layer two solutions enabled by activating SegWit have the potential to cut significantly into the long-term profitability of these huge mining businesses.
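
For a sense of the fee arithmetic, a quick sketch; the BTC/USD price here is my assumption, backed out from the numbers above (the “almost 10%” figure implies something in the region of $1,200/BTC):

```python
# Fee share of the total miner reward, using the article's ~$1,650/block
# figure. The block subsidy at the time was 12.5 BTC; the price is assumed.

fees_usd = 1_650
subsidy_btc = 12.5
btc_usd = 1_200  # assumed price, implied by the article's figures

subsidy_usd = subsidy_btc * btc_usd
fee_share = fees_usd / (subsidy_usd + fees_usd)
print(f"Fees are {fee_share:.1%} of the total block reward")  # ~9.9%
```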

There is no solution to scaling if we don’t find a way to deal with the economic incentives for large scale miners to keep transaction fees high by keeping transactions on-chain.

BU has the potential to reduce transaction fees in the short term by clearing the mempool backlog, but it also hands the highly centralized miner network control over the block size, and ultimately the ability to increase it beyond the 16MB medium-term ceiling identified above, further centralizing an already centralized network.

Let me be clear: I haven’t seen enough evidence to suggest that anyone on either side of the scaling debate is “evil” or trying to “destroy Bitcoin”. That narrative is a product of the adversarial bias common in groups working on decentralized systems. Unfortunately, miner economic incentives are not aligned with what appears to be the optimal Bitcoin network upgrade path, and there is enough money at stake for rational actors to push for control. We must urgently decentralize mining or risk privatizing the Bitcoin network.
