Lately there has been a great deal of debate around increasing the maximum block size allowed by the Bitcoin protocol, since we’re approaching the current 1 MB limit. While there is no consensus about whether or not we should increase the limit, it’s clear that doing so will not actually make Bitcoin more scalable; it will simply increase the load that the protocol allows the network to generate.
There are a multitude of proposals to scale aspects of Bitcoin:
- Backbone Network
- Invertible Bloom Lookup Tables
- Payment Channels / Lightning Networks
- UTXO Commitments
We can also optimistically hope that some of the “laws” of computer science, such as Moore’s Law for computation and Nielsen’s Law for bandwidth, will ease the burden of running Bitcoin’s infrastructure.
However, while I was initially optimistic that one or more of the above factors might end up solving Bitcoin’s scalability problems, I’ve come to realize that they still require increasing the block size limit on the Bitcoin block chain. The problem is that even with payment channels and sidechains, you still need to post Bitcoin transactions: far fewer of them, but some are still required. Thankfully, the Backbone Network, IBLT, and UTXO commitments actually make it easier for the network to support larger block sizes.
If we choose to scale via payment channels, while they allow users to effectively conduct hundreds or thousands of transactions at little cost, the channel must eventually be closed and posted as a Bitcoin transaction. Tom Harding noted that in order for payment channels to solve Bitcoin’s scalability problem without increasing the block limit, they would have to somehow consolidate global transactions at a ratio of 25,000:1 — this is well outside the range of even my own optimism.
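To get a feel for what a 25,000:1 consolidation ratio implies, here is a back-of-the-envelope sketch. The block size, average transaction size, and block interval below are my own assumptions for illustration, not figures from Tom Harding’s analysis:

```python
# Back-of-the-envelope estimate of on-chain capacity (assumed parameters):
BLOCK_SIZE_BYTES = 1_000_000   # current 1 MB limit
AVG_TX_BYTES = 500             # assumed average transaction size
BLOCK_INTERVAL_S = 600         # ~10 minute target block interval

# Transactions per second the chain can settle under these assumptions
onchain_tps = BLOCK_SIZE_BYTES / AVG_TX_BYTES / BLOCK_INTERVAL_S

# At a 25,000:1 consolidation ratio, each on-chain transaction would have
# to stand in for 25,000 off-chain payments.
CONSOLIDATION_RATIO = 25_000
implied_global_tps = onchain_tps * CONSOLIDATION_RATIO

print(f"on-chain capacity: {onchain_tps:.1f} tx/s")
print(f"implied global volume at 25,000:1: {implied_global_tps:,.0f} tx/s")
```

Under these assumptions the chain settles roughly 3 transactions per second, so a 25,000:1 ratio corresponds to tens of thousands of payments per second happening off-chain for every one settled on-chain.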
If we choose to scale via sidechains, we could create alternate block chains that support much larger blocks and more transactions per second. However, there will still be a limitation due to the pegging process between the Bitcoin block chain and a sidechain’s assets. Because transferring an asset between sidechains requires a cryptographic proof to be published to lock and unlock assets on a chain, this requires posting a Bitcoin transaction. Thus, the maximum block size will limit the rate at which assets can be transferred between chains. This will likely result in centralized off-chain exchanges remaining a much more popular way to exchange crypto assets, which is risky both from a custodial perspective and from a privacy perspective. Once again we encounter trade-offs of convenience, privacy, and security.
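To see how the peg becomes a bottleneck, consider an illustrative ceiling on cross-chain transfers. Every number below is a hypothetical assumption (peg transaction sizes and the share of block space spent on pegs are not specified by any sidechain proposal I’m aware of):

```python
# Illustrative ceiling on cross-chain peg transfers (all numbers assumed):
BLOCK_SIZE_BYTES = 1_000_000   # 1 MB block size limit
BLOCK_INTERVAL_S = 600         # ~10 minute block interval
PEG_TX_BYTES = 2_000           # assumed size of a peg tx carrying an SPV proof
PEG_SHARE = 0.10               # assumed fraction of block space spent on pegs

blocks_per_day = 86_400 / BLOCK_INTERVAL_S
pegs_per_block = BLOCK_SIZE_BYTES * PEG_SHARE / PEG_TX_BYTES
pegs_per_day = pegs_per_block * blocks_per_day

print(f"max peg transfers per day under these assumptions: {pegs_per_day:,.0f}")
```

A few thousand transfers per day, globally, is a hard ceiling that no amount of sidechain throughput can raise, which is why convenient centralized exchanges would likely remain attractive despite their custodial and privacy risks.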
My colleague Ryan X. Charles has calculated that we may very well need to eventually support 10 GB blocks if Bitcoin is to become a global settlement network, even if most everyday trivial transactions occur off-chain or on a chain other than Bitcoin. While this may sound outrageous to many people, consider that only 20 years ago, downloading 10 GB over a residential connection would have taken me two weeks, while today it takes me 30 minutes. And as Tom and Ryan note, we may be able to truly scale the Bitcoin block chain by implementing new types of nodes that are only responsible for storing and serving slices of the block chain.
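That download-time comparison checks out with rough link-speed assumptions: a 56k modem for the residential connection of 20 years ago, and an assumed ~50 Mbps line today:

```python
# Rough download-time comparison for a single 10 GB block.
BLOCK_BYTES = 10 * 10**9
bits = BLOCK_BYTES * 8

modem_bps = 56_000          # mid-1990s 56k dial-up modem
modern_bps = 50 * 10**6     # assumed ~50 Mbps residential line

modem_days = bits / modem_bps / 86_400
modern_minutes = bits / modern_bps / 60

print(f"56k modem: ~{modem_days:.0f} days")
print(f"50 Mbps:   ~{modern_minutes:.0f} minutes")
```

This lands at roughly two and a half weeks then versus under half an hour now, consistent with the figures above.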
While Bitcoin is an amazingly decentralized system and there are a number of excellent proposals to scale it in a decentralized manner, it is becoming apparent to me that in a world of many block chains, the value is still centralized in Bitcoin. Any solution that still relies upon the value and security of Bitcoin’s block chain will inevitably create more demand for bitcoins, which will result in more bitcoin transactions, which necessitates larger blocks. Thus, while increasing the maximum block size is not a scalability solution in and of itself, the proposed scalability solutions cannot succeed without it.