The most savage controversies are those about matters as to which there is no good evidence either way. - Bertrand Russell
Bitcoin has come a long way in seven+ years. Being open source software, it lends itself to communal comments and debate. And while we are confident that scaling the Bitcoin network will be accomplished, we would like to set forth some ideas to promote further discussion about alternative ways to scale the network from an engineering perspective.
The conversation around scaling bitcoin has focused on this seemingly innocuous piece of code, and a single number contained therein:
static const unsigned int MAX_BLOCK_SIZE = 1000000;
The ongoing and contentious debate over this single number is a huge distraction from more serious engineering challenges facing Bitcoin. From our perspective, there are three major challenges on which the community should focus its efforts to realize Bitcoin’s full potential and make it scale to everything we know it can be.
Some of the more elaborate solutions such as voting on block size (BIP105), dynamically changing block sizes (BIP101, BIP106), etc. miss the point. The question “what is the best block size limit” has no right answer. It is an artifact of a limited design using a linear data structure. The right answer is to re-engineer the system to eliminate the problem altogether.
The question: “what is the best block size limit” has no right answer.
Below we present three challenges, along with a few ideas on how to attack them. We are not presenting complete proposals or solutions, but rather points for further discussion. In fact, we welcome constructive comments, objections and criticisms. We are quite certain that the proposed solutions contain flaws, some of which we are well aware and others which we will discover in time. But these are the topics we believe the community should be focused on, because arguing over a single number is neither software engineering nor a productive use of the community’s intellectual capital.
The right answer is to re-engineer the system to not have the [block size] problem in the first place.
(1) Move from a chain to a more sophisticated data structure
The linked-list-like block “chain” is not the only data structure into which transactions can be placed. The block-size debate is really a consequence of shoe-horning transactions into this linear structure. If two blocks could be mined at the same time and placed into a tree or Directed Acyclic Graph (DAG) as parallel nodes at the same height without conflicting, both block size and block time could disappear as parameters altogether (ending the tiresome debate). “Directed Acyclic Graph” is a mouthful, so we prefer the term “block-braid.”
In particular, imagine “mini-blocks” which form a block-braid and can:
- reference multiple parents (this alone creates a DAG/braid);
- permit two mini-blocks to contain the same transaction; and
- drastically increase the mini-block rate relative to the current Bitcoin block rate.
By modifying how we evaluate the amount of work in a sub-tree, nodes may be able to decide their target difficulty individually, and thereby use difficulty to moderate the bandwidth they consume.
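To make the idea concrete, here is a minimal Python sketch (all names hypothetical, not a protocol specification) of mini-blocks that reference multiple parents, may share a transaction, and whose cumulative work is computed by counting each ancestor exactly once:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MiniBlock:
    name: str
    work: int                      # work contributed by this mini-block's PoW
    parents: tuple = ()            # multiple parents make the structure a DAG
    txs: frozenset = frozenset()   # two sibling mini-blocks MAY share a transaction

def cumulative_work(tip: MiniBlock) -> int:
    """Total work in the sub-DAG reachable from `tip`.

    Because a mini-block can be reached via several parent paths,
    each ancestor must be counted exactly once."""
    seen, stack, total = set(), [tip], 0
    while stack:
        b = stack.pop()
        if b.name in seen:
            continue
        seen.add(b.name)
        total += b.work
        stack.extend(b.parents)
    return total

# genesis, two parallel mini-blocks at the same "height", and a merge point
g = MiniBlock("g", work=1)
a = MiniBlock("a", work=1, parents=(g,), txs=frozenset({"tx1"}))
b = MiniBlock("b", work=1, parents=(g,), txs=frozenset({"tx1", "tx2"}))  # shares tx1 with a
m = MiniBlock("m", work=1, parents=(a, b))  # references multiple parents

print(cumulative_work(m))  # g is counted once, not twice -> 4
```

Note that `a` and `b` sit in parallel without conflicting even though both contain `tx1`; a consensus rule would only need to ensure the transaction is spent once when the braid is flattened.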
Here we like the analogy Braids : Blockchains :: Git : Subversion. Git’s history is a DAG while Subversion’s is linear, yet git can still commit back to svn via the git-svn module and its dcommit command. Likewise, one could always bundle a bunch of mini-blocks into a regular block, for the purposes of maintaining continuity with non-upgraded nodes, allocating block rewards, and supporting SPV proofs.
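As a rough illustration of that “dcommit” step, the following Python sketch (hypothetical names; a simple hash-based Merkle tree stands in for real Bitcoin serialization) flattens a set of mini-blocks into one legacy block, de-duplicating shared transactions and computing a Merkle root for SPV-style proofs:

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(txids):
    """Merkle root over a list of txids, duplicating the last element on odd levels."""
    level = list(txids)
    if not level:
        return sha256d(b"")
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [sha256d(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def bundle(mini_blocks):
    """Flatten mini-blocks into one legacy block: dedupe shared txs, preserve order."""
    seen, txs = set(), []
    for mb in mini_blocks:
        for tx in mb:
            if tx not in seen:
                seen.add(tx)
                txs.append(tx)
    return txs, merkle_root([sha256d(tx.encode()) for tx in txs])

# tx1 appears in both mini-blocks but is committed only once
txs, root = bundle([["tx1"], ["tx1", "tx2"]])
print(txs)  # ['tx1', 'tx2']
```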
Although Peter Todd’s idea is slightly different, his “tree-chain” work is in a similar direction. Additionally, the GHOST protocol, adopted by Ethereum, is a related idea that allows Ethereum to achieve sub-30s block times. Another step in the right direction is further work by the GHOST authors, called Inclusive Block Chain Protocols.
(2) Move mining to the edges of the network (PoW for the p2p relay layer)
For some time now, mining centralization has been a concern for the stability of the network. Part of the solution is to increase the block rate (or mini-block rate with a braid structure). With more blocks, more participants have a chance to win a block and won’t have to congregate into pools.
Beyond that, the original vision of Adam Back’s Hashcash is not realized in Bitcoin, but it could be. Currently, Bitcoin relays transactions over its p2p network free for the asking, and adds a pile of sketchy heuristics to avoid DDoS attacks. Requiring proof-of-work at the point of transaction submission moves (some) mining to the edges of the network, possibly diversifying the miner pool and allowing the network to combat DDoS attacks by raising the difficulty threshold on incoming p2p transactions.
If Bitcoin mining ASICs were widely distributed and placed in consumer devices, one can imagine every transaction being mined on submission. Submitting a transaction might involve mining it for a few seconds, instead of or in addition to paying a transaction fee; alternatively, wallets and services could delegate to an agent that mines insufficiently-mined incoming transactions on their customers’ behalf.
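As a toy illustration of relay-layer proof-of-work (not actual Bitcoin code; difficulty is simplified to a count of leading zero bits), a relaying node could demand that every submitted transaction carry a nonce meeting a threshold it sets, and raise that threshold under load:

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def meets_threshold(tx: bytes, nonce: int, difficulty_bits: int) -> bool:
    """True if sha256d(tx || nonce) has at least `difficulty_bits` leading zero bits."""
    h = sha256d(tx + nonce.to_bytes(8, "little"))
    return int.from_bytes(h, "big") < (1 << (256 - difficulty_bits))

def mine_transaction(tx: bytes, difficulty_bits: int) -> int:
    """Grind a nonce until the transaction meets the relay node's threshold.

    At 16 bits this takes ~65k hashes on average: a few seconds' work for
    a wallet, but expensive enough to blunt a transaction-flood DDoS."""
    nonce = 0
    while not meets_threshold(tx, nonce, difficulty_bits):
        nonce += 1
    return nonce

nonce = mine_transaction(b"alice->bob:1BTC", 16)
assert meets_threshold(b"alice->bob:1BTC", nonce, 16)
```

A relay node would simply drop transactions whose attached nonce fails `meets_threshold` at its current difficulty, replacing today’s ad hoc anti-DDoS heuristics with a single tunable knob.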
(3) Shard the Blockchain
Bitcoin, with its linear chain structure, is fragile. That is to say, the load on the system grows as the number of nodes grows and as the number of transactions grows. We would like to see an antifragile Bitcoin network, in the sense that adding more nodes reduces the load on each node (given a constant number of transactions). In other words, we need to shard the blockchain so that a single node can hold only a subset of it, yet still verify transactions. Here’s one (admittedly impromptu) proposal:
- Use the low bits of a Bitcoin address as a shard identifier (e.g., the low byte identifies one of 256 possible shards).
- Wallets and transaction submitters would need to grind (brute-force) addresses so that all the addresses in your wallet have the same low byte, and all inputs to any transaction you write reside on a single shard. This is in line with moving (some) mining to the edges of the network. Or simply extend addresses to include a direct shard identifier.
- Transactions would be distributed to each shard identified by the addresses of the UTXOs.
- Nodes could keep any subset of the sharding space they desire (including the whole thing), and each node would attempt to connect to enough other nodes to cover the entire sharding address space.
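A minimal sketch of the address-grinding step above, assuming a toy address derivation for illustration (a real wallet would grind fresh keypairs, not salts, and the derivation function is hypothetical):

```python
import hashlib
import itertools

NUM_SHARDS = 256  # the low byte of an address selects one of 256 shards

def shard_of(address: bytes) -> int:
    """Shard identifier: the address's low byte."""
    return address[-1] % NUM_SHARDS

def grind_address(pubkey: bytes, want_shard: int) -> bytes:
    """Brute-force a salt until the derived address lands on `want_shard`.

    Expected cost is NUM_SHARDS/1 = ~256 attempts: cheap, in the same
    spirit as moving (some) mining to the edges of the network."""
    for salt in itertools.count():
        addr = hashlib.sha256(pubkey + salt.to_bytes(4, "little")).digest()[:20]
        if shard_of(addr) == want_shard:
            return addr

addr = grind_address(b"my-pubkey", want_shard=42)
assert shard_of(addr) == 42
```

With every address in a wallet ground to the same shard, all inputs of any transaction it writes reside on that shard, so the transaction only needs to be validated by nodes holding that slice of the UTXO set.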
Planning for the future
The primary reason to address the block size now is to buy ourselves more time to attack the above challenges. But we’re not convinced that increasing the block size is necessary. 1MB is probably sufficient for the next year or so, during which time we will hopefully have better-refined engineering solutions to some of the challenges we propose here.
The above potential scalability solutions imply scarier hard forks than changing the block size. If the Bitcoin community is too paralyzed to implement diverse ideas and test software engineering solutions to better scale the Bitcoin network, then some altcoin will, and Bitcoin may dwindle into the dustbin of history. Ethereum has already made some progress by adopting GHOST. Sooner or later, altcoins will appear in one form or another with all the aforementioned ideas: sharding, braids/DAGs or tree-chains, mining at transaction submission, or other improvements.
If the Bitcoin community is too paralyzed to implement diverse ideas and test software engineering solutions to better scale the Bitcoin network, then some altcoin will try such ideas, and Bitcoin may dwindle into the dustbin of history.
Obviously, economics plays a major role in the direction the network takes. There are many ways, however, to maintain continuity of Bitcoin balances while re-engineering it. Drastic and discontinuous though it would be, one could simply dump addresses and balances and import them into a new altcoin. Perhaps others can come up with less daunting ways to evolve continuously across a hard fork, but society has handled non-digital hard forks for thousands of years and lived to tell the tale. Let’s face it: the block-size debate won’t be the last hard fork we have to deal with, and we must come up with a well-defined plan to evolve through a hard fork when the community agrees one is necessary.
We are getting an accidental capacity increase in the coming months due to Segregated Witness. Segwit carries a number of significant benefits, and I see the capacity increase as really a side effect of fixing transaction malleability. Furthermore, it will enable innovation in the scripting syntax, including Merkleized Abstract Syntax Trees, which enable more advanced scripting and, incidentally, further reduce data on the blockchain in certain use cases. While the majority of developers have signed off on this roadmap, it does not directly provide for any increase in block size, which continues to frustrate many people, though evidently not enough to cause widespread movement by miners. The Bitcoin Classic project is currently holding steady at 4.2% adoption, and that seems unlikely to grow. So for the foreseeable future, it seems Bitcoin’s capacity will be limited by 1MB blocks. Still, I look forward to more well-thought-out scaling approaches as time goes on.
I’m proud to be the Chief Blockchain Scientist at SolidX, where I am working on the above approaches to improve Bitcoin. I hope you’ll join me in productive, fruitful software engineering to make Bitcoin better, even if it involves intimidating hard forks.
SolidX Partners provides consulting and strategy solutions to organizations seeking to learn about blockchain and implementation solutions. The firm also provides blockchain-based software solutions relating to the indelible recording of records, transfer of assets, and identity. For more information, please visit www.sldx.com or email the team at email@example.com.