The resolution of the Big Block experiment
Bitcoin’s unwillingness to hard-fork is often portrayed as a “failure to evolve,” but the history of Bitcoin Cash has proven otherwise.
Over the course of almost ten years of Bitcoin history, dozens of proposals have emerged to increase the size of the blocks in its blockchain. And even though all of them have failed, the Block Size Debate is still one of the most controversial, divisive and tiring issues surrounding Bitcoin. Now, over one year after the creation of Bitcoin Cash and the ongoing drama surrounding its ABC/SV forks, the trade-offs are very clear.
In this article, I’ll provide the historical background of block size increases, their notable supporters over time, and how this ongoing issue divided Bitcoin on two occasions. Then, I will make the case against such proposals by analyzing BCH’s history and the technologies it employs. Finally, I will argue that, in the absence of block rewards, limited block space will be essential to retain Bitcoin’s security model, as well as its monetary foundation in Austrian economics.
The Historical Background: Block War I
A contributing factor to the block size controversy is that blocks were not explicitly capped in Bitcoin’s first iteration. Satoshi Nakamoto only added Bitcoin’s 1MB cap in September 2010, through a series of unannounced commits that introduced the MAX_BLOCK_SIZE constant. Before this addition, blocks were only bound by a soft cap of around 500-750kb. Given the scarcity of transactions at the time and the significance of the other features that needed to be prioritized, block size was not a topic of contention in Bitcoin’s early history. However, as more developers joined the project and began to consider the technical challenges of mass adoption, discussions on scalability emerged sporadically.
Less than a month after Satoshi Nakamoto introduced the 1MB block cap, the very first proposal to increase it emerged. In October 2010, Jeff Garzik, one of the earliest developers to work on Bitcoin, proposed a sevenfold increase in block size (to 7MB). Like most early implementation proposals, which preceded formalized Bitcoin Improvement Proposals (BIPs), it was suggested in a BitcoinTalk.org thread, where Garzik argued that “[Bitcoin] should be able to at least match Paypal’s average transaction rate.” Satoshi was one of the first to respond to the thread, warning Garzik not to implement the increase, as “it'll make you incompatible with the network, to your own detriment.”
Later discussions revealed that Satoshi Nakamoto was not entirely opposed to an increase. However, when describing a potential implementation, he implied that such changes would have to be rolled out gradually. In a brief post, he said that an increase in the block cap could be considered “in versions way ahead,” when “obsolete” Bitcoin clients constituted a minority of the network, thereby preventing a chainsplit. Two months later, Satoshi made his last post on BitcoinTalk.org and vanished. Given the amount of work that needed to be done at the client level, the developers who joined Bitcoin over the following years focused on security and reliability rather than potential implementations of bigger blocks.
The topic of scalability was only revived nearly three years later, in 2013, when services like the Silk Road and Satoshi Dice increased the volume of on-chain transactions. And while the debate around potential scalability solutions was momentarily reignited, the shutdown of the Silk Road, followed by the collapse of Mt. Gox, prevented the formalization of big block proposals at that time. Bitcoin’s price collapsed and continued to underperform throughout 2014. Nevertheless, the number of on-chain transactions nearly doubled over that year, which caused an increase in transaction fees. By the beginning of 2015, as transaction volume hit an all-time high, the block size debate was once again in the spotlight. This time, however, it was set to stay there for the following three years.
In July 2015, Jeff Garzik proposed BIP100, “Dynamic maximum block size by miner vote,” the very first big block Bitcoin Improvement Proposal. Beyond a simple block size increase, as originally proposed by Garzik in 2010, BIP100 proposed an adjustable block cap, ranging from 1MB to 32MB, to be dynamically determined by miners. Under this system, miners would use block headers to vote for increases or decreases in block size at pre-determined intervals. A supermajority of 75% of miners would have to vote for an increase to take place, and each increase was limited to 5% of the previous block size. Over the following months, BIP100 was highly scrutinized by prominent members of the community, who argued that, given the dramatic range of block sizes, fluctuating transaction fees could contribute to more volatility in Bitcoin’s price and even open up the possibility of network attacks due to frequent hashrate inflows and outflows.
Amongst BIP100’s critics was Gavin Andresen, Satoshi’s handpicked successor, who wanted changes in block size to occur in a more predictable manner and opposed giving miners a disproportionate amount of power. His proposal, BIP101, was to increase the block size to 8MB through an algorithmic schedule that doubles it every two years. As the trade-offs of the competing proposals became clearer, the Bitcoin community was critically divided. Three competing interest groups, which still exist today, were delineated for the first time: i) the miner maximalists, who want miners to be able to increase block size as needed; ii) the enterprise maximalists, who mostly care about the technology’s ability to scale; and iii) the hard-fork minimalists, who are against any changes in monetary policy.
Unsurprisingly, the majority of the hashrate at the time signaled support for Garzik’s BIP100, whereas the majority of Bitcoin companies backed Andresen’s BIP101. Caught between the two factions was the community of users and developers at large, who seemed to have concluded that hard-forks were by and large a bad idea. Prominent developers began to highlight that block size increases were merely temporary solutions and that other approaches, such as payment channels and sidechains, would be required for mass adoption. Bitcoin’s single-threaded nature, whereby all transactions must be verified by all nodes, inevitably leads to latency if Bitcoin is massively adopted as a medium of exchange, even with gigabyte blocks.
As it became clear that the community was averse to a hard-fork, Gavin Andresen, along with another prominent Bitcoin developer, Mike Hearn, grew exceedingly frustrated with the Bitcoin Core developers who opposed BIP101. Both had been working on an alternative client implementation called Bitcoin XT, which had already activated BIP101. Tensions were high, and it became clear that the possibility of a chain split and two competing networks was not off the table. Tensions escalated in November 2015, when Mike Hearn accused Blockstream, a company founded by several Bitcoin Core developers, of purposefully campaigning against block size increases to push for their own sidechain solution.
After months of pushback against BIP100, by the end of 2015 the proposal was practically dead. Nevertheless, Andresen and Hearn were still pushing for the adoption of Bitcoin XT with BIP101. In December 2015, a couple months after the XT client was released, its network suffered a massive Distributed Denial of Service (DDoS) attack, presumably engineered by an adversary of BIP101. This ceaseless attack prevented XT nodes from communicating for weeks, which effectively killed the network. In January 2016, Hearn published The resolution of the Bitcoin experiment and declared that Bitcoin was dead as it failed to adopt big blocks. The name of this article is a mock-homage to Hearn’s article, as I strongly believe the adoption of bigger blocks (via BIP101, or otherwise) could have potentially destroyed Bitcoin’s long-term value proposition.
Block War II: History Repeats Itself
Ironically, not even one year after Hearn’s resignation and the dissolution of Bitcoin XT, the tiresome block size debate returned. The advent of SegWit, or Segregated Witness, proved that efficiency gains could be achieved without block size increases and served as a required path to Layer 2 solutions because it also solved the transaction malleability problem. Its proposed activation, however, faced pushback from the forgotten faction of the previous Block War: the industrial miners that supported BIP100.
These miners wanted a compromise from Bitcoin Core developers, and in a meeting in February 2016 they expressed that they would only activate SegWit if the block size were increased to 2MB, an agreement later dubbed “The Hong Kong Agreement.” The meeting was documented by the Bitcoin Roundtable Consensus, and the topic was widely discussed throughout 2016. A storm was brewing, as the very same fundamental debate that had haunted Bitcoin development for nearly three years was back. Its revival, unfortunately, was set to once again divide the community and drive people away from Bitcoin.
During the Consensus conference of May 2017, a closed-door meeting took place to gauge the position of Bitcoin miners and businesses regarding SegWit. Like the Hong Kong agreement from the prior year, this group petitioned for the activation of SegWit and a block size increase to 2MB in what has been referred to as the “New York Agreement.” By the end of May 2017, less than 20% of miners signaled their intention to adopt SegWit without a block size increase. The 2MB version of SegWit became known as SegWit2x and at its height was backed by most Bitcoin businesses as well as over 80% of the network’s hashrate.
At the same time, some big block supporters began to campaign for the rejection of SegWit altogether. Their unofficial leader, Bitcoin.com CEO Roger Ver, was able to attract prominent supporters of SegWit2x, such as Bitmain, in a bid to also support a chain split that completely rejected SegWit. To do so, Ver backed an alternative Bitcoin client called Bitcoin ABC (Adjustable Block-size Cap) and announced a hard-fork they decided to call Bitcoin Cash. Other alternative implementations, such as Bitcoin Unlimited, followed suit, and the chain split became imminent. Also revived that year was the Bitcoin XT project, which had already activated support for bigger blocks.
The Birth of Bitcoin Cash (BCH)
On August 1st, 2017, Bitcoin Cash forked from Bitcoin. Its lead implementation, Bitcoin ABC, shipped with a variation of Bitcoin’s difficulty adjustment to account for the drop in hashrate after the split. This algorithm, called the (Emergency) Difficulty Adjustment, or EDA, adjusted the difficulty of Bitcoin Cash’s mining challenge immediately after the fork. This was necessary given that only a minority of miners allocated hashing power to BCH after the split and blocks still needed to be created every 10 minutes. As expected, the EDA successfully readjusted difficulty, and the first Bitcoin Cash block was mined 6 hours after Bitcoin block 478558 by ViaBTC, a Chinese cryptoasset exchange and mining pool.
Over the course of August, mining pools such as Antpool, F2Pool, and BTC.COM began to support Bitcoin Cash. During that time, both the price of BCH and its hashrate were highly volatile, and very few exchanges listed BCH for trading immediately after the fork. Like Gavin Andresen’s BIP101, Bitcoin Cash’s initial block size was capped at 8MB, but it never reached that cap organically. Shortly after the fork, Roger Ver resumed his campaign to undermine Bitcoin technologies such as the Lightning Network and Blockstream’s Liquid.
Meanwhile, the SegWit2X drama was set to once again split the network. The 2X fork was scheduled to happen on November 16, but on November 8, news broke that it had been suspended. In an email from Mike Belshe (CEO of BitGo), he, on behalf of Wences Casares (Xapo), Jihan Wu (Bitmain), Jeff Garzik (Bloq), Peter Smith (Blockchain.com) and Erik Voorhees (Shapeshift.io) announced that “we have not built sufficient consensus for a clean block size upgrade at this time. Continuing the current path could divide the community and be a setback to Bitcoin’s growth. This was never the goal of SegWit2x.”
The caveat here is that these entities only felt threatened enough to drop 2x after the efforts of a coalition of users and developers called the UASF. These activists pledged to activate SegWit and retain the current cap, with or without the support of enterprise, through BIP 148: User Activated Soft Fork. The efforts that led to the UASF deserve a standalone post but, in essence, it was only when it became clear that users were actually in charge of Bitcoin that the progenitors of 2x felt threatened enough to drop the fork.
With the demise of 2x, prices signaled some expectation that Bitcoin Cash would be adopted by 2x proponents, but that was not the case. As cryptoassets across the board rallied in 4Q17, there was a moment when both Bitcoin and Bitcoin Cash co-existed without significant contention. Miners backed Bitcoin Cash, while most of the network stayed on Bitcoin, and both networks saw significant inflows of capital. This ordeal somewhat changed the community’s perspective on forks, which is relevant to the BCH chainsplit.
The Other Hard Fork(s)
Given the price volatility that both Bitcoin and Bitcoin Cash experienced after the hard-fork, Bitcoin Cash’s hashrate was highly inconsistent. Just like any other cryptocurrency that employs SHA-256, Bitcoin Cash experienced massive inflows and outflows of hashing power. Even though there were very few transactions in the network, Bitcoin Cash block production was erratic and confirmation times were high. As it turned out, the EDA was working against it: when Bitcoin’s price spiked, miners fled Bitcoin Cash to mine Bitcoin and vice-versa, constantly triggering readjustments.
Bitcoin was not affected by the outflows, as its hashrate was orders of magnitude higher, but with every outflow on Bitcoin Cash, the EDA was triggered and block creation stalled. As a result, less than three months after the hard-fork that created Bitcoin Cash, the developers behind Bitcoin ABC decided to fork once again on November 13 to deactivate the EDA and replace it with a more responsive algorithm. Rather than readjusting every 12 hours, as originally implemented in the EDA, the new algorithm was designed to readjust difficulty every 600 seconds based on the hashrate allocated to the previous 144 blocks.
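A minimal Python sketch shows how such a rolling-window adjustment works conceptually. The constants mirror the 600-second target and 144-block window described above, but the function name and exact scaling rule are illustrative simplifications, not Bitcoin ABC’s actual code:

```python
# Simplified sketch of a rolling-window difficulty adjustment (illustrative,
# not Bitcoin ABC's exact implementation). Difficulty is rescaled so that the
# last WINDOW blocks would have taken WINDOW * TARGET_SPACING seconds.

TARGET_SPACING = 600   # target seconds per block
WINDOW = 144           # blocks in the look-back window (~1 day)

def next_difficulty(current_difficulty, window_timestamps):
    """Rescale difficulty from the time the last WINDOW blocks actually took."""
    actual_timespan = window_timestamps[-1] - window_timestamps[0]
    expected_timespan = WINDOW * TARGET_SPACING
    # Blocks arriving too fast => actual < expected => difficulty rises.
    return current_difficulty * expected_timespan / actual_timespan

# Example: hashrate doubled, so 144 blocks arrived at 300s intervals
# instead of 600s, and difficulty doubles in response.
fast_window = [i * 300 for i in range(WINDOW + 1)]
print(next_difficulty(1000.0, fast_window))   # 2000.0
```

Because the window slides forward with every block, a sudden hashrate outflow is absorbed gradually instead of triggering the cliff-edge drops the EDA produced.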
The November 13 hard-fork that introduced the new difficulty adjustment algorithm marked the beginning of a schedule of hard-forks intended to add and remove features as needed. The Bitcoin Cash development community pledged to hard-fork the protocol every six months to continually add functionality. Despite the sheer lack of transactions in its network, Bitcoin Cash hard-forked once again in May 2018 in order to support 32MB blocks. This hard-fork also reactivated a group of opcodes that were present in Bitcoin’s first version but had been deactivated back in 2010 due to security concerns. For context, opcodes are the machine-level operations supported by the Bitcoin network, such as OP_ADD (addition) and OP_MUL (multiplication).
Combined, the opcodes supported by Bitcoin can be used to design algorithms and make up Bitcoin’s native scripting language, which is conveniently called Script. Script is very much based on Forth, an imperative stack-based programming language that allows for simple, yet comprehensive, computer programs. For example, programs can be created so that if (OP_IF) a certain statement is true, a function is triggered. Conversely, if something else is true (OP_ELSE), another function can be triggered. As such, basic smart contracts can be created using the language, despite Bitcoin not being Turing-complete.
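To make the stack-based model concrete, here is a toy Python evaluator for a handful of Script-like opcodes. It is a deliberately tiny sketch: nested conditionals and virtually all of real Script’s validation rules are omitted.

```python
# Toy evaluator for a tiny subset of Script-like opcodes (illustrative only).
# Integers are treated as data pushes; strings are treated as opcodes.
# Nested OP_IF blocks are NOT supported in this sketch.

def evaluate(script):
    stack = []
    skip = False                # inside the branch not taken?
    for op in script:
        if op == "OP_IF":
            skip = (stack.pop() == 0)   # pop condition; 0 means skip branch
        elif op == "OP_ELSE":
            skip = not skip
        elif op == "OP_ENDIF":
            skip = False
        elif skip:
            continue            # ignore ops in the branch not taken
        elif op == "OP_ADD":
            stack.append(stack.pop() + stack.pop())
        elif op == "OP_MUL":
            stack.append(stack.pop() * stack.pop())
        else:                   # anything else is a data push
            stack.append(op)
    return stack

# If the condition on top of the stack is truthy, add; otherwise multiply.
print(evaluate([2, 3, 1, "OP_IF", "OP_ADD", "OP_ELSE", "OP_MUL", "OP_ENDIF"]))  # [5]
print(evaluate([2, 3, 0, "OP_IF", "OP_ADD", "OP_ELSE", "OP_MUL", "OP_ENDIF"]))  # [6]
```

The same linear, stack-driven execution is what keeps Script predictable: there are no loops, so every program provably terminates.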
Unfortunately, Script’s functionality comes with serious trade-offs. The opcodes that Bitcoin Cash reactivated had been deactivated from Bitcoin’s first iteration because they proved to be unnecessary attack vectors that, if exploited, could crash the entire network. OP_CAT, for example, was initially conceived to enable developers to concatenate two strings and link data points such as signatures. However, OP_CAT can also be used to mount a massive Denial-of-Service (DoS) attack by overflowing a transaction. An attacker could use OP_CAT to concatenate two strings, and then use OP_DUP (duplication) to duplicate the result indefinitely. The figure below demonstrates what such an exploit would look like:
OP_DUP + OP_CAT exploit, round by round:

Stack: A                      (length = 32)
OP_DUP  ->  Stack: A A
OP_CAT  ->  Stack: AA         (length = 64)
OP_DUP  ->  Stack: AA AA
OP_CAT  ->  Stack: AAAA       (length = 128)
OP_DUP  ->  Stack: AAAA AAAA
OP_CAT  ->  Stack: AAAAAAAA   (length = 256)
...
Each round of OP_DUP coupled with OP_CAT doubles the size of the top stack element. If exploited, the combination of these operations would allow for exponential memory usage, and all nodes in the network would crash when computing the transaction. When this vulnerability came to light in 2010 and its significance became better understood, Bitcoin’s developers decided to retire the opcodes that would enable such an attack. Since there was no easy fix to the exponential memory usage vulnerability, even simple opcodes like OP_MUL (multiplication) were deactivated. After the Bitcoin Cash hard-fork, its proponents, most notably Roger Ver, pushed the unfounded conspiracy theory that functionality had been limited because Blockstream had a hidden agenda to offer private smart contracts through its sidechain initiative. After lobbying for the reactivation of these opcodes, Bitcoin ABC added them to its client in May 2018.
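The arithmetic behind the attack is easy to check. Assuming a 32-byte starting element, each OP_DUP/OP_CAT pair doubles it, so the element size is 32 * 2^rounds bytes:

```python
# Size of the top stack element after repeated OP_DUP + OP_CAT rounds.
# Each round duplicates the element (OP_DUP) and concatenates the two
# copies (OP_CAT), doubling its length.

def attack_size(initial_length, rounds):
    """Byte length of the top stack element after `rounds` doubling rounds."""
    return initial_length * 2 ** rounds

# Starting from a 32-byte element, a script with only 40 rounds would try
# to materialize a ~35 TB stack element, exhausting memory on every node.
print(attack_size(32, 40))   # 35184372088832 bytes
```

This is why the fix in modern Bitcoin-derived implementations is not just reactivation but strict limits on stack element size.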
In addition to reactivating these controversial opcodes, the developers at ABC and Bitcoin Unlimited also began to explore the creation of additional opcodes for greater functionality. That created controversy in the community and stern criticism from the self-proclaimed Satoshi Nakamoto, Craig Wright. Prior to the Bitcoin Cash hard-fork, Wright had quietly praised the ABC initiative for its lower fees, which were in actuality a result of the lack of transactions in the network rather than bigger blocks. His involvement was limited at the time, but that changed in September 2017, when Wright connected with Jihan Wu (Bitmain CEO), Roger Ver (Bitcoin.com CEO), and Haipo Yang (ViaBTC CEO) at a conference in Hong Kong to support Bitcoin Cash. Ver and the other proponents seized the opportunity and made Wright one of the main personalities behind Bitcoin Cash over the course of 2018.
The prelude to the Hash War
That also entitled Wright to make more active suggestions about the way Bitcoin Cash should evolve, and when new functionality was proposed, Wright made it clear that he completely opposed the introduction of more opcodes. Unsurprisingly, things began to get contentious when the developers of Bitcoin ABC disregarded his suggestions. Against his recommendations, they pushed for the addition of functionality that was never part of the original specification: an opcode called OP_CHECKDATASIGVERIFY, or OP_DSV. They also proposed adding canonical transaction ordering to the protocol, a 2014 proposal by Gavin Andresen intended to improve block propagation, as well as a couple of other minor changes.
Immediately after the announcement, Wright opposed the changes on Twitter, especially DSV, stating that:
“OP_CHECKDATASIGVERIFY is not happening — If a certain ABC developer wants to push this, then we will just fund replacement developers — Trust me — There are others.” — Craig Wright
The developer he was referring to was Andrew Stone, who had been researching ways to improve Script for Bitcoin Cash smart contracts.
Why OP_DSV is Controversial
DSV allows the network to verify the validity of any arbitrary data string, thereby enabling Bitcoin Cash smart contracts to validate data from external sources. Just like Bitcoin’s use of OP_CHECKSIG, which verifies the validity of a digital signature, DSV also uses Elliptic Curve Cryptography (ECC) to verify information represented as a signature. That information can be the proof of a transaction that occurred in another network, or the outcome of an event as reported by external oracles.
Although the ECC required in DSV could be theoretically implemented using Bitcoin’s existing set of opcodes, the required script to do so would have a massive footprint on the blockchain of as much as 1MB per verification round. DSV addresses this problem by implementing all of the underlying operations in a single opcode that is native to the protocol. A single opcode considerably reduces the cost of execution, as well as the amount of information that needs to be stored in the chain.
On the surface, the decision by Bitcoin ABC to have all of the underlying operations of OP_DSV represented as a single opcode is what led to the creation of the SV faction, which sees OP_DSV as a subsidy.
Since DSV would require a lot of computation if implemented using Bitcoin’s native Script, making it a single operation with a lower footprint “would not be fair” to the miners, according to the SV coalition. As a single opcode, contracts that use DSV would pay the same fee as a simple operation like OP_MUL (multiplication), despite their higher computational requirements. The essence of the problem is that, unlike Ethereum, Bitcoin Cash does not employ the idea of gas, whereby the cost of an operation is theoretically proportionate to how long it takes to compute.
Instead, smart contracts and scripts in Bitcoin Cash follow the same base price of 1 satoshi (the smallest unit of a bitcoin) per transaction byte. What that means in practice is that the underlying operations of DSV have a much lower cost to execute as a single opcode than if implemented in Script. If implemented through Script, the entire DSV routine could take up to 1MB (or 1M satoshis) and cost around $2.89 in transaction fees at current prices. Conversely, OP_DSV is priced at 1 satoshi, a fraction of a cent. While Bitcoin ABC sees that as an improvement, Bitcoin SV sees the $2.89 as a subsidy that directly impacts a miner’s profitability.
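A quick back-of-the-envelope calculation reproduces these figures, assuming the 1 satoshi/byte base fee and a BCH price of roughly $289 (the price implied by the article’s $2.89 figure; both numbers are illustrative):

```python
# Fee comparison under Bitcoin Cash's flat 1-satoshi-per-byte pricing.
# BCH_PRICE_USD is an assumption implied by the $2.89 figure above.

SATS_PER_BCH = 100_000_000
BCH_PRICE_USD = 289.0               # illustrative price assumption

def fee_usd(script_bytes, sats_per_byte=1):
    """USD fee for a script of the given size at a flat per-byte rate."""
    fee_sats = script_bytes * sats_per_byte
    return fee_sats / SATS_PER_BCH * BCH_PRICE_USD

print(fee_usd(1_000_000))   # DSV written out in Script (~1MB): ~$2.89
print(fee_usd(1))           # the same logic as a single OP_DSV: ~$0.0000029
```

The six-orders-of-magnitude gap between the two fees is exactly the amount the SV coalition characterizes as a subsidy.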
If we take a step back, the issue at hand is the subsidy of computational costs through a single opcode that computes all of the underlying DSV operations off-chain (and only stores the result on-chain), rather than multiple opcodes that implement DSV via Script (and store every operation and its results on-chain). The problem with considering DSV a subsidy is that it implies miners will incur costs (other than opportunity cost) by having to compute DSV as a single opcode. In reality, not only are these costs negligible; they pale in comparison to the negative externalities of a 1MB script. The ECC required by DSV could be computed on a TI-84, and its requirements do not burden miners in any meaningful way. Hence, all miners apart from the CoinGeek coalition have sided with the ABC side of the fork.
Bigger Blocks Are a Bad Idea Anyway
While an increase in block size may temporarily relieve network transaction congestion, it is not a long-term solution in any meaningful way: blocks will eventually fill again, assuming Bitcoin continues to grow and be adopted. Still, the block size debate will not go away anytime soon, and Bitcoin Cash has been an interesting experiment to follow. After four years allocated to this discussion, the trade-offs associated with bigger blocks are much better understood. On one hand, bigger blocks allow more transactions to be processed on-chain. On the other hand, they are a temporary fix that can dramatically impact the network in the long run. Increasing block size through a hard-fork impacts many distinct areas, and there are four reasons why I don’t believe bigger blocks are a viable long-term solution:
1. They are a temporary solution
Block size increases are seen as a vertical scalability solution because they do not circumvent the requirement for all nodes to process all transactions canonically. In a scenario where cryptoassets are widely adopted, throughput requirements entail horizontal scalability solutions that can sustainably increase transactional capacity. That includes technologies that have already been deployed, such as Lightning payment channels and Liquid sidechains, which allow for parallel processing. Mere block cap increases only solve the problem until the cap is hit again and, as such, are a temporary solution.
2. Bigger blocks are hard to propagate
Larger blocks are unarguably harder to propagate under Bitcoin’s basic wire protocol, which is based on TCP/IP. Under the 10-minute block window, a 32MB block would take a long time to propagate from a miner in China to a node in the United States. A 128MB block would be extremely onerous to propagate to other nodes, even using more efficient propagation technologies like Graphene, which further introduce centralization to the system. As a result, miners would produce a much higher rate of orphan blocks, which are valid blocks that fail to be appended to the chain because they were not propagated fast enough. Hence, the chain would have to reorganize more often, which opens up the potential for double-spend attacks.
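One way to see why propagation delay matters: block discovery is approximately a Poisson process with one block per 600 seconds, so the chance that a competing block appears while a block is still propagating is roughly 1 - e^(-delay/600). This exponential model is a standard approximation, not a precise orphan-rate measurement:

```python
# Rough orphan-risk estimate under Poisson block arrivals: the probability
# that a competing block is found while a block takes `delay_seconds` to
# reach the rest of the network.
import math

def orphan_risk(delay_seconds, block_interval=600):
    """Approximate probability of a competing block during propagation."""
    return 1 - math.exp(-delay_seconds / block_interval)

print(round(orphan_risk(5), 4))    # ~0.83% for a 5-second propagation delay
print(round(orphan_risk(60), 4))   # ~9.5% for a 60-second delay
```

Even a one-minute propagation delay puts nearly one block in ten at risk of being orphaned, which is why block size and propagation technology are inseparable concerns.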
3. Fewer nodes decrease network security
Full 128MB blocks every 10 minutes would result in 18.4GB of data a day, 129GB a week, and over 0.5TB a month. Users that run full-node Bitcoin clients are required to store a full copy of the blockchain and incur the storage costs of running the software. Under 128MB blocks, power users would not be able to run a full node, and it would be very expensive, even for corporations, to do so using cloud computing. The current directory size of the Bitcoin blockchain is 190GB, which enables most personal computers to run the software. However, if blocks double in size, Bitcoin’s directory size will grow at a proportional rate, increasing users’ storage costs. Consequently, fewer users will be able to run full nodes, thereby potentially increasing miner centralization and contradicting Bitcoin’s ethos of decentralization.
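The storage figures above follow directly from the 10-minute block schedule (144 blocks a day); a quick sanity check:

```python
# Reproducing the storage-growth figures: 144 blocks per day at a given
# block size, assuming every block is full.

BLOCKS_PER_DAY = 24 * 60 // 10      # one block every 10 minutes -> 144

def daily_growth_gb(block_mb):
    """Gigabytes of chain growth per day for a given block size in MB."""
    return block_mb * BLOCKS_PER_DAY / 1000

print(daily_growth_gb(128))         # ~18.4 GB/day
print(daily_growth_gb(128) * 7)     # ~129 GB/week
print(daily_growth_gb(128) * 30)    # ~553 GB (~0.55 TB) per 30-day month
```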
4. What happens without the Coinbase?
Bitcoin’s Proof-of-Work is an effective security mechanism that has contributed to Bitcoin’s 99.99% service uptime over its ~10 years of existence. Miners collectively expend electricity to solve a computationally intensive puzzle, and this process makes it extremely difficult for an entity like a nation-state to attack the network. In Proof-of-Work, network security is subsidized through the block reward, which is intended to incentivize miners to keep mining even when there are no transactions in the network. This incentive is cut in half roughly every four years and will reach zero around 2140. By then, the network will have to carry sufficiently high fees to justify miners continuing to devote hashing power to it.
Otherwise, it will not be profitable to mine Bitcoin and hashrate will decrease, which opens up a big security vulnerability. Larger blocks reduce fees, increase the supply of block space, and make a fee market more difficult to develop. On the other hand, if we assume that most transactions will take place on Lightning channels or Liquid sidechains, only large value transactions would take place on-chain, which is a model that makes a robust fee market more feasible.
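The halving schedule behind the 2140 figure is easy to reproduce: the reward starts at 50 BTC, halves every 210,000 blocks, and, counted in integer satoshis as the protocol does, rounds down to zero at the 33rd halving:

```python
# Bitcoin's block reward schedule: 50 BTC, halved every 210,000 blocks,
# computed in integer satoshis so the reward eventually rounds to zero.

HALVING_INTERVAL = 210_000
INITIAL_REWARD_SATS = 50 * 100_000_000

def reward_at(height):
    """Block reward in satoshis at the given block height."""
    halvings = height // HALVING_INTERVAL
    return INITIAL_REWARD_SATS >> halvings if halvings < 64 else 0

# Find the first height with no block reward at all.
height = 0
while reward_at(height) > 0:
    height += HALVING_INTERVAL
print(height)                                # 6930000
print(2009 + height * 10 / 60 / 24 / 365)    # roughly the year 2140
```

At ten minutes per block, block 6,930,000 lands around the year 2140, which is when fees must carry the entire security budget.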
The split of Bitcoin Cash marks the resolution of the Big Block experiment
I suspect SegWit2x was the last big block proposal to gain mainstream traction. As exemplified in the short history of Bitcoin Cash, the community that buys into the narrative of bigger blocks tends to centralize around personalities rather than facts. Inevitably, when there is contention amongst these personalities, the hard-fork culture offers dissidents an easy way out: “just hard fork it.” That is precisely the kind of death spiral that destroys the value of sound money.
And if the history of money has taught us anything, it is that when a society has the option to inflate its monetary base, it ends up taking that option at some point. In Bitcoin, hard-forks offer this option, and this is yet another reason to value immutability. What the Lightning Network proved in 2018 is that scalability is possible without impetuous decisions that carry negative long-term implications.