To Reduce or Not to Reduce the Block Size?

Matt ฿
Published in ChainRift Research
Feb 12, 2019

For years, it was generally accepted that increasing the block size was the best way to scale the Bitcoin blockchain – more space means more transactions can fit into each block, and less competition for that space means lower fees.

Larger blocks, however, come at a cost externalised to the peer-to-peer network: blocks become harder to propagate, and nodes require higher-end hardware to keep up – in essence, decentralisation is sacrificed in the pursuit of higher throughput. Most Bitcoiners are strongly opposed to making that trade-off, which is exactly why bigger blocks met so much pushback in 2017 – a dispute that ultimately led big-block proponents to fork off and create BCH.

Many were happy with the SegWit upgrade, which allows significantly more transactions to fit into a block (effectively bypassing the old 1MB limit) whilst also fixing transaction malleability. But while many consider the matter settled, others think another soft fork is needed – specifically, one to bring the block size back down.

Why?

In a word, decentralisation. Core dev Luke Jr has been particularly vocal for some time about the need to reduce the block size to 300kb to lessen the burden of IBD (initial block download), so that prospective Bitcoin users aren't deterred from running a node. At present, the blockchain is roughly 237GB in size and grows by roughly 1.2MB every ten minutes (~172MB a day). Of course, it's entirely possible for that rate to double, making 300+ megabytes a day commonplace.

6102 on Twitter crunched some of the numbers – assuming 2MB blocks, an additional 1.05TB would be created within a decade, versus 0.15TB with 300kb blocks. In my own experience, syncing a node in early 2018 took less than a week on a high-end computer with a good connection, versus a few weeks (possibly upwards of a month) when I set one up later in the year on a Raspberry Pi 3.
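Those figures are easy to sanity-check. Here's a minimal back-of-the-envelope sketch in Python, assuming one block every ten minutes on average and ignoring indexes, undo data and other node overhead (so real disk usage runs higher):

```python
# Back-of-the-envelope chain growth for a given average block size.
# Assumes one block per ~10 minutes; ignores indexes, undo data and
# other node overhead, so actual disk usage will be somewhat higher.

BLOCKS_PER_DAY = 6 * 24  # one block per ~10 minutes -> 144 blocks/day

def daily_growth_mb(block_size_mb: float) -> float:
    """Approximate chain growth in MB per day."""
    return block_size_mb * BLOCKS_PER_DAY

def decade_growth_tb(block_size_mb: float) -> float:
    """Approximate chain growth in TB (decimal) over ten years."""
    return daily_growth_mb(block_size_mb) * 365 * 10 / 1_000_000

for size in (0.3, 1.2, 2.0):
    print(f"{size:.1f}MB blocks: ~{daily_growth_mb(size):.0f}MB/day, "
          f"~{decade_growth_tb(size):.2f}TB per decade")
```

Run with the three block sizes discussed here, it reproduces both the ~172MB/day figure for today's ~1.2MB blocks and 6102's decade-scale estimates (~1.05TB for 2MB blocks versus ~0.16TB for 300kb blocks).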

There is another point to be made in the case for a reduced block size: just as bigger blocks are more cumbersome to propagate, smaller ones are easier to relay – an attribute that matters for ensuring global participation. In the interests of censorship-resistance, 300kb blocks would make propagating blocks over Tor (and ideally other channels) easier.

To top it all off, Luke Jr has previously expressed concerns that, even with 1MB blocks, the growth of the chain will soon outpace the technology needed to keep Bitcoin usable. More recently, he has cited a drastic drop in full node counts as IBD times increase.

Likely?

It's difficult to say. Twitter sentiment is a poor metric, and arguments have been put forth on both sides of the debate. Those opposed to a reduction question its necessity and worry that it would only drive up on-chain fees (conversely, proponents believe it would spur adoption of off-chain transactions).

There has yet to be a concrete proposal for a soft fork to lower the block size, though an idea is being floated for a temporary one that would run from August until the end of the year.

You'd be hard-pressed to find a Bitcoiner disagreeing with the statement that the network needs to be as decentralised and as censorship-resistant as possible. However, this conversation raises some interesting questions about which measures are needed to get there.

Until someone more talented can provide an in-depth analysis, the jury is still out for me.

Cover art by the author.
