Keeping up with the Etherians — A response to blockchain size concerns
The article makes a few good points but contains enough misunderstandings and misinformation that, taken as a whole, it is misleading. It also degrades into rambling propaganda towards the end — the type of “article” I usually ignore. However, the article got some traction on Twitter, and not only among the trolls, and with its annoying mix of truths and falsehoods it motivated me enough to write up a response.
The article makes strong claims, and as this type of “discussion” in the blockchain space often degrades into tribalism and anti-intellectualism, I’m going to try to write about where I find the author to be right, and where they are in error.
As a disclaimer, I hold BTC, BCH, ETH and a number of other tokens, so I’m generally not very (financially) biased when it comes to criticism of specific protocols / networks. However I’m definitely emotionally biased towards ETH, having worked on the protocols, clients and with many good people in the ETH community, and that bias is likely to manifest in this response (perhaps fitting, as StopAndDecrypt is a self-described BTC maximalist :))
What the author got right
- Ethereum is at capacity, and as a result we have seen a rise in average transaction fees as users compete for the available transaction supply. This is similar to Bitcoin, which is also at capacity with increased transaction fees as demand saturates available supply.
- Ethereum implementations (of which there are several) have seen performance issues handling the increased load, increasing the hardware requirements of running a full node.
- As the number of Ethereum transactions per second has risen, so has the bandwidth required to run a full node and to stay synced after the initial sync.
- Large (enough) blocks centralize validators. The larger blocks are, the more resources are required to run full nodes (and to compete in mining due to increased latency), which over time leads to increased financial centralization as fewer can afford to run a full node and even fewer can afford to mine. It also leads to geographical centralization as fewer places in the world are viable in terms of electricity and hardware hosting / maintenance costs.
- A large number of full nodes is important for decentralization, and too few puts the entire network at risk of being manipulated in various ways by attackers.
What the author got wrong
“(…) the incentive structure of the base layer is completely broken because there is no cap on Ethereum’s blocksize (…)”
This is misleading at best and false at worst. Ethereum has a block size limit in the form of the block gas limit enforced by the consensus protocol. The block gas limit is dynamically adjusted by miners: in each block, a miner can increase or decrease the gas limit by at most the previous block’s gas limit divided by 1024. This is defined in equations 45 to 47 of the formal Ethereum protocol specification and implemented by all Ethereum clients. Effectively, this means that a majority of miners must agree for the block size cap to change.
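The gas-limit voting rule just described can be sketched as follows. This is a minimal illustration, assuming the bound constants from the Yellow Paper (divisor 1024, protocol minimum 5000); the function names are my own, not any client’s API:

```python
# Sketch of the gas-limit voting rule: each block's gas limit may differ
# from the parent's by strictly less than parent_gas_limit // 1024.
GAS_LIMIT_BOUND_DIVISOR = 1024
MIN_GAS_LIMIT = 5000  # protocol minimum

def allowed_gas_limit_range(parent_gas_limit):
    """Return the (min, max) gas limit a miner may set for the next block."""
    delta = parent_gas_limit // GAS_LIMIT_BOUND_DIVISOR
    lower = max(parent_gas_limit - delta + 1, MIN_GAS_LIMIT)
    upper = parent_gas_limit + delta - 1
    return lower, upper

print(allowed_gas_limit_range(8_000_000))  # (7992189, 8007811)
```

So at the current 8M limit, a single block can move the cap by fewer than 8,000 gas in either direction — any larger change requires sustained voting by a majority of hash power over many blocks.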
As miners are incentivized to act in ways that maximize the value of the tokens they receive in each mined block, they are unlikely to vote for a blocksize increase that would break the network. If the author trusts Bitcoin miners to act in ways that maximize the value of their bitcoins (such as not censoring transactions, generally prioritizing txs by their fee and otherwise act in ways that are beneficial for the network) the author should trust Ethereum miners to only vote for block sizes that can be handled by reasonable hardware, as the decentralized verification done by full nodes underpins the security of the network.
The author is encouraged to further peruse the above-linked Ethereum protocol specification, as well as theory on economic incentives.
“Even if one [blocksize cap] was put in place it would have to be reasonable, and then these Dapps wouldn’t even work because they’re barely working now with no cap.”
Nobody goes to the beach anymore — it’s too crowded.
If a blockchain network is at capacity, with all blocks filled with transactions, then all the tx senders found utility in sending their txs with their particular fees.
It’s not up to us to pass judgement on how people prefer to use permissionless, public blockchains like Bitcoin and Ethereum. If Ethereum tx fees rise so high that some apps become unfeasible, then it means other utility is being provided (be it other apps, ICOs, or just plain value transfers) and people are paying the fair market price for that utility.
This is exactly the same reason why Bitcoin is still around and useful today even though it can only support 3 txs/s with tx fees as high as $20. The network still provides significant utility even though the Bitcoin community largely decided to support a stop to (level 1) scaling in favor of other things. We are currently seeing that the utility supply provided by public, permissionless blockchains with sufficient security generally fills up and the market reaches an equilibrium of what users are willing to pay for the utility.
The author continues their fallacy by directly contradicting themselves by arguing for apps on Ethereum to move over to Bitcoin. If apps become useless on Ethereum due to increased tx fees, then they would be useless on Bitcoin too if crowded out by other users who pay higher tx fees. Generally Ethereum tx fees have now settled around $0.05 to $0.2 for simple value transfers confirmed within 2 minutes, compared to $1.65 to confirm a BTC tx within 30 minutes. Moreover, apps would be less useful as one has to wait significantly longer to get a transaction confirmed on Bitcoin (yes, even if taking into account that Bitcoin block PoW solutions are more difficult due to the longer block times). Not to mention the inflexibility of the Bitcoin “script” language, but that’s a story for another time.
Also note that in Ethereum simple ETH transfers have constant fee complexity (protocol spec, Appendix G), unlike in Bitcoin where the fee is linear in the number of tx inputs/outputs — often an unpleasant surprise for users. This generally makes it harder for payment apps to estimate fees, and for users to understand them, in Bitcoin compared to Ethereum.
This also affects the “dust limit” — the point where it is no longer possible to move smaller amounts as they are below the required tx fee to move them. On Ethereum, it’s currently worthwhile to move ETH amounts as low as $0.05, whereas it’s quite a bit higher on Bitcoin.
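To make the two fee models concrete, here is a rough sketch. The 21,000 gas intrinsic cost of a simple ETH transfer is from the Yellow Paper (Appendix G); the gas price, fee rate and the legacy Bitcoin tx-size estimates (~148 bytes per input, ~34 per output) are illustrative assumptions, not exact figures:

```python
# Ethereum: a simple value transfer always costs 21,000 gas, regardless
# of the amount sent or how the sender's balance was accumulated.
GAS_PER_SIMPLE_TRANSFER = 21_000

def eth_transfer_fee_wei(gas_price_wei):
    # Constant fee: amount and history do not matter.
    return GAS_PER_SIMPLE_TRANSFER * gas_price_wei

def btc_transfer_fee_sat(n_inputs, n_outputs, sat_per_byte):
    # Rough legacy tx-size estimate: ~148 bytes per input, ~34 per
    # output, ~10 bytes overhead. Fee grows with the number of inputs.
    size = 148 * n_inputs + 34 * n_outputs + 10
    return size * sat_per_byte

print(eth_transfer_fee_wei(2 * 10**9))   # 42000000000000 wei, constant
print(btc_transfer_fee_sat(1, 2, 20))    # 4520 sat
print(btc_transfer_fee_sat(10, 2, 20))   # 31160 sat for the same value, if paid from many dust inputs
```

The last two lines show the dust problem: spending the same value from ten small inputs costs roughly 7x the fee, which is exactly why small Bitcoin amounts can become uneconomical to move.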
There is no such thing as apps crippling Ethereum due to high load — it simply pushes tx fees up to the point where people are willing to pay them, which generally overshoots (just like it does on Bitcoin) and then finds an equilibrium significantly lower than the peak.
“The [Bitcoin] blocksize doesn’t restrict transaction flow, it regulates the amount of broadcast-to-all data being sent over the network.”
This is false for any sane definition of “transaction flow”. An arbitrary limit on tx/s does restrict transaction flow, as more transactions cannot flow within a given time period… And if we’re including off-chain solutions such as the lightning network as an argument that L1 tx/s limits do not decrease flow, then we should include already-live solutions on Ethereum in such discussions too. Or recognize that the cost to set up e.g. payment channels increases as L1 fees go up…
“I am saying that this information needs to stop being obscured. I’m also saying that if/when it ever is unobscured, it’ll be too late and nothing can be done about it anyway.”
This information is not obscured. You can simply run a full node, query its parity_pendingTransactions JSON-RPC endpoint (https://wiki.parity.io/JSONRPC-parity-module.html#parity_pendingtransactions) on a regular basis, and publish the results on a website. Just because you haven’t found a website doing this does not mean the information is obscured.
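Querying that endpoint takes a few lines of stdlib Python. A minimal sketch, assuming a Parity node with its RPC server listening on the default localhost:8545 (the method name is from the linked wiki page):

```python
import json
import urllib.request

def rpc_request(method, params=None, rpc_id=1):
    """Build a JSON-RPC 2.0 request body."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": method,
        "params": params or [],
        "id": rpc_id,
    }).encode()

def pending_transactions(url="http://127.0.0.1:8545"):
    """Ask a local Parity node for its pending-transaction queue."""
    req = urllib.request.Request(
        url,
        data=rpc_request("parity_pendingTransactions"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["result"]

# Run on a cron schedule and publish len(pending_transactions()) — that's
# the whole "unobscuring" pipeline.
```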
Moreover, even if, for the sake of argument, we assume that this information is obscured, the argument that “it’ll be too late” when it is unobscured is at best a faulty generalization and at worst the post hoc fallacy.
“Keep in mind, none of this information [block propagation times and transaction times] is available for Ethereum”
This is false. Block propagation times can easily be measured by running a few geographically distributed full nodes, connecting them to each other, and measuring when they see and relay new blocks and transactions.
And if someone who falsely states that this kind of information is not available in a public, permissionless network is too lazy to spend a few hours learning how to deploy, use and even add debugging to Ethereum clients in order to gather it, they can always check propagation times for nodes connected to https://ethstats.net/
If the author feels there is a lack of online tools to easier view this information, they are welcome to contribute to the Ethereum open source community by building such tools.
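The core of such a measurement tool is tiny. A sketch, assuming each monitoring node logs the unix time it first sees each block (node names and timestamps below are made up for illustration):

```python
from collections import defaultdict

def propagation_delays(sightings):
    """sightings: iterable of (node, block_hash, unix_time) tuples.

    Propagation time for a block is the spread between the earliest
    and latest first-sighting across the monitoring nodes.
    """
    times = defaultdict(list)
    for _node, block_hash, t in sightings:
        times[block_hash].append(t)
    return {h: max(ts) - min(ts) for h, ts in times.items()}

delays = propagation_delays([
    ("eu", "0xaa", 100), ("us", "0xaa", 101), ("ap", "0xaa", 103),
])
print(delays)  # {'0xaa': 3}
```

Everything else — running the nodes, logging sightings, publishing a dashboard — is plumbing, not magic.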
[vague rant about using the blockchain the “right” way and hatin’ on CryptoKitties]
The author presumes there is a “right” way to use a public, permissionless blockchain. The beauty of blockchains such as Bitcoin and Ethereum is that users can use them for whatever they want as long as they can convince a miner to accept their tx.
For example, a lot of people actually _enjoy_ CryptoKitties, to the extent of bidding $140,000 worth of ETH for one cat at a recent auction. Moreover, I will here go as far as intentionally engaging in an ad hominem by conjecturing that StopAndDecrypt is probably a cat hater.
“The Bitcoin network has about 115,000 nodes, of which about 12,000 are listening-nodes.”
This appears to contradict several other sources on Bitcoin node counts, for example 9717 nodes as reported by https://bitnodes.earn.com/ , 9342 nodes reported by https://coin.dance/nodes and 8801 nodes reported by https://bitcoinchain.com/nodes
If all these sources are wrong, they would probably love to know exactly how nodes are counted by the site the author links.
Moreover, who has audited the scripts calculating these larger node count numbers? Do we have other measurements confirming these numbers, or should we simply trust a single developer’s efforts here? Is the script used open source?
“Again, there are 115,000 Bitcoin full-nodes that do everything.”
Has the script that was used to gather node counts verified that each node that _claims_ to be running a specific Bitcoin version _actually_ runs it, by challenging it or otherwise verifying that it correctly runs the full Bitcoin protocol? Or does it simply trust the response of https://en.bitcoin.it/wiki/Protocol_documentation#version ?
“That Ethereum node count? Guarantee you those are mostly Light-Nodes doing absolutely zero validation work (checking headers isn’t validation). Don’t agree with that? Prove me wrong. Show me data.”
How about the author provides some data supporting their speculative claims? “Guarantee you” implies an appeal to authority, and given the above false claims and misunderstandings, the author has in my mind lost enough credibility to be taken seriously on matters of (Ethereum) protocols and networks.
“When your node can’t stay in sync it downgrades to a light client.”
False. Even if a node is a number of blocks behind while syncing, it can still answer queries for past blocks and transactions and serve other nodes that are syncing. The author would do well to examine the concurrency and state handling of clients such as parity and go-ethereum to understand how nodes currently implement syncing and how they will work with new sharding proposals.
“How would you even know how many fully validating nodes there are in this set up? You can’t even tell now because the only sites tracking it count the light clients in the total. How would you ever know that the full-nodes centralized to let’s say, 10 datacenters? You’ll never know. You. Will. Never. Know.”
OK, so right now we are able to know, with full certainty, that there are 115,000 correctly verifying full Bitcoin nodes, but in this hypothetical future the author imagines, we are unable to know how many correctly verifying full nodes there are in the Ethereum network?
Clearly there is some network engineering design magic currently present in Bitcoin that this future Ethereum network could leverage. Given that both Bitcoin and Ethereum clients are open source, I expect this magic to soon be discovered by Ethereum developers and then merged in, enabling us all to know exactly how many full nodes are present at any given time.
Back To Reality
At this point it’s tempting to continue to point out false claims or the many logical fallacies present in the article (or how the ramblings degrade into outright propaganda).
What’s more useful is to recognize that yes, Ethereum, like Bitcoin, has scaling challenges as demand for the utility of the network is today greater than the supply. And to look at what concrete things we should be aware of and work on to fix this.
Major clients such as parity and go-ethereum are continuously deploying improvements that increase performance and make syncing faster, more stable and more secure. While the technology may look intimidating at first glance, it is surprisingly accessible, and you can quickly get up to speed with the help of a ton of developers available online who will be happy to answer questions and help you out if you want to contribute.
Network Load and Miners
Lately, Ethereum has been processing around 800K txs every 24h (9.2 tx/s) compared to Bitcoin’s ~210K txs every 24h (2.4 tx/s). Naturally, it requires more resources to run an Ethereum full node. It can strain older laptops in particular, and it definitely requires an SSD. However, it does not require a beefy server by any reasonable measure. In fact, any dedicated machine with a CPU from the last 6 years, 8 GB of RAM and a modern SSD can run an Ethereum full node just fine (or several full nodes, as on my pretty modest server). The bandwidth usage of tx and block relay is something to consider but is generally not a problem on well connected networks.
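For those who want to check the throughput figures above, the conversion is simple arithmetic over the 86,400 seconds in a day:

```python
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

def tx_per_second(daily_txs):
    """Convert a daily transaction count into an average tx/s rate."""
    return daily_txs / SECONDS_PER_DAY

print(round(tx_per_second(800_000), 2))  # 9.26 tx/s (Ethereum)
print(round(tx_per_second(210_000), 2))  # 2.43 tx/s (Bitcoin)
```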
Miners are aware of the current block size (gas) limit and actively take part in discussions with other parts of the community around what an ideal block size limit is at any given point in time. Miners are not stupid — they operate for-profit operations, the larger of which deal with millions of $ in hardware costs, maintenance and operations. They’re not going to suddenly vote for drastically higher limits unless there are significant advances in the underlying protocol and implementing clients. They’re also not going to suddenly lower the gas limit unless performance issues around syncing become severe.
Miners have historically acted both to lower and to increase the limit before and after DoS vulnerabilities were fixed. They have also explicitly acted to maintain the block size limit at specific levels after carefully analyzing network usage and syncing performance.
Overall, as clients have continuously improved performance since the launch of the network, miners have gradually increased the limit towards the current value of 8M (Ethereum launched with 3.14M). Generally, if syncing issues become significant enough to affect the ETH price, miners become incentivized to lower the limit to regulate the network.
Practical Remedy: Checkpoints (!)
As others have already discussed the various sync modes supported by Ethereum clients and their varying resource requirements, another thing worth talking about as an emergency remedy — if the Ethereum network does indeed grow so fast that it becomes hard for most full nodes to keep up — is checkpoints.
Someone like StopAndDecrypt probably panics at the very mention of something as unholy and sinful as blockchain checkpoints. How can a blockchain be decentralized if clients implementing the consensus protocol agree on a checkpoint block rather than syncing from the genesis block?!
As it turns out, checkpoints can be used while preserving the decentralization and security of the network. Clients can be updated (backwards-compatibly) every month to point to a checkpoint block 30 days in the past (determined from community consensus and confirmed by a large number of independent sources). This allows clients to quickly do a full sync from the checkpoint and then continue to do full validation of the entire Ethereum state, with far less storage required compared to a full archive node.
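The consensus rule a checkpoint adds is almost embarrassingly simple. A sketch of the idea — purely illustrative, not any client’s actual implementation; the height and hash below are hypothetical placeholders:

```python
# A hard-coded (height, hash) pair is shipped with the client release.
# Any candidate chain that disagrees with it is rejected outright,
# before total difficulty is even compared.
CHECKPOINT_HEIGHT = 5_000_000   # hypothetical ~30-day-old block
CHECKPOINT_HASH = "0xabc..."    # hypothetical community-agreed hash

def chain_passes_checkpoint(chain):
    """chain: dict mapping block height -> block hash."""
    # Chains that have not yet reached the checkpoint are trivially valid.
    if CHECKPOINT_HEIGHT not in chain:
        return True
    return chain[CHECKPOINT_HEIGHT] == CHECKPOINT_HASH
```

Full validation of every block after the checkpoint proceeds exactly as before; the checkpoint only forecloses reorgs deeper than 30 days, which (as argued below) nobody sane expects to survive anyway.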
In the Bitcoin community, there is a commonly perpetuated assumption that — since the canonical chain is defined as the one with the most PoW — it can always reorg at an arbitrary chain depth to another chain, and therefore clients should always do a full sync from the genesis block.
In practice, a reorg in either Bitcoin or Ethereum deeper than a few hours is extremely unlikely and would cause significant disruption for exchanges, businesses, apps and systems built on these networks. A reorg as long as a few days worth of blocks is practically unimaginable — the disruption would be absolutely massive and chaotic.
The probability of a chain reorg drops exponentially over time and in practice it’s extremely unlikely to see a reorg of more than a few blocks in either Bitcoin or Ethereum. We safely assume, and build entire businesses, apps and systems, on the reliance that after X confirmations a transaction is, for all practical purposes, finalized. After enough blocks, the only concern left is if there is any probability of a fork due to ongoing protocol upgrades or other types of forks.
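The “drops exponentially” claim can be made quantitative with the simple gambler’s-ruin bound from the Bitcoin whitepaper (ignoring the Poisson correction for the attacker’s head start): an attacker controlling fraction q of the hash power who is z blocks behind ever catches up with probability (q/p)^z, where p = 1 − q. A sketch:

```python
def catch_up_probability(q, z):
    """Probability an attacker with hash-power fraction q ever catches up
    from z blocks behind (simple gambler's-ruin bound)."""
    p = 1.0 - q
    if q >= p:
        return 1.0  # a majority attacker eventually wins
    return (q / p) ** z

# Even a 30% attacker is very unlikely to revert 6 blocks, and a
# 100-block (hours-deep) reorg is astronomically improbable:
print(catch_up_probability(0.30, 6))    # ~0.0062
print(catch_up_probability(0.30, 100))  # ~1e-37
```

Against a 30-day-deep checkpoint (~175,000 Ethereum blocks), the number is zero for every practical purpose, which is exactly the intuition the paragraph above relies on.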
It’s safe to assume that after a transaction has been confirmed in Bitcoin or Ethereum for a few days, there is no probability of it ever being reverted (except for the odd DAO, of course…).
It’s pretty trivial to quickly determine community consensus, at any given point in time, on what exact block is the “correct” block at a specific time stamp 30 days into the past. There are plenty of sources to check (other than running a full sync from the genesis block) and if all of them are currently about to be reorged by a hidden chain that a super-miner has been secretly mining for 30 days, well, then we’re all fucked anyway.
I’m not saying checkpoints are an ideal solution — there are certainly drawbacks to them, such as the manual work and coordination required to continuously update them — but it’s a very practical and simple solution that could quickly be released in Ethereum clients in case syncing issues really do become severe enough to impact network security.
Checkpoints would then provide much more time for long-term scalability efforts around sharding and off-chain solutions such as payment channels, probabilistic payments, state channels and dapp chains to be deployed and used by growing apps.
No one actually knows how many full nodes are required for a network to be “secure”. Despite eclipse and DoS attacks on both Bitcoin and Ethereum, we have yet to see a large-scale attack undermine an entire network through malicious full nodes. Until then, we cannot know if 1K, 5K, 10K or some other number is the minimum required to keep a network reasonably secure. We only know that more is better. This is similar to mining power — we cannot, by definition, know what amount is enough to keep a given network secure, because we cannot prove a negative. We only know of some cases where the mining power was not enough to provide sufficient security, giving us some information on lower bounds.
Generally, Ethereum continues to take a more aggressive approach to what the minimum hardware requirement should be to run a full node. This has the drawback of excluding raspberry pis, older laptops and slow Internet connections. The upside is a growing ecosystem of apps (yes, some are live), decentralized exchanges (also live) and ICOs (they are not all scams — some do enable funding for important new projects).
Ethereum is currently processing more than 4X the number of transactions in the Bitcoin network — powering a wide variety of use cases. That 4X has meaning — a lot of meaning. And the Ethereum community is generally in favor of the trade-offs involved in reaching and maintaining that 4X until more scaling technology is deployed.
And this demonstrates a core difference in philosophy. The fact is that few developers are building new apps/systems on top of Bitcoin. Most choose Ethereum or other new blockchains, not only for the flexibility provided by smart contracts but also for the community of developers supporting a vision of increased throughput and scalability.
It may take Ethereum more time than we like to deploy new scalability measures, and we should continuously be aware of how many nodes are in the network and how much validation they perform.
And it is true that more nodes running full validation increases security, and if this number becomes too small it can compromise the security of the network. But we’re not there today — and techniques such as warp-sync do not necessarily impact the security of the network as long as full nodes do full validation after their initial sync and there are enough archive nodes in the network. And it’s worth pointing out that header-only validation includes PoW verification, which is increasingly harder to manipulate due to the sheer mining power of the network.
For those who read this far — some other resources you may find interesting are: