Vitalik Buterin
Jun 8, 2018

I think this article is conflating a few very different issues. First of all, an entire section (the one comparing Bitcoin to “Old Ethereum”) is focused on governance, not scaling strategy. Bitcoin favors maximum predictability of the cost of reading the blockchain at the expense of minimal predictability of the cost of writing to it, with predictably very healthy results on the former metric and disastrous results on the latter. Ethereum, with its current governance model, favors medium predictability of both. The Ethereum gas-per-second limit has gone up by a factor of ~3.02 over the last three years, which roughly matches the standard growth rates of Moore’s law and Nielsen’s law (and hard drive capacity growth).
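For concreteness, here is a back-of-the-envelope sketch of that comparison (my assumptions, not from the article: Moore’s law taken as doubling every ~2 years, Nielsen’s law as ~50% bandwidth growth per year):

```python
# Back-of-the-envelope check of the growth-rate comparison above.
# Assumptions (mine): Moore's law = doubling every ~2 years,
# Nielsen's law = ~50% bandwidth growth per year.
years = 3

moore = 2 ** (years / 2)      # ~2.83x over 3 years
nielsen = 1.5 ** years        # ~3.38x over 3 years
observed = 3.02               # the gas-per-second growth cited above

print(f"Moore's law over {years}y:   {moore:.2f}x")
print(f"Nielsen's law over {years}y: {nielsen:.2f}x")
print(f"Observed gas/sec growth:     {observed:.2f}x")
# The observed ~3x lands between the two hardware trendlines.
```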

To me it is not at all obvious that predictable access to reading the blockchain is more important than predictable access to writing to the blockchain. Centralizing the former is dangerous because, well, I wrote an article about that myself, and centralizing the latter is dangerous because it creates an incentive for people to switch to centralized layer-2 services and centralized-custody wallets that can make optimizations like advanced gas estimation and transaction batching.

In any case, the Ethereum community clearly does not view the miner-adjustable gas limit as something worth worrying about, but if in the future it does, a fixed gas limit could be introduced with a simple one-line-of-code soft fork. I would actually advocate doing this eventually. If a one-line-of-code soft fork is all that stands in the way of Ethereum being “inherently decentralizing” rather than “inherently centralizing”, then that’s not exactly a strong moat by which to argue for the superiority or inherentness of Bitcoin’s decentralization.

> The important takeaway is Ethereum nodes don’t reject blocks no matter what the gas limit is.

Ethereum nodes reject blocks whose gas limit exceeds their parent’s gas limit * 1025/1024.

Note that something like half of the article fixates on the fact that Ethereum’s current implementation lacks that single line of code that says `assert block.gas_limit <= 8000000`, and if the Ethereum community wanted to, it could make half of the article irrelevant overnight by adopting a soft fork that adds that line of code. As I said, I think it is the right choice to eventually do this. But at this point, doesn’t the fact that basically no one in the Ethereum community is campaigning to introduce such a soft fork tell you something?
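For concreteness, here is a minimal sketch, not actual client code, of the existing adjustment rule together with that hypothetical one-line cap (`GAS_LIMIT_CAP` is my illustrative name):

```python
# A minimal sketch (not actual client code) of the two rules discussed:
# the existing per-block adjustment bound, plus the hypothetical
# one-line soft-fork cap. GAS_LIMIT_CAP is an illustrative name.
GAS_LIMIT_ADJUSTMENT_FACTOR = 1024
GAS_LIMIT_CAP = 8_000_000  # the hypothetical soft-fork addition

def validate_gas_limit(block_gas_limit: int, parent_gas_limit: int) -> None:
    # Existing rule: miners can move the gas limit by at most
    # parent_gas_limit / 1024 per block, in either direction.
    max_delta = parent_gas_limit // GAS_LIMIT_ADJUSTMENT_FACTOR
    assert abs(block_gas_limit - parent_gas_limit) <= max_delta
    # The soft fork under discussion is this single extra line:
    assert block_gas_limit <= GAS_LIMIT_CAP
```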

> In Bitcoin all nodes validate

> In Ethereum nodes are split into full & light versions, and only the full nodes validate

This is not even true. Bitcoin has light nodes too. The accurate version of this claim would be something like “37% of Bitcoin users run a full node, and 15% of Ethereum users run a full node, so the Ethereum community needs to be 2.5x bigger to support the same number of full nodes” (37 and 15 are examples; I have no idea what the actual numbers are, though I’ll grant that the percentage in Bitcoin is probably larger). I would be happy to get into a debate on the marginal value of having N+1 vs N full nodes in the network, though it would be good to agree on the meta-arguments of the debate first (I wonder if StopAndDecrypt would accept my “security through coordination problems” formulation from here https://vitalik.ca/general/2017/05/08/coordination_problems.html, or if he/she/ze would disagree with even this).
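As a trivial illustration of that arithmetic, using the placeholder percentages from above:

```python
# Placeholder percentages from the text above, not real measurements.
btc_full_node_share = 0.37
eth_full_node_share = 0.15

# Community-size ratio needed for the same absolute full-node count:
print(f"{btc_full_node_share / eth_full_node_share:.2f}x")  # ~2.47x, i.e. ~2.5x
```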

Now we get to sharding.

> An upcoming fundamental change to the network structure that shrinks this validating set even further, where you need 32 ETH just to be one

This conflates two types of “validation”. One type is “checking stuff for your own use”; the other is the word “validator” as used in proof of stake. 32 ETH is the expected minimum for being a PoS validator; PoW is even less accessible, requiring millions of dollars of capital to participate in effectively. You can validate content on a shard by participating in the shard as a node and fully checking everything on that shard whenever you want, and there is no mechanism, 32 ETH minimum or otherwise, that stops you.

> Staking requires 32 ETH ($16,000 as of now), so not only is the set of validators continuously decreasing, but those who have $16,000 at their disposal to stake don’t care about the data processing requirements. So this will only accelerate the data throughput growth

Is this a political argument? As in, “Ethereum governance will likely push for ever-larger block sizes because the nodes that actually run the network don’t care much about data processing requirements”? If so, I feel like PoW miners and mining pools are a 1000x worse influence.

> The “Ethereum with Proof of Stake” section

This section seems to double down on the error of conflating “node” as in “computer that is part of the P2P network and verifies stuff before propagating it” with “node” as in “PoS validator”. The 32 ETH minimum only affects the latter, and much harsher minimums exist in that regard for PoW. Regarding the former, anyone can join the p2p subnetwork associated with a shard and fully validate all blocks on that shard before passing them along to its peers.

> Light Nodes [Pink]: These are the nodes that you’ll be running

Once again, this is not true. Clients are welcome to run a node that fully validates any portion of the blockchain, and to refuse to forward blocks that are invalid.

> The issue is that the difficulty to do so grows over time, and the amount of nodes shrinks over time because of it. It’s inherently centralizing.

What if the max number of shards, and the max capacity of each shard, were both fixed, requiring a hard fork to change (plus norms encouraging being careful about doing so)? Would you say that sharding is inherently decentralizing then?
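A minimal sketch of what that variant could look like; the names and values here are illustrative, not from any spec:

```python
# A sketch of the "fixed unless hard-forked" variant proposed above.
SHARD_COUNT = 100                # max number of shards, fixed at the protocol level
PER_SHARD_GAS_LIMIT = 8_000_000  # max capacity of each shard, likewise fixed

def validate_shard_block(shard_id: int, gas_used: int) -> bool:
    # Changing either constant would require a hard fork (plus the social
    # norms around doing so), rather than a miner or validator vote.
    return 0 <= shard_id < SHARD_COUNT and gas_used <= PER_SHARD_GAS_LIMIT
```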

> No. Use a 2nd Layer.

2nd layers are centralizing too in their own ways: watchtowers, capital allocation costs, systemic risks from mass attacks, routing layers, privacy issues on routing layers, the fact that 51% miner attacks can now not only censor but also steal, etc etc. So I don’t think it’s at all fair to compare “New Ethereum” to “Old Bitcoin”; you have to compare “New Ethereum” to “Lightning Network”, which is the form of “Bitcoin” that users will actually experience.

> The directory size is analogous to the same exponential growth that’s occurring with the nodes processing requirements.

Even if it is, it’s still 20x smaller in an absolute sense, and that’s a large constant-factor difference that is very important to recognize.

So to summarize, I think this article is unfortunately weakened by two major conflations:

  1. Conflation of the sharding vs everyone-as-full-node architecture debate, and the miner-adjustable block size vs hard-fork-adjustable-only block size governance debate.
  2. Conflation of validators as in computers that validate in the sense of checking for themselves (and checking to filter what they forward to their peers), and validators as in nodes that actively participate in PoS. Anyone is free to “shadow” the PoS game, performing the exact same checks that a PoS node would make and forwarding only blocks that pass those checks, without the 32 ETH (see the sketch after this list).
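Here is a minimal sketch, my illustration rather than client code, of what such shadow validation amounts to; the helper checks are placeholders standing in for real client logic:

```python
# "Shadowing" the PoS validation game: run the same checks a staking
# validator runs, and relay only blocks that pass. No 32 ETH required.

def verify_proposer_signature(block: dict) -> bool:
    # Placeholder: a real client verifies the proposer's signature here.
    return block.get("signature_valid", False)

def verify_execution(block: dict) -> bool:
    # Placeholder: a real client re-executes the block's transactions and
    # compares the resulting state root against the one in the header.
    return block.get("state_root_valid", False)

def shadow_validate(block: dict) -> bool:
    # Exactly the checks a staking validator performs, minus the stake.
    return verify_proposer_signature(block) and verify_execution(block)

def on_new_block(block: dict, relay) -> None:
    # Forward only blocks that pass validation; silently drop the rest.
    if shadow_validate(block):
        relay(block)

# Example: an invalid block is never propagated.
on_new_block({"signature_valid": True, "state_root_valid": False},
             relay=lambda b: print("relayed", b))
```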

I also think it’s unfortunate that the article fails to mention the key difference between sharding and DPOS: in sharding, and NOT in DPOS, it’s possible for a node to verify any specific transaction that it is concerned about, whereas in DPOS this is not feasible, because there are no Merkle state trees and so verifying anything requires having verified absolutely everything before that point.
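To make the distinction concrete, here is a generic Merkle-branch check (plain SHA-256 over a binary tree, not Ethereum’s actual Keccak/Patricia-trie scheme) of the kind that a state root in the header makes possible:

```python
# With a state root in the block header, a node can check one account or
# transaction against a short Merkle branch instead of replaying history.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_merkle_branch(leaf: bytes, branch: list, index: int, root: bytes) -> bool:
    """Check that `leaf` sits at position `index` under `root`,
    using the sibling hashes in `branch` (ordered bottom-up)."""
    node = h(leaf)
    for sibling in branch:
        if index % 2 == 0:
            node = h(node + sibling)   # we are the left child
        else:
            node = h(sibling + node)   # we are the right child
        index //= 2
    return node == root
```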

Issues that I think would be productive to talk about include:

  • What are our utility functions for the number and distribution of economic validating nodes, both in absolute count and as a percentage of users? What is our reasoning that justifies this? As I said, https://vitalik.ca/general/2017/05/08/coordination_problems.html represents my congealed thoughts on the issue, and I wonder if StopAndDecrypt’s framework is fundamentally different.
  • Technical arguments about how effectively our “fraud proofs plus data availability proofs” approach (and in the long run, STARKs plus data availability proofs) achieves the same security benefits that StopAndDecrypt seeks by requiring everyone to verify everything directly
  • Economic centralization and pooling incentives in proof of stake
  • Possible attackability of sharded P2P networks
  • Limits to sharding, including DoS attacks on the fraud proof mechanism, growing implicit minimum network node counts, stability of data availability proofs, and possibility of data forgetting; ways to quantify those limits and determine what levels of sharding are well on the safe side of them.
