Vitalik Buterin
May 24, 2018


> Because of Ethereum’s exponentially growing blocksize, the bottleneck is not regulated

> At some point your node will fall out of sync because of this or a blocksize cap will be put in place.

This is *severely* uninformed. Ethereum already has a block size limit in the form of its gas limit, which has sat at 8 million for the last six months. Fast sync datadir growth has flatlined at 10 GB per month over that same period, and it’s not going to go much higher, if only because increasing the gas limit much further would lead to uncle-rate centralization issues. So we *already are* experiencing the worst of it, and have been for half a year.
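
For anyone who wants to check this rather than take my word for it, here is a minimal sketch using web3.py against a local node; the endpoint URL is an assumption about your setup, and older web3.py releases spell the call `getBlock` rather than `get_block`:

```python
# Minimal sketch: read the current gas limit, which acts as Ethereum's de facto
# block size cap. Assumes a local JSON-RPC endpoint at http://localhost:8545 and
# a recent web3.py (older releases use w3.eth.getBlock instead of w3.eth.get_block).
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))

block = w3.eth.get_block("latest")
print(f"block number: {block.number}")
print(f"gas limit:    {block.gasLimit}")  # ~8,000,000 at the time of writing
print(f"gas used:     {block.gasUsed}")   # can never exceed the gas limit
```

No block can use more gas than the limit, which is exactly what a block size cap means.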

Also, focusing on archive node size is highly fallacious because (i) you can get a much lower datadir size by resyncing even once per year, or by just running a Parity node, which prunes for you, and (ii) the archive datadir includes a bunch of extraneous data (technically, all historical states, plus Patricia tree nodes) that could be recalculated from the blockchain (under 50 GB) anyway, so you’re not even “throwing away history” in any significant information-theoretic sense by pruning it. And if you *are* ok with throwing away history, you can run Parity in state-only mode and the disk requirements drop to under 10 GB.
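
For a rough sense of how fast the raw chain data (as distinct from archive state) grows, here is a back-of-the-envelope sketch that samples the per-block `size` field over recent blocks and extrapolates to a month. It deliberately ignores state and client overhead, so it illustrates only the raw-chain portion of growth, not total datadir growth; the endpoint, sample size, and 15-second average block time are assumptions.

```python
# Back-of-the-envelope sketch: estimate monthly growth of raw block data from the
# per-block "size" field (bytes). Ignores state, indexes, and client overhead, so
# it illustrates raw-chain growth only, not total datadir growth.
# Assumptions: local JSON-RPC at http://localhost:8545, ~15 s average block time.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))

SAMPLE = 200                   # how many recent blocks to average over
AVG_BLOCK_TIME_S = 15          # rough mainnet average at the time of writing

latest = w3.eth.get_block("latest").number
sizes = [w3.eth.get_block(n).size for n in range(latest - SAMPLE + 1, latest + 1)]
avg_block_bytes = sum(sizes) / len(sizes)

blocks_per_month = 30 * 24 * 3600 / AVG_BLOCK_TIME_S
monthly_gb = avg_block_bytes * blocks_per_month / 1e9
print(f"average block size: {avg_block_bytes / 1024:.1f} KiB")
print(f"rough raw-chain growth: {monthly_gb:.1f} GB/month (block bodies only)")
```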

> Now with sharding it can downgrade to a “shard node”. None of this matters. You’re still losing a full-node every time one downgrades.

The whole point of sharding is that the network can theoretically survive with ZERO of what you call “full nodes”. And if there are five full nodes, those five full nodes don’t have any extra special power to decide consensus; they just verify more stuff and so will find their way to the correct chain more quickly, that’s all. Consensus-forming nodes need only be shard nodes.
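
To make the workload argument concrete, here is a toy sketch; it is not the actual sharding protocol, just an illustration under the assumption of one hypothetical validator per shard. Every block still gets verified by someone, even though no single node verifies everything:

```python
# Toy illustration of the workload argument behind sharding; this is NOT the
# Ethereum sharding spec, just a sketch showing that full coverage does not
# require any single node to verify every block.
NUM_SHARDS = 100
BLOCKS_PER_SHARD = 10

# One hypothetical validator per shard, each assigned a single shard to verify.
validators = {shard: f"validator-{shard}" for shard in range(NUM_SHARDS)}

verified_by = {}
for shard, validator in validators.items():
    for height in range(BLOCKS_PER_SHARD):
        verified_by[(shard, height)] = validator  # each block checked by its shard's validator

total_blocks = NUM_SHARDS * BLOCKS_PER_SHARD
print(f"blocks verified across the network: {len(verified_by)} / {total_blocks}")
print(f"blocks any single validator checked: {BLOCKS_PER_SHARD}")
# Every block is covered, but each node does 1/NUM_SHARDS of a "full node's" work.
```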

Finally, you’re using the term “BCash” wrong; it’s an implementation, not a blockchain/cryptocurrency.

So please learn about the tech before you criticize next time.
