> I think this article is conflating a few very different issues.
Vitalik Buterin

I think you know my argument is that it’s all tied together, and the conflation only exists if you try to separate these into different issues. Bitcoin’s onchain governance and network design ensure its ability to continue growing. You then go and conflate them yourself by saying Ethereum’s governance model has XYZ effect on scaling.

As for the reading/writing analogue, I much prefer to look at it as participating-in/requesting-for, or something similar. There’s no doubt Bitcoin focuses on the former more, but it certainly must account for the latter to some degree, or else it’s limited to being a store of value. Some even argue that’s all it needs to be, but Bitcoin doesn’t optimize solely for that; otherwise SegWit wouldn’t have been given a full 2MB of extra block space, nor would there be any focus on compression techniques that raise the processing requirements.

There’s no indication Moore’s or Nielsen’s Laws will continue, or that they aren’t S-curves. Some have argued they’ve already slowed down, but that’s moot, because there’s also no indication that the required processing demand won’t continue to go up until you decide to cap it. I also believe I do mention that the blocksize cap isn’t the only thing that defines the “inherent” property; it’s just the one I was highlighting. So no, half the article isn’t negated by that hypothetical, and my entire first article (aside from your issue with the title) gets a confirmation stamp on its prediction given that hypothetical.

Furthermore, if you agree and advocate it gets capped eventually, then this only backs up my claims that you’ll need to implement a cap…which brings about the Dapp argument again because you know they’ll get priced out.

Sharding doesn’t scale limitlessly either, because the shards are limited to nodes that have the funds to stake. There’s a pretty good argument to be made that these funds are centralized to begin with, and only become more centralized over time, since Ethereum does nothing to address its intrinsic wealth inequality, which is akin to the inflation of central banks combined with a high barrier to entry into the market of interest accumulation (staking).
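As a toy illustration of that compounding effect (all numbers hypothetical, ignoring issuance details and slashing), here’s a sketch of what happens to the stake distribution when only those above the entry threshold earn the staking return:

```python
# Toy model with made-up numbers: stakers compound a staking yield, non-stakers don't.
def staker_share_over_time(staker_eth, other_eth, yearly_yield, years):
    shares = []
    for _ in range(years):
        staker_eth *= (1 + yearly_yield)   # only stakers capture the new issuance
        shares.append(staker_eth / (staker_eth + other_eth))
    return shares

# Hypothetical: stakers start with 30% of supply and earn 5%/year.
shares = staker_share_over_time(30_000_000, 70_000_000, 0.05, 20)
print(f"{shares[0]:.0%} after 1 year, {shares[-1]:.0%} after 20 years")
# Roughly 31% after 1 year and 53% after 20 years, while everyone else is diluted.
```

The specific yield and starting split don’t matter much; the point is the direction of the drift.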

> The important takeaway is Ethereum nodes don’t reject blocks no matter what the gas limit is.
> Ethereum nodes reject blocks whose gas limit exceeds their parent’s gas limit * 1025/1024.

You highlight this but neglect that I already discussed it in an earlier section, where I explained exactly what you then went on to explain.

The point here was (and perhaps I failed to clarify this in editing) that Ethereum nodes don’t look at a block and say “that’s over X, nope, not valid.” There’s a process (which, again, I already explained earlier than this quote) that effectively lets those in control set the limit to whatever they like, given enough time to bring the average up. You can be semantic here if you still choose to be, but the point remains.
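To make what I mean by “a process” concrete, here’s a rough sketch of the gas limit rule as I understand it (simplified Python, not actual client code): validity is only ever judged relative to the parent block, so producers who keep nudging the limit upward can compound it without any fixed ceiling ever being checked.

```python
# Rough sketch of the per-block gas limit bound as I understand it; not real client code.
GAS_LIMIT_BOUND_DIVISOR = 1024   # each block may move the limit by < 1/1024 of the parent's
MIN_GAS_LIMIT = 5000

def gas_limit_is_valid(parent_gas_limit, gas_limit):
    """Nodes only check the new limit against the parent's, never against a fixed cap."""
    max_delta = parent_gas_limit // GAS_LIMIT_BOUND_DIVISOR
    return abs(gas_limit - parent_gas_limit) < max_delta and gas_limit >= MIN_GAS_LIMIT

def limit_after_n_blocks(start_limit, n_blocks):
    """If producers keep voting the limit up by the maximum step, it compounds ~1/1024 per block."""
    limit = start_limit
    for _ in range(n_blocks):
        limit += limit // GAS_LIMIT_BOUND_DIVISOR - 1   # largest upward step that still validates
    return limit

print(limit_after_n_blocks(8_000_000, 1024))
# ~21.7 million: the limit can roughly e-fold (~2.7x) every ~1024 blocks if producers keep pushing.
```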

> Bitcoin has light nodes too.

This misses the point though.

  1. Bitcoin has light-clients, and no one tries to give the impression that they have an effect on consensus enforcement/network security/privacy via distribution.
  2. Your light-nodes are effectively hamstrung validating nodes. The issue is that they’re then given a wheelchair and told they’re equal. I pointed this out. The rationale, again, is the “non-mining” node misdirection, which you didn’t address in your reply.
  3. I’m not talking about the percentage of users and what they choose to use. I’m talking about network nodes. I’m not that interested in debating N vs. N+1; to me the only things that matter are N and a user’s ability to be part of N, not whether they choose to or how many of them do.
> I wonder if StopAndDecrypt would accept my “security through coordination problems” formulation

I skimmed to get the point.

> would you really not update your client to accept the new chain?

No, and when the infrastructure is built up enough (and I’m sure we can both agree that bootstrapping exists as a premise) it will be physically impossible for parts of that infrastructure to upgrade. You could consider my position stronger than Greg’s in a way, because I envision a completely ossified protocol in the future, secured through infrastructure. Currently that takes the form of full-nodes, services that use full-nodes, wallets that would be disabled if a “new network” were created, and Lightning nodes.

Think of AT&T and Verizon working together, switching their towers to support 5G and disabling 3G and 4G in the process. It’s impossible for them.

> The second level is a “nearly fully verifying” light client. This kind of client doesn’t just try to follow the chain that the majority follows; rather, it also tries to follow only chains that follow all the rules.

I don’t think “strong light-clients” are “enough infrastructure” by any means to satisfy this analogy, nor do I believe you’ve figured out how to get fraud proofs working.

To be clear, we might be in line here on the desirable outcome, but not on how to achieve it. I don’t think “headaches”, or its “more seriously worded” analogue, will be enough.

> he/she/ze

He. And you forgot the other 100 or so pronouns.

:)

> This conflates two types of “validation”. One type is “checking stuff for your own use”, and the other type is the word “validator” as used in proof of stake.

This is where we fundamentally disagree.

I agree that there are two types, and they are the two that you describe, but what matters is how a node behaves on the network in tandem with those checks.

My Bitcoin node isn’t just “checking”. It’s refusing to send invalid information to its peers, and blocking peers that continuously try to send invalid information. I made this pretty clear in the section on Bitcoin’s network: they all validate and then propagate. Their collective power comes from them refusing to propagate invalid blocks, and it gets stronger as the set of them grows.
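Here’s a hypothetical sketch of that behavior (illustrative Python, not Bitcoin Core’s actual implementation; the class and thresholds are made up): validation gates propagation, and peers that keep relaying invalid data get dropped.

```python
# Hypothetical sketch of validate-then-propagate relay behavior; not Bitcoin Core's actual code.
class RelayNode:
    BAN_THRESHOLD = 100

    def __init__(self, peers):
        self.peers = list(peers)        # connected peers
        self.misbehavior = {}           # peer -> accumulated misbehavior score

    def on_block(self, block, from_peer):
        if not self.is_valid(block):
            # Invalid blocks are never forwarded, and the sender is penalized.
            self.misbehavior[from_peer] = self.misbehavior.get(from_peer, 0) + 100
            if self.misbehavior[from_peer] >= self.BAN_THRESHOLD:
                self.peers.remove(from_peer)    # disconnect/ban the misbehaving peer
            return
        # Only blocks this node has independently verified reach its other peers,
        # so every additional full node is another filter invalid data has to get past.
        for peer in self.peers:
            if peer is not from_peer:
                peer.send(block)

    def is_valid(self, block):
        # Stand-in for the full consensus checks: proof-of-work, size/weight limits,
        # script validity, no double spends, etc.
        raise NotImplementedError
```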

My issue with Ethereum + Sharding is also made clear, because even though right now Ethereum may have an uncapped range (with enough time) on its block sizes, all of its nodes still share some semblance of that same property, just very diminished because of that centralizing factor.

> Is this a political argument? As in, “Ethereum governance will likely push for ever-larger block sizes because the nodes that actually run the network don’t care much about data processing requirements”? If so, I feel like PoW miners and mining pools are a 1000x worse influence.

I don’t see this as a governance issue. I see this as a flaw. It should be expected that this will happen given enough time, and the system should be built to avoid it. You even agree it’ll need a cap. I’m glad we (the crypto community as a whole) can “finally” move the discussion towards that…but I won’t get into that here.

I disagree with your argument about miners as it’s being applied here, for the reasons above regarding validation. Miners aren’t the network, “mining nodes” don’t exist at the protocol level, and you should check out Matt Corallo’s new mining protocol proposal.

> This section seems to double down on the error of conflating “node” as in “computer that is part of the P2P network and verifies stuff before propagating it”, and node as in “PoS validator”. The 32 ETH minimum only affects the latter, and much harsher minimums in that regard exist for PoW.

It seems like you’re using “PoS Validator” in a way that to me means “Block Creator”, and then mirroring that over to Bitcoin’s network and calling “nodes that happen to mine” “PoW Validators”.

All nodes are equal in Bitcoin, and while they may also be equal in Ethereum with Proof of Stake, they aren’t with Sharding.

I even use verbiage that aligns with this in that section:

> Validators that stake

Implying validation is separate from staking, consistent with my earlier assertion that validating is separate from mining.

That section only served to make that clear, and to highlight the two differences, where I said that, much like in PoW Ethereum, the only differences are A & B.

Furthermore, I could buy miners for much less than 32 ETH and begin mining, and 32 ETH in this hypothetical future won’t always be as low as $16,000. On top of that, with Matt’s new mining protocol I’d have complete say over the blocks I create, no pool control.

> anyone can join the p2p subnetwork associated with a shard and fully validate all blocks on that shard before passing them along to its peers. It is also true that
> > Light Nodes [Pink]: These are the nodes that you’ll be running
> Once again, this is not true. Clients are welcome to run a node that fully validates any portion of the blockchain, and not forward those blocks that are invalid.

I feel like I’ve addressed this from multiple angles enough. Light nodes can check, but they have no say. They are literally a tier below the network of validators (my definition, not yours). When I say “these are the nodes you’ll be running”, it’s true because they won’t be staking 32 ETH.

> What if the max number of shards, and the max capacity of each shard, were both fixed, requiring a hard fork to change (plus norms encouraging being careful about doing so)? Would you say that sharding is inherently decentralizing then?

Probably not. I think there’s a valid grey area, and I prefer to stick in the white zone.

If those two things were true, barring the complications of keeping them true, you’d still have the issue where users outbid each other on fees, and Dapps dependent on low-fee transactions (most of them) become useless.
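As a toy illustration of that pricing-out effect (made-up numbers, a simple highest-fee-first ordering; the names and values here are hypothetical): once capacity is fixed, block producers fill blocks from the top of the fee distribution, and the low-fee Dapp transactions simply never make it in.

```python
# Toy fee market with made-up numbers: fixed capacity, highest fee-per-gas included first.
def select_transactions(pending, gas_limit):
    chosen, used = [], 0
    for tx in sorted(pending, key=lambda t: t["fee_per_gas"], reverse=True):
        if used + tx["gas"] <= gas_limit:
            chosen.append(tx)
            used += tx["gas"]
    return chosen

pending = (
    [{"from": "exchange_arb", "gas": 200_000, "fee_per_gas": 100} for _ in range(40)] +
    [{"from": "dapp_user", "gas": 200_000, "fee_per_gas": 2} for _ in range(40)]
)
included = select_transactions(pending, gas_limit=8_000_000)
# 8,000,000 / 200,000 = 40 slots, all taken by the high-fee transactions;
# every low-fee Dapp transaction waits indefinitely or has to outbid them.
```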

> 2nd layers are centralizing too in their own ways. Watchtowers, capital allocation costs, mass attack systemic risks, routing layers, privacy issues on routing layers, 51% miner attacks can now not only censor but also steal, etc etc. So I don’t think it’s at all fair to compare “New Ethereum” to “Old Bitcoin”, you have to compare “New Ethereum” to “Lightning Network”, which is the form of “Bitcoin” that users will actually experience.

Not necessarily, because Bitcoin still works as sound money without Lightning. With Lightning, much of the on-chain demand will migrate. Some argue we might even need to restrict the blocksize if it works too well.

That being said, I do agree the comparison should be adjusted, but the concerns over Lightning are a bit overplayed. Watchtower functionality can be built in so that all of your peers act as watchtowers, with an opt-out. Capital costs are at worst comparable to Proof of Stake, just on L2 instead, and at best they’re easy, behind the scenes, and a complete non-issue for the users who were going to opt for an SPV wallet anyway.

> 1. Conflation of the sharding vs everyone-as-full-node architecture debate, and the miner-adjustable block size vs hard-fork-adjustable-only block size governance debate.

What I’m actually doing is making a case for why Bitcoin is better on both fronts, and how they need to go hand in hand.

More importantly, I think the “everyone as a full node” ideal is directly hindered by the miner-adjustable blocksize, so the debates must be joined together.

> 2. Conflation of validators as in computers that validate in the sense of checking for themselves (and checking to filter what they forward to their peers), and validators as in nodes that actively participate in PoS. Anyone is free to “shadow” the PoS game, performing the exact same checks that the PoS node would make and forwarding only blocks that pass those checks, without the 32 ETH.

Hopefully I made it clear here how I’m not conflating them.

> I also think it’s unfortunate that the article fails to mention the key differences between sharding and DPOS, namely that in sharding, and NOT in DPOS, it’s possible for a node to verify any specific transaction that they are concerned about, whereas in DPOS this is not feasible because there are no Merkle state trees and so verifying anything requires having verified absolutely everything before that point.

No mention, because this article is mostly about the importance of fully-validating nodes, in accordance with my definition of them and how they work together, i.e. “not just checking”.