At a pretty fundamental level, these are graphs that correlate with ease of participation. I can easily participate by buying bitcoin, but can't easily compete with the wealthiest bitcoin holders. I can pretty easily choose to mine, but can't easily compete with China's dominance. I can more easily make a few dev commits than the volume I could produce if it were my full-time job, and so on.
So I think that these measures of decentralization have a lot to do with (1) ease of participation and (2) desire/willingness to participate, to any given degree. What's crucial is that decentralization is built into system and subsystem design, even if everyday participation isn't perfectly G = 0.
Thus, a higher Gini coefficient or lower Nakamoto coefficient due to 'ease asymmetry' isn't inherently a bad thing, so long as it doesn't negatively affect network resilience.
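For concreteness, here's a minimal sketch (my own illustration, not from the original post) of how the two coefficients could be computed for a hypothetical distribution, say, shares of mining power across pools:

```python
def gini(shares):
    """Gini coefficient: 0 = perfectly equal, approaching 1 = fully concentrated."""
    xs = sorted(shares)
    n = len(xs)
    total = sum(xs)
    # Sorted-sum identity for the Gini coefficient.
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

def nakamoto(shares, threshold=0.51):
    """Minimum number of entities needed to control `threshold` of the total."""
    xs = sorted(shares, reverse=True)
    total = sum(xs)
    acc = 0.0
    for count, x in enumerate(xs, start=1):
        acc += x
        if acc / total >= threshold:
            return count
    return len(xs)

# Hypothetical mining-pool hashrate shares (percent) -- made up for illustration.
pools = [22, 18, 15, 12, 10, 8, 7, 5, 3]
print(gini(pools))      # ~0.3: moderately unequal
print(nakamoto(pools))  # 3: the top three pools together exceed 51%
```

Note the two measures answer different questions: Gini describes the overall shape of the distribution, while the Nakamoto coefficient asks only how few entities could collude to compromise the subsystem, which is why it maps more directly onto resilience.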
So a central (no pun intended) question is: what should the target levels be for sufficient resilience of each subsystem? Increased network resilience is certainly one of the main features of decentralization, but other factors like switching costs/adaptability are big variables too. If there are low switching costs (e.g. geth to Parity, as Vitalik noted), there might be a low Nakamoto coefficient but ultimately pretty high network resilience.
Would be great to explore this further through the lens of network resilience, since that's ultimately what we care about; quantifying decentralization is a first step toward quantifying subsystem and system resilience.
