Navigating Data on Ethereum 2.0's Decentralization Transparency

This is the second post in a series diving into the concepts, data, and actionable insights of decentralization transparency on the Ethereum 2.0 Beacon Chain. Part 1 can be found here. Part 3 can be found here.

The state of ignorance in which commanders frequently find themselves as regards the real strength and position, not only of their foes, but also of their friends.

Colonel Lonsdale Augustus Hale¹

Routine uncertainties on the battlefield, and the methods by which military theorists have dealt with them, have spawned countless strategies, ranging from the extremes of Sun Tzu’s² preference for spy games to Jomini’s³ meticulous reduction of chaos to information battles. Applications of modern wargame strategies have often found their way into corporate and legal environments. John Boyd’s OODA loop⁴ (Observation, Orientation, Decision, Action), originally developed for air-to-air dogfighting, provides one such corollary from a rich military history of navigating fearful, uncertain, and doubtful waters.

Currently, navigating the state of Ethereum 2.0’s Beacon Chain decentralization presents an environment of uncertainty. This uncertainty around the network’s decentralization is a fog of war, an ambiguity impairing comprehensive action to safeguard the decentralization of the validator set.

Applying the OODA loop to our current situation, we get:

  • Observation: collect relevant data; label large validator service providers
  • Orientation: analyze the data to determine the current system state of decentralization
  • Decision: determine if the state of decentralization is sufficient or not
  • Action: stake accordingly to defend decentralization

Let’s focus on Orientation: transforming raw data into something we can easily understand. We have the popular, if trite, maxim: “don’t trust, verify”. Yet if most cannot verify, then what use is it?

This decentralization transparency must be made easy to verify and understandable to developers and users alike, especially since the literal Ethereum stakeholders directly impact decentralization through where and how they stake.

Providing actionable visibility in the form of clear analytics should offer a solution here. So, what types of data should we be clarifying? We can break them down into two categories:

Validator Ecosystem Clarification Data:

  • Identification / Labeling of validator service providers
  • Staked amounts per validator service provider
  • Staking returns per validator service provider
  • Nakamoto Coefficient Score
  • Herfindahl Index

Validator Quality Measurement Data:

  • Validator Performance

Validator Ecosystem Clarification Data is simply data that can start sharpening the blurry assumptions we are making about the state of decentralization in the validator set of Ethereum 2.0’s Beacon Chain. This is data that you can look at and immediately understand: going from the “raw” to the “revealed”. Taking notes from Nielsen’s interface design heuristics, the focus should be on immediate recognition rather than recall⁵. Interfaces that allow intuitive navigation present fewer barriers to interacting with data, and the same should be true of interpreting data on specific ideas like the decentralization of the system. The “verify” part of the maxim we love to use in “don’t trust, verify” is a burden that we carry, and one too easily left to others: a bystander effect as a tragedy of the open-source commons. Here are a few metrics that would be responsible to track under the lens of decentralization:

Identification / Labeling of Large Validator Service Providers

This is simply the act of labeling large validator service providers that manage significant amounts of ETH and that, in aggregate, encompass a majority of the network. These specifically do not include smaller or independent validator service providers, for good reason that I covered in my last post under the section “Decentralization. Transparency. Decentralization Transparency?”. Identification is the first step to understanding dominance. If needed, appropriate action should follow to push the ecosystem towards a more even distribution and a robust, decentralized validator set.

The Nansen dashboard provides one such attempt, tracking labels from Ethereum 1 deposit addresses, but it does not show the entire picture.

Snapshot of the Nansen dashboard, February 1, 2021.

Deeper dives here should also elucidate the types of validator service provider relationships under the lens of decentralization, as there are nuances to the relationships that large validator service providers have.

A good example is the recent Coinbase acquisition of Bison Trails⁶. We do not yet see on Nansen the total percentage of ETH staked through Bison Trails, a reputable and distinctly large validator service provider. Coinbase, which will be supporting Eth2 staking early this year⁷, will likely also command a significant percentage of the staked ETH.

  • What would the labeling look like here? Would the aggregate ETH currently staked through Bison Trails fall under Coinbase? Perhaps not. Bison Trails offers a non-custodial staking product, and Coinbase will likely offer a custodial one.
  • If Bison Trails controls a non-trivial amount of Eth2 stake but holds no keys itself, instead providing whitelabel services and distributing the system (across geographies, clouds, and clients), how do we interpret this? Would this stake under Bison Trails be a net positive because it would otherwise have taken longer to come online, or been deployed under high-risk setups? What nuance should be observed between the dimensions of stake control and stake distribution?

When measuring decentralization, however, nuances such as this example lead to important questions that should be addressed with clear data.

Clear Staked Amounts per Large Validator Service Provider

Metrics on the amounts staked per large validator service provider should follow naturally. Knowing how much stake each provider controls will ultimately provide more accurate data and visualizations on the current status of decentralization on the Ethereum 2.0 Beacon Chain.
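To make this concrete, here is a minimal sketch in Python of the aggregation step. The address-to-label mapping is hypothetical, since curating and verifying that mapping is exactly the identification problem described above:

```python
from collections import defaultdict

# Hypothetical labels for Eth1 deposit addresses; building and verifying
# this mapping is the hard identification/labeling problem itself.
DEPOSIT_LABELS = {
    "0xaaa...": "Provider A",
    "0xbbb...": "Provider B",
}

def stake_by_provider(deposits):
    """Aggregate staked ETH per labeled provider.

    `deposits` is an iterable of (deposit_address, amount_eth) pairs,
    e.g. parsed from Eth1 deposit-contract events.
    """
    totals = defaultdict(float)
    for address, amount_eth in deposits:
        totals[DEPOSIT_LABELS.get(address, "Unlabeled")] += amount_eth
    return dict(totals)
```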

Nakamoto Coefficient Score

Sometimes a single score that quantitatively encapsulates the extent of a system’s decentralization can be helpful as a check-in metric. Balaji Srinivasan and Leland Lee do an incredible job of breaking down the concept of a “Nakamoto Coefficient”⁸ score, motivated by the likes of the Gini Coefficient (a measurement of income distribution)⁹ and the Lorenz Curve (another measurement of wealth distribution)¹⁰. They expressly detail how the Nakamoto Coefficient can measure decentralization on a blockchain: identify the participating entities, then take the minimum number of entities that would need to be compromised to control the system as the score. A single score, with higher values indicating more decentralization.
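As a minimal sketch, assuming we already have trustworthy per-provider stake totals (for example, from the aggregation above), the score reduces to a few lines. The 1/3 default threshold is my assumption, since controlling more than a third of the stake is enough to prevent finality on the Beacon Chain; Srinivasan and Lee discuss choosing thresholds per subsystem:

```python
def nakamoto_coefficient(stakes, threshold=1/3):
    """Minimum number of entities whose combined stake exceeds
    `threshold` of the total; higher values mean more decentralization.
    """
    total = sum(stakes)
    cumulative = 0.0
    # Count the largest stakeholders first, since an attacker would
    # need to compromise the fewest entities that way.
    for count, stake in enumerate(sorted(stakes, reverse=True), start=1):
        cumulative += stake
        if cumulative > threshold * total:
            return count
    return len(stakes)
```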

Herfindahl Index

Another valuable metric to reference for decentralization is the Herfindahl-Hirschman Index¹¹. It is an interesting metric that measures market concentration in relation to the sizes of participating entities (in this case, large validator service providers). Rather than screening for potential monopolies, here it would measure the size and power of each large validator service provider’s controlled/delegated stake on the Ethereum 2.0 Beacon Chain. The issue, again, hinges on the accurate identification and labeling of these large validator service providers.
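Under the same assumption of clean per-provider totals, a sketch of the index, using the antitrust convention of percentage shares so that values range from near 0 (fully dispersed) up to 10,000 (one entity controls everything):

```python
def herfindahl_hirschman_index(stakes):
    """Sum of squared percentage market shares across providers."""
    total = sum(stakes)
    return sum((100 * stake / total) ** 2 for stake in stakes)
```

For example, four providers holding 50%, 25%, 12.5%, and 12.5% of the stake would score 3,437.5, well above the 2,500 mark that antitrust guidelines treat as highly concentrated.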

Vitalik has also been providing Nakamoto Coefficient and Herfindahl-Hirschman Index values as quick litmus tests of decentralization on the Beacon Chain.

Validator Quality Measurement Data is an important quantitative measure of how well these validator service providers are running. Under the same umbrella of large validator service providers, the burden of quality, robustness, and performance is absolutely important. Clear quality metrics here should inform the ecosystem and potential delegators and ultimately serve as actionable insight. Large validator service providers must bear the burden of quality and robustness, and establishing clear measurement standards will help with this goal. These types of metrics are well established, but it is important to restate them here as a reminder.

Validator Performance

I will be heavily referencing a report on validator effectiveness scoring from Elias Simos of the Bison Trails team. He gives a breakdown¹² of the important metrics to focus on and an example of how to measure these types of data items.

Validator effectiveness can be broken down into proposer and attestation effectiveness, as these are the only two roles that validators are rewarded for in Phase 0. Performance measurement here indicates the rewards potential for delegators and should thereby guide delegations toward high-quality validator service providers. At the same time, separating performance drivers from actual rewards earned helps control for some of the randomness that drives the bottom line of rewards.

Here he breaks down performance between proposer effectiveness and attestation effectiveness:

Proposer Effectiveness

He identifies two key details for scoring validator proposer effectiveness:

  • how often a proposer actually proposes a block when they are called upon to do so
  • how many valuable attestations they manage to include in the blocks they propose

“From the two, what a proposer can more effectively control is how many blocks they actually propose from the slots they are allocated by the protocol — a function of their overall uptime”.

More specific data can be found in his post on “Eth2 Insights into Validator Effectiveness”.

Attester Effectiveness

Continuing on the theme of uptime and performance, Elias also breaks down the importance of swift attestation participation. He describes the quality metrics that should be tracked for measuring performance as:

  • How often a validator participates in consensus
  • How quickly a validator participates in attestation during their turn

Under these categories, attention should be paid to both:

  • Aggregate inclusion delay/lag
  • Uptime

He describes the above as:

  • aggregate inclusion delay — measured as the average of the inclusion delays a validator has been subject to over time. A score of 1 means that the validator always got their attestations in without a delay, and maximized their rewards potential.
  • uptime ratio — measured as the number of valuable attestations vs the time a validator has been active (in epochs). A ratio of 1 implies that the validator has been responsive in every round they have been called to attest in.

And his scoring metric:

A_effectiveness = uptime ratio * 100 / aggregate inclusion delay

He combines both proposer effectiveness and attestation effectiveness scores into one “master” score.

V_effectiveness = A_effectiveness * ⅞ + P_effectiveness * ⅛

Where V_effectiveness reflects the ecosystem-wide expectation of the distribution of ETH rewards that validators stand to achieve by performing their attester and proposer duties.
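Putting these definitions together, here is a minimal sketch of the scoring. The attester score follows his formula directly; the proposer score here is my assumption of a 0 to 100 ratio of blocks proposed to slots allocated, so that the ⅞ and ⅛ weights combine like quantities (see his post for the exact definition):

```python
def attester_effectiveness(uptime_ratio, aggregate_inclusion_delay):
    """Uptime ratio (0-1) scaled by 100 and penalized by the average
    inclusion delay (>= 1); a fully responsive, never-delayed validator
    scores 100."""
    return uptime_ratio * 100 / aggregate_inclusion_delay

def proposer_effectiveness(blocks_proposed, slots_allocated):
    """Assumed here: share of allocated slots actually filled with a
    proposed block, scaled to 0-100 to match the attester score."""
    if slots_allocated == 0:
        return 0.0
    return 100 * blocks_proposed / slots_allocated

def validator_effectiveness(a_eff, p_eff):
    """The combined 'master' score, weighting attestations 7/8 and
    proposals 1/8 to mirror the split of expected rewards."""
    return a_eff * 7 / 8 + p_eff * 1 / 8
```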

Please refer to his writeup for more detail here: https://bisontrails.co/eth2-insights-validator-effectiveness/

These are just some suggestions that can elucidate decentralization on the Ethereum 2.0 Beacon Chain under the umbrellas of ecosystem clarification and quality measurement. Both aim to provide actionable insights for the broader community, given the conclusions that can be drawn from them. They specifically rely on making the data clearly understandable, and therefore actionable, by the large service providers themselves and the ecosystem of future delegators. It is certainly exciting to see that there are now three distinct initiatives, from both private and public/community interests, working toward building dashboards to display appropriate decentralization transparency metrics!

In the final post of this series I will detail the explicit Ethereum 2.0 Beacon Chain decentralization transparency dashboard initiatives being undertaken by both private and public/community participants. I will also dive into the methods by which we can identify and label quality, accurate source data to power these metrics. Stay tuned!

About stakefish

stakefish is the leading validator for Proof of Stake blockchains. With support for 10+ networks, our mission is to secure and contribute to this exciting new ecosystem while enabling our users to stake with confidence. Because our nodes and our team are globally distributed, we are able to maintain 24-hour coverage.

Website: https://stake.fish

Telegram: https://t.me/stakefish

Twitter: https://twitter.com/stakefish

Instagram: https://www.instagram.com/stakedotfish

LinkedIn: https://www.linkedin.com/company/stakefish/

Reddit: https://www.reddit.com/r/stakefish

Sources

[1] Fog of War; Carl von Clausewitz http://self.gutenberg.org/articles/Fog_of_war

[2] The Art of War; Sun Tzu https://www.gutenberg.org/files/132/132-h/132-h.htm

[3] The Art of War; Baron Henri de Jomini https://www.gutenberg.org/files/13549/13549-h/13549-h.htm

[4] OODA loop; https://en.wikipedia.org/wiki/OODA_loop

[5] https://www.nngroup.com/articles/ten-usability-heuristics/

[6] https://blog.coinbase.com/coinbase-to-acquire-leading-blockchain-infrastructure-platform-bison-trails-f879654421d6

[7] https://blog.coinbase.com/ethereum-2-0-staking-rewards-are-coming-soon-to-coinbase-a25d8ac622d5

[8] https://news.earn.com/quantifying-decentralization-e39db233c28e

[9] https://en.wikipedia.org/wiki/Gini_coefficient; primary source: https://sci-hub.se/10.1111/joes.12185

[10] https://en.wikipedia.org/wiki/Lorenz_curve; primary source: https://sci-hub.se/10.2307/2276207

[11] https://en.wikipedia.org/wiki/Herfindahl%E2%80%93Hirschman_Index

[12] https://bisontrails.co/eth2-insights-validator-effectiveness/
