Measuring decentralization in DAOs: Lessons from nation-states

Lucia Korpas
The Metagovernance Project
Jul 8, 2022


Decentralization — the distribution of ownership and decision-making power away from any central entity or authority — has been a design goal of both blockchain infrastructure and the organizations built on top of it. However, even as decentralized autonomous organizations (DAOs) have proliferated, the extent to which they live up to the first part of their name remains an open question. To determine how decentralized a given DAO’s governance is, we need to be able to measure how (and how much) power is distributed. But what does it mean to measure something like power or governance?

To find out, we looked at 11 established indicators of nation-state governance, covering factors such as freedom, corruption, and democracy. There is a rich literature on quantifying governance for countries and their governments, and although only a small corner of the DAO space has positioned itself in direct analogy with nationhood, many factors considered in national governance also apply to DAOs patterned as alternatives to corporations, cooperatives, collectives, and other modes of organizing.

What factors are used in the indicators?

A wide range of specific factors (over 600, in the case of one indicator!) have been incorporated into measuring the quality of a country’s governance. Below are a handful of themes we observed across these, with an eye towards those generalizable beyond nation-states:

  • Separation of powers, especially of the judicial (dispute resolution) from the legislative (rule-making) and executive (decision-making and enforcement). The distribution of these responsibilities and authorities between distinct bodies, and the constraints they impose on one another, are often used to measure the quality of democracy specifically. Subsidiarity with local government is also considered as a positive contribution.
  • Involvement of the people being governed in their own governance. This includes, among many other things, voting, participating in town halls, or even just keeping up with political news. It refers both to the rights afforded to constituents and the rates at which people actually act on them. Two especially relevant factors: voting and elections, including participation rates (both for voting and for running for office!) and various metrics of competition between political parties, and flow of information as a prerequisite for informed involvement, both from the governing body to its constituents (for transparency) and vice versa (for accountability).
  • (Perceptions of) bureaucratic effectiveness and regulatory quality. For all that bureaucracy gets a bad rap, when it is functioning well, it provides services and enforces regulations without becoming burdensome. How well bureaucratic administration continues to function during transitions of executive power is also considered in evaluating governmental stability.

A couple of interesting patterns also emerged in the wording of the factors and their aggregation. One was the distinction between de jure and de facto governance practices: the letter of the law was considered separately from the actual, observed conditions for and activities of governance. Another was two different high-level design goals of the indicators: though some indicators explicitly classify nations on a spectrum from “democracy” to “autocracy”, others attempt to measure their properties irrespective of such a label or perhaps in comparison instead to anarchy (e.g., ability to provide “rule of law”).

How are the indicators computed?

The publicly available methodologies of the indicators considered here range from a handful of summary pages to over a hundred pages with detailed tables, describing up to hundreds of factors. Each factor is defined with specific wording and the set of values it can take. Factors are sometimes natively quantitative, but very frequently involve converting categorical or ordinal responses to numeric values. These values are then normalized where needed and aggregated, sometimes hierarchically, to generate the single indicator number.
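As a toy illustration of that normalize-then-aggregate pipeline (the factor names, scales, and weights below are invented for the example, not drawn from any real index):

```python
# Hypothetical sketch of how an indicator might aggregate factors.
# Factor names, rating scales, and weights are illustrative only.

def normalize(value, lo, hi):
    """Rescale a raw factor score from its [lo, hi] scale to [0, 1]."""
    return (value - lo) / (hi - lo)

# Ordinal expert ratings (say, on 1-7 scales) converted to numbers, then normalized.
factors = {
    "rule_of_law": normalize(5, 1, 7),
    "electoral_process": normalize(4, 1, 7),
    "press_freedom": normalize(6, 1, 7),
}

# Hierarchical aggregation: average factors into sub-indices,
# then combine sub-indices with weights into a single indicator value.
sub_indices = {
    "institutions": (factors["rule_of_law"] + factors["electoral_process"]) / 2,
    "civil_liberties": factors["press_freedom"],
}
weights = {"institutions": 0.6, "civil_liberties": 0.4}
indicator = sum(weights[k] * v for k, v in sub_indices.items())
print(round(indicator, 3))
```

Every choice here — the scales, the grouping into sub-indices, the weights — is a modeling decision that shapes the final number, which is part of why published methodologies run to a hundred pages.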

A measurable indicator is only as useful as the data that can be collected to compute it. Data sources range from expert assessments and public opinion surveys to more directly observable “hard” data such as voting outcomes. Where expert surveys are used, several individual responses are aggregated for each data point, though what exactly constitutes “expertise” is not necessarily clear. One indicator uses solely perceptions-based data sources such as household surveys and assessments by NGOs, while another includes just two percentage values (“participation” and “competition”) computed from election data. However, even those two values end up requiring careful consideration to be well-defined and interpretable across a broad range of electoral systems (and lack thereof).
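To make the two-value case concrete, here is a hedged sketch of what “participation” and “competition” style metrics might look like when computed from election returns. The figures are invented, and real definitions must handle edge cases this ignores (multi-round elections, uncontested seats, systems without parties):

```python
# Illustrative election-derived percentages; all numbers are made up.

votes_cast = 62_000
eligible_voters = 100_000
party_vote_shares = [0.45, 0.30, 0.15, 0.10]  # fraction of votes per party

# "Participation": turnout as a percentage of those eligible to vote.
participation = 100 * votes_cast / eligible_voters

# "Competition": the percentage of the vote NOT won by the largest party.
competition = 100 * (1 - max(party_vote_shares))

print(f"participation={participation:.1f}%, competition={competition:.1f}%")
```

Even these two simple percentages embed judgment calls — who counts as “eligible,” and whether vote share is the right proxy for competitiveness — which is exactly the definitional care the indicator authors describe.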

Despite these challenges, a wide variety of high-quality data sources are available for country-level data. Even so, some of the indicators ultimately publish results for fewer than 100 countries (or even fewer than 50) due to lack of available data. When values are missing for some years of a data series, many organizations use data imputation to fill the gaps. Some acknowledge sources of uncertainty and make the effort to compute margins of error for their indicators.
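As one simple example of what imputation can mean in practice, missing years in a score series can be filled by linear interpolation between the known values on either side. (This is just one of many imputation methods; the indices described here do not necessarily use this one, and the series below is invented.)

```python
# Illustrative gap-filling for a yearly data series via linear interpolation.

def interpolate_missing(series):
    """Fill None gaps between known values by linear interpolation.

    Leading/trailing gaps (no known value on one side) are left as None.
    """
    filled = list(series)
    known = [i for i, v in enumerate(filled) if v is not None]
    for a, b in zip(known, known[1:]):
        for i in range(a + 1, b):
            frac = (i - a) / (b - a)
            filled[i] = filled[a] + frac * (filled[b] - filled[a])
    return filled

yearly_scores = [0.50, None, None, 0.62, None, 0.70]
print(interpolate_missing(yearly_scores))
```

Interpolated values are smooth by construction, so a series imputed this way can understate real year-to-year volatility — one reason some publishers report margins of error alongside the point estimates.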

Finally, it is impossible to remove subjectivity from the process of selecting and defining the contributing factors. So who is compiling these indices? NGOs and independent research institutions, often. However, the Polity data series, developed using CIA funding, has been scrutinized for Americentrism, and others are provided by for-profit organizations with apparent goals such as providing insight into foreign investment risks. It’s worth thinking about why an organization would go to the trouble to compute such an indicator.

So: why come up with a single number?

In crypto, where “code is law”, numbers are often treated as ground truth. But it’s worth briefly stepping back and considering the point of attempting to distill a vastly complex phenomenon such as “governance” or “decentralization” into a single value. Using DAOs as an example:

  • Compare across entities. Potential contributors looking for a DAO to join could consider how well several DAOs match their values. Investors would have a new factor in their assessment of risk. Regulators concerned about centralized control could identify which entities to focus on.
  • Track change over time for a single entity. Internally, status monitoring could inform tweaks to governance; externally, it enables accountability to stakeholders, voting and non-voting alike.
  • Relate to other indicators. For all that decentralization often seems to be treated as a goal in and of itself, what use is it if it doesn’t improve a DAO’s resilience, performance, or other outcomes of interest? (Or at the very least, not detract from these?)

That said, it’s necessary to contextualize the indicator value with an unquantifiable understanding of the system behavior being measured. Responsibly using the number on its own requires considering the underlying model used to generate it, margins of error, completeness of data, and sources of bias. Box’s aphorism (“all models are wrong, but some are useful”) and Goodhart’s law (“when a measure becomes a target, it ceases to be a good measure”) very strongly apply here. Ultimately, any number (or even family of numbers) used to describe a complex phenomenon will be just a shallow, distorted projection of what is actually happening… but we hope the projection can point us in the right directions for where to look deeper.

Developing an indicator of DAO decentralization (esp. distribution of ownership/power)

We want to operationalize the concept of decentralization in DAOs, with a focus on the distribution of financial and decision-making ownership. If you can articulate which factors affect the distribution of power, you can identify which social or technological levers to push in a DAO to effect change in it. That’s easier said than done: for all that a DAO’s complete activity on a public blockchain is by design available to all, it can be difficult to interpret without off-chain context of participant identities, forum posts, and a higher-level understanding of the specific DAO’s goals. Organizations such as DeepDAO and Messari have gotten into the business of making sense of the raw on-chain and chain-adjacent data, while the Wharton Cryptogovernance Workshop, Crypto Rating Council, and Prime Rating have undertaken efforts to collect and aggregate their own survey data. Metagov (an interdisciplinary research collective and nonprofit) hopes to build on and synthesize some of these efforts in constructing measures of DAO decentralization.
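Two concentration measures often discussed for token-weighted governance are the Gini coefficient (inequality of holdings) and the Nakamoto coefficient (how few holders jointly control a majority). As a rough sketch — the balances below are invented, and a serious analysis would also need to account for delegation, exchange custody, and one holder controlling multiple wallets:

```python
# Sketch of two concentration measures over hypothetical governance-token
# balances. Real on-chain data requires substantial off-chain context.

def gini(balances):
    """Gini coefficient: 0 = perfectly equal, approaching 1 = one holder owns all."""
    xs = sorted(balances)
    n = len(xs)
    total = sum(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * total) - (n + 1) / n

def nakamoto(balances, threshold=0.5):
    """Smallest number of holders jointly controlling more than `threshold` of supply."""
    total = sum(balances)
    running = 0.0
    for k, x in enumerate(sorted(balances, reverse=True), start=1):
        running += x
        if running / total > threshold:
            return k
    return len(balances)

holdings = [400, 250, 150, 100, 50, 30, 15, 5]  # invented token balances
print(f"Gini: {gini(holdings):.3f}, Nakamoto: {nakamoto(holdings)}")
```

Note how the two numbers answer different questions: the Gini summarizes the whole distribution, while the Nakamoto coefficient captures the worst-case capture scenario — another reminder that no single value tells the whole story.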
