Engineering for Legitimacy

By Ilan Ben-Meir and Michael Zargham

19 min read · Mar 19, 2024


Image created by Ilan Ben-Meir using Midjourney 6.

As an engineering firm specializing in the design and analysis of digital public infrastructures and the institutions that care for them, we believe that the most reliable and effective way to approach such work is systematically, by first deriving a set of requirements, and then designing or evaluating implementations of the desired function in terms of those requirements. Requirements vary in nature from the highly contextualized preferences of a stakeholder group to more general domain-specific best practices.

We begin by surveying the organizational landscape, and mapping three types of context — the organization’s people, purpose, and environment — as described in our essay “What Constitutes a Constitution?” (Zargham, Alston, et al., 2023). This frame of reference provides the basis for a comprehensive stakeholder analysis, which charts stakeholders’ interests, the actions available to them individually, and the dynamics governing interactions amongst them, and functional decomposition, which breaks down the organization’s operations into a set of interdependent component functions, all of which are required to realize its purpose within its environment. (For a detailed description of our approach to functional decomposition, see “Method for Functional Decomposition of Organizations and their Environments” [Zargham and Ben-Meir, 2023].)

Once a stakeholder analysis and functional decomposition are complete, we have collected enough information to gauge the native complexity of the governance design challenge. BlockScience’s heuristic for estimating requisite complexity accounts for an organization’s Magnitude, Diversity, and Variety; roughly speaking, Magnitude is the sheer number of people affected, Diversity is the breadth of stakeholder interests, and Variety is the breadth of activities that must be coordinated to pursue the organization’s purpose. Together, these heuristics yield an estimate of the minimal degree of institutional complexity required to govern the organization.
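The heuristic is not formalized here, but a toy sketch may help fix the intuition that the lower bound grows with all three factors. The function name, the logarithmic scaling of Magnitude, and the multiplicative form are all our own illustrative assumptions, not BlockScience’s actual estimator:

```python
import math

def requisite_complexity(magnitude: int, diversity: int, variety: int) -> float:
    """Toy estimate of the *lower bound* on institutional complexity.

    magnitude: number of people affected
    diversity: number of distinct stakeholder interest groups
    variety:   number of distinct activities that must be coordinated

    Log-scaling magnitude reflects the intuition that coordination cost
    grows sublinearly in headcount, while each additional interest group
    and activity multiplies the interactions that must be governed. The
    functional form is purely illustrative.
    """
    return math.log2(1 + magnitude) * diversity * variety

# A small working group vs. a large multi-stakeholder organization:
small = requisite_complexity(magnitude=10, diversity=2, variety=3)
large = requisite_complexity(magnitude=10_000, diversity=8, variety=12)
assert large > small  # more people, interests, and activities demand more institutional complexity
```

Whatever the exact functional form, the point of the heuristic is ordinal rather than cardinal: it ranks design challenges by the institutional complexity they demand, rather than assigning them a meaningful absolute score.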

At this point in the process, however, we have only identified the lower limit on institutional complexity — the point at which further simplification can only lead to loss of important functionality or adaptive capacity. Complexity is also constrained from above: in participatory systems, the upper limit on institutional complexity is imposed by the need to achieve and maintain legitimacy. As an institution’s complexity increases, so do its inefficiency and illegibility — until a tipping point is reached, past which that institution can no longer be seen as legitimate by those who interact with it.

Legitimacy as Perception

When it comes to participatory governance, there is little if any daylight between “legitimacy” and “perceived legitimacy.” Irrespective of its other attributes, “legitimate governance,” at minimum, consists of decision-making processes that are considered sufficiently acceptable by the set of stakeholders whose acceptance is required such that those processes are allowed to remain in effect. Stronger conceptions of legitimacy, such as “an accurate reflection of the preferences of the governed,” are desirable — but functionally, the set of “legitimate governance decision-making processes” for a given unit of political organization contains only, but also all, of the governance decision-making processes which that polity perceives as legitimate enough to accept, rather than revolt against.

In her ethnographic chronicle of SourceCred, a Decentralized Autonomous Organization built around an open-source tool for measuring and rewarding value creation, Ellie Rennie uses the term “terraforming” to describe the means by which a community collectively alters their shared landscape (Rennie, 2023). By many theoretical definitions, the governance of SourceCred would be considered “obviously legitimate,” insofar as the stakeholders affected by decisions were all meaningfully enfranchised. Despite terraforming their digital landscape through participatory governance, the SourceCred community nonetheless fractured due to disagreements concerning the legitimacy of various decision processes (both human and automated) that they had developed.

The concept of legitimacy has only recently caught the attention of thought leaders in the blockchain development community. A 2021 blog post by Vitalik Buterin declared that “The Most Important Scarce Resource is Legitimacy” (Buterin, 2021) and the ERC BlockchainGov reading group’s 2022 “Report on blockchain technology & legitimacy” explored a range of opinions on the subject (De Filippi, Mannan, et al., 2022) — but both discussions privilege an unanswerable question (“What does legitimacy mean?”) over a more urgent and practical query (for those seeking to build institutions, at least): “How is legitimacy cultivated?” In other words, given that legitimacy is not itself a controllable attribute, what controllable attributes most directly affect whether or not an institution is viewed as legitimate? What concrete requirements can be set — and what trade-offs need to be negotiated — when aiming to create and maintain legitimacy? In short, how does one go about engineering for legitimacy?

To begin answering this question, we turned to the work of Harvard Professor Arthur Isak Applbaum, whose scholarship has inquired deeply into the relationship between legitimacy and governance. Although legitimacy is ultimately a sensed phenomenon, rather than a measurable one, Applbaum’s recent work points the way to a set of heuristics that can be used to engineer for legitimacy, helping decision-makers identify the points along three distinct trade-off curves at which the chances of their choices being perceived — and continuing to be perceived — as legitimate are likely to be maximized.

The Components of Legitimacy

Optimizing for legitimacy in the design of a governance apparatus is more complicated than it may seem, because perspectives regarding what constitutes legitimacy vary widely, and the phenomenon itself is more something that can be subjectively sensed than something that can be objectively measured. In Legitimacy: The Right to Rule in a Wanton World (2019), Applbaum provides an inventory of just some of the questions surrounding the ways that legitimacy can be defined:

Is the right to rule a claim-right that entails a moral duty to obey or a mere liberty? Is the right of a legitimate ruler conclusory or merely presumptive? Does legitimacy come in degrees or is it a binary property? Is legitimacy a minimal threshold that binds or an aspirational ideal that guides? Are the criteria for legitimacy necessarily tied to notions of pedigree or procedure? Are the criteria for legitimacy necessarily tied to the beliefs or the wills of those subject to legitimate rule, so that only rulers who are believed to be legitimate, or who have the consent of those ruled, can be legitimate? How is legitimacy connected to other normative ideas such as justification, legality, and justice?

In one context, legitimacy may depend on whether or not governance actions closely follow a prescribed set of formal rules; in another, legitimacy may be primarily a function of who was (and was not) given the opportunity to be involved in a decision-making process. There is thus no universal recipe for legitimacy — but while the proportions among legitimacy’s ingredients can be highly variable, the ingredients themselves generally are not.

Applbaum organizes his exploration of legitimacy’s component elements around what he calls “the most plausible normative conception of legitimacy, the free group agency account,” which provides that “A legitimately governs B only if A’s governance of B realizes and protects B’s freedom over time, and this is the case only when A is a free group agent that counts a free B as a constituent member of that group agent.” This account of legitimacy, he explains, can be broken down into three basic components:

Three principles guide three different dimensions of public governance, which protect against three distinct threats to free agency and therefore legitimacy. What to decide is subject to a liberty principle, under which all citizens are entitled to the protection of basic rights and freedoms. When the liberty principle is seriously violated, government sinks to a tyranny by practice, and we are dominated by inhumanity. Who decides is subject to an equality principle, under which each citizen is to have equal say in selecting who bears decision-making powers. When the equality principle is seriously violated, government sinks to a tyranny without title, and we are dominated by despotism. How to decide is subject to an agency principle, under which decision-making powers are to be exercised by decision-makers who constitute a self-governing and independent group agent that counts all citizens as self-governing and independent members. When the agency principle is seriously violated, government sinks to a tyranny of unreason, and we are dominated by wantonism.

Applbaum’s analysis suggests that the legitimacy of any governance apparatus can be analyzed along three component axes — Liberty, Equality, and Agency — each of which marks a gradient along which “legitimate governance” fades and “tyranny” begins.

Considering the “principles” at the heart of each of Applbaum’s axes in turn makes it clear, however, that while such a framework is broadly useful, it is also too reductive. The concepts around which his axes are structured are themselves contested sites of meaning, and can be further decomposed around the heuristics of their contestation. As such, decomposing legitimacy into trade-off curves rather than axes makes it possible to more fully and accurately model the components of legitimate governance.

The Liberty Curve

In Applbaum’s schema, the first component dimension of legitimacy relates to the concept of “liberty”: “What to decide is subject to a liberty principle, under which all citizens are entitled to the protection of basic rights and freedoms.”

Applbaum’s concept of “liberty” seems to function by setting certain “basic rights and freedoms” beyond the reach of governance authority, excluding or exempting them from susceptibility to governance; thus, “when the liberty principle is seriously violated” by overreaching governance, “government sinks into a tyranny by practice, and we are dominated by inhumanity.” For Applbaum, in other words, liberty appears to involve drawing borders around regions of citizens’ lives that governance cannot cross, thus creating zones of autonomy within which one has the freedom to act as one chooses.

The issue with conceptualizing liberty in this way is that it presents as monolithic a concept that is more accurately understood as multifaceted. As the Stanford Encyclopedia of Philosophy explains, “[t]he idea of distinguishing between a negative and a positive sense of the term ‘liberty’ goes back at least to Kant, and was examined and defended in depth by Isaiah Berlin in the 1950s and ’60s,” and defines the distinction as follows: “Negative liberty is the absence of obstacles, barriers or constraints. One has negative liberty to the extent that actions are available to one in this negative sense. Positive liberty is the possibility of acting — or the fact of acting — in such a way as to take control of one’s life and realize one’s fundamental purposes” (Carter 2022). The entry goes on to note that “[a]s Berlin showed, negative and positive liberty are not merely two distinct kinds of liberty; they can be seen as rival, incompatible interpretations of a single political ideal. Since few people claim to be against liberty, the way this term is interpreted and defined can have important political implications.”

It is worth lingering for a moment on the language of the assertion that the two “liberties” identified by Berlin “are not merely two distinct kinds of liberty” but rather can be understood “as rival, incompatible interpretations of a single political ideal.” In other words, the single concept “liberty” always encompasses a tension between (at least) these two definitions; therefore, “liberty” cannot be adequately evaluated against an axis running from “high liberty” to “low liberty.” To do so requires, instead, a curve parameterizing the trade-off between the concept’s distinct conceptions. As we shall demonstrate, the same is true of “Agency” and “Equality” — all three are examples of what W.B. Gallie calls “essentially contested concepts” (1956), and therefore are best understood as trade-off curves interpolating amongst at least two distinct conceptions of the same concept. These three curves provide ways of specifying what “equality,” “agency,” and “liberty” mean in a particular context, rather than mechanisms for measuring “how much equality, agency, or liberty” is present in that context.

To illustrate this phenomenon, let us return to “liberty.” Rather than litigate a univocal definition of the essentially-contested concept “liberty,” and then attempt to measure how well an organization’s governance accords with that concept, the Liberty Curve provides a tool for thinking about how the organization in question negotiates the tension and trade-offs between positive liberty and negative liberty in its own definition (and implementations) of the overarching concept.

Figure 1: The Liberty Curve

As the graph above makes clear, there is not, in fact, a linear anti-correlation between the two conceptions; although there does exist a relationship between the relative weights that organizations assign to these two dimensions of liberty in their particular understanding of the idea, the contour of this relationship cannot adequately be described with a single straight line.
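One way to picture such a non-linear frontier is as a parametric curve whose endpoints are the two conceptions of liberty. The parameterization below, including the exponent that bows the frontier outward, is our own toy assumption, not a reconstruction of Figure 1:

```python
import math

def liberty_point(t: float, k: float = 0.5) -> tuple[float, float]:
    """A point on a toy Liberty Curve.

    t in [0, 1] sweeps from a purely negative conception of liberty
    (t = 0) to a purely positive one (t = 1). The exponent k < 1 bows
    the frontier outward, so the trade-off is not a straight line:
    near either extreme, a small concession on one conception buys a
    comparatively large gain on the other. Both the trigonometric
    parameterization and the default k are illustrative assumptions.
    """
    negative = math.cos(t * math.pi / 2) ** k
    positive = math.sin(t * math.pi / 2) ** k
    return negative, positive

mid_neg, mid_pos = liberty_point(0.5)
assert abs(mid_neg - mid_pos) < 1e-9  # a balanced conception weights both equally
assert mid_neg + mid_pos > 1.0        # the frontier bows outside the straight line
```

The only feature of this sketch that matters for the argument is the outward bow: an organization is not forced to trade conceptions of liberty one-for-one, but it cannot maximize both at once.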

A negative conception of liberty understands liberty to exist anywhere that it is not explicitly and formally constrained. Organizations with a predominantly negative conception of liberty therefore generally tend toward “governance minimization” (Zargham & Nabben, 2023) as a mechanism for increasing the liberty of their members; “freedom” is understood as “freedom from” formal rules and constraints, even when these rules and constraints exist to prevent potentially-malicious actors from harming others. A positive conception of liberty, on the other hand, understands liberty not in terms of the absence of constraints, but rather in terms of the presence of affordances. Organizations with a predominantly positive conception of liberty therefore generally tend toward provisioning additional resources and affordances as a mechanism for increasing the liberty of their members; “freedom” is understood as the “freedom to” make decisions or take actions that would otherwise be unavailable, even if those decisions and actions are only made possible by the intervention of a governance apparatus.

In a digital universe, all liberty is positive insofar as a protocol’s set of admissible actions dictates the contours of a user’s action space; within a given protocol, a user can perform only the actions that the protocol affords (but also typically has negative liberty within the action space so defined — in Web3 organizations, there is generally little distinction drawn between “admissible” and “permissible” actions). Primarily, however, decentralized communities privilege a negative conception of liberty when they seek to minimize impositions by external regulatory bodies attempting to impose constraints that the impacted communities do not view as legitimate or well-aligned with their values.

It is worth reiterating that the two heuristics that structure each curve are not opposites, even though they may sometimes be characterized that way; rather, they are internal tensions within individual concepts. No one heuristic is “better than” its counterpart; all are desirable in some contexts and less desirable in others, but orienting toward one member of a pair involves making trade-offs in terms of the other.

Pursuing any of the three components of legitimacy therefore consists of tuning along that concept’s curve, navigating the trade-off space that it traces as one seeks ways of negotiating the tensions inherent within these contested concepts that are fit to the context at hand.

The Equality Curve

Applbaum argues that just as “what to decide is subject to a liberty principle,” “[w]ho decides is subject to an equality principle, under which each citizen is to have equal say in selecting who bears decision-making powers. When the equality principle is seriously violated, government sinks to a tyranny without title, and we are dominated by despotism.”

Once again, the issue with Applbaum’s conception of equality is that it collapses an essentially contested concept into a single dimension of meaning: “Equality” is reduced to “equal weight in the process of allocating governance authority,” a specific form of absolute equality; meanwhile, relative equality, the other primary heuristic for navigating the contested meaning of “equality,” is left entirely out of the picture. The Equality Curve makes no such omission:

Figure 2: The Equality Curve

An absolute conception of equality understands equality as equal treatment despite significant differences. There are many situations in which absolute equality is warranted: A citizen’s religious affiliation, for example, should not have a bearing on that citizen’s right to vote. One need not look far from this last example, however, to find a situation governed by a relative conception of equality (one that understands equality to consist of treatment that is equally responsive to significant differences): The voting age. American society has come to the decision that every citizen under the age of eighteen is equally unqualified to vote, regardless of their personal maturity level, while every citizen over the age of eighteen is equally qualified, regardless of their personal maturity level (except in extreme cases, such as those of convicted felons). Even the most ardent voting rights activists do not claim that equal access to the franchise requires mailing a ballot to infants — only that every qualified voter’s right to vote should be equally robust. Once one opens the door to relative equality, however, it can be a slippery slope to creating proxy measurements for insidious forms of discrimination. For example, voter “qualification” exams were used to provide cover for institutionalized racial disenfranchisement in the Jim Crow South (Onion 2013).

Another way of looking at the difference between absolute and relative conceptions of equality is by considering whether it is “more equal” for a government to levy identical taxes against (and provide identical benefits to) all of its citizens regardless of their socioeconomic status, or if “equality” looks more like the government taking “from each according to his ability,” in order to be able to give “to each according to his needs.”

In the governance design space, the tension between the absolute and relative conceptions of equality often manifests around the question of permissionless access. A permissionless ethos is centered around an absolute conception of equality: the idea that privileges should be equal regardless of a particular user’s “credentials.” This ethos has the benefit of opening up the system to perspectives more diverse than the sometimes-narrow view of “expert opinion,” but comes at the expense of being able to privilege expertise; giving users equal privileges despite unequal qualifications thus manifests an absolute conception of equality, while an organization with a more relative conception of equality would likely view it as more legitimate to give all users privileges that are commensurate with their specific individual qualifications.

Gitcoin’s Sybil-resistance mechanism offers a practical example of tuning along the Equality Curve (Emmett, Nabben, et al., 2021). Insofar as the legitimacy of quadratic voting depends on treating each person equally, treating each address equally can only be legitimate if there is also a mechanism for credentialing an address as unique; absolute equality between credentialed addresses is therefore made possible only by the acceptance of relative equality between those addresses that are credentialed and those that are not.
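The interplay between the two conceptions of equality can be sketched in a few lines. The gating rule and all names below are illustrative assumptions; Gitcoin’s actual mechanism (Passport stamps, score thresholds, pairwise penalties) is considerably richer:

```python
from math import sqrt

def quadratic_match(contributions: dict[str, float], credentialed: set[str]) -> float:
    """Toy quadratic-funding match with a credentialing gate.

    Addresses are treated with *absolute* equality only after a
    *relative* inequality is applied: addresses that have not been
    credentialed as unique persons are excluded entirely. The match
    is the square of the sum of square roots of counted contributions.
    """
    counted = [amt for addr, amt in contributions.items() if addr in credentialed]
    return sum(sqrt(a) for a in counted) ** 2

donations = {"0xA": 1.0, "0xB": 1.0, "0xC": 1.0, "0xSybil1": 1.0, "0xSybil2": 1.0}
unique = {"0xA", "0xB", "0xC"}  # only these pass the (hypothetical) credential check

# Without the gate, two Sybil addresses inflate the match from 9 to 25:
assert quadratic_match(donations, set(donations)) == 25.0
assert quadratic_match(donations, unique) == 9.0
```

Excluding uncredentialed addresses is precisely the relative inequality that makes absolute equality among the remaining addresses meaningful; without it, one person splitting funds across many addresses can dominate the match.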

By developing permissionless infrastructure that anybody can use, but that contains within itself the affordances necessary for defining permissioning schemes, the Web3 community has significantly increased the precision with which it is possible to negotiate the trade-off space marked by the Equality Curve.

The Agency Curve

The essentially-contested concept that organizes the last of the three components of legitimate governance identified by Applbaum is “agency”:

How to decide is subject to an agency principle, under which decision-making powers are to be exercised by decision-makers who constitute a self-governing and independent group agent that counts all citizens as self-governing and independent members. When the agency principle is seriously violated, government sinks to a tyranny of unreason, and we are dominated by wantonism.

Applbaum’s notion of “agency” is the least straightforward of the three, and appears to be primarily intended as a tool for thinking through “principal-agent problems” — situations in which the preferences and priorities of authorized representatives (the “agents”) may fail to align with the preferences and priorities of the people whom those agents have been authorized to represent (the “principal[s]”). In short, Applbaum’s concept of agency concerns how well an agent represents its principal, or how well a governance apparatus translates the desires of the community that it has been designed to govern into actions that the community believes are in accordance with those desires. In this case, the “agent” in question is an organization’s governance apparatus, and the “principal” is the polity which that governance apparatus governs; the question is whether that governance apparatus produces governance that the governed are willing to be governed by.

The problem with this way of conceptualizing agency is that it once again glosses over a structural tension inherent in the concept itself: this time, between process-oriented and outcome-oriented ways of assessing how well an agent represents its principal. Applbaum gestures toward this internal tension with the repeated phrase “self-governing and independent”; one can think of “self-governing” as referring to the ability to define one’s own processes (which we have elsewhere called “tactical autonomy” [Zargham, Zartler, et al., 2023]), and “independent” as denoting the ability to decide one’s own goals (which we have elsewhere called “strategic autonomy”). In this case, we are interested in the tactical and strategic autonomy of the principal (the community being represented, taken as a political unit), rather than the strategic and tactical autonomy of the agent (the governance apparatus that the principal has authorized to represent it). A general (but not universal) rule of thumb, however, is that increasing a principal’s autonomy correspondingly decreases the autonomy of that principal’s agent, insofar as increasing the principal’s autonomy relative to its agent consists of increasing its authority over that agent.

A community with a predominantly process-oriented conception of agency will generally seek to maximize its members’ “tactical autonomy” — their ability to decide “how to get the job done” — even when doing so comes at the expense of efficacy; such communities tend to over-provision decision-making processes in ways that encourage paralysis or stagnation. A community with a predominantly outcome-oriented conception of agency, by contrast, will generally seek to maximize its members’ “strategic autonomy” — their ability to decide “what jobs need to get done” — because insofar as “the ends justify the means,” the question of “what to do” becomes the relevant locus for decision-making, rather than the question of how to do it. Such communities tend to under-provision decision-making processes, often sanctioning abuses or other extractive behavior that drives outcomes (at the expense of some unmeasured cost) in the process.

Once again, the heuristics of process-orientation and outcome-orientation are not in opposition to one another, but they are in tension with one another. The more that a given community evaluates the representativeness of its representatives in terms of whether or not their actions result in the desired outcomes, the less invested that community can be in the processes by which those outcomes are attained; the less able that a particular community is to observe (reach universally-agreed-upon evaluations of) the outcomes of governance decisions, the more that organization must evaluate the representativeness of their representatives in terms of procedural (process-oriented) rather than practical (outcome-oriented) legitimacy.

Figure 3: The Agency Curve

Insofar as it collapses the distinction between process-oriented and outcome-oriented conceptions of agency, Applbaum’s treatment of the concept elides the difficulty of ascertaining the preferences, priorities, and desires of the principal (in this case, an entire polity taken as a single political unit). After all, one must know what that principal’s agents are supposed to be representing before one can evaluate the representativeness of that representation. As Eric Alston writes in “Governance as Conflict: Constitution of Shared Values Defining Future Margins of Disagreement” (2022):

Collective action at scale poses mechanical representative losses to the individual preferences of organization members. The questions of central relevance to governance of impersonal organizations are those surrounding disagreement or dispute among members. This means the extent to which a given organization’s governance can accommodate heterogeneity in members’ governance preferences is also an integral input to that organization’s resilience.

In other words, as the agent representing an entire community (whose members may have strongly divergent preferences and desires), a governance apparatus must produce governance that is acceptable to (which is to say, is viewed as legitimate by) a wide variety of stakeholders — including those whose preferences are not ultimately centered. A process-oriented conception of agency makes it possible for stakeholders to view governance that results in outcomes that do not align with their individual preferences as nonetheless adequately representative of those preferences, due to the legitimacy conferred by the process by which those questionable outcomes were attained; an outcome-oriented conception of agency makes it possible for stakeholders to view governance that employs processes that do not align with their individual preferences as nonetheless adequately representative of those preferences, due to the legitimacy conferred by the outcome that those questionable processes made it possible to reach. Relatedly, an emphasis on process enables a governance apparatus to ascertain its constituents’ preferences more precisely, while an emphasis on outcomes empowers that apparatus to pursue the goals that it sets on the basis of those preferences more aggressively.

Given a diversity of stakeholder preferences, a purely-outcome-oriented conception of agency will inevitably result in disappointed stakeholders also being dissatisfied with the quality of their agents’ representation. A purely-process-oriented conception of agency, meanwhile, results in the wholesale decoupling of satisfaction from success, leading to situations in which stakeholders are content with systems that fail to realize any of their desired outcomes — a situation which is not, strictly speaking, unacceptable to the stakeholders themselves, but which is far from optimal, and is almost always more difficult to escape than it would have been to avoid. Ultimately, legitimacy can only be maintained along the Agency Curve by designing governance processes that afford the governed sufficient opportunities to give their input such that no stakeholder feels ignored, and that produce outputs that every stakeholder can accept as legitimate, even in disagreement.

In a world in which all stakeholders agreed on a single desired outcome, and the outcomes of governance decisions were perfectly observable across time scales, a process-oriented conception of agency would be nonsensical. Such a situation, however, exists only in Utopia — it can be found nowhere in the world that we actually inhabit.


Special thanks to Eric Alston, Kelsie Nabben, and Jessica Zartler for their feedback and contributions.


Alston, E. (2022). Governance as conflict: constitution of shared values defining future margins of disagreement. MIT Computational Law Report.

Applbaum, A. I. (2019). Legitimacy: the right to rule in a wanton world. Harvard University Press.

Buterin, V. (2021). The most important scarce resource is legitimacy. Vitalik Buterin’s website.

Carter, I. (2022). Positive and negative liberty. In: Zalta, E. (ed.) The Stanford Encyclopedia of Philosophy (Spring 2022 Edition).

De Filippi, P., Mannan, M., Henderson, J., Merk, T., Cossar, S., & Nabben, K. (2022). Report on blockchain technology & legitimacy. EUI RSC Research Project Report.

Emmett, J., Nabben, K., Bernardinelli, D., & Zargham, M. (2021). Deterring adversarial behavior at scale in gitcoin grants. BlockScience Blog.

Gallie, W. B. (1956). Essentially contested concepts. Proceedings of the Aristotelian Society, 56, 167–198.

Onion, R. (2013). Take the impossible ‘literacy’ test Louisiana gave black voters in the 1960s. Slate.

Rennie, E. (2023). The CredSperiment: an ethnography of a contributions system.

Zargham, M., Alston, E., Nabben, K., Ben-Meir, I. (2023). What constitutes a constitution? BlockScience Blog.

Zargham, M., & Ben-Meir, I. (2023). Method for functional decomposition of organizations and their environments. Zenodo.

Zargham, M., & Nabben, K. (2023). Aligning ‘decentralized autonomous organization’ to precedents in cybernetics. MIT Computational Law Report.

Zargham, M., Zartler, J., Nabben, K., Goldberg, R., & Emmett, J. (2023). Disambiguating autonomy. Zenodo.

About BlockScience

BlockScience® is a complex systems engineering, R&D, and analytics firm. By integrating ethnography, applied mathematics, and computational science, we analyze and design safe and resilient socio-technical systems. With deep expertise in Market Design, Distributed Systems, and AI, we provide engineering, design, and analytics services to a wide range of clients including for-profit, non-profit, academic, and government organizations.

