Holographic consensus—part 1
Decentralized governance is the field of protocols that coordinate a large number of agents into collective action, implemented with smart contracts on the blockchain. It is also the basis for the decentralized autonomous organization (DAO). With DAOs we envision thousands and even millions of people spontaneously cooperating on shared goals, generalizing and unifying the notions of the firm, digital networks and collective intelligence. The DAO vision is soon becoming possible with the maturing of the needed technology, in particular the DAO stack and its first interfacing DApp, Alchemy, both of which are live on the mainnet of the Ethereum blockchain.
The scale problem
Decentralized governance holds many challenges, and that's why we haven't yet seen any DAO in action. In particular, I've described before what I believe to be the biggest of them all: the scalability problem and its inherent tension with resilience.¹ As described there, requiring too much of a DAO's collective attention to be spent considering and approving each and every decision makes a decentralized governance system obviously unscalable; while requiring too little potentially makes it not resilient to faulty decisions, collusion, or simply misrepresentation of the collective opinion. The bottom line is that naive consensus (and in particular, governance) systems are simply not scalable.
Almost by definition, a proper solution to this problem should allow decisions in a DAO to be made locally, i.e. with limited attention and voting power, as long as those decisions are ensured to be in line with the global opinion of the DAO. We've coined² the term holographic consensus for this peculiar situation, reminiscent of a hologram, where every little piece of the picture actually contains the information of the entire image. In this article I present the holographic consensus (HC) solution for scalable, resilient and decentralized governance, describing its basics and how it is enabled. In the next post of this series I'll continue with a more detailed prescription of an HC protocol, and will discuss its scalability and resilience properties.
It is worth noting that holographic consensus, as a scaling solution for decentralized governance systems, is closely analogous to TrueBit as an off-chain computation scaling solution for the blockchain. We will describe this analogy in more detail elsewhere.
Holographic consensus resolves the tension between the resilience and scalability of governance systems. To make the discussion below as precise as possible (though not too rigorous), we'll start with our definitions of these two terms.
Firstly, let's define the DAO's global opinion, or simply the DAO's opinion, about a proposal under consideration. The DAO's opinion about a proposal is its ultimate decision, or output, given its decision-making protocol, when all agents in the DAO have sufficient bandwidth of attention to consider that proposal and express their opinion to the DAO. (Having no opinion is a legitimate opinion too.) Sufficient bandwidth of attention is exactly what we'd like to relax later for the sake of scalability, but we need the DAO's opinion as a theoretical point of reference. The DAO's approximate opinion about a proposal is then achieved when most opinionated agents have approximately sufficient bandwidth to consider the proposal and express their opinion.
It's worth noting that a general DAO is like a living creature, with its own subjective mind³ that senses, perceives, thinks, makes sense of its environment and makes decisions. In particular, it has its own subjective right and wrong, good and bad, rather than objective ones. We will not discuss here what drives a DAO into making 'good decisions', or what good means altogether. Rather, for now we only wish a 'good DAO' to act in line with its own global opinion.
With this setting in mind, a decentralized decision-making system will be called resilient if it ensures that all decisions made in the DAO agree with its approximate opinion (or approximately agree with its global opinion).⁴
The DAO has a mission, and the purpose of its governance is to make decisions that promote that mission. In particular, these could be decisions about the allocation of funds to incentivize agent actions supporting the mission. More contributing agents in a DAO mean more possible contributions, and the need for more resource-allocation decisions about those contributions. Thus, in order to grow effectively, a DAO's decision-making system needs to process more and more decisions in a fixed period of time as the DAO grows in number of agents. When speaking about DAO scalability, we refer to this ability to scale up the number of decisions the DAO can effectively make in a period of time.
It's important to note that the one thing common to all economic organizations known today is that they become less and less effective in decision-making as they grow. Thus, we propose that DAOs are a new breed of organizations that are scalable, in that they can effectively grow their operation and decision-making as they grow the number of their agents.⁵
As previously stated, having most voters consider each and every proposal is clearly not scalable. At the same time, a resilient DAO governance system means that decisions are made in line with its global opinion. Thus, scalable and resilient governance systems must by definition enable decisions to be made by a relatively small amount of influence, or voting power (which we also call reputation), on behalf of the entire DAO, under the condition that those decisions are still ensured to be in good alignment with the DAO's global opinion. This tricky situation is the very definition of holographic consensus.
The simplest way to scale up decision-making power is to have decisions approved by a relative majority rather than an absolute majority. An absolute majority is a majority of all voting power in the DAO. For example, out of a hundred equal agents, an absolute-majority decision requires the approval of fifty-one agents, and implicitly the active participation of at least fifty-one of them. A very resilient but non-scalable situation.
Relative-majority approval requires a majority only of those who voted on a given proposal within a certain voting timeframe. A proposal is open, say, for one week, by the end of which a decision is made. If only nine out of a hundred equal agents voted on this proposal within that week, it is enough that five of them support the proposal in order to approve it. A relative-majority voting system is indefinitely scalable, able to process and reach a decision on any number of proposals in a given period of time, but it is also potentially not resilient. If there is no limit on the number of proposals and no further conditions on relative majority, anyone can easily attack the system by spamming it with millions of proposals, diluting the voters' attention so that most proposals remain unnoticed. The attacker could then be the single voter on a malicious proposal to transfer all of the DAO's funds to himself. But beyond this overflow attack on the collective attention, decisions will not represent the DAO's global opinion, and increasingly so as the volume of decisions made in a fixed timeframe grows (hence the aforementioned tension). In our earlier terminology, the system will not be resilient, which means it would be manipulable and would not act with or generate coherence.
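To make the two approval rules concrete, here is a minimal sketch of the hundred-agent example above. The function names and setup are illustrative, not part of any actual protocol:

```python
# Toy model: a hundred equal agents, one vote of power each.
TOTAL_VOTING_POWER = 100

def absolute_majority_approves(votes_for: int) -> bool:
    # Approval requires a majority of ALL voting power in the DAO,
    # regardless of how many agents actually show up to vote.
    return votes_for > TOTAL_VOTING_POWER // 2

def relative_majority_approves(votes_for: int, votes_against: int) -> bool:
    # Approval requires a majority only of the votes actually cast
    # within the voting timeframe.
    return votes_for > votes_against

# Nine agents vote within the week: five in favor, four against.
print(absolute_majority_approves(5))     # False: 5 is far short of 51
print(relative_majority_approves(5, 4))  # True: majority of the 9 voters
```

The spam attack described above is visible here: an attacker who is the only voter on a malicious proposal trivially satisfies `relative_majority_approves(1, 0)`.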
It's worth mentioning that the standard use of relative-majority voting systems includes a quorum: a minimal threshold of voters needed to render a vote valid. The problem with quorums is that they hit the scale problem of an absolute-majority system when set too high, the resilience problem of a relative-majority system when set too low, and often both at the same time. This conflict increases with the volume of decisions made, and consequently quorums are bad for scalable governance.
Decisions should be approved by an absolute majority by default. However, we would like to allow decisions to be made by a relative majority under some protective conditions that ensure the alignment of those decisions with the DAO's absolute majority. We denote by boosting the transition of a proposal from an absolute-majority to a relative-majority vote, and by boosting conditions the conditions a proposal needs to satisfy in order to be boosted.
Boosting a proposal allows agents to consume the scarce resource of collective attention (perhaps the scarcest resource in a DAO) and force a decision on their matter in finite time. It should accordingly be monetized.
The simplest boosting condition would be a payment in DAO tokens. Want the DAO to screen your proposal? Easy: for a hundred DAO tokens the collective attention is (mostly) yours. You cannot buy a decision, but you can buy it into consideration. Anyone can promote a proposal, not only the proposer, and once a certain promotion threshold is reached the proposal is boosted.
A fixed boosting price won't do well, for reasons similar to quorums. Setting it too high would make the decision-making process too exclusive, while setting it too low would lower the price of an attack. Moreover, for the sake of resilience, the collective attention should be priced according to supply and demand: if twenty proposals are boosted and under the scrutiny of the DAO, the collective attention should definitely be more expensive to dilute even further than when only one decision is on the line. A proper boosting threshold needs to be protective in nature, and should grow roughly exponentially with the number of already-boosted proposals. Note that the DAO tokens paid may be redistributed to voters for their efforts, which is useful to incentivize voting.
An exponential boosting threshold tightly limits the number of decisions made by relative majority at any given time. The upside is that it makes the system quite resilient: the entire collective attention is funneled into the same few proposals, allowing decisions to reflect the global opinion. The volume of simultaneously boosted proposals can be stretched slightly while voters are (exponentially) well compensated to maintain attendance. The downside of an exponential threshold is that it quickly makes the cost of boosting extremely expensive. Once again, the system is not as scalable as we'd like it to be.
To make the last point more concrete, let's define the internal value of a proposal as the total value its promoters see in its approval. By definition, promoters would agree to pay at most the total internal value of a proposal for its boosting.⁶ An exponential boosting threshold thus means 'exponential filtering' of proposals with respect to their internal value: if the boosting threshold is 1000 DAO tokens, then only proposals with an internal value higher than 1000 DAO tokens will make it to the collective attention.
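As a rough illustration, an exponentially growing boosting threshold can be sketched as follows. The base cost and growth factor are made-up numbers, not the actual parameters of any deployed protocol:

```python
BASE_THRESHOLD = 100  # hypothetical boost cost (in DAO tokens) when nothing is boosted
GROWTH_FACTOR = 2     # hypothetical base of the exponential

def boosting_threshold(num_already_boosted: int) -> int:
    # The cost of boosting one more proposal grows exponentially with
    # the number of proposals already consuming the collective attention.
    return BASE_THRESHOLD * GROWTH_FACTOR ** num_already_boosted

for n in range(4):
    print(n, boosting_threshold(n))
# 0 100
# 1 200
# 2 400
# 3 800
```

With this rule, only proposals whose internal value exceeds the current threshold will be promoted, so the filter rapidly becomes very coarse as more proposals are boosted, which is exactly the scalability problem described above.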
To improve the boosting condition we need to separate two different economies. Voters' attention is deployed to form decisions; this needs to be monetized and their efforts compensated. To consume that scarce resource, one needs to pay with another scarce resource that's tied to the DAO's network effect: the DAO token. A second need for monetization relates to boosting itself: the transition of a proposal from an absolute- to a relative-majority approval condition and its penetration into the DAO's collective attention. This latter monetization has two purposes:
- Effectiveness—protecting voters' attention and reducing its expense by filtering the insurmountable queue of proposals down to those that voters should decide on, in a focused manner and within a finite timeframe.
- Alignment—ensuring sufficient and unbiased voter attendance for each boosted proposal, to guarantee that decisions made by a relative majority align with the opinion of the absolute one.
Note that these two monetizations are different and should not be tied together as in the earlier example. In particular, they are best produced by two different network effects and two different tokens. Decisions in the DAO require the attention of DAO voters and are tied to the DAO's network effect and its token monetization; while filtering proposals and protecting the decision process around them is better executed by an open, economic, permissionless network: a network of predictors.
Consider a network of predictors placing predictions about the fate of proposals in different DAOs and staking tokens to back them up. Successful predictors who've staked for 'good proposals' are compensated (good in the only sense that the DAO eventually approved them), while unsuccessful ones lose (perhaps part of) their stake. Similarly, predictors are compensated for staking against boosted proposals that eventually get rejected, and bear a loss otherwise.
Now, a proposal is boosted only when it achieves sufficient stake in its favor, reflecting predictors' confidence that it's going to pass in the DAO. An exponential threshold, this time of tokens staked, is still important for protection, meaning the amount of upstake needed to boost a proposal is roughly exponential in the number of already-boosted proposals. To incentivize predictions, a predictors' bounty is offered to successful predictors of retrospectively approved proposals. The bounty can be derived from the proposal's promotion fee or be contributed by the DAO itself.
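The settlement of such a prediction game can be sketched roughly as below. The payout rule (winners split the losers' stakes plus the bounty, pro rata to their own stake, while losers forfeit everything) is an illustrative assumption for this sketch, not the actual GEN staking mechanics:

```python
def settle(upstakes, downstakes, approved, bounty=0.0):
    """Toy settlement for one boosted proposal.

    upstakes/downstakes: dicts mapping predictor -> tokens staked
    for/against the proposal; approved: the DAO's eventual decision.
    Assumes at least one predictor staked on the winning side.
    A real protocol might slash losers only partially.
    """
    winners = upstakes if approved else downstakes
    losers = downstakes if approved else upstakes
    pot = sum(losers.values()) + bounty      # losers' stakes plus the bounty
    total = sum(winners.values())
    # Winners get their stake back plus a pro-rata share of the pot.
    payouts = {p: stake + pot * stake / total for p, stake in winners.items()}
    payouts.update({p: 0.0 for p in losers})
    return payouts

# Alice and Bob stake for the proposal, Carol against; the DAO approves
# it, and a bounty of 8 tokens is added to the pot.
print(settle({"alice": 30, "bob": 10}, {"carol": 20}, approved=True, bounty=8.0))
# {'alice': 51.0, 'bob': 17.0, 'carol': 0.0}
```

The economic logic the article describes is visible here: a predictor who believes the DAO will approve a proposal profits by upstaking it early, and that very stake is the signal that boosts the proposal into the collective attention.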
Predictors have three roles in scaling up DAO decision-making, in line with the purposes of the boosting conditions mentioned above:
- Fishing out 'good proposals'—predictors filter a long list of proposals down for the voters to focus on. The predictors' bounty creates an economic incentive to find those proposals, and an open market will seize that opportunity.
- Signaling and balancing the propagation of 'bad proposals'—once a proposal is boosted, predictors have an even higher incentive to downstake it if they think it's not going to be approved by voters after all. Again, an economic market should saturate this arbitrage.
- Finally, once predictors are staked on both sides of the future, they have an incentive to maintain the voting process and its alignment with the global opinion. Those who observe a misalignment would call out to the collective attention, and specifically to those voters they believe will fix it.
Note that predictors serve merely as routing agents and need not have any interest in the success of decisions, or be related to the DAO whatsoever. In particular, they do not need to hold voting power in the DAO. Predictors are driven by pure short-term economics, like traders, and they can be anyone who believes she has an information advantage she can profit from, and who's confident enough to stake tokens for that purpose. It is an open and permissionless network, in contrast to the DAO voting system, which is generally permissioned. This new global network is the DAOstack predictors network: the predictors of social collective behavior. This network has its own token used for predictions, the GEN token, and a strong network effect to protect its utility. For a closer look at this angle see this recent article.
We envision many thousands of people cooperating on shared goals. This vision is soon becoming possible with the invention of the decentralized autonomous organization (DAO), the basis of which is already implemented with the DAO stack on the Ethereum blockchain. But DAO operation requires a decentralized, resilient and scalable governance system, and finding one has been a long-standing challenge due to the inherent tension between scale and resilience in decentralized governance systems.
In this article I'm proposing a novel solution to this problem, dubbed holographic consensus: a process that allows decisions in a large-scale DAO to be made quickly and locally, by the approval of a relatively small amount of influence in the DAO, while ensuring that those decisions actually reflect the opinion of the absolute majority of influence in that DAO. The alignment of local decisions with the global opinion is achieved via a crypto-economic game, transforming a possible mismatch between the local and global opinions into an economic opportunity. An open market of predictors can take advantage of that opportunity and thus support the upscaling of DAO governance. In essence, the DAO outsources the navigation of its collective attention to an effective market by placing an incentive to produce good, backed navigation signals. A solution of this kind may be critical to resolving the tension between scale and resilience in governance systems, and thus to the success of DAOs.
In the next post I'll describe the prescription of an HC process and the resulting alignment of its local decisions with the global opinion in greater detail. In further chapters we'll explore a variety of HC protocols, some open questions, and a thorough discussion of their game-theoretic analysis, in particular potential failure modes and their coverage.
It's interesting to note that in parallel to the development of this idea, Vitalik came up with a very similar proposal, put out for the domain of curation⁷ and described as a scaling solution for a centralized decision-maker rather than a decentralized one. The idea behind it, however, is identical in spirit, and is applicable to general decentralized governance systems as described above.
Finally, I'd like to note that DAOstack's Alchemy, a first platform for DAO operation on Ethereum, is already deployed on the mainnet and implements an early version of the holographic-consensus concept, the Genesis protocol. A detailed description of (the next version of) the complete protocol will be published in another post in this series.
¹ This is not very different from the scalability problem of blockchains and consensus protocols, which are themselves a sort of objective version of decentralized governance. More on that somewhere else.
² This term has been suggested by Jordan Greenhall.
³ Different comprising agents, a different decision-making protocol, and a different distribution of power among agents in a DAO will produce a different hive mind.
⁴ Note that decisions must agree with the DAO's approximate opinion with perfect certainty, rather than perfectly match the DAO's global opinion with approximate certainty. A DAO cannot allow itself to make a suicidal or otherwise huge mistake, like throwing away all of its funds on an obviously wrong matter, even once in a while.
⁵ One could even imagine super-scalable DAOs that become even more effective per person when they grow, as network effects do. We’ll discuss this potential scenario somewhere else.
⁶ In practice, due to the probability of the proposal being rejected, promoters would likely agree to pay only somewhat less than its internal value, according to their confidence in the proposal's approval.
⁷ Note that the scale problem is indeed more severe in curation than in other domains, due to the rather high frequency of decisions generally needed there.