A design for decentralized organisations: part 2
In part 1 of this series, I introduced four virtues that define the design space of decentralized organisations insofar as they are modelled as state machines: determinism, performance, decentralization, and finality. I then introduced this series’ first principle: that any design for a decentralized organisation cannot be imposed from without; for it to be acceptable, a design must be independently verifiable, and must be adopted at a grassroots level.
With this in mind, we move to our second principle, and extend it to the problems of scaling decision making and scaling truth-finding. I offer three “conclusions” intended to guide a real-life implementation of this design, and discuss existing, free tools available to implement the design.
Principle 2: scale the communication load
Group communication load gets out of hand very quickly as group size increases. As such, every working consensus protocol has a way of radically reducing the amount of communication required. In human interactions, reaching consensus most often requires conversation, which means most optimisations applicable to software are unworkable in the context of normal human social interaction, and so intercommunication load must be kept extremely light.
A metric: if two people need to communicate, then there will be one communication channel between them. Add a third person and you have three channels. Add a fourth and there are six. 8 people? 28 channels. By the time you scale to a tiny community of 200 people, you’re lumped with 19,900 channels. 2,000 people? 1,999,000 channels! (The formula here is n(n-1)/2, where n is the number of people communicating.) So if you think committees are archetypally bad at making decisions, try a crypto community.
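The channel count is just the number of unordered pairs of people, which a few lines of Python can verify:

```python
def channels(n: int) -> int:
    """Pairwise communication channels among n people: n(n-1)/2."""
    return n * (n - 1) // 2

# The growth is quadratic: doubling the group roughly quadruples the load.
for n in (2, 3, 4, 8, 200, 2000):
    print(f"{n} people -> {channels(n)} channels")
```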
No wonder Discord or Slack channels so quickly degenerate into unfocused, endlessly circular discussions that never reach consensus, and probably foster arguments and resentment more often than anything else. Research on groupwork suggests optimal numbers of between 4 and 7 people, with a sweet spot of 5. Odd-numbered groups are advantageous. Once there are more than about 8 in a group, the intercommunication load starts to limit efficiency, and consensus frequently becomes difficult to reach. In other words, groups greater than about 8 in size are not performant when using conversation to make decisions.
Conclusion A: if conversation is to be used to reach consensus, no sensible design would implement consensus-reaching conversations in groups bigger than 8. Aside from spreading information or casually hanging out, it is a bad idea to attempt to reach agreement in larger groups at all.
Moreover, a non-performant group size tends to multiply vices. Firstly, there are too many voices per unit time for each novel perspective to be entertained, leaving the group unable to consider alternatives rigorously. This leads, secondly, to high rates of contributor dropout, since many potential participants are aware of the limited bandwidth of such a forum and shrink back from adding to the chaos by venturing a new perspective. Thirdly, group conversations are easily dominated by a vocal minority and, worse still, because dissenting views go largely unvoiced, it is not generally possible to gauge how many other views exist in the group or how strongly they are held. Fourthly, introverts are inclined to drop out, as are those who prefer more detailed, extensive, careful thinking, along with experts who find the conversation unstimulating or frustrating. In short, large groups are nightmarishly bad at making decisions, and their typical nature actively alienates those who would be best at making decisions.
Fortunately, some questions can tolerate radically reducing the communication load without compromising a group’s ability to make a good decision, and may simultaneously permit broad participation in the decision-making process. For example, relatively simple questions that require only public information to answer can permit the use of voting. To complement these sorts of questions, there are ways other than informal conversation to reach consensus, and these remain performant at far greater scales. One is direct-democratic voting, which is achievable in decentralized organisations with the right infrastructure. Provided that (a) the questions voted upon determine an outcome unambiguously, (b) voter turnout is either high or can be shown to be a representative sample, and (c) those responsible for putting into operation the action prescribed by a vote-outcome are policeable (e.g. they could lose their jobs if they don’t pursue the outcome voted for), voting is a generally reliable method of changing state.
Voting carries a major operational problem, though, in that it is exceedingly common for the decision to commence a vote to be made in conversation in a large group. As per conclusion A, this is a poor way to change state, and so if a voting system is available, this easily leads to votes being taken on questions that are better decided using some other method, perhaps by a small group of those who are best informed. As such, commencing a vote should be strictly limited, perhaps by a rule capping the complexity of the questions put to a vote and requiring that the information voters need to decide them be publicly available. To give an easy example, if daily personal knowledge of what someone is like as a co-worker is required in order to assess their performance, then (a) this is not a publicly available experience, and (b) the interpersonal fit between workers makes assessing a person’s performance a complex relation, since it might vary enormously depending on who they work with. As such, their performance is not a “simple” question that can be evaluated by a crowd. Hence, public votes should not be taken to assess workers’ performance. I’ll return to this matter in subsequent sections, because it also concerns finality.
A further problem with voting is that members of a decentralized organisation do not typically have verifiable identities, opening up voting systems to sybil attacks. Fortunately, a raft of self-sovereign identity service providers is emerging, including uPort, BlockPass, Bitnation, Yoti, Sovrin, and Hu-manity. These may provide robust sybil-attack protection in varying circumstances. My cursory analysis suggests that a sybil-resistant kludge can be built cheaply by (a) users downloading the Blockpass app and using it to undergo Onfido identity verification, (b) users retrieving their Blockpass verification record from the Ethereum blockchain and using the Blocknet wallet to create a signed message for a non-zero-balance Blocknet address, entering these into the relevant fields when voting, and (c) my writing an open-source Talend job that removes any votes made using Blockpass user IDs not validated over their API, removes any invalidly-signed or zero-balance Blocknet addresses returned via XRouter, and removes all duplicate entries.
The result would be a simple sybil-resistant voting platform. This solution would be easy for community members to use, since the Blocknet wallet supports signing messages via GUI, and Blockpass is a free and well-designed consumer app. Moreover, the solution is epistemically workable because Blockpass does not store consumer data, and because anyone may run my Talend job and validate the voting results themselves, effectively removing the need to trust anyone’s testimony about the validity of voting results.
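For illustration, the deduplication and filtering logic at the heart of such a job can be sketched in a few lines of Python. This is not the Talend job itself, and the record fields used here (`blockpass_id`, `signature_valid`, `balance`) are hypothetical stand-ins for whatever the Blockpass API and XRouter actually return:

```python
from dataclasses import dataclass

@dataclass
class Vote:
    blockpass_id: str     # hypothetical field names; the real schema
    address: str          # depends on the Blockpass API and XRouter
    signature_valid: bool
    balance: float
    choice: str

def filter_votes(votes, validated_ids):
    """Keep only votes with a validated ID, a validly-signed non-zero-balance
    address, and at most one vote per verified identity (first one wins)."""
    seen, kept = set(), []
    for v in votes:
        if v.blockpass_id not in validated_ids:
            continue  # ID not validated via the identity provider's API
        if not v.signature_valid or v.balance <= 0:
            continue  # invalid signature or zero-balance address
        if v.blockpass_id in seen:
            continue  # duplicate entry for this identity
        seen.add(v.blockpass_id)
        kept.append(v)
    return kept
```

Because the job is deterministic, anyone running it over the same raw votes and validation data reaches the same result, which is what removes the need to trust anyone’s testimony about the tally.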
Conclusion B: a sybil-resistant, direct-democratic, scalable decision-making process is achievable for decentralized organisations, provided they take care not to decide to use it merely through group discussion, and instead work off clear guidelines for doing so.
Over and above the workability of a voting system is the question of whether the outcome of a vote is determinable as the best or correct outcome. After all, at most, voting can represent the will of the people, but this is completely different from a mechanism that establishes the truth of a matter. Now it turns out that there exists a decentralized technology that achieves exactly this, and with astonishing accuracy: prediction markets.
Decentralized prediction markets are smart contracts in which anyone may “bet” on their knowing the truth about something. For a yes/no question, the price of a “share” ranges from $0 (the market implies a 0% chance that x is true) to $1 (a 100% chance), in proportion to the amounts staked on “yes” and “no.” As a result, if you happen to know x is true when the price is anywhere below $1, you would make money by buying “shares,” because when the truth comes out, you would profit. The keys to how prediction markets predict truth, though, are that market participants put their money where their mouth is, and that each participant takes this risk alone, without potential losses being insulated by the herd. (This is the exact opposite of conversational decision-making, where, once consensus is reached, each participant has effectively conferred responsibility to the group and does not solely bear the risks or responsibilities of being wrong.) It turns out that the underlying game-theoretic construct of prediction markets, known as the wisdom of crowds, is especially good at filtering out bad intentions, provided the crowd’s responses are processed using a suitable market scoring rule, such as the LS-LMSR. A clear presentation of the performance of an LMSR is available here.
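To make the pricing behaviour concrete, here is a minimal sketch of the standard LMSR (Hanson’s logarithmic market scoring rule) for a yes/no market. The liquidity parameter `b` and the share quantities are illustrative, and the LS-LMSR mentioned above is a liquidity-sensitive variant of the same idea:

```python
import math

def lmsr_cost(q_yes: float, q_no: float, b: float = 100.0) -> float:
    """LMSR cost function C(q) = b * ln(exp(q_yes/b) + exp(q_no/b))."""
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

def lmsr_prices(q_yes: float, q_no: float, b: float = 100.0):
    """Instantaneous prices p_i = exp(q_i/b) / sum_j exp(q_j/b).

    The two prices always sum to 1, so each reads as the market's
    implied probability of that outcome."""
    e_yes, e_no = math.exp(q_yes / b), math.exp(q_no / b)
    return e_yes / (e_yes + e_no), e_no / (e_yes + e_no)

# A trader who buys 50 "yes" shares pays the difference in the cost
# function before and after, and pushes the "yes" price above $0.50.
cost_to_buy = lmsr_cost(50, 0) - lmsr_cost(0, 0)
p_yes, p_no = lmsr_prices(50, 0)
```

Note how the mechanism rewards accuracy: buying “yes” raises its price, so a trader only profits if the shares were underpriced relative to the truth they know.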
In essence, prediction markets enable people to determine the truth of any question that has a determinate answer. The answer need not be yes/no, either: scalar markets (e.g. 1–100) and multi-factor markets are achievable, and their applications are wonderfully broad. What this equips a decentralized organisation to do is create and participate in prediction markets on questions that either (a) don’t have obvious answers or (b) lack an entity that everyone is prepared to trust to provide the correct answer.
For reference, the best introduction I am aware of to prediction markets is the “papers” page on Paul Sztorc’s project, Hivemind. The available prediction markets today are Augur, Hivemind, and, confusingly, another project also called Hivemind.
Conclusion C: a scalable and superior truth-finding mechanism is available for decentralized organisations, and should be employed whenever a contentious question arises that cannot be (or is not already) best entrusted to some individual or small group to determine.
Continued in part 3.