A Token Curated Registry (TCR) is a mechanism to incentivize the decentralized production of a high-quality list. For example: a list of Sichuan restaurants in Fairfax, Virginia; a list of high-quality colleges; a list of high-quality software products. We assume that a list is valued by the public and we assume — since the list is valued by the public — that producer-applicants will be willing to pay to get on the list.
To get on the list, an applicant nominates a list item and stakes a minimum quantity of tokens. A challenge period begins. During the challenge period a challenger may stake an equal number of tokens and demand a trial. A trial is decided by a jury which votes by staking tokens on “reject” or “accept”. If a majority of tokens are staked on “reject” then the item is rejected and the applicant’s stake is forfeited: half is given to the challenger and half is distributed to the majority voters in proportion to their votes. If more tokens are staked on “accept” then the item is added to the list, and the challenger’s stake is forfeited and distributed to the majority voters in proportion to their votes. Voters never lose their stake. If no challenge is made during the challenge period then the item is added to the list.
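The payout rules above can be sketched in a few lines of code. This is a minimal illustration of the logic just described, not any real TCR contract; the function name and token amounts are made up for the example.

```python
def resolve_trial(applicant_stake, challenger_stake, accept_tokens, reject_tokens):
    """Settle a challenged TCR application.

    accept_tokens / reject_tokens are the token-weighted jury votes.
    Returns (listed, applicant_payout, challenger_payout, majority_pool),
    where majority_pool is split among majority voters in proportion
    to their votes. Voters themselves never lose their stake.
    """
    if reject_tokens > accept_tokens:
        # Item rejected: the applicant forfeits the stake; half goes to
        # the challenger, half to the majority ("reject") voters.
        return (False, 0, challenger_stake + applicant_stake / 2, applicant_stake / 2)
    else:
        # Item accepted: the challenger's stake is forfeited and goes
        # to the majority ("accept") voters.
        return (True, applicant_stake, 0, challenger_stake)

# Example: 100-token stakes on each side, jury votes 600 reject vs 400 accept.
# The item is rejected; the challenger keeps their 100 plus 50 of the
# applicant's stake, and 50 is shared among the "reject" voters.
listed, app_pay, chal_pay, pool = resolve_trial(100, 100,
                                                accept_tokens=400,
                                                reject_tokens=600)
```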
A variety of parameters, such as the initial stake and the percentage of the applicant’s stake given directly to the challenger, can be adjusted (see Goldin for details). In some more recent versions of TCRs, voters in the minority also lose their stake. AdChain offers an example of the process in action: it aims to root out advertising fraud by identifying real webpages/publishers.
Voting is at the center of the mechanism but the literature is vague on the incentives of voters. The dominant model is the Schelling point model.² In this model, voters want to vote with the majority since only the majority side is paid. In the TCR mechanism votes are hidden until tallied, but voters may use other information to deduce where the majority is headed. For example, voters may look to reviews or they may run tests to determine if a piece of software is of high quality. If I look to reviews to discover whether a piece of software is high quality and I know that you do the same, then if I discover the software is of low quality, I know that you will discover the same. Thus, the fact that the software is of low quality becomes common knowledge (I know that you know that I know…) and this makes voting to reject the software a plausible Schelling point.
In order to be useful we need the Schelling point that voters coordinate on to reveal socially useful information. However, this is not guaranteed. The truth is a Schelling point but it is rarely the only Schelling point. Suppose that instead of researching information — which is costly — a group of voters vote their biases. If their biases are widely shared the result can easily lead to absurd but majority-profitable results. For example, AdChain voters rejected Facebook and NYTimes as real publishers because the majority correctly reasoned that a majority of voters didn’t like Facebook and the NYTimes! AdChain is meant to reveal to advertisers which websites are real and which are fake so from the social point of view this is an absurd and counter-productive result.
If the Schelling point isn’t strong, colluders might also be able to swing a trial while holding only a minority of tokens. In the classic Schelling point story, strangers are told to meet in New York tomorrow. Where? When? Despite the infinity of possibilities, a surprisingly large fraction of people coordinate on Grand Central Station at noon. Suppose, however, that one person loudly announces “the new Schelling point is the Empire State Building at noon.” Is Grand Central really more focal than the Empire State Building? Even if Grand Central is more focal, the mere fact that someone has announced the Empire State Building could make it a Schelling point. In a similar way, if it is costly to read reviews so the truth is not widely known, then a random announcement such as “the software is bad and I am going to vote 1,000 tokens on reject” could swing trials.³
Assuming that problems with bias and collusion are not insurmountable so that the truth is the only Schelling point, the model could generate useful new information that is revealed in the equilibrium. The key parameters that make the model work, however, are hidden in the background. Most importantly, how much will challengers and voters be willing to spend on information acquisition? Suppose that items can be (H)igh or (L)ow quality and there is some information that can be tapped with a cost of C that reveals the truth with probability p (p>½). That is, if the item is high quality the information source returns H with probability p and L with probability (1-p) and if the item is of low quality it returns L with probability p and H with probability 1-p. Alternatively put, the information source reveals the Schelling point with probability p. Each voter gets an independent draw. Then if voters vote according to the information source their expected payoff is:
p(½ Stake)/N
Where N is the number of voters who acquire the same information (I assume that each voter votes an equal number of tokens). Note, however, that this is the payoff versus doing nothing. A plausible alternative is to vote randomly. A random voter will be in the majority 50% of the time, so a random voter will earn:

½(½ Stake)/N
Thus the net payoff from spending C and learning the information is (p − ½)(½ Stake)/N (positive since p > ½), and so it will pay to learn the information if:
(p − ½)(½ Stake)/N > C
In essence, this equation determines the equilibrium N such that:
N = (p − ½)(½ Stake)/C
Assume that the information source is reasonably accurate, say p = 0.9; then N = 0.2 · Stake/C, so the Stake must be at least five times bigger than the cost of information acquisition for even one voter to learn the information. If the information source is less accurate then the Stake must be bigger still relative to C.
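The voter condition can be checked numerically. The functions below simply restate the formulas N = (p − ½)(½ Stake)/C and its rearrangement for the minimum Stake; the parameter values are illustrative, not drawn from any real TCR.

```python
def equilibrium_n(p, stake, cost):
    # Equilibrium number of informed voters: N = (p - 1/2) * (1/2 * Stake) / C.
    return (p - 0.5) * 0.5 * stake / cost

def min_informative_stake(p, cost):
    # Smallest Stake for which even one voter (N >= 1) finds it
    # worthwhile to pay C for the information.
    return cost / ((p - 0.5) * 0.5)

# With p = 0.9: N = 0.2 * Stake / C, so the Stake must be at least 5C.
n = equilibrium_n(p=0.9, stake=100, cost=1)   # 20 informed voters
s = min_informative_stake(p=0.9, cost=1)      # Stake must be at least 5
```

Note that as p falls toward ½ the minimum Stake blows up, which is the “less accurate source requires a bigger Stake” point in the text.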
Recall also that voters only get to vote if a challenger challenges. Assuming that there is an N that satisfies the voter equation, a challenger can pay cost C to learn H or L. If the challenger learns L a challenge will pay if:
p(½ Stake) − (1 − p)Stake > 0
which implies that at minimum p has to be greater than 2/3 to motivate a challenge. Thus challengers will search out information when the expected gain from challenging exceeds the cost of information acquisition:

p(½ Stake) − (1 − p)Stake > C
(I assume that there is only one challenger at a time and no race to be the challenger.) Note that if there is no source of information with p > 2/3 then the mechanism can’t work. Moreover, the Stake might have to be very big to induce challenges for any given C. If p is only a little above 2/3, for example, then the left-hand side is barely positive and so the Stake must be very large to induce information acquisition.
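The challenger’s side of the argument can be checked the same way. Again this is only a sketch of the formulas above with made-up numbers; `challenge_profit` is the left-hand side p(½ Stake) − (1 − p)Stake, which equals Stake(1.5p − 1).

```python
def challenge_profit(p, stake):
    # Win (jury rejects, probability p): gain half the applicant's stake.
    # Lose (jury accepts, probability 1 - p): forfeit the full challenge stake.
    return p * 0.5 * stake - (1 - p) * stake

def min_challenge_stake(p, cost):
    # Solve p * (1/2) * Stake - (1 - p) * Stake = C for Stake;
    # requires p > 2/3 or no Stake is large enough.
    return cost / (1.5 * p - 1)

# Expected profit is zero exactly at the p = 2/3 threshold,
# positive above it, and negative below it.
at_threshold = challenge_profit(2 / 3, 100)
barely_above = challenge_profit(0.7, 100)
below = challenge_profit(0.6, 100)
```

As the text notes, with p barely above 2/3 the denominator 1.5p − 1 is tiny, so the Stake required to cover any given information cost C becomes very large.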
Applicants can increase their Stake and applicants with greater quality have a lower cost of staking so the Stake can act as a signal that separates high and low quality. It’s therefore a good feature that the mechanism lets applicants decide their own Stake. But the problem is deeper than finding a Stake that is large enough to induce information acquisition. We also have to take into account the incentives of list applicants. Given p and C, the value of being on the list may not be enough to induce applicants to post high Stakes. That is, a separating equilibrium is not guaranteed.
Token Curated Registries can work but there is no guarantee that voters will coordinate on the truth as a Schelling point so care needs to be taken in the design stage to imagine other Schelling points. The less focal or more costly it is to discover the truth, the more vulnerable the mechanism will be to biases and manipulation via coordination or collusion.
To understand whether a TCR will work in practice attention needs to be placed on the information environment. The key practical issues are the cost of acquiring high-quality information and the value to an applicant of getting on the registry. Put simply, TCRs are likely to work when high quality information is available at low cost. Vitalik Buterin’s examples of Schelling points were (wisely) all of this kind. Extensions of the Schelling point model to TCRs which are trying to surface information that is much more uncertain, variable and disputed need to recognize the limitations.
It will often be more important to put effort into lowering the cost of acquiring high quality information than it will be to modify the particulars of the mechanism. If high-quality, low-cost information is available many mechanisms will work tolerably well. If high-quality, low-cost information isn’t available, perhaps none will.
Even when it works, voting does have some negatives. For example, the voting model requires that information acquisition occurs multiple times. Wasteful duplication is a cost of not being able to rely on a trusted source which could acquire and reveal the information once. Trust saves on resources.
Thus, another lesson is that, if at all possible, even a decentralized mechanism should introduce opportunities for trust to develop. Verified users, ratings from independent parties, and certifications from sources like Consumer Reports and Underwriters Laboratories are all important in the market process. A decentralized mechanism should make it easy for consumers and producers to develop, discover and use information from trusted sources. Signals of trust may also develop endogenously. Successfully placing items on a list may signal trustworthiness as a voter, for example. Mechanism designers may benefit by allowing trusted sources to be more influential in the mechanism process.
- This document was produced under the auspices of Wireline. I thank Tom Bell, Tyler Cowen, Andrew Dickson, Ankur Delight, Lucas Geiger, Garett Jones and Joshua Gans for useful comments.
- A second implicit model seems to be that voters each have a bit of private information; for example, they have eaten at the Sichuan restaurant being proposed for the list, attended the school, used the software etc. Each voter then votes according to their private information. The net vote can then be informative, although how informative depends on the exact model. In essence, the private information model makes the TCR similar to a survey where participants are paid with a lottery ticket. The private information model is less interesting as a general model but may have specific use cases.
- Coordination on “sunspots,” irrelevant information, can also destroy the good equilibrium. For example, suppose a group of voters coordinate so that on even-numbered days they vote “accept” and on odd-numbered days they vote “reject” — this coordination could be profitable as the coordinators could swing the vote, especially if they announce their intentions. Voters can vote as many tokens as they want and payoffs are token-weighted, so large players plausibly can swing the vote. The coordination could become locked in over time. If coordinators win 3 votes in a row, for example, which way will you vote on the 4th trial?