A TCR Protocol Design for Objective Content

Moshe Praver
Dec 5, 2018

Introduction

In this post, we outline the basic protocol for an objective TCR. This will be followed by a formal quantitative analysis of the security guarantees and potential attack vectors. We assume familiarity with adChain’s original TCR design and encourage the reader to review our prior work on TCRs.

We prefer to call Objective TCRs Trustless Incentivized Lists (TILs). We consider them a type of oracle and treat their tokens as work tokens.

The entire protocol centers around one goal: providing an incentive for a verifier to actually check the validity of an application. In the original TCR paradigm it is assumed that the subjectivity of the application will naturally cause friction amongst token-holders. Such friction cannot be assumed in a TIL. In other oracle designs, such as Augur, the protocol assumes that, in the context of a prediction market, a self-interested market participant will check the validity of an outcome to prevent being cheated. Such self-interested participants cannot be assumed in a TIL.

The protocol is inspired by Truebit. The team at Truebit introduced the concept of “forced errors” to motivate verifiers to expend computational resources in search of these errors. Anyone who discovers a forced error receives a reward. The forced error concept draws upon the idea that, for a given objective task, there should be a deterministic outcome: input a → output b. We distill this idea and eliminate the need for forced errors.

Simply put: if one individual (the applicant) commits to an input/output pairing and provides another individual (the verifier) with the same input, the verifier should be able to produce the same output. In the absence of communication between the two, if they produce the same output, we can conclude with great confidence that the input/output pair is valid across some function. Thus, the crypto-economic goal of the protocol is to prevent collusion between the applicant and the verifier.

Demo Video

Protocol

I. Preamble

  1. A TIL requires a “native token”. We use a standard ERC-20 contract.

II. Application Phase

  1. An applicant submits a hint and a secret, such that a verifier who is provided the hint can deterministically derive the secret (examples: token name/ticker → token contract address; physician name/specialty → license number).

III. Verification Phase

  1. 2x native tokens are transferred from the selected verifier’s work contract to the application contract.

IV. Secret Reveal Phase

  1. Once the verifier has committed to a secret, the applicant is given a fixed window of time to reveal their random number.
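The commit-reveal mechanics of the application, verification, and reveal phases can be sketched as follows. This is a minimal illustration, not the protocol implementation: sha256 stands in for the EVM's keccak256, the hint/secret pair is made up, and the helper name `commitment` is hypothetical.

```python
import hashlib
import secrets as rng

def commitment(secret: str, nonce: bytes) -> bytes:
    # On-chain this would be keccak256; sha256 stands in for illustration.
    return hashlib.sha256(secret.encode() + nonce).digest()

# Application phase: the applicant commits to hash(secret + random number).
hint = "token name/ticker of some ERC-20"              # public
secret = "0x0000000000000000000000000000000000000001"  # illustrative address
nonce = rng.token_bytes(32)   # random number known only to the applicant
on_chain = commitment(secret, nonce)

# Verification phase: the verifier derives the secret from the hint herself
# and commits to it; an honest verifier arrives at the same value.
verifier_secret = secret

# Secret reveal phase: the applicant reveals the nonce, and anyone can check
# that the verifier's secret matches the applicant's on-chain commitment.
assert commitment(verifier_secret, nonce) == on_chain
```

Because the nonce is private until the reveal, a verifier shown only a claimed secret cannot confirm it against the on-chain hash, which is what makes the "helpful" applicant untrustworthy (see the analysis of attack vector #1 below).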

V. Power Challenge Phase

If the applicant and verifier secrets differ, a “power challenge” commences in which voters determine whether the application’s hint/secret pair is valid. Here, we replace traditional PLCR-style voting with a round-based game of escalating stake. Each round signals a tentative outcome and lasts a fixed window of time. If the tentative outcome is not disputed during a round’s time window, the tentative outcome becomes the final outcome, and all voter stake that supported the losing outcome can be claimed by voters who supported the final outcome.

The amount of stake required to dispute a tentative outcome is: 2*(total tokens staked) - 3*(current stake on alternate outcome). By nature of the prior applicant-verifier interaction, the starting state of every power challenge is:

[Image: starting state of a power challenge]

A power challenge that continues through 6 rounds would appear as follows:

[Image: 6 power challenge rounds]
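The escalation rule can be simulated directly. A minimal sketch, assuming the starting state implied by the earlier phases (the applicant's x tokens opposing the verifier's 2x), with x = 1 for illustration; the invariant it checks is that every filled dispute leaves the disputing side with exactly twice the opposing stake.

```python
def dispute_stake(total: int, alt: int) -> int:
    # Stake required to dispute the tentative outcome:
    # 2*(total tokens staked) - 3*(current stake on alternate outcome)
    return 2 * total - 3 * alt

# Assumed starting state: applicant staked x, verifier staked 2x (x = 1 here).
x = 1
stakes = {"applicant": x, "verifier": 2 * x}

rounds = []
for _ in range(6):
    tentative = max(stakes, key=stakes.get)          # side currently leading
    alternate = "applicant" if tentative == "verifier" else "verifier"
    needed = dispute_stake(sum(stakes.values()), stakes[alternate])
    stakes[alternate] += needed                      # dispute is filled
    # The disputing side now holds exactly twice the opposing stake.
    assert stakes[alternate] == 2 * stakes[tentative]
    rounds.append((tentative, needed))
```

After six rounds the two sides hold 64 and 128 units. Since the winning side always holds twice the losing side's stake, winners who split the losers' tokens earn exactly 50% on their own stake, which is the ROI guarantee discussed below.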

We prefer the power challenge over PLCR voting for several reasons. First, voting notoriously suffers from poor turnout, which makes it likely that a single “whale” can dictate the outcome of a vote; over time, this deters smaller voters from participating.

Second, PLCR voting rewards are unpredictable. Typically, the number of tokens in the minority is so small that the rewards to majority token-holders are simply not worth the effort (in fact, the gas costs associated with PLCR voting sometimes exceed the actual rewards for winning). In the power challenge, a voter on the winning side is guaranteed a 50% ROI in native tokens. This is heavily inspired by Augur’s dispute round mechanism (white paper section C-8).

Third, PLCR voting requires voters to return and reveal their vote during a “reveal window”, which makes for a poor user experience. The power challenge does not rely on obfuscation of votes (it is secured by the credible threat of a fork; see section VI below) and requires only one period of voter interaction.

  1. If the power challenge resolves in favor of the verifier, the verifier receives y ETH and x native tokens from the applicant, and is returned their deposit of 2x native tokens and y ETH. The applicant is not admitted to the registry.

VI. Forking Phase

The power challenge can, in theory, be won by the individual(s) who possess the largest amount of native tokens. To prevent plutocracy, the ultimate security backstop is a fork. When the amount of native tokens involved in a power challenge exceeds a certain threshold, the registry forks. Following a fork, there exists a version of the registry with the new applicant included and a version with the new applicant rejected.

For simplicity, in a forking scenario the applicant and verifier are returned their respective y ETH. All pending applications are aborted, and participants in these pending applications reclaim their respective ETH and native tokens. All native tokens currently involved in the power challenge are automatically migrated to the version of the registry they signaled support for. All tokens not involved in the power challenge must manually migrate to the version their holders believe is valid.

Users of the registry and token-holders determine which version of the registry they wish to continue using. Presumably, the tokens in the valid version will retain their value, whereas those in the invalid version will become valueless. In this forking paradigm, even an attacker with >51% of the native tokens could be incapacitated by off-chain coordination that would render her native tokens valueless in a deprecated, invalid registry.
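The two migration paths at a fork can be sketched in a few lines. This is a simplified model with hypothetical data structures (the real accounting would live in the registry contracts): stake that participated in the power challenge migrates automatically to the fork it signaled for, while all other balances must choose a fork manually.

```python
def fork(challenge_stakes: dict, other_holders: dict, choices: dict) -> dict:
    """challenge_stakes: holder -> (fork_id, amount) from the power challenge.
    other_holders: holder -> amount not involved in the challenge.
    choices: holder -> fork_id elected during manual migration."""
    forks = {"included": {}, "rejected": {}}
    for holder, (fork_id, amount) in challenge_stakes.items():
        forks[fork_id][holder] = amount           # automatic migration
    for holder, amount in other_holders.items():
        forks[choices[holder]][holder] = amount   # manual migration
    return forks

# Illustrative balances: challenge participants auto-migrate; alice and bob,
# who sat out the challenge, each pick the version they believe is valid.
forks = fork(
    {"applicant": ("included", 64), "verifier": ("rejected", 128)},
    {"alice": 10, "bob": 5},
    {"alice": "rejected", "bob": "included"},
)
```

Tokens left unmigrated in the deprecated version are the ones presumed to become valueless, which is what gives off-chain coordination its teeth against a majority attacker.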

Qualitative Protocol Analysis

In this section we qualitatively describe two possible attack vectors. In a follow-up post we will provide a more rigorous quantitative (LaTeX) analysis of the security guarantees.

TILs rely on the truth as a Schelling point. If the property being curated in a TIL is truly objective, there should be a way to construct a hint → secret pair that is deterministic.

Because we have adopted a work token model with slashing conditions, verifiers in a TIL will lose native tokens if they neglect to review applications that are assigned to them (Step III-3) or provide a secret that does not match a valid hint/secret pair. This provides an incentive for verifiers to routinely check for new applications (e-mail/text notifications are possible with some degree of centralization) and perform the necessary work to derive a secret from the provided hint. Verifiers who perform their jobs appropriately will receive predictable rewards in ETH.

A malicious actor who wishes to admit a faulty application to the registry has two attack vectors: 1) attempt to communicate with their verifier or 2) verify their own application by owning the assigned verifier account.

The protocol dissuades attack vector #1. A verifier should never trust an applicant who reaches out in an attempt to “help by telling the secret”. As can be seen in step V-2, a rational applicant should attempt to trick a verifier into committing to an inaccurate secret for a valid hint/secret pair because the applicant is rewarded y ETH from the verifier in such a scenario. A verifier cannot be sure that the provided secret is the actual secret committed on-chain, because what is stored on-chain is a hash of the secret plus a random number that is unknown to the verifier. If an applicant truly wanted to “help out” a verifier, he would have to reveal both the secret and the random number. However, as can be seen in step III-4, a rational verifier that knows the applicant’s random number should simply whistle-blow. The reward for whistle-blowing is greater than the reward for providing accurate verification. For these reasons, it is in the verifier’s best interest to simply do the work of deriving the secret himself.

Attack vector #2 is akin to the well-known 51% attack and other brute-force attacks. These attacks are technically unpreventable; our only option is to design the protocol such that they are extremely expensive, unprofitable, and (hopefully) not worthwhile even to a spiteful attacker. A malicious entity could set up a verifier account and submit consecutive applications to a TIL until an application is assigned to his own verifier account. At that point, he would know the secret to an otherwise bogus hint/secret pair and gain admittance to the registry. Each failed application costs the attacker x native tokens and y ETH. The cost of such an attack is therefore related to the number of verifiers in the work contract (akin to the hash power of a PoW chain), the cost in ETH to apply (value of y × price of ETH), and the cost in native tokens to apply (value of x × price of the native token). We presume that, as the registry becomes more desirable, the price of the native token will increase, providing some built-in protection against this attack. Regardless, the parameters should be set such that this attack is prohibitively expensive.
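A back-of-the-envelope model of this cost: assuming verifier assignment is uniformly random per application, an attacker controlling k of n verifier slots succeeds on any given application with probability k/n, so the expected number of failed applications before a success is n/k − 1. All parameter values below are illustrative, not protocol constants.

```python
def expected_attack_cost(n: int, k: int, x: float, token_price: float,
                         y: float, eth_price: float) -> float:
    # Assignment to own verifier is geometric with success probability k/n,
    # so expected total applications = n/k and expected failures = n/k - 1.
    failed = n / k - 1
    # Each failed application burns x native tokens and y ETH.
    return failed * (x * token_price + y * eth_price)

# Example: 100 verifiers, attacker owns 1 slot; x = 50 tokens at $2 each;
# y = 0.1 ETH at $200. Expected: 99 failures * ($100 + $20) = $11,880.
cost = expected_attack_cost(n=100, k=1, x=50, token_price=2.0,
                            y=0.1, eth_price=200.0)
```

The model makes the presumed feedback loop concrete: a rising native-token price raises the per-failure burn, scaling the attack cost with the registry's desirability.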

Future Work

Private Transactions

Both primary attack vectors are related to collusion. This could be made far more difficult by making the verifier assignments private.

  1. Currently, the assigned verifier for an application is publicly known. If it were not, it would be very difficult for an applicant to bribe a verifier via an off-chain agreement or even another smart contract.

General Trustless Oracle

While our burning passion at MedX is to improve healthcare, we have also found ourselves quite far down the decentralized oracle rabbit hole. The crypto ecosystem is still maturing and we may end up building key infrastructure pieces along the way as we build MedX. We are exploring modifications to the TIL protocol which could serve as a trustless and secure method to migrate virtually any piece of objective data to a blockchain.


Give it a TRY

The Objective TCR (TIL) protocol is in beta 1.0 on the Ropsten Testnet using ERC-20 token metadata as a use-case: https://tokenregistry.medxprotocol.com/. Expect bugs!

MedX Protocol

MedX is a global healthcare market controlled by the people who use it.

Thanks to James Todaro

Moshe Praver

Libertarian physician building free healthcare markets on the Ethereum blockchain at MedX Protocol
