A TCR Protocol Design for Objective Content

Introduction

In this post, we outline the basic protocol for an objective TCR. This will be followed by a formal quantitative analysis of the security guarantees and potential attack vectors. We assume familiarity with adChain’s original TCR design and encourage the reader to review our prior work on TCRs.

We prefer to call Objective TCRs "Trustless Incentivized Lists" (TILs). We consider them a type of oracle and treat their native tokens as work tokens.

The entire protocol centers around one goal: providing an incentive for a verifier to actually check the validity of an application. In the original TCR paradigm it is assumed that the subjectivity of the application will naturally cause friction amongst token-holders. Such friction cannot be assumed in a TIL. In other oracle designs, such as Augur, the protocol assumes that, in the context of a prediction market, a self-interested market participant will check the validity of an outcome to prevent being cheated. Such self-interested participants cannot be assumed in a TIL.

The protocol is inspired by Truebit. The team at Truebit introduced the concept of "forced errors" to motivate verifiers to expend computational resources in search of these errors. Anyone who discovers a forced error receives a reward. The forced error concept draws upon the idea that, for a given objective task, there should be a deterministic outcome: input a → output b. We distill this idea and eliminate the need for forced errors.

Simply put: if one individual (the applicant) commits to an input/output pairing and provides another individual (the verifier) with the same input, the verifier should be able to produce the same output. In the absence of communication between the two, if they produce the same output, we can conclude with great confidence that the input/output pair is valid across some function. Thus, the crypto-economic goal of the protocol is to prevent collusion between the applicant and verifier.

Protocol

I. Preamble

  1. A TIL requires a “native token”. We use a standard ERC-20 contract.
  2. The protocol involves 3 types of parties: applicants, verifiers and voters. All three require native tokens. A single individual can perform all 3 roles.
  3. The applicant & verifier are incentivized by ETH rewards. The voters are incentivized by native token rewards.
  4. There exists a pool of verifiers who are available to review new applications to the registry. To become a verifier, one must stake a fixed amount of native tokens in a work contract. We use a fixed amount of tokens due to the computational complexity that arises when allowing a variable stake size.

II. Application Phase

  1. An applicant submits a hint and a secret, such that a verifier who is provided the hint can deterministically derive the secret (examples: token name/ticker → token contract address; physician name/specialty → license number).
  2. The applicant generates a random number in their browser.
  3. The applicant commits the following on-chain: i) their hint, ii) a hash of their random number and iii) a hash of the concatenation of their secret and random number. Note that the secret cannot be credibly revealed without also revealing the random number.
  4. Along with the above transaction the applicant sends x native tokens and y ETH. The ETH is used to pay the verifier. The native tokens are used to stake in the registry (for a successful application).
  5. The following block is mined and the block hash is used as a source of pseudo-randomness. This pseudo-randomness is used to randomly select a verifier from the work contract.
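The commitments of steps 2–4 can be sketched with standard hashing. This is a minimal Python sketch of our own, not the contract itself: the hash function (a real contract would use keccak256), the encoding, and all names are illustrative assumptions.

```python
import hashlib
import secrets

def h(data: bytes) -> bytes:
    # Stand-in for the on-chain hash; a real contract would use keccak256.
    return hashlib.sha256(data).digest()

def make_application(hint: str, secret: str):
    """Build the on-chain commitments of step II-3."""
    nonce = secrets.token_bytes(32)             # random number generated in the browser
    commit_nonce = h(nonce)                     # ii) hash of the random number
    commit_secret = h(secret.encode() + nonce)  # iii) hash of secret ‖ random number
    # Only (hint, commit_nonce, commit_secret) are published on-chain;
    # the applicant keeps (secret, nonce) private until the reveal phase.
    return {"hint": hint, "commit_nonce": commit_nonce,
            "commit_secret": commit_secret}, nonce
```

Because the secret is hashed together with the random number, revealing the secret alone proves nothing — which is exactly what makes the whistle-blowing mechanism of step III-4 possible.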

III. Verification Phase

  1. 2x native tokens are transferred from the selected verifier’s work contract to the application contract.
  2. The verifier is presented with the hint and is given a fixed amount of time to derive and submit the corresponding secret.
  3. If the verifier fails to respond in time, the applicant is rewarded x native tokens from the original verifier's stake, the original verifier is returned their remaining x native tokens, and the applicant selects a new verifier (repeat step III-1). If a verifier's staked tokens fall below a given threshold, they are removed from the verifier pool.
  4. If, at any time before the verifier submits a secret, the applicant's random number becomes known, anyone may "blow the whistle" by submitting it. If the hash of this random number matches the hash stored on-chain, the whistle-blower is rewarded x native tokens and y ETH from the applicant. The protocol terminates.
  5. Assuming the absence of a whistle-blower, the verifier then submits a corresponding secret along with y ETH within the time window.
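The whistle-blow condition of step III-4 reduces to a single hash comparison. A minimal sketch (our own; the hash function stands in for the contract's keccak256):

```python
import hashlib

def h(data: bytes) -> bytes:
    # Stand-in for the on-chain hash; a real contract would use keccak256.
    return hashlib.sha256(data).digest()

def blow_whistle(submitted_nonce: bytes, commit_nonce: bytes) -> bool:
    """Step III-4: anyone who learns the applicant's random number may submit it.
    If it hashes to the commitment stored on-chain, the whistle-blower is
    rewarded x native tokens and y ETH and the protocol terminates."""
    return h(submitted_nonce) == commit_nonce
```

Note that the contract never needs to see the secret here: leaking the random number alone is enough to prove that the applicant broke the no-communication assumption.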

IV. Secret Reveal Phase

  1. Following the verifier committing to a secret, the applicant is given a fixed amount of time to reveal their random number.
  2. If the applicant fails to reveal their random number in time, the verifier is rewarded y ETH. The verifier’s 2x native tokens and y ETH are returned. The applicant’s x native tokens are returned. The application is not admitted to the registry. The protocol terminates.
  3. Otherwise, the applicant reveals their random number and secret. If the applicant and verifier secrets are the same, the applicant is admitted to the registry with x native tokens staked. The verifier is awarded y ETH for their diligent curation. The verifier is returned their 2x native tokens and y ETH. The protocol terminates.
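The reveal logic of steps IV-2 and IV-3 can be sketched as follows (a minimal Python sketch of our own; hash choice and return labels are illustrative assumptions):

```python
import hashlib

def h(data: bytes) -> bytes:
    # Stand-in for the on-chain hash; a real contract would use keccak256.
    return hashlib.sha256(data).digest()

def resolve_reveal(applicant_secret: str, applicant_nonce: bytes,
                   verifier_secret: str,
                   commit_nonce: bytes, commit_secret: bytes) -> str:
    """Validate the applicant's reveal against the step II-3 commitments,
    then compare the applicant's and verifier's secrets."""
    if h(applicant_nonce) != commit_nonce:
        return "invalid reveal"       # random number does not match its commitment
    if h(applicant_secret.encode() + applicant_nonce) != commit_secret:
        return "invalid reveal"       # secret does not match its commitment
    if applicant_secret == verifier_secret:
        return "admitted"             # IV-3: listed, verifier rewarded y ETH
    return "power challenge"          # V: secrets differ, voters decide
```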

V. Power Challenge Phase

If the applicant and verifier secrets differ, a "power challenge" commences in which the voters determine whether the application's hint/secret pair is valid. Here, we replace traditional PLCR-style voting with a round-based game of escalating stake. Each round signals a tentative outcome and lasts a fixed window of time. If the tentative outcome is not disputed during a round's time window, it becomes the final outcome, and all voter stake that supported the losing outcome can be claimed by voters who supported the final outcome.

The amount of stake required to dispute a tentative outcome is: 2 × (total tokens staked) − 3 × (current stake on the alternate outcome). By nature of the prior applicant–verifier interaction, the starting state of every power challenge is:

[Figure: starting state of a power challenge]

A power challenge that continues through 6 rounds would appear as follows:

[Figure: 6 power challenge rounds]
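The escalation rule can be checked numerically. Below is a minimal Python sketch of our own: we assume the challenge starts with x tokens on the applicant's side and 2x on the verifier's side (matching the deposits from phases II and III, normalized to x = 1), and each round the currently-losing side posts exactly the required dispute bond.

```python
def dispute_bond(total_staked: float, alt_stake: float) -> float:
    """Stake required to dispute the tentative outcome:
    2*(total tokens staked) - 3*(current stake on the alternate outcome)."""
    return 2 * total_staked - 3 * alt_stake

def run_challenge(stakes: dict, rounds: int):
    """Simulate a power challenge in which the losing side posts exactly
    the required bond each round, flipping the tentative outcome."""
    tentative = max(stakes, key=stakes.get)   # side currently ahead
    for _ in range(rounds):
        alt = next(k for k in stakes if k != tentative)
        stakes[alt] += dispute_bond(sum(stakes.values()), stakes[alt])
        tentative = alt                        # dispute flips the outcome
    return stakes, tentative
```

With these starting stakes, the required bond in each round happens to equal the total currently staked, so the total doubles every round — and however many rounds occur, the losing stake is always exactly half the winning stake, i.e. the winning voters' ROI is pinned at 50%.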

We prefer the power challenge over PLCR voting for several reasons. First, voting notoriously suffers from poor voter turnout. This makes it likely that a single 'whale' can dictate the outcome of a vote, which over time deters smaller voters from participating. Second, PLCR voting rewards are unpredictable. Typically, the number of tokens in the minority is so small that the rewards to majority token-holders are simply not worth the effort (in fact, the gas costs associated with PLCR voting sometimes exceed the actual rewards for winning). In the power challenge, a voter on the winning side is guaranteed a 50% ROI in native tokens. This is heavily inspired by Augur's dispute round mechanism (white paper, section C-8). Third, PLCR voting requires voters to return and reveal their vote during a "reveal window", which makes for a poor user experience. The power challenge does not rely on obfuscation of votes (it is secured by the credible threat of a fork; see section VI below) and requires only one period of voter interaction.

  1. If the power challenge resolves in favor of the verifier, the verifier receives y ETH and x native tokens from the applicant and is returned their own deposit of 2x native tokens and y ETH. The applicant is not admitted to the registry.
  2. If the power challenge resolves in favor of the applicant, the applicant receives y ETH and 0.5x native tokens from the verifier and is admitted to the registry with x native tokens staked. The remaining 1.5x native tokens of the verifier are rewarded to voters who supported the final outcome. Of note, the verifier forfeits both their 2x native tokens and their y ETH.

VI. Forking Phase

The power challenge can, in theory, be won by the individual(s) who possess the largest amount of native tokens. To prevent plutocracy, the ultimate security backup is a fork. When the amount of native tokens involved in a power challenge exceeds a certain threshold, the registry forks. Following a fork, there exists a version of the registry with the new applicant included and a version with the new applicant rejected. For simplicity, in a forking scenario the applicant and verifier are returned their respective y ETH. All pending applications are aborted and participants in these pending applications reclaim their respective ETH and native tokens.

All native tokens currently involved in the power challenge are automatically migrated to the version of the registry they signaled support for. All tokens not involved in the power challenge must manually migrate to the version they believe is valid. Users of the registry and token-holders determine which version of the registry they wish to continue using. Presumably, the tokens in the valid version will retain their value, whereas the ones in the invalid version will become valueless. In this forking paradigm, even an attacker with >51% of the native tokens could be incapacitated by off-chain coordination that would render her native tokens valueless in an invalid deprecated registry.

Qualitative Protocol Analysis

In this section we will qualitatively describe 2 possible attack vectors. Following this post we will provide a more rigorous quantitative (LaTeX) analysis of the security guarantees.

TILs rely on the truth as a Schelling point. If the property being curated in a TIL is truly objective, there should be a way to construct a deterministic hint → secret pair.

Because we have adopted a work token model with slashing conditions, verifiers in a TIL will lose native tokens if they neglect to review applications that are assigned to them (Step III-3) or provide a secret that does not match a valid hint/secret pair. This provides an incentive for verifiers to routinely check for new applications (e-mail/text notifications are possible with some degree of centralization) and perform the necessary work to derive a secret from the provided hint. Verifiers who perform their jobs appropriately will receive predictable rewards in ETH.

A malicious actor who wishes to admit a faulty application to the registry has two attack vectors: 1) attempt to communicate with their verifier or 2) verify their own application by owning the assigned verifier account.

The protocol dissuades attack vector #1. A verifier should never trust an applicant who reaches out in an attempt to "help by telling the secret". As can be seen in step V-2, a rational applicant should attempt to trick a verifier into committing to an inaccurate secret for a valid hint/secret pair, because the applicant is rewarded y ETH from the verifier in such a scenario. A verifier cannot be sure that the provided secret is the one actually committed on-chain, because what is stored on-chain is a hash of the secret plus a random number that is unknown to the verifier. If an applicant truly wanted to "help out" a verifier, they would have to reveal both the secret and the random number. However, as can be seen in step III-4, a rational verifier who knows the applicant's random number should simply whistle-blow: the reward for whistle-blowing is greater than the reward for providing accurate verification. For these reasons, it is in the verifier's best interest to simply do the work of deriving the secret themselves.

Attack vector #2 is akin to the well-known 51% attack and other brute-force attacks. These attacks are technically unpreventable. Our only option is to design the protocol such that they are extremely expensive, not profitable and (hopefully) not worthwhile even to a spiteful attacker. A malicious entity could set up a verifier account and submit consecutive applications to a TIL until an application is assigned to their own verifier account. At that point, they would know the secret to an otherwise bogus hint/secret pair and gain admittance to the registry. Each failed application costs the attacker x native tokens and y ETH. The cost of such an attack is therefore related to the number of verifiers in the work contract (akin to the hash power of a PoW chain), the cost in ETH to apply (value of y × price of ETH) and the cost in native tokens to apply (value of x × price of the native token). We presume that, as the registry becomes more desirable, the price of the native token will increase, providing some built-in protection against this attack. Even so, the parameters should be set such that this attack is prohibitively expensive.
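Under uniform random verifier selection, the expected cost of this attack is easy to estimate. A sketch of our own simplified model (it ignores block-hash manipulation and assumes the attacker controls k of the N staked verifier accounts):

```python
def expected_attack_cost(n_verifiers: int, attacker_verifiers: int,
                         x_tokens: float, token_price: float,
                         y_eth: float, eth_price: float) -> float:
    """Expected cost of attack vector #2: submit applications until one is
    randomly assigned to an attacker-controlled verifier account.
    Success probability per application is k/N, so the expected number of
    failed applications is (N - k)/k; each failure burns x native tokens
    and y ETH."""
    p = attacker_verifiers / n_verifiers
    expected_failures = (1 - p) / p
    return expected_failures * (x_tokens * token_price + y_eth * eth_price)
```

This makes the dependence explicit: the attack cost scales linearly with the verifier pool size and with the fiat value of the per-application deposits, so all three parameters (pool size, x, y) are security parameters.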

Future Work

Private Transactions

Both primary attack vectors are related to collusion. This could be made far more difficult by making the verifier assignments private.

  1. Currently, the assigned verifier for an application is publicly known. If this were not the case, it would prove very difficult for an applicant to attempt to bribe a verifier via an off-chain agreement or even another smart contract.
  2. One way to make attack vector #2 prohibitively expensive is to require multiple verifiers for each application, which allows us to exponentially increase the cost of the attack. While this is presently possible, there is the concern that co-verifiers will either attempt to coordinate their secrets or, worse, simply mimic the response of the first verifier who submits a secret.
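The exponential claim in point 2 follows directly from independent verifier sampling. A sketch of our own (assuming, for simplicity, that each of the m verifiers is drawn with replacement):

```python
def collusion_probability(n_verifiers: int, attacker_verifiers: int,
                          verifiers_per_application: int) -> float:
    """Probability that every one of the m independently sampled verifiers
    for an application belongs to the attacker: (k/N)**m.
    The expected number of bogus applications the attacker must burn
    before succeeding therefore grows as (N/k)**m."""
    return (attacker_verifiers / n_verifiers) ** verifiers_per_application
```

With 1 attacker account in a pool of 100, moving from one verifier per application to three pushes the per-attempt success probability from 1% down to one in a million, which is why multi-verifier assignment is attractive despite the coordination concerns noted above.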

General Trustless Oracle

While our burning passion at MedX is to improve healthcare, we have also found ourselves quite far down the decentralized oracle rabbit hole. The crypto ecosystem is still maturing and we may end up building key infrastructure pieces along the way as we build MedX. We are exploring modifications to the TIL protocol which could serve as a trustless and secure method to migrate virtually any piece of objective data to a blockchain.

Give it a TRY

The Objective TCR (TIL) protocol is in beta 1.0 on the Ropsten Testnet using ERC-20 token metadata as a use-case: https://tokenregistry.medxprotocol.com/. Expect bugs!