Cheater Checking: How attention challenges solve the verifier’s dilemma

Ed Felten
Offchain Labs
Sep 5, 2019

Yesterday I wrote about the cheater checking problem: how to make sure that parties who are supposed to be checking others’ computations are really doing so, rather than lazily guessing/assuming that the computations are correct. That post showed that conventional approaches to incentivizing checking don’t give the desired result — there is a baseline rate of cheating, below which checkers won’t find it worthwhile to check.

Let’s quickly review the model. There are two players, an Asserter, who makes a claim that is either true or false, and a Checker, who can check the claim at some computational cost. The Checker’s utility is R*X-C if it checks, and -X*L if it doesn’t check, where R is the reward for catching cheating, C is the cost of checking, L is the Checker’s loss if it fails to detect cheating, and X is the probability that the Asserter cheats (which is chosen by the Asserter). A bit of algebra shows that if X < C/(R+L), the Checker won’t bother to check, so the Asserter can get away with that much cheating.
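
To make the algebra concrete, here is a minimal sketch of the model in Python; the specific values of R, L, and C are hypothetical, chosen only to illustrate the threshold.

```python
# A minimal sketch of the original incentive model. The values of R, L, and C
# below are hypothetical placeholders, not numbers taken from this post.
def checker_utilities(X, R, L, C):
    """Return (utility if checking, utility if not checking)."""
    utility_check = R * X - C   # reward scales with the cheating rate X; cost is fixed
    utility_skip = -L * X       # expected loss from undetected cheating
    return utility_check, utility_skip

R, L, C = 100.0, 100.0, 0.001
threshold = C / (R + L)         # below this cheating rate, checking doesn't pay
for X in (threshold / 2, threshold * 2):
    check, skip = checker_utilities(X, R, L, C)
    print(f"X={X:.2e}: check={check:.6f}, skip={skip:.6f}, checks? {check > skip}")
```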

To fix this, and create a situation where an incentive-driven Checker will always check, we have to change the Checker’s incentives. The fundamental problem is that in the original model the Checker’s positive incentives to check are all proportional to X, the probability of cheating, so the Asserter can make those incentives small by not cheating too much. If we want the Checker to check no matter what, we need an incentive to check, or a disincentive to not check, that is independent of the Asserter’s actions.

TrueBit tried to do this by adding deliberately false claims to the set of assertions, essentially replacing X by X plus a constant. That approach had some problems. (The original Arbitrum paper has a section on incentive problems in TrueBit.)

Attention challenges

We use a different approach that we call attention challenges. The idea is that if the Asserter is computing a value f(x), it first posts x along with a cryptographic challenge. To respond correctly to the challenge, the Checker needs to know f(x). Only after the challenge period has passed does the Asserter publish f(x) — and at that point the Checker has already done the hard work of computing f(x), so it has no incentive to be lazy. (More details on the protocol are below.)

To reduce the number of on-chain transactions this requires, we’ll arrange things so that the Checker’s correct response to the challenge will usually be silence. But in rare cases the Checker will have to publish a tiny transaction on-chain. If the Checker gives the wrong answer — silence when it should post, or posting when it should be silent — it will lose a small deposit.

Let’s adjust the original incentive model to incorporate attention challenges. We need two new parameters (both of which we can choose): P, the fraction of the time that the Checker’s correct response will be to post, and A, the penalty if the Checker gives the wrong answer. Now the Checker’s utility is:

  • if checking: R*X-C
  • if not checking: -L*X-P*A

The key observation is that as long as P*A > C, checking is the best strategy no matter what X (the probability of cheating) is: checking beats not checking whenever R*X-C > -L*X-P*A, and if P*A > C that inequality holds for every X from 0 to 1.
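
Here is a small sketch of the adjusted model: once the expected penalty P*A exceeds the checking cost C, checking wins for every value of X. The specific numbers are illustrative, not prescribed by the protocol.

```python
def utilities_with_challenge(X, R, L, C, P, A):
    """Checker utilities once attention challenges are in place."""
    utility_check = R * X - C        # unchanged from the original model
    utility_skip = -L * X - P * A    # now includes the expected penalty for not checking
    return utility_check, utility_skip

# Illustrative parameters: any choice with P*A > C gives the same conclusion.
R, L, C = 100.0, 100.0, 0.001
P, A = 0.01, 1.0                     # expected penalty P*A = 0.01 > C = 0.001
for X in (0.0, 1e-6, 0.01):
    check, skip = utilities_with_challenge(X, R, L, C, P, A)
    print(f"X={X}: check={check:.6f}, skip={skip:.6f}, checking wins? {check > skip}")
```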

Cost is very low

To evaluate the cost, let’s look at a specific case. We’ll assume there is one assertion every five minutes, and checking costs the Checker $0.001. If we set the probability P to 0.3%, the Checker will have to post a deposit A of $3. Now, the Checker’s cost per assertion is $0.0003 for gas ($0.10 of gas to post its non-silent response, times 0.3% probability it will have to post), plus about $0.0003 for the interest cost of locking up its $3 stake for five minutes, for a total cost of $0.0006 per assertion.
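
As a quick sanity check, here is the same arithmetic in code, using the numbers from this example (the interest figure is taken directly from the text rather than derived from an interest rate):

```python
P = 0.003              # probability that the correct response is to post
A = 3.00               # Checker deposit, in dollars
C = 0.001              # Checker's cost of checking one assertion
gas_per_post = 0.10    # gas cost of posting a non-silent response

assert P * A > C                        # deterrence condition: $0.009 > $0.001
expected_gas = P * gas_per_post         # $0.0003 expected gas per assertion
interest = 0.0003                       # stated cost of locking the $3 stake for five minutes
print(f"expected cost per assertion: ${expected_gas + interest:.4f}")  # $0.0006
```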

Scaling to multiple Checkers

Attention challenges scale well if there are multiple Checkers. The protocol issues a single challenge that affects each Checker differently, forcing each Checker to compute f(x) for itself. Each Checker experiences the same cost ($0.0006 per assertion in our example).

In an open system, where anybody is eligible to check on computations, you can allow anyone to register as a Checker and place the small required deposit. This will make them eligible for attention challenges, and possibly for compensation from a dapp developer. Anybody is free to challenge an incorrect claim by the Asserter, but only the registered Checkers will be faced with attention challenges.

Technical details of the protocol

Having seen what attention challenges can do for us, let’s dig into the technical details of how they work.

Each Checker has a private key k and corresponding public key gᵏ, defined in a suitable group. Each Checker’s public key is known to everyone. We will also rely on a suitable hash function H.

To issue a challenge for the computation of f(x), where the function f is known to everyone in advance, the Asserter generates a random value r, then publishes (x, gʳ) as the challenge.

A Checker who has private key k should respond to the challenge by posting a tiny transaction on-chain if and only if H(gʳᵏ, f(x)) < T, where T is a suitably chosen threshold value. Note that only the Asserter (who knows r) and that specific Checker (who knows its private key k) can compute the hash, because they are the only ones who can compute gʳᵏ. Note also that computing the hash requires knowledge of f(x).
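
Here is a minimal sketch of the challenge and response in Python, assuming a toy modular-arithmetic group and SHA-256 standing in for whatever group and hash function a real deployment would use; all parameter values and names here are illustrative, not part of the actual protocol specification.

```python
import hashlib
import secrets

# Toy discrete-log group for illustration only -- far too small to be secure.
# A real deployment would use a standard, cryptographically sound group.
P_MOD = 2**127 - 1   # group modulus (illustrative)
G = 3                # generator (illustrative)

def H(shared: int, fx: bytes) -> int:
    """Hash the shared secret g^(rk) together with f(x)."""
    data = shared.to_bytes(16, "big") + fx
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

# Threshold chosen so a Checker posts roughly 0.3% of the time, as in the example above.
T = (3 * 2**256) // 1000

# Checker key pair: private key k, public key g^k (known to everyone).
k = secrets.randbelow(P_MOD)
checker_pubkey = pow(G, k, P_MOD)

# Asserter issues a challenge for f(x): pick a random r, publish (x, g^r).
r = secrets.randbelow(P_MOD)
published_challenge = pow(G, r, P_MOD)

fx = b"the value f(x) that the Checker must compute for itself"

# Checker side: compute (g^r)^k = g^(rk); evaluating the hash requires knowing f(x).
shared_checker = pow(published_challenge, k, P_MOD)
should_post = H(shared_checker, fx) < T

# Asserter side: compute (g^k)^r = g^(rk), so it knows what the correct response was.
shared_asserter = pow(checker_pubkey, r, P_MOD)
assert shared_asserter == shared_checker
```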

(A caveat is that if a Checker can guess f(x) with nontrivial probability, without actually computing it, then we might need to increase the Checker’s deposit to deter guessing. In short, if a Checker can correctly guess f(x) with probability at most G, then multiplying the Checker’s deposit by 1/(1-G) will deter guessing.)

After the Checkers have had a window of time to post their responses to the challenge, the Asserter can post its f(x), which will be subject to challenge as normal if any Checker disagrees with it. At this point the Asserter can accuse any Checker who responded incorrectly; the Asserter must publish r to substantiate its accusation. The miners or a contract can check whether the accusation is correct and penalize the offender, but the accusation will be ignored if the Asserter’s claimed f(x) is not eventually accepted as correct. If any Checkers are penalized, the Asserter will get half of the seized funds, and the other half will be burned.
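
Continuing the sketch above, the accusation step might be resolved roughly like this; resolve_accusation is a hypothetical helper standing in for whatever on-chain logic actually performs the check, which the post does not spell out.

```python
def resolve_accusation(r_revealed: int, checker_pubkey: int, checker_posted: bool,
                       published_challenge: int, accepted_fx: bytes) -> bool:
    """Return True if the accused Checker answered wrongly and should be penalized.
    Assumes the Asserter's claimed f(x) was eventually accepted as correct."""
    # The revealed r must reproduce the challenge the Asserter originally published.
    if pow(G, r_revealed, P_MOD) != published_challenge:
        return False                                  # bogus accusation, ignore it
    shared = pow(checker_pubkey, r_revealed, P_MOD)   # (g^k)^r = g^(rk)
    correct_response = H(shared, accepted_fx) < T     # what the Checker should have done
    return checker_posted != correct_response         # penalize only wrong answers
```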

This approach gives Checkers the right incentives. Knowing how a Checker should respond to the challenge requires knowledge of that Checker’s private key and of f(x), so each Checker will want to compute f(x). A Checker can’t execute the protocol safely unless it computes f(x) itself. The responses of other Checkers aren’t useful in determining f(x) because they depend on those Checkers’ private keys. If a Checker relies on somebody else to tell it f(x), it has no way to verify that claimed value (other than computing f(x) itself), and the Checker risks a penalty if it is wrong. There is even a party who has incentive to try to mislead Checkers about f(x) — that’s the Asserter, who profits from Checker mistakes and might use those profits to bribe a Checker’s “friends” to give the Checker bad information.

Optimizations and takeaways

There are several tricks that can make this protocol more efficient. As an example, we can bundle one assertion together with the next challenge into a single on-chain transaction, so that challenges don’t add to the number of transactions. If P is small (e.g. 0.3% in our example) and the number of Checkers isn’t too large, then Checkers will rarely need to write on-chain transactions, so the overall effect of the protocol on the number of on-chain transactions will be minimal.

With clever implementation, the cost of this protocol should be very low, compared to the pre-existing cost of posting assertions on-chain. In our example, adding attention challenges to an existing assert-challenge protocol adds less than 1% to the total cost.

And the benefit is substantial — we get an incentive-compatible checking protocol that doesn’t suffer from the verifier’s dilemma. As long as at least one Checker is behaving rationally, the Asserter’s claims will always be checked.

Coming soon to a blockchain near you

We’re building attention challenges into Arbitrum, our Layer 2 system for accelerating smart contracts. Stay tuned for more research advances from the team at Offchain Labs. Or come join the team and help us discover, implement, and ship faster, cheaper, more secure smart contracts.

Ed Felten
Co-founder, Offchain Labs. Kahn Professor of Computer Science and Public Affairs at Princeton. Former Deputy U.S. CTO at the White House.