Incentivizing Correctness

Dieter Shirley
Dapper Labs
Sep 17, 2019 · 4 min read

Quality feedback results in a better design.

Ed Felten of Offchain Labs took the time to share some concerns he had with a new cryptographic primitive the Dapper Labs team developed, called SPoCKs (Specialized Proofs of Confidential Knowledge).

I want to start by sharing my personal admiration for Dr. Felten, who has spent his career working towards the principles of freedom in the realm of software systems. His name will long be remembered as being on the right side of history, working to help our society understand the best ways to empower humanity with software. I’m very grateful to Ed for taking the time to contribute his sound thinking to our work.

Dr. Felten had two major concerns:

  1. Forgeable proofs
  2. Incentive design

Concern One: Forgeable Proofs

Dr. Felten’s first concern is that the SPoCKs, as originally defined, weren’t unforgeable as we believed and claimed:

Notice that there is nothing about this process that requires the participation of the party whose ID appears in the SPoCK. Anybody can make a SPoCK containing Alice’s public ID — so a SPoCK containing Alice’s public ID proves nothing about Alice’s knowledge of ζ.

In other words, our original proposal made it completely trivial for Alice to create a SPoCK for Bob, without Bob’s involvement. This is a valid concern. We’ve modified the definition of SPoCKs to resolve this oversight, and will be updating our Technical Papers with the updated procedure. (There is a short summary of the change at the bottom of this article.)

Concern Two: Incentive Design

The main thrust of Ed’s piece didn’t depend on forgeable proofs. Instead, it was based on an incentive analysis of the actors in the system. (I’ll use the “Asserters” and “Checkers” terminology from Dr. Felten’s excellent introduction to the Verifier’s Dilemma, which I would recommend to anyone. Note that the “Asserters” in Flow are the Execution Nodes, and the “Checkers” are the Verification Nodes.)

The problem with Ed’s incentive analysis isn’t the analysis as such; he simply made some incorrect assumptions about our incentive model.

Here’s footnote 4 from their paper (page 10): “We assume that honest nodes will not accept unproved values for ζ, because they would be slashed if the ζ value was incorrect.” But that doesn’t follow. Flow doesn’t slash Checkers for being wrong; they slash for being in the minority. [Emphasis added.] Which means that if others are going to accept a sketchy value, then your incentive is to accept it too. That’s a bad equilibrium.

That assumption — that “majority rules” when it comes to slashing — is very common in cryptoeconomic systems, but we went out of our way to design our system differently. In Flow, Asserters and Checkers can never be slashed for correct results, even if they are in the minority. It takes just one honest Checker to report an error to the Consensus Nodes for any number of Byzantine Asserters and Checkers to be slashed for publishing or approving wrong results. This is possible because, unlike most verification schemes, Flow doesn’t treat “no news” as “good news”: in order for a result to be accepted as correct, a supermajority of Checkers must positively affirm that they have checked and accepted the result. The network would rather halt than accept an incorrect result.

To illustrate how our system works, let’s revisit Ed’s example. Alice, Bob, and Charlie are Checkers in a situation where an Asserter has published an incorrect result. Even if Alice and Bob agree with the Asserter’s wrong result, it’s still rational within Flow for Charlie to challenge the result. Charlie will present the challenge to the Consensus Nodes, who will re-execute the disputed computation themselves. When they find Charlie’s challenge to be valid, all three Byzantine nodes will be slashed: the Asserter, Alice, and Bob. Our hero Charlie, meanwhile, isn’t in any danger of being slashed, and will even get a reward for reporting the malfeasance.
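To make the mechanics concrete, here is a toy sketch in Python of the acceptance rule described above. The names, numbers, and data structures are made up for illustration; this is a model of the incentive logic, not Flow’s actual implementation:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Outcome:
    sealed: bool
    slashed: List[str] = field(default_factory=list)
    rewarded: List[str] = field(default_factory=list)

def adjudicate(asserter: str,
               claimed: str,
               approvals: Dict[str, str],       # Checker -> result they affirm
               challengers: List[str],          # Checkers disputing the result
               num_checkers: int,
               re_execute: Callable[[], str],   # done by the Consensus Nodes in Flow
               supermajority: float = 2 / 3) -> Outcome:
    """Toy acceptance rule: positive affirmation required, any challenge is heard."""
    if challengers:
        correct = re_execute()
        if claimed != correct:
            # One valid challenge slashes the Asserter and every Checker who
            # vouched for the wrong result; the challengers are rewarded.
            bad_checkers = [c for c, r in approvals.items() if r == claimed]
            return Outcome(sealed=False,
                           slashed=[asserter] + bad_checkers,
                           rewarded=list(challengers))
        # The challenge itself was wrong: only the challengers are at risk.
        return Outcome(sealed=True, slashed=list(challengers))

    # No challenge: "no news" is NOT good news. The result seals only if a
    # supermajority of Checkers positively affirmed it; otherwise nothing seals.
    affirmed = sum(1 for r in approvals.values() if r == claimed)
    return Outcome(sealed=affirmed > supermajority * num_checkers)

# Ed's example: Alice and Bob approve the wrong result, Charlie challenges it.
outcome = adjudicate("Asserter", "wrong result",
                     approvals={"Alice": "wrong result", "Bob": "wrong result"},
                     challengers=["Charlie"], num_checkers=3,
                     re_execute=lambda: "correct result")
assert outcome.slashed == ["Asserter", "Alice", "Bob"]
assert outcome.rewarded == ["Charlie"]
```

Note that in this model there is no path where a Checker is penalized for correctly disputing the majority, which is the property the “majority rules” assumption gets wrong.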

“Change is the essential process of all existence.”

Thanks again to Dr. Felten for his thoughtful critique of SPoCKs; it has already shaped Flow for the better. If you have any concerns or feedback regarding Flow, we want to hear from you. You can find a contact link and learn more about Flow at withflow.org.

If you have any questions regarding Flow, I’ll be hosting an “Ask Me Anything” in the Flow Discord on Tuesday, September 17th at 2:30pm PDT.

> Join the Flow Discord

Appendix: The Flaw and Fix for SPoCK Forgeability

In a nutshell, a SPoCK works like this:

  • Use a secure, one-way function to deterministically generate a key-pair from the Confidential Knowledge.
  • Use that private key to sign some value unique to the prover (such as their public ID number), and publish that signature along with the public key that can be used to verify it.

Any observer can see that two such proofs must have been generated with the same Confidential Knowledge, because the public keys will be the same. The published proof doesn’t leak the Confidential Knowledge, nor does it allow any other prover to generate an equivalent SPoCK, because the necessary signing key can’t be derived from any publicly available information.

While all of the above is true, it doesn’t prevent one prover from generating “proofs” for any number of other actors. If Alice has the Confidential Knowledge, she can sign her own public ID, or Bob’s public ID, or Charlotte’s. In other words, Alice can create a “proof” that Bob and Charlotte have access to Confidential Knowledge, without them actually having that knowledge.
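To make the construction and the flaw concrete, here is a minimal sketch in Python using PyNaCl’s Ed25519. The function names, the hash, and the choice of signature scheme are illustrative choices of mine, not necessarily what Flow uses:

```python
import hashlib
from nacl.exceptions import BadSignatureError
from nacl.signing import SigningKey, VerifyKey

def derive_spock_key(confidential_knowledge: bytes) -> SigningKey:
    # Deterministically derive a signing key-pair from the Confidential
    # Knowledge via a one-way function (SHA-256 of the knowledge as the seed).
    return SigningKey(hashlib.sha256(confidential_knowledge).digest())

def make_spock_v1(confidential_knowledge: bytes, prover_public_id: bytes):
    # Original scheme: sign the prover's public ID with the derived key and
    # publish (SPoCK public key, signature).
    sk = derive_spock_key(confidential_knowledge)
    return sk.verify_key.encode(), sk.sign(prover_public_id).signature

def verify_spock_v1(spock_public_key: bytes, prover_public_id: bytes,
                    signature: bytes) -> bool:
    try:
        VerifyKey(spock_public_key).verify(prover_public_id, signature)
        return True
    except BadSignatureError:
        return False

zeta = b"confidential execution trace"

# Two provers with the same Confidential Knowledge publish SPoCKs with the
# same public key, so an observer can link them without learning the secret.
alice_pk, alice_sig = make_spock_v1(zeta, b"alice-public-id")
bob_pk, bob_sig = make_spock_v1(zeta, b"bob-public-id")
assert alice_pk == bob_pk

# The flaw: nothing here required Bob's participation. Alice, knowing zeta,
# just produced a "proof" that names Bob.
assert verify_spock_v1(bob_pk, b"bob-public-id", bob_sig)
```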

The fix is straightforward: instead of using the SPoCK key-pair to sign Alice’s public ID, we use it to sign a value that only Alice could have produced: another signature (this one generated with the standard key-pair Alice uses to prove her identity). The exact value she signs isn’t important, but to prevent replay attacks it should be derived from some value publicly associated with the current challenge, one that is highly unlikely to repeat for other challenges and highly unlikely to be signed by any potential prover for any other purpose.
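Continuing the same illustrative sketch (again with hypothetical names and scheme choices, not Flow’s production code), the fixed construction might look like this:

```python
import hashlib
from nacl.exceptions import BadSignatureError
from nacl.signing import SigningKey, VerifyKey

def make_spock_v2(confidential_knowledge: bytes,
                  prover_identity_key: SigningKey,
                  challenge_id: bytes):
    # A value bound to the current challenge, unlikely to repeat or to be
    # signed by any prover for another purpose.
    binding = hashlib.sha256(b"SPoCK-challenge:" + challenge_id).digest()
    # Only the real prover can produce this signature with their identity key.
    identity_sig = prover_identity_key.sign(binding).signature
    # The SPoCK key (derived from the Confidential Knowledge) signs that
    # identity signature instead of a public ID anyone could copy.
    spock_key = SigningKey(hashlib.sha256(confidential_knowledge).digest())
    spock_sig = spock_key.sign(identity_sig).signature
    # Published: (SPoCK public key, SPoCK signature, identity signature).
    return spock_key.verify_key.encode(), spock_sig, identity_sig

def verify_spock_v2(spock_public_key: bytes, spock_sig: bytes,
                    identity_sig: bytes, prover_verify_key: VerifyKey,
                    challenge_id: bytes) -> bool:
    binding = hashlib.sha256(b"SPoCK-challenge:" + challenge_id).digest()
    try:
        # 1. The inner signature really came from the claimed prover.
        prover_verify_key.verify(binding, identity_sig)
        # 2. The key derived from the Confidential Knowledge signed it.
        VerifyKey(spock_public_key).verify(identity_sig, spock_sig)
        return True
    except BadSignatureError:
        return False

# Alice can no longer forge a SPoCK for Bob: she cannot produce a signature
# that verifies under Bob's identity key.
alice_identity = SigningKey.generate()
pk, spock_sig, id_sig = make_spock_v2(b"confidential execution trace",
                                      alice_identity, b"challenge-42")
assert verify_spock_v2(pk, spock_sig, id_sig,
                       alice_identity.verify_key, b"challenge-42")
```

Two provers with the same Confidential Knowledge still publish matching SPoCK public keys, so the linkability an observer relies on is preserved; what changes is that only the holder of Bob’s identity key can produce a SPoCK attributed to Bob.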

Editor’s note: An earlier version of this article misspelt Dr. Felten’s name. It has since been corrected.
