Series;Verified / Revisiting the Verifier’s Dilemma with Raghav Kulkarni

Truebit · Published in Truebit Protocol · Jul 9, 2021

In this Series;Verified, we meet Raghav Kulkarni, co-author of the Verifier’s Dilemma paper, “Demystifying Incentives in the Consensus Computer,” a joint work with Loi Luu, Prateek Saxena, and Jason Teutsch written during the authors’ tenure at the National University of Singapore. This interview commemorates the sixth anniversary of the July 4, 2015 Bitcoin fork, which occurred while the paper was under review at the CCS security conference. During this incident, roughly half of Bitcoin’s miners were SPV mining: while they failed to validate blocks, they did, in effect, validate the Verifier’s Dilemma, helping earn the paper its acceptance at the conference. The team’s discovery brought Truebit into focus as a practical solution to the computational limitations of Nakamoto consensus.

1. Welcome Raghav! Please tell us about yourself and how you came to the National University of Singapore (NUS).

I am a computer scientist who enjoys exploring, understanding, and solving problems in diverse areas of computer science. I earned my Ph.D. at the University of Chicago and then pursued postdoctoral studies at LIAFA, a lab at the University of Paris 7. I later moved to the Centre for Quantum Technologies in Singapore after the professor I was working with in Paris moved there and invited me to join his lab at NUS. During this time, I got the opportunity to learn and publish research in some emerging fields of computer science, such as quantum computing, machine learning, and blockchain technologies. After my postdoctoral years, I applied some of my algorithms and machine-learning skills at scale in production at LinkedIn. Since then, I have been passionate about building real-world solutions using cutting-edge technologies.

2. How did you meet Jason Teutsch? Can you share about your collaborations with him, and the most interesting interactions you had on an academic level?

I met Jason at the University of Chicago. He was one of the smartest and most knowledgeable people around in the areas of logic and recursion theory. We had several stimulating discussions on algorithms, complexity, and combinatorics, which built some of the foundation for our collaborations. Later, while I was working at the Centre for Quantum Technologies, Jason was visiting Singapore to work with Prateek Saxena. He introduced me to the exciting area of blockchain technology, and we conceived the idea that the property testing of boolean functions I was working on could have some relevance to his research on verifiable computation on blockchains. We worked out some intriguing examples, which led to more interesting questions and ideas, such as the initial ones in Section 6 of the paper. Jason took the lead in taking these ideas to the next level, and together with Prateek Saxena and Loi Luu we made two exciting discoveries: first, that blockchains are susceptible to certain types of attacks; and second, that with the right incentives and structure, a blockchain can potentially be used as a powerful source of trusted computation. Interestingly, the July 2015 incident on Bitcoin gave supporting evidence for our theory that rational miners are incentivized to be dishonest and ignore the validity of blocks. This theory is now well known as the Verifier’s Dilemma.

I enjoyed this process for its collaborative nature, which gathered insights from researchers across computer science. Jason, besides bringing strong intellectual rigor and honesty to his area, is keen to work hard to understand ideas from different fields and connect them together. I also gathered that he knows how to enjoy life outside of work, and knows best how to break the silence with laughter when the moment demands.

3. How do you see your work playing out differently in the centralized and decentralized computing realms? How did this specific research influence your current endeavors outside of blockchain?

As I mentioned earlier, some of the ideas (not all) in this paper parallel the very classical setting of property testing, which makes perfect sense in centralized computation as well. However, the Verifier’s Dilemma addresses something unique about ‘Ethereum-like’ blockchains. The ‘smart contract’ scripts on these blockchains are ‘Turing-complete’, i.e., they have much stronger expressive power. So the Verifier’s Dilemma makes specific observations about these types of systems: Ethereum-like blockchains that allow complex computations, or Bitcoin-like blockchains that allow large transaction sets. One can still argue that, in their abstract form, these ideas can be applied to understand the limitations of other models of computation (e.g., prover-verifier games in the context of randomized and approximation algorithms, in classical as well as distributed settings, or even quantum computation models) and to provide algorithms for addressing those limitations.

4. What is the Verifier’s Dilemma and why was this analysis important to you back in 2015 when the paper was written?

Roughly, the idea is the following. Since ‘Ethereum-like’ blockchains allow strongly expressive ‘smart contracts’, there are situations in which rational miners are incentivized to be dishonest: to skip verifying the computations in the blocks they build on. The paper identifies two possible attacks: 1) the resource exhaustion attack by problem givers, and 2) the incorrect transaction attack by provers. These attacks can affect the sanity of blockchain consensus, and our focus was to highlight the role incentive mechanisms play in allowing them.

My interest started with exploring a possible connection between property testing and verifiable computation on blockchains; in particular, understanding the power and limitations of the emerging model of consensus-based computation and providing algorithms inspired by the property-testing literature. As I explored, I realized there were further exciting branches and applications, for instance: achieving robust computation under limited trust, understanding incentive compatibility in multi-party games, zero-knowledge proofs on blockchains, etc.

5. What is the gas limit? And how does it help prevent attacks?

Since Ethereum scripts are highly expressive, they are susceptible to denial-of-service (DoS) attacks, which exhaust honest miners’ resources. In such attacks, a heavy transaction can eventually leave the blockchain unable to support additional transactions, bringing it to a halt.

The ‘gas limit’ was introduced in the hope of mitigating this risk. Loosely speaking, gas makes the issuer of a transaction pay for the cost of executing it, with the intention of making DoS attacks expensive for attackers. However, as we discuss in the paper, this mechanism does not entirely solve the DoS problem.

A potential attacker can exhaust the resources of other miners at no cost. The main reason is that the transaction fee, the gas, is collected only by the miner who includes the transaction in a block. An attacker can freely add his own transactions, whose validity he already knows, to his own block. The attacker does not need to spend computational effort verifying his own transactions, but an honest miner does. Ethereum rewards the winner of the race: the first miner to extend the chain with the next block. The attacker can use his unspent computational resources to gain a competitive advantage and win the race over honest miners. Due to the Verifier’s Dilemma, rational miners do not know whether or not to verify heavy blocks, and the health of the entire blockchain can suffer if it is flooded with unverified blocks.
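
To make the incentive trade-off concrete, here is a minimal back-of-the-envelope model in Python. It is our own illustration, not a calculation from the paper, and every number in it (block reward, invalidity rate, verification time) is a hypothetical placeholder:

```python
# Toy model of the Verifier's Dilemma. All numbers are hypothetical
# placeholders chosen for illustration, not measurements from the paper.

BLOCK_REWARD = 6.25    # reward for mining a valid block (arbitrary units)
P_INVALID = 0.01       # assumed probability that an incoming block is invalid
VERIFY_TIME = 0.005    # fraction of a block interval spent verifying a block
BASE_WIN_PROB = 0.10   # miner's chance of finding the next block at full effort

def expected_reward(verify: bool) -> float:
    # Time spent verifying is time not spent mining, lowering the win chance.
    win_prob = BASE_WIN_PROB * (1.0 - (VERIFY_TIME if verify else 0.0))
    if verify:
        # A verifying miner never wastes effort extending an invalid chain.
        return win_prob * BLOCK_REWARD
    # A skipping miner forfeits the reward whenever the parent block turns
    # out to be invalid, since blocks built on it are rejected too.
    return win_prob * BLOCK_REWARD * (1.0 - P_INVALID)

print(f"verify: {expected_reward(True):.4f}")
print(f"skip:   {expected_reward(False):.4f}")
# With these numbers verifying wins narrowly; an attacker who inflates
# VERIFY_TIME past P_INVALID (resource exhaustion) flips the incentive.
```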

6. The paper illustrates potential attack scenarios. One of them describes what you just mentioned above. In the context of the July 4th fork, how did the ‘invalid’ block affect the health of the blockchain?

In Bitcoin’s 2015 incident, it was discovered that roughly half of the network was mining on top of invalid blocks (with respect to BIP66) without validating them. The validation was likely skipped because of the computational and time cost of verifying the invalid block. The incentive structure in place at the time inadvertently rewarded skipping validation and dishonesty, and the health of the blockchain remained at risk.

7. In summary, the paper makes three contributions: it identifies the Verifier’s Dilemma and the associated attacks, but it also proposes a security model and techniques to realize a consensus computer. Can you tell us more about this?

Broadly speaking, we propose a model that incentivizes rational miners to verify correctly by limiting the amount of work required to verify the transactions within a block, so that deviating from honest behaviour does not give much competitive advantage. This framework can be realized in multiple ways; we describe two in the paper: one with exact consensus for a limited class of problems, and one that supports approximate consensus for a wider class of problems.
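
As a loose sketch of that constraint (our own illustration, not the paper’s formal model; Task, verify_cost, and BLOCK_VERIFY_BUDGET are hypothetical names), one can cap the total verification work admitted into a block so that checking everything costs a miner only a negligible amount of effort:

```python
# Sketch: only admit tasks whose total verification cost fits a small
# per-block budget, so checking everything costs a miner almost nothing.
# Task, verify_cost, and BLOCK_VERIFY_BUDGET are hypothetical names.

from dataclasses import dataclass

@dataclass
class Task:
    task_id: str
    verify_cost: int   # cost, in gas-like units, of CHECKING a claimed answer

BLOCK_VERIFY_BUDGET = 1_000   # cap on total verification work per block

def select_tasks(pending: list[Task]) -> list[Task]:
    """Greedily include cheap-to-verify tasks until the budget is spent."""
    selected, spent = [], 0
    for task in sorted(pending, key=lambda t: t.verify_cost):
        if spent + task.verify_cost > BLOCK_VERIFY_BUDGET:
            break
        selected.append(task)
        spent += task.verify_cost
    return selected

pool = [Task("gcd", 10), Task("matrix-power", 900), Task("sort-check", 200)]
print([t.task_id for t in select_tasks(pool)])   # ['gcd', 'sort-check']
```

Because every admitted task is cheap to check, a miner saves almost nothing by skipping verification, which removes the advantage of dishonest behaviour.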

8. What is the status of the consensus computer as of now?

The consensus computer is a framework; there are challenges in implementing it, and there are techniques to overcome those challenges. As opposed to a classical computer, the consensus computer deals with decentralization, which gives it some unique powers (e.g., trustless computation) and limitations (e.g., the apparent constraint in our 2015 model that one can only correctly solve easy-to-verify problems). Blockchain technology, which is still evolving, looks to have the potential to realize the consensus computer in some form or other. Truebit, for example, resembles a consensus computer, yet in a way surpasses and replaces it with the help of the novel, scalable verification game mechanism introduced in its whitepaper.

9. How has the concept of a consensus computer changed since 2015? Can you take us back in time with a mathematical example?

Back in 2015, the consensus computer we started with seemed to require problems with ‘easy-to-verify’ solutions, which called for some clever mathematical arguments. As a simple example, consider the problem of computing the GCD of two integers m and n. The complexity of a typical solution is cubic in the number of bits required to represent the integers. However, one can encode the solution so that the complexity of verification is much lower. For instance, suppose we ask the prover to provide five integers a, b, x, y, z such that (i) ay + bz = 1, (ii) ax = m and bx = n, and (iii) |y| < b and |z| < a. Condition (ii) says x divides both m and n, and multiplying (i) by x gives my + nz = x, so every common divisor of m and n divides x; by Bézout’s identity, such coefficients exist, and x is indeed the correct GCD of m and n. This can be verified using only about 10 arithmetic operations, whose complexity is only quadratic in the number of bits. The consensus computer of 2015 exploited such special mathematical structure to come up with verifiable problems.
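
Here is a small Python sketch of this prover/verifier split, based on the example above (our own code; the size bounds in condition (iii) are omitted for brevity):

```python
# Certificate-based GCD verification, following the example above.
# The size bounds on y and z (condition (iii)) are omitted for brevity.

def make_certificate(m: int, n: int):
    """Prover: extended Euclid yields x = gcd(m, n) and Bezout coefficients."""
    old_r, r = m, n
    old_y, y = 1, 0
    old_z, z = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_y, y = y, old_y - q * y
        old_z, z = z, old_z - q * z
    x = old_r                      # m*old_y + n*old_z == x
    a, b = m // x, n // x          # hence a*old_y + b*old_z == 1
    return a, b, x, old_y, old_z

def verify_certificate(m, n, a, b, x, y, z) -> bool:
    """Verifier: a constant number of multiplications and comparisons."""
    return a * x == m and b * x == n and a * y + b * z == 1

m, n = 252, 198                    # gcd(252, 198) = 18
a, b, x, y, z = make_certificate(m, n)
assert verify_certificate(m, n, a, b, x, y, z) and x == 18
```

The prover does the expensive work once; the verifier only multiplies and compares, which is the quadratic-cost check described above.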

Truebit, on the other hand, has taken these concepts to the next level where the problem giver does not explicitly have to worry about the mathematical structure of the problem(s) he is posing. Almost any code can be compiled into a Truebit task. Therefore, this concept of a consensus computer has evolved, and Truebit may have replaced it.

10. Can you tell us about verifiable computation? How does Truebit fit within the scope of this paper?

The idea of verifiable computation is to outsource problems and reward the problem solvers fairly after verifying their work. To realize such verifiable computation over decentralized systems like blockchains, one possible direction is to design the right incentive structure and computational constraints so as to ensure correct execution. Truebit, for instance, is a promising step in this direction: it takes the ideas of the paper to the next level and materializes parts of them in a concrete way, potentially opening up a whole new set of possibilities for real-world applications of blockchain technology.

11. What real-world solutions have been put in place to solve the Verifier’s Dilemma?

Truebit is definitely the one leading the effort. In a way, Truebit attempts to bypass the Verifier’s Dilemma by introducing a scalable verification mechanism, the interactive verification game, which is based on a simple binary search. The interactive verification game efficiently resolves a dispute between a Solver (who has provided a solution to a computational task) and a Challenger (who disagrees with the solution). As explained in the Truebit whitepaper, this additional mechanism overcomes the bottleneck created by the Verifier’s Dilemma. By successfully mitigating the dilemma, Truebit creates an opportunity for blockchain technology to become more reliable and scalable.
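
The bisection idea can be shown in a few lines of Python. This is our own toy model, not Truebit’s implementation: real verification games run over Merkleized VM state roots, whereas here a ‘computation’ is just a list of intermediate states:

```python
# Toy verification game: binary-search the first step where the Solver's and
# Challenger's traces diverge, then re-execute only that single step.

def first_divergence(solver_trace, challenger_trace):
    """Both parties agree on the initial state and disagree on the final one;
    binary search finds the earliest index where the traces differ."""
    lo, hi = 0, len(solver_trace) - 1   # invariant: agree at lo, differ at hi
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if solver_trace[mid] == challenger_trace[mid]:
            lo = mid
        else:
            hi = mid
    return hi

def settle_dispute(solver_trace, challenger_trace, step):
    """The 'on-chain' part: recheck one step instead of the whole computation."""
    i = first_divergence(solver_trace, challenger_trace)
    correct = step(solver_trace[i - 1])   # both parties agree on state i - 1
    return "solver" if solver_trace[i] == correct else "challenger"

step = lambda s: s * 2                    # one step: double the state
honest = [1]
for _ in range(8):
    honest.append(step(honest[-1]))       # honest trace: 1, 2, 4, ..., 256
cheat = honest[:5] + [honest[4] * 2 + 1]  # solver errs at step 5 ...
while len(cheat) < len(honest):
    cheat.append(step(cheat[-1]))         # ... and the error propagates
print(settle_dispute(cheat, honest, step))   # -> challenger
```

Since the dispute always collapses to a single step, the cost of on-chain adjudication stays tiny no matter how heavy the original task was.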

Interested in diving into this subject further? Check out Raghav’s Ask Me Anything (AMA) on Reddit HERE, which took place on Thursday, July 15, 2021.
