Validity Proofs Are Not Effective for Bridging Blockchains
What are validity proofs and why they are not an adequate stand-alone solution for bridging
--
With the recent wave of zero-knowledge rollup announcements on Ethereum, the blockchain community is getting excited about validity-proof-based systems as scalable infrastructure for blockchains.
But validity proofs are also sometimes presented as effective cross-chain communication (or bridging) solutions. It is believed that validity proofs avoid the need for honest participants in bridging the same way they do for layer-2 scalability solutions.
Unfortunately, this is not the case.
At Connext, we are constantly researching ways to make communications between blockchains more secure and efficient, so we wanted to shed some light on the technology and its feasible solutions.
In this blog post, I will explain what validity proofs are and why they are not an adequate stand-alone solution for bridging.
Blockchains and consensus
At a high level, a blockchain network is a peer-to-peer network where all participants keep track of an ever-growing ledger of transactions. Each participant stores a copy of this ledger locally, on a computer they control.
These participants use a set of rules — called a consensus protocol — that they agree upon in advance. The consensus protocol provides a guarantee that all of the locally stored ledgers remain identical, even as the ledger grows. As long as more than half of the participants follow the protocol honestly, this guarantee is maintained.
When a transaction is submitted to a blockchain, it is propagated across a peer-to-peer network. Participants collect transactions as they receive them, and order them into blocks. Then they use the consensus protocol to determine how the new set of transactions will change the ledger. The consensus protocol works in two phases.
Phase 1: To add a block to the ledger, participants execute a specific function that takes the current ledger and a set of transactions as input and outputs a new ledger based on those transactions. The result is a new candidate ledger, which they propagate to the rest of the network.
Phase 2: Each participant now receives competing candidate ledgers from other participants in the network. They use another function that tells them which candidate ledger to choose. Once they have chosen, they re-execute the Phase 1 function to verify that the candidate was computed correctly. If verification succeeds, they adopt it as their local copy of the ledger and discard all other candidates.
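To make the two phases more concrete, here is a minimal Python sketch of what a participant does. Everything in it (the toy balance-map ledger, the names apply_transactions and fork_choice, the trivial fork-choice rule) is an illustrative assumption rather than any real chain's implementation.

```python
# A minimal, illustrative sketch of the two consensus phases.
# The data structures and function names are simplifications, not a real protocol.

from dataclasses import dataclass

@dataclass
class Ledger:
    height: int
    balances: dict   # account -> balance (toy state)

@dataclass
class Candidate:
    txs: list        # the transactions used to build this candidate
    ledger: Ledger   # the claimed resulting ledger

def apply_transactions(ledger: Ledger, txs: list) -> Ledger:
    """Phase 1: the agreed-upon function that turns (ledger, transactions)
    into a new candidate ledger. Every honest participant runs exactly this."""
    balances = dict(ledger.balances)
    for sender, recipient, amount in txs:
        if balances.get(sender, 0) < amount:
            raise ValueError("illegal transaction")  # reject invalid input
        balances[sender] -= amount
        balances[recipient] = balances.get(recipient, 0) + amount
    return Ledger(height=ledger.height + 1, balances=balances)

def fork_choice(candidates: list) -> Candidate:
    """Phase 2: pick one winner among competing candidates. Real protocols
    use heaviest-chain rules, stake-weighted votes, and so on; this
    placeholder simply takes the first candidate."""
    return candidates[0]

def accept_block(local_ledger: Ledger, candidates: list) -> Ledger:
    winner = fork_choice(candidates)                           # Phase 2: choose
    recomputed = apply_transactions(local_ledger, winner.txs)  # re-run Phase 1
    if recomputed != winner.ledger:
        raise ValueError("candidate ledger was computed incorrectly")
    return recomputed                                          # adopt as the local copy
```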
Validity Proofs
Validity proofs are proofs of computation of Phase 1. They allow a prover to output a new candidate ledger from a set of transactions, and then prove that this ledger was computed using the correct function.
We use the word proof for a reason. If p is a validity proof, then there exists a verifier function that, given a candidate ledger and p, outputs True if the ledger is valid. On the other hand, if it is given a ledger that was computed incorrectly, or computed on an illegal set of transactions, the verifier function will output False with almost 100% certainty (the chance of a false positive is smaller than that of being struck by lightning).
Therefore we do not need to assume that the prover is honest. This is a very powerful feature. Thanks to the magic of math, probability, and (limits of) computation — even if the prover is dishonest, they will be physically unable to cheat.
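Abstractly, the verifier described above has the interface sketched below. The class name, the commitment arguments, and the use of a Python Protocol are assumptions made for illustration; the proof system behind it (a SNARK or STARK, for example) is treated as a black box.

```python
# Illustrative interface of a validity-proof verifier; the proof system
# itself is out of scope, only the shape of the interface matters here.

from typing import Protocol

class ValidityProofVerifier(Protocol):
    def verify(self,
               old_ledger_commitment: bytes,
               tx_batch_commitment: bytes,
               new_ledger_commitment: bytes,
               proof: bytes) -> bool:
        """Returns True only if the new ledger commitment is the result of
        applying the agreed Phase 1 function to the old ledger and the
        committed transactions. A dishonest prover cannot make this return
        True for an incorrectly computed ledger, except with negligible
        probability, so the verifier never has to trust the prover."""
        ...
```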
Validity proofs are very handy for scaling solutions like zk-rollups, because they enable an off-chain process to execute a batch of transactions and prove that it did so correctly.
After processing these transactions, a zk-rollup submits a new candidate ledger and a validity proof (a proof of the Phase 1 computation) to its layer-1 blockchain.
As the new candidate ledger and proof are propagated across the peer-to-peer network, participants only have to verify the proof, rather than run the Phase 1 computation all over again.
Importantly, they can verify many transactions at once, which cuts down the computation time of Phase 1.
This optimization is possible because of the iron-clad security of validity proofs, where provers are physically unable to be dishonest, even when computing things off chain.
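On the layer-1 side, accepting a rollup batch then reduces to a single proof verification instead of a re-execution. The sketch below is a hypothetical simplification, not any production rollup's actual contract: RollupContract, submit_batch, and the state-root bookkeeping are illustrative names, and it assumes a verifier matching the interface sketched above.

```python
# Illustrative sketch of the layer-1 side of a zk-rollup.
# State roots stand in for the full ledger; `verifier` follows the
# ValidityProofVerifier interface from the previous sketch.

class RollupContract:
    def __init__(self, genesis_state_root: bytes, verifier):
        self.state_root = genesis_state_root
        self.verifier = verifier

    def submit_batch(self,
                     tx_batch_commitment: bytes,
                     new_state_root: bytes,
                     proof: bytes) -> None:
        """Accept a new candidate ledger (state root) from the off-chain
        prover. Layer-1 participants never re-run the batch; they only
        check the validity proof (Phase 1) and then rely on the layer-1's
        own consensus for Phase 2."""
        ok = self.verifier.verify(self.state_root,
                                  tx_batch_commitment,
                                  new_state_root,
                                  proof)
        if not ok:
            raise ValueError("invalid validity proof; batch rejected")
        self.state_root = new_state_root
```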
In short, zk-rollups make Phase 1 of the consensus process more efficient.
Phase 2, on the other hand, is still left to the underlying blockchain.
Validity Proofs and Bridging
A secure bridging protocol between Blockchain A and Blockchain B guarantees that if a transaction is processed and added to the ledger in Blockchain A, then it will be processed and added to the ledger of Blockchain B within some pre-specified time interval.
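That guarantee can be written down as a simple property check. The sketch below is purely illustrative: ledgers are modeled as lists of (transaction id, timestamp) pairs, and delay_bound stands in for the pre-specified time interval.

```python
# The bridging guarantee as a property check (illustrative model only).

def bridge_is_consistent(ledger_a, ledger_b, delay_bound, now) -> bool:
    """A secure bridge guarantees that every transaction added to
    Blockchain A's ledger appears in Blockchain B's ledger within
    `delay_bound` seconds. Ledgers are lists of (tx_id, timestamp) pairs."""
    seen_on_b = {tx_id for tx_id, _ in ledger_b}
    for tx_id, added_at in ledger_a:
        if now >= added_at + delay_bound and tx_id not in seen_on_b:
            return False  # this transaction missed its deadline on Blockchain B
    return True
```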
If we want to mimic the security and trust assumptions of validity proofs in zk-rollups (i.e., the proof is so secure that dishonest participants will always be stopped), then Blockchain A would need to prove both Phase 1 and Phase 2 of its consensus protocol to Blockchain B.
We already know how to prove Phase 1: we simply provide a validity proof, just as with zk-rollups. The Phase 1 proof only needs to show that the correct function was used to process a set of transactions. It doesn’t care what those transactions are, only that they are acceptable transactions and that they were processed correctly when computing the new candidate ledger.
Phase 2, on the other hand, is not possible to prove. To illustrate this, let’s consider what we would learn from a proof of computation of Phase 2. A proof of computation of Phase 2 would tell us that, given a set of candidate ledgers, the winning ledger was chosen correctly.
But how do we know that the set of candidate ledgers Blockchain B receives accurately reflects the candidate ledgers that were propagated across Blockchain A’s peer-to-peer network? There is no way to know that the claimed inputs reflect the actual physical reality of the system.
A cryptographic proof can only encode static data. It is great for encoding a publicly known function and the data that function was run on. But proofs cannot encode the ever-changing flow of information in an ever-changing permissionless network.
There is no way to provide a proof for Phase 2, and therefore there is no way to create a completely trustless bridging protocol.
Some bridges have learned this the hard way. Ronin, for example, had a bridge where a transaction was considered “proven” if it was signed by 5 out of 9 authority members. In March 2022, it fell victim to an attack in which a hacker gained access to 5 of the 9 keys and sent fraudulent transactions to the recipient blockchain.
In this case, Ronin provided a proof of computation of Phase 2, where the function for accepting a candidate ledger was to check that it was signed by 5 out of 9 validators. However, it was unable to provide a proof of the state of the actual physical system, and so naturally fell victim to a censorship attack, where the actual transactions posted to the ledger of the originating blockchain (Blockchain A) were censored and not sent to the destination blockchain (Blockchain B).
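To make the limitation concrete, below is a simplified sketch of a 5-of-9 acceptance rule of the kind described above. The function names and the recover_signer callback are illustrative assumptions, not Ronin's actual contract. Notice what the check can and cannot do: it confirms who signed a message, but nothing in it can confirm that the message reflects what really happened on Blockchain A.

```python
# Simplified sketch of a 5-of-9 multisig acceptance rule
# (illustrative only; not Ronin's actual bridge contract).

REQUIRED_SIGNATURES = 5

def accept_message(message: bytes,
                   signatures: list,
                   authority_keys: set,
                   recover_signer) -> bool:
    """Phase 2-style check: accept `message` (e.g. a claimed withdrawal
    originating on Blockchain A) if at least 5 of the 9 authority keys
    signed it. `recover_signer(message, sig)` stands in for real signature
    verification, such as ECDSA recovery.

    This function cannot check whether `message` corresponds to a
    transaction that was actually posted to Blockchain A; whoever controls
    5 keys can make any message pass."""
    signers = {recover_signer(message, sig) for sig in signatures}
    return len(signers & authority_keys) >= REQUIRED_SIGNATURES
```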
Bridging solutions require a trusted party. A formal proof of this fact can be found here. It is possible to distribute this trust amongst many people, and it is possible to use game-theoretic techniques to disincentivize bad behavior. But there is no way to create a completely trustless bridging protocol.
Since there is no way to prove Phase 2 without trusting an honest party, a secure bridging protocol mandates that somebody is watching both blockchains to ensure they are consistent with each other. A well-designed bridge will minimize the trust required of such watchers as much as possible. One class of protocols that does this well is optimistic bridges.
Optimistic Bridges
Designers of optimistic bridges embrace the fact that a trusted third party is needed. They work to distribute that trust and to design incentives that minimize it as much as possible.
More specifically, current optimistic bridges work by delegating tasks to an external network of participants. Some of these participants, called Updaters, poll Blockchain A for transactions, sign them, and send the signatures and transactions to Blockchain B. Others take on the role of watchers: they poll Blockchain B, compare its ledger to Blockchain A’s, and look out for discrepancies.
When a watcher reports a discrepancy between blockchains, it submits the signed incorrect transaction. Remember that the transaction was signed by an Updater. Because cryptographic digital signatures cannot be forged, there is no way for the watcher to lie, and so the malicious Updater will always be implicated.
You may have noticed that there is no cryptographic mechanism for ensuring that Updaters and watchers are honest. As explained in the previous section, such mechanisms don’t exist. Instead, we use monetary incentives so that it is in their best interest to act honestly. The Updater, which is in charge of signing and sending transactions (and which therefore has the ability to commit fraud), must stake money to assume its role. If a watcher reports fraud, this stake is given to the watcher.
Importantly, you only need one honest watcher monitoring the chain to keep the system secure.
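Putting the pieces together, the incentive mechanism looks roughly like the toy sketch below. The class and method names, the stake bookkeeping, and the is_fraudulent callback (which stands in for comparing an update against Blockchain A's actual ledger) are illustrative assumptions, not Connext's or any live bridge's implementation.

```python
# Toy sketch of the optimistic-bridge incentive mechanism.
# All names, amounts, and data structures are illustrative assumptions.

class OptimisticBridge:
    def __init__(self, required_stake: int):
        self.required_stake = required_stake
        self.updater_stake = {}   # updater -> staked amount
        self.pending = []         # (updater, signed_update) pairs awaiting finality

    def bond_updater(self, updater: str, stake: int) -> None:
        """An Updater must stake money before it is allowed to sign updates."""
        if stake < self.required_stake:
            raise ValueError("insufficient stake")
        self.updater_stake[updater] = stake

    def submit_update(self, updater: str, signed_update) -> None:
        """The Updater posts a signed claim about Blockchain A's ledger.
        It is accepted optimistically, i.e. without a proof, after a delay."""
        if updater not in self.updater_stake:
            raise PermissionError("updater is not bonded")
        self.pending.append((updater, signed_update))

    def report_fraud(self, updater: str, signed_update, is_fraudulent) -> int:
        """A watcher presents the Updater's own signed (fraudulent) update.
        Because the signature cannot be forged, the evidence implicates the
        Updater; its stake is slashed and paid out to the watcher."""
        if not is_fraudulent(signed_update):
            return 0                                 # no fraud, nothing to pay out
        reward = self.updater_stake.pop(updater, 0)  # slash the stake
        self.pending = [p for p in self.pending
                        if p != (updater, signed_update)]
        return reward                                # reward goes to the watcher
```

If the Updater behaves honestly, its stake is never touched; if it ever signs a fraudulent update, any single honest watcher can present that signature and claim the stake, which is the economic version of the one-honest-watcher guarantee above.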
Optimistic systems are still being improved. One of the main goals of the research team at Connext is to design optimistic systems that minimize trust as much as possible using a combination of cryptographic and game-theoretic techniques. But even in their current (and early) state, optimistic mechanisms are extremely helpful for any blockchain bridging system.
Conclusion
The goal of this blog post was to shed some light on the role of validity proofs in bridging and to add some nuance to the “validity proof vs fraud proof” debate. Please leave questions in the comments and add a clap or two if you liked the post :)
About Connext
Connext is a network for fast, trustless communication between chains and rollups. It is the only interoperability system of its type that does this cheaply and quickly without introducing any new trust assumptions. Connext is aimed at developers who are looking to build bridges and other natively cross-chain applications. To date, over $1.5b in transactions have crossed the network.