# Validity Proofs vs. Fraud Proofs Strike Back

## Introduction

Recent months have seen considerable interest in Optimistic Rollup (OR), a Fraud-Proof (FP)-based scalability framework. We at StarkWare are implementing Validity Proofs (VP) because they are more secure than FP. This post points out a few additional benefits that VP have over OR and corrects common misconceptions about VP, and about StarkEx — StarkWare’s STARK-based VP scalability engine.

In particular, in comparison to OR, VP is:

- Fundamentally more secure
- 1000X more capital-efficient for withdrawals
- More scalable (on-chain)
- At least as efficient, computationally

## Security

In our previous analysis we compared VP, where state transitions occur strictly across valid states, to OR, where any state transition is allowed (hence: optimistic), and participants can submit proofs of fraud, i.e. of invalid state transitions. Ours was essentially a security analysis, and pointed to a well-defined attack on OR, an attack which results in all funds being stolen from the OR (additional possible attacks continue to be deliberated). Infrastructure solutions offered for blockchains must be resilient enough to carry the full load of the financial world, where trillions of dollars are transacted daily. How do VP and OR measure up to this task? Since the cost of stealing funds from OR is unrelated to the bounty size, rational agents are bound to break the system once substantial value is placed on it. In contrast, VP cannot be moved to an invalid state and funds cannot be stolen, irrespective of the bounty size. For large-value financial systems, VP is sturdy; OR is likely to break.

Security can also be analyzed based on the role of data, and the implications of its absence. In OR, data plays a far more dangerous role than in VP. In the absence of data, funds in OR can be stolen; for that reason, current designs focus on on-chain data availability (DA). In VP with on-chain data (i.e. ZK-Rollup), funds are as safe as on Layer-1 itself. In VP with off-chain data, funds can merely be frozen, not stolen.

## Capital Efficiency

One painful problem with cryptocurrency liquidity is the delay associated with funds withdrawal. In OR, the standard withdrawal window is ~1 week — the expected duration for submitting fraud proofs (a security parameter). In VP the standard withdrawal window is ~10 min (and likely much less, with additional software and hardware improvements) — the duration of producing a proof attesting to the integrity of the latest computation. The standard withdrawal window for OR is therefore 1000X longer than for VP (as 1 week / 10 minutes ~ 1000). This 1000X penalty must be paid when using OR, whether that penalty is denominated in capital efficiency or in duration.

We previously described a trustless Fast Withdrawal mechanism, wherein a user who wants to withdraw funds promptly, gives the liquidity provider a signed conditional payment — an IOU — for funds *off-chain*, and in turn is paid out — at network speed — the same amount *on-chain*, out of the liquidity provider’s “cookie jar” smart contract. The liquidity provider can replenish their “cookie jar” periodically with funds accumulated in their off-chain balance.
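The off-chain/on-chain flow above can be sketched in a few lines. This is a toy model with illustrative names (`CookieJar`, `LiquidityProvider`) and amounts, not StarkEx code, and it omits verification of the signed conditional payment entirely:

```python
from dataclasses import dataclass

@dataclass
class CookieJar:
    """On-chain contract holding the liquidity provider's funds."""
    balance: int

    def pay_out(self, amount: int) -> None:
        assert self.balance >= amount, "jar needs replenishing"
        self.balance -= amount

@dataclass
class LiquidityProvider:
    jar: CookieJar
    offchain_balance: int = 0

    def fast_withdraw(self, iou_amount: int) -> int:
        # 1. Accept the user's signed conditional payment (IOU) off-chain.
        self.offchain_balance += iou_amount
        # 2. Immediately pay the same amount on-chain, out of the jar.
        self.jar.pay_out(iou_amount)
        return iou_amount

    def replenish(self) -> None:
        # Periodically move accumulated off-chain funds back into the jar.
        self.jar.balance += self.offchain_balance
        self.offchain_balance = 0

lp = LiquidityProvider(jar=CookieJar(balance=1_000))
received_onchain = lp.fast_withdraw(250)  # user is paid 250 on-chain at once
lp.replenish()                            # jar is restored to 1_000
```

The user gets on-chain funds at network speed; the provider's only exposure is the time until the off-chain balance can be settled and moved back into the jar.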

Fast Withdrawal can be used for both VP and OR. But for OR, the liquidity provider needs to hold 1000X more capital in their “cookie jar”, as it needs to service requests received during a window that is 1000X longer. This ratio is independent of any assumptions we make in the algorithm to determine liquidity required in the “cookie jar”: whether it is based on expected withdrawals, or only on withdrawals-minus-deposits; whether it is based on peak liquidity requirements or only on some average/likely case.
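To see how the window length drives capital requirements, here is a back-of-the-envelope calculation under an assumed constant withdrawal demand. The demand figure is hypothetical; the ~1000X ratio between the windows is what matters:

```python
# Illustrative: how the ~1000X window ratio translates into cookie-jar capital.
WITHDRAWAL_DEMAND_PER_MIN = 10_000   # assumed net withdrawal demand (USD/min)
VP_WINDOW_MIN = 10                   # ~10 min proof-generation window
OR_WINDOW_MIN = 7 * 24 * 60          # ~1 week fraud-proof window

# Capital the jar must hold to service all requests during one window.
vp_capital = WITHDRAWAL_DEMAND_PER_MIN * VP_WINDOW_MIN
or_capital = WITHDRAWAL_DEMAND_PER_MIN * OR_WINDOW_MIN

print(f"VP jar: ${vp_capital:,}")                 # $100,000
print(f"OR jar: ${or_capital:,}")                 # $100,800,000
print(f"ratio:  {or_capital / vp_capital:.0f}X")  # 1008X, i.e. ~1000X
```

Swapping in any other demand model (peak vs. average, withdrawals-minus-deposits) rescales both numbers equally, so the ratio is unchanged.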

Unfortunately, Fast Withdrawals are not always a viable option. When dealing with non-fungible assets (or simply smaller fungibility sets) they are impossible (or are likely to be considerably more expensive):

- Non-Fungible Tokens (NFT): As first described by Vitalik, if a valuable crypto-kitty, call her Mitzi, is stored off-chain, her owner cannot ask to receive her on-chain, for there is but one Mitzi in the world.
- Shielded transactions: A Zerocash-style commitment is also non-fungible, in a way. To activate a fast withdrawal when moving a shielded token back to the main chain — where it is to remain shielded — the user would have to reveal a particular commitment to the liquidity provider.

In these cases, where Fast Withdrawals are not available, users would revert to the standard withdrawal window, which, as noted above, is 1000X shorter for VP than for OR.

## Scalability (On-chain)

In this section, so as to compare apples to apples, we will consider only rollup systems, where by definition data is available on-chain: OR vs. STARK ZK-Rollup (StR). Any solution that stores data on-chain has the undesirable property of on-chain resource consumption growing linearly with the volume of rollup transactions. On-chain data includes some *state* (e.g. transaction details) and a *witness* (e.g. digital signatures that prove the consent of the transacting parties). The difference between OR and StR is that OR's witness scales linearly with the number of transactions, whereas StR replaces all those witnesses with a single proof that scales poly-logarithmically with the number of transactions. Importantly, for sufficiently large batches, StR's on-chain data footprint is significantly smaller than OR's.

In some more detail: In StR, the witness can attest to checks done by the rollup operator (e.g. all digital signatures were verified), and therefore a witness (e.g. a zk-proof) is required *per batch* of computations, avoiding the need to append a witness to every single transaction. Fortunately, in modern zkp systems, the proof is of a practically fixed size (well, poly-logarithmically, as we state above). Consequently, as the batch grows, the amortized cost of the witness per transaction diminishes.

In OR, a witness is required *per transaction* in order to allow validators to ensure correctness, and so there is no amortization benefit to larger batches. Moreover, the witness in OR is considerably bigger than the transaction itself: for example, the OR witness needs to include all users' signatures¹, whereas in VP those can be removed (as the proof will attest that they were checked off-chain). In transparent payments, the witness is 3X-5X bigger than the payment itself; in more complex applications (e.g. shielded transactions), the witness is often 10X bigger than the state, and sometimes more.
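The amortization argument of the last two paragraphs can be made concrete with a toy byte-count model. All sizes here are assumptions chosen for illustration (not measured StarkEx or OR figures), but they show why one proof per batch eventually beats one witness per transaction:

```python
# Toy model: amortized on-chain bytes per transaction, OR vs. StR.
STATE_BYTES = 32                      # assumed per-tx on-chain state
OR_WITNESS_BYTES = 3 * STATE_BYTES    # text: witness 3X-5X the payment; take 3X
PROOF_BYTES = 50_000                  # assumed size of one STARK proof per batch

def or_bytes_per_tx(batch_size: int) -> float:
    # OR posts state plus a witness for every transaction: no amortization.
    return STATE_BYTES + OR_WITNESS_BYTES

def str_bytes_per_tx(batch_size: int) -> float:
    # StR posts state per tx plus a single proof per batch, amortized.
    return STATE_BYTES + PROOF_BYTES / batch_size

for batch_size in (100, 1_000, 10_000):
    print(batch_size,
          or_bytes_per_tx(batch_size),
          round(str_bytes_per_tx(batch_size), 1))
```

With these assumed sizes, OR stays at 128 bytes/tx at any batch size, while StR falls from 532 bytes/tx at 100 transactions to 37 bytes/tx at 10,000: the crossover point where StR wins depends on the constants, but for large batches it always wins.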

In summary, OR consumes considerably more on-chain resources, and will therefore hit a scalability ceiling well before StR does.

## General Computation Overhead (STARKs Get Better as They Get Older)

One comparison often made between VP and OR has to do with their computational overhead: for a given computation to be carried out off-chain, how much more work is required in each approach? We will focus on StarkWare’s STARK capabilities, as that is the VP framework we’re currently implementing.

OR: The number often quoted for OR is 100X, as having 100 validators keeping watch over one another will hopefully suffice to ensure computations are done correctly. In order to validate, each of these validators needs to conduct the actual computation, hence 100X.

Note that the smaller/more-predetermined the set of validators, the easier it is for them to collude, or for outsiders to bribe/attack them.

STARK: Here, only one entity — the prover — needs to conduct a large computation, as verification is computationally trivial. How trivial? One can today verify proofs for huge batches of computation with a simple smartphone, and so we can ignore verification costs. The number often quoted for the prover’s overhead is 10,000X, as generating the proof is computationally demanding. The reality for StR is that this overhead is actually about 100X, and therefore in the same ballpark as for OR. It is only 100X for several reasons:

- For arithmetic/algebraic operations, we are already at a computational overhead of less than 100X. The Pedersen hash function, which we currently use, illustrates this: STARK-proving a Pedersen hash is only 20X slower than computing it directly.
- For operations such as SHA-256 there is indeed considerable overhead, but we intend to replace these hash functions with STARK-friendly ones. We were funded by the Ethereum Foundation to do so, and expect the selected committee of academic cryptanalysis experts to issue its recommendation in Q1 2020. We expect the selected STARK-friendly hash function to be about 100X slower to prove than an efficient hash (e.g. SHA-256) computation.
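Putting the two bullets together: a batch's prover overhead is a weighted average of the overheads of its primitives, so it is bounded by the worst primitive. A minimal sketch, using the factors quoted above:

```python
# Prover-overhead bound, using the per-primitive factors from the text.
OR_OVERHEAD = 100                 # 100 validators each redo the computation

PRIMITIVE_OVERHEAD = {
    "pedersen_hash": 20,          # already well under 100X
    "stark_friendly_hash": 100,   # expected factor for the selected function
}

# Any batch's overhead is a weighted average of its primitives' overheads,
# and so can never exceed the most expensive primitive in the mix.
stark_overhead = max(PRIMITIVE_OVERHEAD.values())

print(f"STARK prover overhead <= {stark_overhead}X; OR overhead = {OR_OVERHEAD}X")
```

Under these figures the STARK prover's overhead is capped at roughly the same 100X as OR, which is the "same ballpark" claim made above.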

Finally, one argument often cited in favor of OR is that it applies to general computations, and will support EVM code, whereas VP does not. We're optimistic about STARK for general computation.

Thanks to Dan Robinson, John Adler, and Vitalik Buterin for their comments.

¹ *BLS is often cited as an efficient signature-aggregation mechanism. We believe BLS won't serve this particular use-case, as it is best suited for many signatures on a single message. In the OR use-case there are many messages, each to be signed by its respective sender/receiver; verification of BLS signatures here would take about **10ms/signature** (a single pairing operation per message), which would burden both the validators (off-chain) and the main chain in case of a fraud dispute.*