Recursive SNARKs and Incrementally Verifiable Computation (IVC)


Veridise
8 min read · Jul 27, 2023


This is part of an article series on accumulation, ZK folding, and NOVA:
📓 Part I: Recursive SNARKs and Incrementally Verifiable Computation (IVC)
📓 Part II: Halo and Accumulation
📓 Part III: Nova and Folding (1/2)
📓 Part IV: Nova and Folding (2/2)

This is the first of a sequence of blog posts on Nova, a recursive proof system based on a clever folding scheme for R1CS statements (more precisely, relaxed R1CS statements; these and their generalizations will be defined in later posts). We will start gently and dive deeper into the technicalities later in the series.

Nova is a high-speed recursive SNARK (Succinct Non-interactive Argument of Knowledge). A SNARK is a type of cryptographic proof system that enables a prover to prove a statement to a verifier with a short proof and succinct verification, and a recursive SNARK enables producing proofs that prove statements about prior proofs.

In this post we will introduce the beautiful idea of composing SNARKs (recursive SNARKs), explain what we gain from composition, and show how it helps realize Incrementally Verifiable Computation.

In the next posts we will continue the journey: first we will see how deferral helps (the idea behind Halo and other accumulation schemes), and then the important concept of folding (the main ingredient of Nova).

To set the stage, let’s talk quickly about SNARKs.

A fast introduction to SNARKs

SNARKs allow an untrusted prover to convince a verifier that it “knows” a witness ω satisfying certain properties. The witness could, for instance, be an execution trace of a computation; the proof would then testify to the correctness of the computation and hence of its output.

What’s really fascinating here is that this can be done in a way that is:

  • Succinct: the proof is short and requires minimal work on the side of the verifier, in some cases even less work than just reading the witness;
  • Non-interactive: anyone can verify the proof and be convinced of its correctness, as verification requires no interaction between the prover and the verifier; and
  • Possibly zero-knowledge: the proof does not leak anything about the witness.
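To make these objects concrete, here is a minimal statement/witness relation (just the naive check, not a SNARK): the prover claims to know a preimage ω of a public SHA-256 digest. A SNARK would let the verifier be convinced by a short proof instead of reading ω itself.

```python
import hashlib

# A concrete relation R(x, w): "I know a preimage w of the public digest x".
# Checking it naively requires the whole witness; a SNARK would instead let
# the verifier accept a short proof without ever seeing w.
def relation_holds(public_digest: str, witness: bytes) -> bool:
    return hashlib.sha256(witness).hexdigest() == public_digest

y = hashlib.sha256(b"secret witness").hexdigest()
assert relation_holds(y, b"secret witness")
assert not relation_holds(y, b"wrong guess")
```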

Recursive SNARKs

Rather than verifying a long computation, SNARKs allow us to verify only a much shorter proof of the correctness of the computation.

But why stop here? Can one not go on further and use a SNARK to reduce the verification of the proof to a verification of the correctness of the proof of correctness of the computation? This idea takes us directly to SNARK composition and recursive SNARKs.

Many freshman CS students are stunned by their first encounter with recursion — how can the definition of a function include the function to be defined? Similarly, digesting recursive SNARKs might take some time to get used to: rather than proving knowledge of a witness, we have proof of knowledge of a “proof of knowledge of a witness”.

After some reflection, it does in fact come very naturally. Using a SNARK, we can create a proof π for correctness of a computation, i.e. we can prove that I know a witness ω such that upon running the computation on this witness we will get a given result. This computation is encoded as a circuit 𝐶. A verifier can convince themselves of the validity of the proof by correctly running the verification algorithm with the proof π as input.

This verification algorithm itself can now also be encoded as a circuit. The fact that “the verification algorithm on proof π as input (witness) gives an acceptance result” can be proven using yet another SNARK. So a second SNARK called the outer SNARK attests that the verification algorithm of the first SNARK (the inner SNARK) on the proof π checks out.

This way the verification algorithm of the inner SNARK becomes the circuit of the outer SNARK, and the proof provided by the prover of the inner SNARK becomes its witness.

Sounds complicated? It becomes clearer once the pieces are laid out side by side.
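The composition can be sketched with mock objects. This is a toy illustration only: the “proofs” below are plain hashes with none of a real SNARK's succinctness, soundness, or zero-knowledge, and all function names are hypothetical. What it does show is the structural move: the inner verifier becomes the outer circuit, and the inner proof becomes the outer witness.

```python
import hashlib

def H(*xs):
    """Toy tag standing in for a real proof; NOT cryptographically sound."""
    return hashlib.sha256("|".join(map(str, xs)).encode()).hexdigest()

# Inner SNARK: fast prover, but (pretend) a long proof.
def inner_prove(C, public_out, witness):
    assert C(witness) == public_out          # the actual computation
    return {"kind": "inner", "tag": H("inner", public_out)}

def inner_verify(public_out, proof):
    return proof["kind"] == "inner" and proof["tag"] == H("inner", public_out)

# Outer SNARK: its "circuit" is the inner verification algorithm,
# and its witness is the inner proof.
def outer_prove(public_out, inner_proof):
    assert inner_verify(public_out, inner_proof)
    return {"kind": "outer", "tag": H("outer", public_out)}

def outer_verify(public_out, proof):
    return proof["kind"] == "outer" and proof["tag"] == H("outer", public_out)

# Prove knowledge of w with C(w) = 25, then wrap the proof recursively.
C = lambda w: w * w
pi_inner = inner_prove(C, 25, 5)
pi_outer = outer_prove(25, pi_inner)
assert outer_verify(25, pi_outer)
```

The final verifier only ever runs `outer_verify`; it never touches the (long) inner proof directly.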

What do we gain? The optimal blend

You may wonder if there is any gain in such a convoluted construction — proving knowledge of a proof of knowledge of a witness, rather than proof of knowledge of a witness itself. There are, in fact, multiple benefits.

For regular SNARKs there are various performance measures, including the cost for the prover, the cost for the verifier and the length of the proof. As is to be expected, there is always a tradeoff between these parameters.

However, by composing suitable SNARKs, we can hope to benefit on both ends. We start with a SNARK incurring low cost on the prover (a SNARK with a fast prover); this will be the inner SNARK in the composition, and it will in general produce a long proof. Instead of feeding this long proof into the verifier, we use a second SNARK (the outer SNARK) with short proofs and fast verification to assert that the verification circuit of the inner SNARK would accept the long proof. So for the composed SNARK, on the prover side we benefit from the fast prover of the inner SNARK, and on the verifier side we benefit from the fast verification and short proofs of the outer SNARK.

If the statement to be proven has a repetitive/iterative structure (consists of repeated application of a limited number of functions), composition and the resulting recursive SNARKs become a valuable tool. This will occupy us in the rest of this post. Let’s start by introducing Incrementally Verifiable Computation (IVC).

Incrementally Verifiable Computation (IVC)

Let’s say a computation consists of a repeated application of the same function F. The function F takes as input the output of the previous application of F and possibly some additional input (hints).

A prominent example of this would be processing transactions on a blockchain: updating the state s of the blockchain according to a new transaction T. The function F takes as input the current state of the blockchain and returns the updated state; the transaction is fed in as additional input. The output of F is fed into another instance of F in order to process the next transaction.

So the repeatedly applied function takes as input the current state and a transaction, and returns the updated state. The updated state is fed into the same function again, together with the next transaction.

More formally, we have:

sᵢ = F(sᵢ₋₁)

or more precisely, with the additional input ωᵢ (e.g. the i-th transaction):

sᵢ = F(sᵢ₋₁, ωᵢ)

We might want to provide a proof for the correct execution of this computation. In the example above involving updates to the status of the blockchain according to incoming transactions, such a proof can help us to avoid having to repeat the processing of all transactions each time a new node joins the blockchain. The node just needs to verify the provided proof of correctness.
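The iterative structure can be sketched in a few lines of Python. The names here are illustrative only, and `F` is a hypothetical "apply transaction" rule chosen just so the sketch runs; the point is the shape of the loop, sᵢ = F(sᵢ₋₁, Tᵢ).

```python
# Toy state transition for the blockchain example: s_i = F(s_{i-1}, T_i).
# F here is a hypothetical "apply transaction" rule, chosen only to run.
def F(state: int, tx: int) -> int:
    return state + tx

def run(s0: int, txs: list) -> int:
    s = s0
    for tx in txs:
        s = F(s, tx)      # feed each output into the next application of F
    return s

assert run(0, [3, 4, 5]) == 12
```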

There are several difficulties with this. As the number of iterations can be huge, the computation can be very long, and producing a proof for such a big bulk of computation at once would be very expensive. More importantly, the iterations might continue perpetually.

After each new application of the function F we would need to produce a new proof for the whole computation from scratch. Moreover, to start producing a proof for n iterations, we would have to wait until the end of the n iterations (in the blockchain example, we would have to wait until all transactions have arrived before we could start producing a proof).

Ideally we would like to be able to update the proof of computation correctness after each step (to build it up incrementally) and be able to verify its correctness at any given point, with the length of the proof/work of the verifier not continuously increasing.

This takes us to the idea of Incrementally Verifiable Computation (IVC). Because of the iterative structure of the computation, we could obviously create a separate proof for each of the applications of F, i.e., πᵢ proves that the computation at step i is correct. But we can in fact compose these SNARKs so that proof πᵢ asserts not only the correctness of step i, but also that the verification of proof πᵢ₋₁ checks out, i.e. it is a proof for the following two facts:

the proof πᵢ₋₁, asserting that from s₀ we reach sᵢ₋₁ in i−1 steps, is correct

AND

application of F to sᵢ₋₁ (with the additional input) gives sᵢ

In other words, we augment the computation F of each step with the verification circuit applied to the proof from the previous step. This means that by verifying the last proof we can be certain that the last step is correct, and we will also have verified the previous proof πᵢ₋₁ through the verification circuit embedded into the computation. So both the given step and the input fed into it are correct.

Having verified πᵢ₋₁, we know that step i−1 is correct and proof πᵢ₋₂ checks out, which in turn asserts the correctness of the previous step and the proof fed into it, and so on all the way back to the very first step.
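The IVC chain can be mocked in the same toy style as before (hypothetical names, hash tags instead of real proofs, no cryptographic soundness). What the sketch does capture is the control flow: each proving step runs the verifier on the previous proof, so checking only the final proof vouches for the entire chain.

```python
import hashlib

def H(*xs):
    """Toy tag standing in for a real proof; NOT cryptographically sound."""
    return hashlib.sha256("|".join(map(str, xs)).encode()).hexdigest()

def F(state, tx):
    return state + tx                 # hypothetical step function

PI_0 = H("pi", 0)                     # base-case "proof" for the start state

def ivc_verify(i, s_i, pi_i):
    if i == 0:
        return pi_i == PI_0
    return pi_i == H("pi", i, s_i)

def ivc_prove(i, s_prev, pi_prev, tx):
    """pi_i attests: pi_{i-1} verifies AND F(s_{i-1}, tx) = s_i."""
    # the verifier circuit is embedded into the proving step:
    assert ivc_verify(i - 1, s_prev, pi_prev)
    s_i = F(s_prev, tx)
    return s_i, H("pi", i, s_i)

# Build the proof incrementally; only the last proof is ever checked.
s, pi = 0, PI_0
for i, tx in enumerate([3, 4, 5], start=1):
    s, pi = ivc_prove(i, s, pi, tx)
assert s == 12 and ivc_verify(3, s, pi)
```

Note that the verifier's work stays constant no matter how many steps have been folded into the chain, which is exactly the property we asked for above.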

What’s next?

The verification circuit is embedded into each step and it is expensive as it involves checking opening claims in a polynomial commitment (more about this next time).

We can, however, cut out the costly part from each of these, accumulate them to a single instance and only verify this aggregate instance; put simply, verify them all at once.

This is the idea behind Halo and other accumulation schemes: deferring and combining expensive parts. Nova takes this idea to the maximum: the entire verification step is deferred. This is achieved by the use of a newly introduced primitive called a folding scheme — an elegant way to fold two instances (and by repeated application all instances) of the whole step to a single one.

Sounds exciting, right? Wait until our next post where we will dive into accumulation and folding.

To make sure you don’t miss the next article from this exciting series, connect with us:

Website | Twitter | Facebook | LinkedIn | Github | Request Audit
