What is PLONK?

Luca Franceschini
12 min read · Jan 29, 2024


A comprehensive guide to the PLONK SNARK construction

Introduction

In this article, we explore the intricacies of PLONK, a groundbreaking advancement in the realm of SNARKs (Succinct Non-Interactive Arguments of Knowledge). We’ll delve into its foundational concepts and mechanics.

To understand how PLONK is used to construct a universal SNARK, let’s first recap what a SNARK is. If you are not familiar with basic ZK jargon, use cases, and key innovations, you can first have a look at the Guide to Zero Knowledge Proofs (Part 1).

What is a SNARK?

In the Guide to Zero Knowledge Proofs (Part 2), we have covered in detail how the “modern” way of building a universal SNARK for general circuits consists of combining an IOP with a PCS.

The IOP provides us with the interactions between the Prover and Verifier that we need in order to construct the proof. As the IOP is a theoretical object, we need to instantiate it by using a PCS.

The PCS lets a committer (the Prover) commit to a polynomial, so that he can later open the polynomial at any point and convince the Verifier that the polynomial he has committed to satisfies some properties.

We then use the Fiat-Shamir heuristic to obtain a non-interactive proof from an interactive one.

As SNARKs require succinctness, we need to make sure that proofs are short and fast to verify. To be more specific, the evaluation proof size and verifier time should be at most logarithmic in the degree of the polynomial (where the degree of the polynomial corresponds to the number of gates of the circuit minus one).

How to build a SNARK?

To build a modern SNARK we need to:

  1. Represent the program as an arithmetic circuit, by describing our program via a circuit with addition and multiplication gates. The reason we need to transform our program into algebraic relations is that, when working with elliptic curves (or other cryptographic primitives), we cannot efficiently verify the operations of the bytecode of a computer program, while we can very efficiently evaluate complex algebraic expressions.
  2. Convert the circuit description we obtained from the arithmetization step into a polynomial identity. This means using the arithmetization to obtain a set of equations proving properties of polynomials. This allows us to succinctly represent multiple instances of the arithmetic gates via polynomials, which we can then succinctly commit to. Keep in mind that the more complex the program, the higher the degree of the polynomials we’re dealing with. By using polynomial relationships, we can compress the circuit information into representations of the polynomial that have a constant size no matter the degree. For example, we can embed a vector with millions of variables into a polynomial, and then perform arithmetic on that polynomial as if it were a normal vector. And this is cool.
  3. Commit to this representation using a Polynomial Commitment Scheme. We can choose between different commitment schemes, each with different tradeoffs, properties, and cryptographic assumptions. We generally prefer commitment schemes that give us a constant proof size, so that the proof size doesn’t depend on the size of the circuit (aka the degree of the polynomials).
  4. Evaluate the polynomial identity by evaluating polynomials at a random point. We can do this because if a polynomial identity holds at a random point, it almost certainly holds everywhere (Schwartz–Zippel lemma). Being able to verify these polynomial relationships by only looking at a single point of evaluation is the reason why we can do it so efficiently.
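
The random-evaluation idea in step 4 can be sketched in a few lines of Python. The modulus and the polynomials below are illustrative, not taken from any real SNARK:

```python
import random

# Toy Schwartz-Zippel check: over a large prime field, two distinct
# low-degree polynomials agree at a random point only with tiny probability,
# so a single random evaluation is strong evidence of a polynomial identity.
P = 2**61 - 1  # a Mersenne prime, chosen only for illustration

def eval_poly(coeffs, x):
    """Evaluate a polynomial (coefficients in ascending order) at x mod P."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

# Claimed identity: (x + 1)^2 == x^2 + 2x + 1 (true)
lhs = [1, 2, 1]
rhs = [1, 2, 1]
# A broken identity differing in one coefficient:
bad = [2, 2, 1]

r = random.randrange(P)  # the Verifier's random challenge point
assert eval_poly(lhs, r) == eval_poly(rhs, r)  # identity holds at r
assert eval_poly(lhs, r) != eval_poly(bad, r)  # mismatch detected (w.h.p.)
```

One evaluation per identity is what keeps the Verifier cheap, no matter how large the polynomials are.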

Types of SNARKs

As we will have different tradeoffs depending on the PCS we choose, SNARKs can be categorized into:

  • Non-universal (circuit-specific trusted setup).
  • Universal (requires only one trusted setup which can be used for every circuit, and the pre-processing is transparent).
  • Transparent (no trusted setup, larger proof size).

The PLONK Interactive Oracle Proof

PLONK is a polynomial IOP where a Prover can convince a Verifier that for a public circuit C and statement x, he has a witness w such that C(x,w) = 0.

This means that the PLONK IOP can be used together with a PCS to construct a universal SNARK for general circuits.

The PLONK pre-processing does not require a trusted setup, which means that:

  • If we match the PLONK IOP with a PCS that does not require a trusted setup, then the final SNARK doesn’t have a trusted setup.
  • If we combine PLONK with a PCS that does require a trusted setup, then the final SNARK will need a trusted setup.

Depending on the PCS we combine PLONK with, we will end up with different tradeoffs, determined by the properties and cryptographic assumptions of the selected PCS.

For example:

  • PLONK + KZG (pairings, trusted setup) → Aztec
  • PLONK + Bulletproofs (no pairings) → Halo2
  • PLONK + FRI (hashing) → Plonky2

Arithmetization

As we previously mentioned, we need to perform an arithmetization to efficiently construct proofs. To do this, we encode the entire execution trace of the circuit into a table that lists the inputs and output of every gate.

Because we want to commit to polynomials, we want to interpolate a polynomial that encodes the entire execution trace. This means that we need to encode all the inputs and all the wires into some polynomials. The Prover uses FFTs to compute the coefficients of the polynomial, and the degree of the polynomial is proportional to the number of gates. For example, if the encoding gives us 12 constraints, the corresponding polynomial has degree at most 11. The fact that the degree of the polynomial equals the number of elements of the vector minus one is what makes PLONK so efficient.
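
As a toy illustration of this interpolation step, here is a minimal Lagrange interpolation over a small prime field. The field size and trace values are made up; real provers use FFTs over much larger fields, but the degree bound is the same:

```python
P = 97  # small illustrative prime field

def poly_mul(a, b):
    """Multiply two polynomials (ascending coefficients) over GF(P)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out

def interpolate(points):
    """Lagrange interpolation over GF(P); returns ascending coefficients."""
    n = len(points)
    result = [0] * n
    for i, (xi, yi) in enumerate(points):
        basis, denom = [1], 1
        for k, (xj, _) in enumerate(points):
            if k == i:
                continue
            basis = poly_mul(basis, [(-xj) % P, 1])  # multiply by (x - xj)
            denom = denom * (xi - xj) % P
        scale = yi * pow(denom, -1, P) % P           # yi / denom mod P
        for k, c in enumerate(basis):
            result[k] = (result[k] + scale * c) % P
    return result

# A made-up 5-entry trace: the interpolated polynomial has degree at most 4
trace = [3, 1, 4, 1, 5]
poly = interpolate(list(enumerate(trace)))
assert len(poly) == len(trace)   # 5 coefficients, i.e. degree <= 4
for x, y in enumerate(trace):    # the polynomial reproduces the trace
    assert sum(c * pow(x, k, P) for k, c in enumerate(poly)) % P == y
```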

Example of a circuit that proves knowledge of a secret value W that makes the result of the expression equal to 45. Public inputs are A and B.

Lagrange Polynomials and interpolations

There are two different ways of representing a polynomial:

  • Coefficient representation (the polynomial is defined by providing its coefficients).
  • Point-value representation (the polynomial is defined by providing the evaluation of the polynomials at some points).

FFTs and NTTs allow switching between these two representations, and we can go back and forth between them via a linear transformation.

As any arithmetic circuit described via vectors can be transformed into a Lagrange-basis representation, we prefer representing the vectors as a linear sum of Lagrange polynomials.

The reason is that when using Lagrange interpolation, our global parameters are the Lagrange coefficients of the polynomial and this gives us an efficient point-value representation for the polynomial.

Lagrange polynomials allow us to represent vectors via polynomials while making sure that any relation that applies to the vectors also holds true on the polynomials. This implies that once we have a way to evaluate gates using vectors, we can use Lagrange polynomials to represent those vectors as polynomials.

As we already know that each Lagrange polynomial evaluates to either 1 or 0 on a specific set of points, we also know that once we multiply a vector by these polynomials we obtain a polynomial that, when evaluated at each of those specific points, equals an element of our vector.

This means that the vector arithmetic can be mapped directly onto the equivalent polynomial evaluations, allowing us to evaluate a large number of arithmetic gates at once. And this is great for obtaining efficiency and succinctness.
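
The defining property used here is that the i-th Lagrange basis polynomial is 1 at the i-th point and 0 at the others. A minimal sketch, with an illustrative field and set of points:

```python
P = 97
points = [1, 2, 3, 4]   # the "special" evaluation points (illustrative)

def lagrange_basis(i, x):
    """Evaluate the i-th Lagrange basis polynomial L_i at x over GF(P)."""
    num, den = 1, 1
    for j, xj in enumerate(points):
        if j != i:
            num = num * (x - xj) % P
            den = den * (points[i] - xj) % P
    return num * pow(den, -1, P) % P

# L_i(x_j) is 1 when i == j and 0 otherwise
for i in range(len(points)):
    for j, xj in enumerate(points):
        assert lagrange_basis(i, xj) == (1 if i == j else 0)

# Encoding a vector v: V(x) = sum_i v[i] * L_i(x) hits v[j] at the j-th point
v = [5, 0, 7, 2]
V = lambda x: sum(v[i] * lagrange_basis(i, x) for i in range(len(v))) % P
assert [V(xj) for xj in points] == v
```

Because V reproduces the vector at the chosen points, any gate-by-gate relation on vectors becomes a relation the encoding polynomials satisfy at those points.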

Multiplicative Subgroups, Roots of Unity and Vanishing Polynomial

In the realm of the real numbers, the roots of unity are 1 and -1, because 1² = 1 and (-1)² = 1.

However, when constructing SNARKs we need to work over prime fields, and therefore we need to use modular arithmetic.

To be more specific, the special points at which the Lagrange polynomials evaluate to either 1 or 0 are determined by defining a multiplicative subgroup generated by a root of unity. In order to construct a proof of knowledge we need a very large number of roots of unity, and our evaluations are verified if they are equal to 0 modulo the vanishing polynomial.

If the arithmetic expression that we want to check on the vectors holds true, the polynomials we are evaluating are perfectly divisible by the vanishing polynomial. This implies that if the vectors on which we want to check the relations have millions of elements, we can evaluate millions of mini-equations with one single equation. We do this by checking whether the polynomials are 0 modulo the vanishing polynomial of the roots of unity. This is pretty cool.
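
A minimal sketch of these objects, using the illustrative prime 17 (real constructions use much larger fields with large power-of-two subgroups):

```python
P = 17                  # illustrative prime; real SNARK fields are far larger
n = 4                   # subgroup size: n must divide P - 1 = 16
omega = 13              # 13 has order 4 mod 17, so it generates the subgroup
subgroup = [pow(omega, k, P) for k in range(n)]   # the n-th roots of unity
assert len(set(subgroup)) == n                    # n distinct elements

# The vanishing polynomial Z(x) = x^n - 1 is zero on every subgroup element...
for w in subgroup:
    assert (pow(w, n, P) - 1) % P == 0
# ...and nonzero off the subgroup, e.g. at x = 2:
assert (pow(2, n, P) - 1) % P != 0
```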

Proof Verification

Once the Prover encodes the computation trace into polynomials, the Verifier needs to verify that the polynomial is a commitment to a valid computation trace by verifying the inputs, gates, copy constraints and outputs.

1. Verify the inputs

The Verifier needs to check that the polynomial the Prover committed to correctly encodes the inputs of the circuit. To check that the polynomial encodes the correct inputs, the Prover performs a zero test against the vanishing polynomial.

2. Verify the gates evaluations

The Verifier needs to check that each gate operation is evaluated correctly, which implies making sure that addition gates are processed as additions and multiplication gates are processed as multiplications. For example, if a gate represents a sum, the Verifier needs to check that the sum has been correctly performed.

The Verifier can do this by using a Selector polynomial, which has been computed in the pre-processing phase and only depends on the gates of the circuit (not on the inputs). The Selector polynomial evaluates to either 0 or 1 depending on whether the gate is an addition gate or a multiplication gate. By using the Selector polynomial, we can combine the addition and multiplication equations into one single equation, where the Selector activates or deactivates the multiplication or addition gate (depending on whether its value is 0 or 1). The Selector polynomial improves the efficiency of our SNARK, and we can perform the verification with another zero test.
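
A toy version of this combined gate equation, assuming the convention that the selector is 1 for addition gates and 0 for multiplication gates (the field and trace values are made up):

```python
P = 97   # illustrative field

# Each row of the trace: (left input a, right input b, output c, selector s)
gates = [
    (3, 4,  7, 1),   # addition gate: s = 1, and 3 + 4 = 7
    (3, 4, 12, 0),   # multiplication gate: s = 0, and 3 * 4 = 12
]

# One single constraint covers both gate types at every row:
# s * (a + b) + (1 - s) * (a * b) - c == 0
for a, b, c, s in gates:
    assert (s * (a + b) + (1 - s) * (a * b) - c) % P == 0
```

In the real protocol, a, b, c, and s are replaced by polynomial evaluations, and the zero test checks this equation on the whole subgroup at once.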

3. Verify the copy constraints

Once we have verified that the individual gate operations have been performed correctly, we need to verify that the relationships between these gates are correct. In the previous step we only verified a bunch of isolated gates: we still have not described any meaningful circuit, as we didn’t connect any wires. To obtain a SNARK we need to wire the gates, make sure that the output of each gate is the input of another, and verify all these equalities. In other words, for every gate we need to make sure that its output value is “copied” into the input of the corresponding gate. Checking the relationships between all the gates is a tricky part of constructing a universal SNARK.

Enforcing these Copy Constraints is a complex operation, and the solution that PLONK provides is the reason why it became such a successful universal SNARK construction.

To check the copy constraints, PLONK applies a permutation to the indices of the gates and encodes the permutation vector into a polynomial. This Wiring polynomial applies a rotation to all the constraints, and allows the Verifier to check that the wiring is implemented correctly. This works because, when defining a polynomial that implements a rotation of all these equalities, invariance under the rotation makes all the equalities satisfied. The Wiring polynomial only depends on the circuit (not on the inputs), and by having mapped the inputs and outputs via a permutation function, we can enforce a permutation check. This is a smart way to encode the wiring constraints of the circuit.
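
The randomized grand-product check behind this permutation argument can be sketched as follows. The wiring, values, and field below are illustrative, and this sketch omits how the running product is itself encoded as a polynomial:

```python
import random

# Copy constraints hold iff the wire values v are invariant under the wiring
# permutation sigma. With high probability over random beta, gamma, that is
# equivalent to the two products below being equal.
P = 2**61 - 1
v     = [5, 9, 5, 9]      # wire values; positions 0<->2 and 1<->3 are wired
sigma = [2, 3, 0, 1]      # the wiring permutation

beta, gamma = random.randrange(P), random.randrange(P)

lhs = rhs = 1
for i in range(len(v)):
    lhs = lhs * (v[i] + beta * i + gamma) % P
    rhs = rhs * (v[i] + beta * sigma[i] + gamma) % P
assert lhs == rhs          # copy constraints hold

# Breaking one copy constraint makes the products differ (w.h.p.):
bad = [5, 9, 6, 9]         # position 2 no longer matches position 0
lhs = rhs = 1
for i in range(len(bad)):
    lhs = lhs * (bad[i] + beta * i + gamma) % P
    rhs = rhs * (bad[i] + beta * sigma[i] + gamma) % P
assert lhs != rhs
```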

4. Verify the output

The Verifier needs to verify that the output of the last gate is 0, as this implies that the execution trace is valid.

Note: The reason why PLONK’s pre-processing does not require a trusted setup is because anyone can compute transparently the Selector and the Wiring polynomials, as there are no secrets involved in these commitments.

Wrapping it up

Let’s quickly recap the required steps when using PLONK to construct a SNARK:

  1. Setup procedure. The Prover runs the setup procedure to pre-process the circuit by committing to the Selector and Wiring polynomials. Anyone can check that these polynomials were computed correctly, as the setup procedure is transparent.
  2. The Prover encodes the computation trace of the circuit into a polynomial and commits to it.
  3. The Prover sends the commitments to the Verifier.
  4. The Verifier evaluates the proof by checking that:
  • Inputs are correct. He does this by proving that a specific polynomial is identical to 0 on the set of inputs.
  • Gates are correct. He does this by proving that a specific polynomial is identical to 0 on the set of gates.
  • Wires are correct. He does this by proving that a specific polynomial is equal to 0 via a permutation check.
  • The output of the circuit is equal to the output of the last gate, which is equal to 0.

The verifications of this last step correspond to three “0 tests” and one evaluation test. If successful, these checks convince the Verifier that the polynomial commitment provided by the Prover is the commitment to a valid computation trace. This implies that the Prover has a valid witness w such that C(x,w) = 0.

Extensions and optimizations

PLONK gives a short, constant-size proof and a fast verifier algorithm. Because PLONK only requires three “0 tests” and one evaluation proof, the resulting SNARK is quite small. The main challenge is reducing the Prover time, as the Prover needs to compute the polynomial that encodes the computation trace explicitly (which implies that the memory consumption of the Prover grows linearly with the size of the circuit).

There are several PLONK optimizations available:

  • TurboPlonk adds more arithmetic in the verification equations. Instead of just checking that the Prover satisfied addition and multiplication gates, it also checks that a set of boolean gates is satisfied. The Prover runtime is not increased because the additional work is done in the pre-processing step, therefore no additional polynomial commitments are needed.
  • HyperPlonk replaces the set of points in the finite field with the boolean hypercube, meaning that the polynomial is a multilinear polynomial and the computation trace is encoded on the vertices of this multidimensional hypercube.
  • Plonky2 allows us to use custom gates instead of only addition and multiplication gates. Usually, we want to use cryptographic functions such as elliptic curves and hash functions, and Plonky2 allows us to create custom gates that perform these specific operations. This means we can represent an elliptic curve operation with just one custom gate instead of many addition and multiplication gates. This is great, as we would like to fit these group operations into one row instead of multiple rows, so that our system becomes efficient. This optimization becomes crucial especially when we need to use a specific component multiple times in the circuit. While R1CS doesn’t give us “free” gates, with Plonky2 we can embed functionalities in custom gates basically without overhead, at the cost of an increased proof size. This is because R1CS deals with quadratic constraints (which can be considered lower level), while with Plonky2 we can choose a higher degree. By choosing the recursion depth and the degree of the polynomial, we can add more expressiveness to the constraints, allowing us to construct the circuit with fewer gates. This is one of the aspects that makes PLONK good at binary decomposition and at managing elliptic curve primitives. Plonky2 uses FRI as a commitment scheme, meaning that the final SNARK is transparent.
  • Plookup lets us check that certain values belong to a predefined table, so that we can speed up the running time of the Prover by having fewer rows.
  • Goblin Plonk allows efficient recursive proof composition by using cycles of curves and delegating expensive non-native group operations.

Using KZG as Polynomial Commitment Scheme

One of the Polynomial Commitment Schemes usually combined with the PLONK IOP is KZG (also known as the Kate commitment). This PCS is quite fast and has interesting properties, but requires a trusted setup and pairings.

When running the setup procedure to obtain the global parameters, we need to trust whoever ran the setup to delete the toxic waste. KZG relies on a trusted setup, and Multi-Party Computation is often used in trusted setups so that the need for trust is minimized.

The Prover uses the global parameters to commit to the polynomial in its coefficient form. The commitment is binding but not hiding (as some information about the polynomial is revealed).

Because the proof is the commitment to the quotient polynomial, the Prover needs to compute the quotient polynomial and commit to it. He commits to it with a single group element, which makes the proof constant-size and independent of the degree of the polynomial. The fact that the proof size is constant (and does not depend on the circuit size) is the main reason why the KZG scheme is so popular.
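
The algebra behind a KZG opening can be sketched without pairings: for any polynomial p and point a, p(x) − p(a) is divisible by (x − a), and the opening proof is a commitment to that quotient. The field and coefficients below are illustrative, and the actual commitment step (in the exponent, via the trusted setup) is omitted:

```python
P = 97   # illustrative field

def eval_poly(coeffs, x):
    """Evaluate p (ascending coefficients) at x over GF(P)."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def quotient(coeffs, a):
    """Synthetic division: returns (q in descending order, p(a)), where
    p(x) - p(a) = q(x) * (x - a) over GF(P)."""
    q, acc = [], 0
    for c in reversed(coeffs):
        acc = (acc * a + c) % P
        q.append(acc)
    pa = q.pop()           # the final value is the remainder p(a)
    return q, pa

p = [7, 3, 0, 5]           # made-up polynomial: 7 + 3x + 5x^3
a = 10
q, pa = quotient(p, a)
assert pa == eval_poly(p, a)

# Spot-check the identity p(x) - p(a) == q(x) * (x - a) at x = 42
x, qx = 42, 0
for c in q:                # q is in descending order, so Horner directly
    qx = (qx * x + c) % P
assert (eval_poly(p, x) - pa) % P == qx * (x - a) % P
```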

The KZG scheme also has interesting properties:

  • We can generalize KZG to commit also to multivariate polynomials (polynomials with multiple variables), and not only univariate ones.
  • KZG supports batch proofs. If the Prover needs to commit to several polynomials, he can batch the opening proofs into a single group element (by opening multiple polynomials at multiple points).
  • KZG supports linear-time commitments. This means that the time for the Prover to commit to the polynomial is linear in the degree of the polynomial.

The downsides of using KZG are:

  • The need for a trusted setup.
  • The size of the global parameters grows linearly with the degree of the circuit’s polynomials.

Conclusion

PLONK introduced a remarkable innovation in the ZK space, and there is currently a lot of ongoing research to optimize it and find the most convenient combinations between this IOP and the available PCSs.

References

Papers
Plonk | Plonky2 | Plookup | Hyperplonk | Goblin Plonk | KZG

Lectures and educational content
Plonk explained by Dan Boneh
Plonk explained by Zac Williamson
Plonk explained by Adrian Hamelink
Zero Knowledge Podcast episode on Plonk
The origin of Plonk explained by Ariel Gabizon
