This is Part 3 of a series of articles; a PDF version is available.

We have paved our way with a simple yet sufficient example involving most of the *zk-SNARK* machinery, and it is now possible to advance the scheme to execute zero-knowledge programs.

# Computation

Let us consider a simple program in pseudocode:

Algorithm 1: Operation depends on an input

```
function calc(w, a, b)
    if w then
        return a × b
    else
        return a + b
    end if
end function
```

From a high-level view, it is quite unrelated to polynomials, for which we have the protocol. Therefore we need to find a way to convert the program into polynomial form. The first step is to translate the program into the language of mathematics, which is relatively easy; the same statement can be expressed as follows (assuming *w* is either 0 or 1):

*f*(*w*, *a*, *b*) = *w*(*a* × *b*) + (1 − *w*)(*a* + *b*)

Executing calc(1, 4, 2) and evaluating *f*(1, 4, 2) will yield the same result: 8. Conversely, calc(0, 4, 2) and *f*(0, 4, 2) both resolve to 6. We can express any kind of finite program in such a way.

What we need to prove then (in this example) is that for the input (1, 4, 2) of the expression *f*(*w*, *a*, *b*) the output is 8; in other words, we check the equality:

*f*(1, 4, 2) = 1 × (4 × 2) + (1 − 1) × (4 + 2) = 8
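The equivalence between the branching program and its arithmetic form is easy to sanity-check. The sketch below (plain Python; function names are illustrative) runs both versions side by side:

```python
def calc(w, a, b):
    # Branching version, as in Algorithm 1
    return a * b if w else a + b

def f(w, a, b):
    # Arithmetic version: w selects between the two branches (w is 0 or 1)
    return w * (a * b) + (1 - w) * (a + b)

# The two versions agree on the examples from the text
assert calc(1, 4, 2) == f(1, 4, 2) == 8
assert calc(0, 4, 2) == f(0, 4, 2) == 6
```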

# Single Operation

We now have a general computation expressed in a mathematical language, but we still need to translate it into the realm of polynomials. Let us have a closer look at what a computation is in a nutshell. Any computation at its core consists of elemental operations of the form:

left operand **operator** right operand = output

Two operands (i.e., values) are operated upon by an operator (e.g., +, −, ×, ÷). For example, for operands 2 and 3 and the operator “multiplication,” these resolve to 2 × 3 = 6. Because any complex computation (or program) is just a series of such operations, we first need to find out how a single operation can be represented by a polynomial.

## Arithmetic Properties of Polynomials

Let us see how polynomials are related to arithmetic operations. If you take two polynomials *f*(*x*) and *g*(*x*) and, for example, multiply them, *h*(*x*) = *f*(*x*) × *g*(*x*), then the result of evaluating *h*(*x*) at any *x* = *r* will be the product of the evaluations *f*(*r*) and *g*(*r*). Let us consider the following two polynomials: *f*(*x*) = 2*x*² − 9*x* + 10 and *g*(*x*) = −4*x*² + 15*x* − 9. Visualized in the form of a graph:

For *x* = 1 these evaluate to: *f*(1) = 2 − 9 + 10 = 3, *g*(1) = −4 + 15 − 9 = 2. Let us multiply the polynomials: *h*(*x*) = *f*(*x*) × *g*(*x*) = −8*x*⁴ + 66*x*³ − 193*x*² + 231*x* − 90. Visually, the multiplication can be seen as:

If we examine the evaluation of the resulting polynomial *h*(*x*) = *f*(*x*) × *g*(*x*) at *x* = 1, we get *h*(1) = −8 + 66 − 193 + 231 − 90 = 6; hence the values of *f*(*x*) and *g*(*x*) at *x* = 1 have been multiplied, and likewise at every other *x*.

Likewise, if we add *f*(*x*) and *g*(*x*) we get −2*x*² + 6*x* + 1, which evaluates to 5 at *x* = 1.

Note: evaluations at other x-s were also added together, e.g., examine x = 2, x = 3.
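These arithmetic properties can be verified numerically. The following sketch (plain Python; the hand-expanded *h*(*x*) is taken from the text) checks the product and sum at a range of points:

```python
# The two example polynomials from the text
f = lambda x: 2 * x**2 - 9 * x + 10
g = lambda x: -4 * x**2 + 15 * x - 9

# h(x) = f(x) * g(x), expanded by hand as in the text
h = lambda x: -8 * x**4 + 66 * x**3 - 193 * x**2 + 231 * x - 90

for x in range(-5, 6):
    # Evaluating the product polynomial equals the product of evaluations
    assert h(x) == f(x) * g(x)
    # Likewise for the sum: f(x) + g(x) = -2x^2 + 6x + 1
    assert f(x) + g(x) == -2 * x**2 + 6 * x + 1

# The specific values at x = 1 discussed above
assert f(1) == 3 and g(1) == 2 and h(1) == 6
```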

If we can represent operand values as polynomials (and we indeed can, as outlined above), then through these arithmetic properties we will be able to obtain the result of an operation imposed by an operator.

# Enforcing Operation

If a prover claims to have the result of a multiplication of two numbers, how does the verifier check it? To prove the correctness of a single operation, we must enforce the correctness of the output (result) for the operands provided. If we look again at the form of an operation:

The same can be represented as an *operation polynomial*:

*l*(*x*) **operator** *r*(*x*) = *o*(*x*)

where for some chosen *a*:

- *l*(*x*) — at *a* represents (evaluates to) the value of the left operand
- *r*(*x*) — at *a* represents the value of the right operand
- *o*(*x*) — at *a* represents the result (output) of the operation

Therefore if the operands and the output are represented correctly by those polynomials, then the evaluation *l*(*a*) **operator** *r*(*a*) = *o*(*a*) should hold. Moving the *output polynomial* *o*(*x*) to the left side of the equation, *l*(*a*) **operator** *r*(*a*) − *o*(*a*) = 0, surfaces the fact that the *operation polynomial* *l*(*x*) **operator** *r*(*x*) − *o*(*x*) has to evaluate to 0 at *a* if the value represented by the *output polynomial* *o*(*x*) is the correct result produced by the **operator** on the values represented by the *operand polynomials* *l*(*x*) and *r*(*x*). Hence a valid *operation polynomial* must have the root *a*, and consequently it must contain the cofactor (*x* − *a*), as we established previously (see the factorization section). This is the *target polynomial* we prove against, i.e., *t*(*x*) = *x* − *a*.

For example, let us consider the operation 3 × 2 = 6.

It can be represented by the simple polynomials *l*(*x*) = 3*x*, *r*(*x*) = 2*x*, *o*(*x*) = 6*x*, which evaluate to the corresponding values for *a* = 1, i.e., *l*(1) = 3, *r*(1) = 2, *o*(1) = 6.

Note: The value of “a” can be arbitrary.

The operation polynomial then will be:

*l*(*x*) × *r*(*x*) − *o*(*x*) = 3*x* × 2*x* − 6*x* = 6*x*² − 6*x*

Which is visualised as:

It is noticeable that the operation polynomial has (*x* − 1) as a cofactor:

6*x*² − 6*x* = 6*x*(*x* − 1)

Therefore if the prover provides such polynomials *l*(*x*), *r*(*x*), *o*(*x*) instead of the former *p*(*x*), the verifier will accept the proof as valid, since the operation polynomial is divisible by *t*(*x*). On the contrary, if the prover tries to cheat and substitutes the output value with 4, e.g., *o*(*x*) = 4*x*, then the *operation polynomial* will be 6*x*² − 4*x*:

which does not have the root *x* = 1; hence *l*(*x*) × *r*(*x*) − *o*(*x*) is not divisible by *t*(*x*) without a remainder:

Hence such an *inconsistent operation* will not be accepted by the verifier (as described in the factorization section).
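The divisibility argument can be illustrated with a small sketch (plain Python; the helper `divide_by_x_minus_a` is an illustrative name implementing synthetic division by the linear factor (x − a)):

```python
def divide_by_x_minus_a(coeffs, a):
    """Divide a polynomial by (x - a); return (quotient, remainder).
    Coefficients are ordered from the highest degree to the lowest."""
    quotient = []
    carry = 0
    for c in coeffs:
        carry = c + carry * a
        quotient.append(carry)
    remainder = quotient.pop()
    return quotient, remainder

# Honest prover: l(x) = 3x, r(x) = 2x, o(x) = 6x  =>  l*r - o = 6x^2 - 6x
quotient, rem = divide_by_x_minus_a([6, -6, 0], a=1)
assert rem == 0           # divisible by (x - 1): operation 3 * 2 = 6 is consistent
assert quotient == [6, 0] # the cofactor is 6x

# Cheating prover: o(x) = 4x  =>  l*r - o = 6x^2 - 4x
_, rem = divide_by_x_minus_a([6, -4, 0], a=1)
assert rem != 0           # not divisible: the claimed output 4 is rejected
```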

# Proof of Operation

Let us modify our latest protocol to support a proof of a single multiplication operation. Recall that previously we had a proof of knowledge of a polynomial *p*(*x*), but now we deal with three polynomials: *l*(*x*), *r*(*x*), *o*(*x*). While we could define *p*(*x*) = *l*(*x*) × *r*(*x*) − *o*(*x*), there are two counterarguments. Firstly, in our protocol the multiplication of encrypted values (i.e., *l*(*s*) × *r*(*s*)) is not possible in the proving stage, since a pairing can only be used once and it is required for the “polynomial restriction” check. Secondly, this would leave the prover an opportunity to modify the structure of the polynomial at will while still maintaining a valid cofactor *t*(*x*), for example *p*(*x*) = *l*(*x*), or *p*(*x*) = *l*(*x*) − *r*(*x*), or even *p*(*x*) = *l*(*x*) × *r*(*x*) + *o*(*x*), as long as *p*(*x*) has the root *a*. Such a modification effectively means that the proof is about a different statement, which is certainly not desired.

That is why the evaluations of the polynomials *l*(*s*), *r*(*s*), *o*(*s*) have to be provided separately by the prover, which means that the *knowledge of polynomial* protocol must be adjusted. In essence, what a verifier needs to check in encrypted space is that *l*(*s*) × *r*(*s*) − *o*(*s*) = *t*(*s*)*h*(*s*). While a verifier can perform multiplication using cryptographic pairings, the subtraction (−*o*(*s*)) is an expensive operation (it would require finding the inverse of *g*ᵒ⁽ˢ⁾); that is why we move *o*(*x*) to the right side of the equation: *l*(*x*)*r*(*x*) = *t*(*x*)*h*(*x*) + *o*(*x*). In encrypted space the verifier’s check translates to:

*e*(*g*ˡ⁽ˢ⁾, *g*ʳ⁽ˢ⁾) = *e*(*g*ᵗ⁽ˢ⁾, *g*ʰ⁽ˢ⁾) · *e*(*g*ᵒ⁽ˢ⁾, *g*)

Note: recall that the result of cryptographic pairings supports encrypted addition through multiplication, see section on pairings.

While the *setup *stage stays unchanged, here is the updated protocol:

Such a protocol allows us to prove that the result of a multiplication of two values was computed correctly.
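Setting the encryption aside, the underlying equation the verifier relies on, *l*(*s*) · *r*(*s*) = *t*(*s*) · *h*(*s*) + *o*(*s*), can be checked in the clear for our 3 × 2 = 6 example. Here *h*(*x*) = 6*x* is the cofactor (6*x*² − 6*x*)/(*x* − 1); the sample point `s` is a stand-in for the secret value from the setup:

```python
import random

# Polynomials for the single operation 3 * 2 = 6, as in the text
l = lambda x: 3 * x
r = lambda x: 2 * x
o = lambda x: 6 * x
t = lambda x: x - 1          # target polynomial
h = lambda x: 6 * x          # cofactor: (6x^2 - 6x) / (x - 1)

# The verifier's equation holds at any sample point, including a random one
s = random.randint(2, 10**6)
assert l(s) * r(s) == t(s) * h(s) + o(s)
```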

One might notice that in the updated protocol we had to let go of the *zero-knowledge *component. The reason for this is to make the transition simpler. We will get back to it in a later section.

## Multiple Operations

We can prove a single operation, but how do we scale to proving multiple operations (which is our ultimate goal)? Let us try to add just one more operation. Consider the need to compute the product *a* × *b* × *c*. In the elemental operation model this means two operations: *a* × *b* = *m* and then *m* × *c* = *result*.

As discussed previously, we can represent one such operation by making the operand polynomials evaluate to corresponding values at some arbitrary *x*, for example 1. The properties of polynomials do not restrict us from representing other values at a different *x*, for example 2, e.g.:

Such independence allows us to *execute* two operations at once without “mixing” them together, i.e., without interference. The result of such polynomial arithmetic will be:

Where it is visible that the operation polynomial has roots *x *= 1 and *x *= 2. Therefore both operations are *executed *correctly.

Let us have a look at an example of three multiplications, 2 × 1 × 3 × 2, which can be executed as follows: 2 × 1 = 2, then 2 × 3 = 6, then 6 × 2 = 12.

We need to represent these as operand polynomials such that, for the operations represented by *x* ∈ {1, 2, 3}, *l*(*x*) passes correspondingly through 2, 2 and 6, i.e., through the points (1, 2), (2, 2), (3, 6); similarly *r*(*x*) through (1, 1), (2, 3), (3, 2) and *o*(*x*) through (1, 2), (2, 6), (3, 12).

However, how do we find a polynomial that passes through all of those points? For any case with more than one point, a dedicated mathematical method has to be used.

## Polynomial Interpolation

In order to construct *operand* and *output polynomials*, we need a method which, given a set of points, produces a polynomial passing through all of them; this is called *interpolation*. Several methods are available:

- Set of equations with unknowns
- Newton polynomial
- Neville’s algorithm
- Lagrange polynomials
- Fast Fourier transform

Let us use the first method as an example. The idea behind it is that there exists a unique polynomial *p*(*x*) of degree at most *n* with yet *unknown coefficients* which passes through the given *n* + 1 points {(*xᵢ*, *yᵢ*)}, *i* ∈ [*n* + 1], such that the polynomial evaluated at *xᵢ* equals *yᵢ*, i.e., *p*(*xᵢ*) = *yᵢ* for all *i*. In our case, for three points it will be a polynomial of degree 2 of the form:

*p*(*x*) = *ax*² + *bx* + *c*

Let us *equalize *the evaluated polynomial for each point of the *left operand polynomial *(green) and solve the system of equations by expressing each coefficient in terms of others:

Therefore the *left operand polynomial* is:

*l*(*x*) = 2*x*² − 6*x* + 6

Which corresponds to the following graph:

We can find *r*(*x*) and *o*(*x*) in the same way:

*r*(*x*) = −3/2 *x*² + 13/2 *x* − 4, *o*(*x*) = *x*² + *x*
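As a cross-check, the system-of-equations method can be carried out mechanically. This sketch (plain Python with exact fractions; function names are illustrative) recovers the coefficients of each polynomial from its three points via a tiny Gaussian elimination:

```python
from fractions import Fraction

def interpolate(points):
    """Solve for coefficients [a, b, c] of a*x^2 + b*x + c through 3 points."""
    # Augmented matrix of the system a*x_i^2 + b*x_i + c = y_i
    m = [[Fraction(x * x), Fraction(x), Fraction(1), Fraction(y)]
         for x, y in points]
    # Gauss-Jordan elimination (pivots are nonzero for distinct x values here)
    for i in range(3):
        m[i] = [v / m[i][i] for v in m[i]]
        for j in range(3):
            if j != i:
                m[j] = [vj - m[j][i] * vi for vj, vi in zip(m[j], m[i])]
    return [row[3] for row in m]

l_coeffs = interpolate([(1, 2), (2, 2), (3, 6)])   # -> 2x^2 - 6x + 6
r_coeffs = interpolate([(1, 1), (2, 3), (3, 2)])   # -> -3/2 x^2 + 13/2 x - 4
o_coeffs = interpolate([(1, 2), (2, 6), (3, 12)])  # -> x^2 + x
```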

## Multi-Operation Polynomials

Now that we have operand polynomials representing the three operations, let us see step by step how the correctness of each operation is verified. Recall that a verifier is looking for the equality *l*(*x*) × *r*(*x*) − *o*(*x*) = *t*(*x*)*h*(*x*). In this case, because the operations are represented at the points *x* ∈ {1, 2, 3}, the target polynomial has to evaluate to 0 at those *x*-s; in other words, the roots of *t*(*x*) must be 1, 2 and 3, which in elementary form is:

*t*(*x*) = (*x* − 1)(*x* − 2)(*x* − 3)

Firstly, *l*(*x*) and *r*(*x*) are multiplied, which results in:

*l*(*x*) × *r*(*x*) = −3*x*⁴ + 22*x*³ − 56*x*² + 63*x* − 24

Secondly, *o*(*x*) is subtracted from the result of *l*(*x*) × *r*(*x*):

*l*(*x*) × *r*(*x*) − *o*(*x*) = −3*x*⁴ + 22*x*³ − 57*x*² + 62*x* − 24

Here it is already visible that every multiplication of operands corresponds to a correct result. For the last step, a *prover* needs to present a valid cofactor:

*h*(*x*) = (*l*(*x*) × *r*(*x*) − *o*(*x*)) / *t*(*x*)

Using long division we get:

With *h*(*x*) = −3*x* + 4, a *verifier* can compute *t*(*x*)*h*(*x*):

*t*(*x*)*h*(*x*) = (*x*³ − 6*x*² + 11*x* − 6)(−3*x* + 4) = −3*x*⁴ + 22*x*³ − 57*x*² + 62*x* − 24

It is now evident that *l*(*x*) × *r*(*x*) – *o*(*x*) = *t*(*x*)*h*(*x*) which is what had to be proven.
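Finally, the whole multi-operation check can be replayed numerically. The sketch below (plain Python with exact fractions; polynomials as derived above) confirms the identity at more points than its degree, which implies it holds everywhere:

```python
from fractions import Fraction

# Interpolated polynomials for the three multiplications 2*1=2, 2*3=6, 6*2=12
l = lambda x: 2 * x**2 - 6 * x + 6
r = lambda x: Fraction(-3, 2) * x**2 + Fraction(13, 2) * x - 4
o = lambda x: x**2 + x
t = lambda x: (x - 1) * (x - 2) * (x - 3)  # target polynomial
h = lambda x: -3 * x + 4                   # cofactor from the long division

# Both sides have degree 4; agreement at more than 5 points means the
# polynomials are identical.
for x in range(10):
    assert l(x) * r(x) - o(x) == t(x) * h(x)

# Each operation is executed correctly at its designated point x = 1, 2, 3
assert [(l(x), r(x), o(x)) for x in (1, 2, 3)] == \
       [(2, 1, 2), (2, 3, 6), (6, 2, 12)]
```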