This is a series of articles. Part 4, PDF version

With the multi-operation polynomials approach introduced in part 4, we can prove many operations at once (even millions or more), but there is a critical downside to it.

If the program whose execution is being proved uses the same *variable*, either as an operand or as an output, in different operations, for example:

The *a* will have to be represented in the *left operand polynomial* for both operations as:

Nevertheless, because our protocol allows the prover to set any coefficients of a polynomial, he is not restricted from setting different values of *a* for different operations (i.e., at the *x* representing each operation), e.g.:

This freedom breaks consistency and allows the prover to prove the execution of some other program, which is not what the verifier is interested in. Therefore we must ensure that a variable can have only a single value across every operation it is used in.

Note: a variable in this context differs from the regular computer-science definition in the sense that it is immutable and is only assigned once per execution.

## Single-Variable Operand Polynomial

Let us consider a simple case (as with the current example) where we have only one variable (e.g., *a*) used in all left operands, represented by the *left operand polynomial* *l*(*x*). We have to find out whether it is possible to ensure that this polynomial represents the same value of *a* for every operation. The reason a prover can set different values is that he has control over the coefficient of every power of *x*. Therefore, if those coefficients were constant, that would solve the variability problem.

Let us have a closer look at polynomials containing equal values. For example, examine two polynomials, each representing equal values across the two operations (i.e., at *x* = 1 and *x* = 2), where the first polynomial contains the value 1 and the second contains the value 2:

Notice that the corresponding coefficients are proportional between the two polynomials: the coefficients in the second are twice as large as those in the first, i.e.:

Therefore, when we want to change all the values in a polynomial simultaneously, we need to change its proportion. This follows from the arithmetic properties of polynomials: if we multiply a polynomial by a number, its evaluations at every possible *x* are multiplied (i.e., scaled) by that number. To verify, try multiplying the first polynomial by 3 or any other number.
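This scaling property is easy to check concretely. Below is a quick sketch; the polynomial used is an arbitrary placeholder, not one of the document's examples:

```python
# Multiplying a polynomial by a number scales its evaluation at every x.
def evaluate(poly, x):
    # poly is a coefficient list [c0, c1, c2, ...] meaning c0 + c1*x + c2*x^2 + ...
    return sum(c * x**i for i, c in enumerate(poly))

p = [2, -3, 1]                # arbitrary placeholder polynomial: 2 - 3x + x^2
scaled = [3 * c for c in p]   # multiply every coefficient by 3

for x in range(-5, 6):
    assert evaluate(scaled, x) == 3 * evaluate(p, x)
```

Scaling every coefficient by the same factor is exactly what "changing the proportion" means: the shape of the evaluations is preserved, only their magnitude changes.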

Consequently, if a verifier needs to force the prover to set the same value in all operations, then it should only be possible for the prover to modify the proportion and not the individual coefficients.

So how can the proportion of coefficients be preserved? We can start by considering what is provided as proof for the *left operand polynomial*. It is an encrypted evaluation of *l*(*x*) at some secret *s*: *g*ˡ⁽ˢ⁾, i.e., an encrypted number. We already know from the “restricting a polynomial” section how to restrict a prover to use only the provided powers of *s* through an *α*-shift, such that homomorphic multiplication is the single operation available.

Similarly to restricting a single exponent, the verifier can restrict the whole polynomial at once. Instead of providing separate encryptions and their *α*-shifts

the protocol proceeds:

The prover needs to respond with the same *α*-shift, and because he cannot recover *α* from the proving key, the only way to maintain the shift is to multiply both encryptions

by the same value. Therefore the prover cannot modify individual coefficients of *l*(*x*): for example, if *l*(*x*) = *ax*² + *bx* + *c*, he can only multiply the whole polynomial at once by some value *v*: *v*⋅(*ax*² + *bx* + *c*) = *v*⋅*ax*² + *v*⋅*bx* + *v*⋅*c*. Multiplication by another polynomial is not available, since *pairings* and *α*-shifts of individual powers of *s* are not provided. The prover cannot add or subtract either, since:

We now have the protocol, but how should the *operand polynomial* *l*(*x*) be constructed? Since any value can be derived by multiplying 1 by it, the polynomial should evaluate to 1 at the *x* of every corresponding operation, e.g.:

This allows a prover to *assign *the value of *a*:

**Remark 4.1** *Since the verification key contains the encrypted α, it is possible to add (or subtract) an arbitrary value v′ to the polynomial, i.e.:*

*Therefore it is possible to modify the polynomial beyond what is intended by the verifier and prove a different statement. We will address this shortcoming in a further section.*
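The *α*-shift restriction described above can be sketched with toy numbers. Everything here (the prime, the generator, the secrets, and the polynomial *l*(*x*)) is a made-up illustration; the group is far too small to be secure, and the verifier's final check stands in for the pairing used in the real protocol:

```python
# Toy sketch of the alpha-shift restriction (insecure, illustrative parameters).
p, g = 2_147_483_647, 5          # tiny "group"; NOT secure
s, alpha = 123_456, 789_012      # trusted-setup secrets

def l(x):                        # hypothetical left operand polynomial
    return 4 * x**2 + 7 * x + 9

# Setup publishes only the evaluation pair, not individual powers of s:
proof   = pow(g, l(s), p)             # g^{l(s)}
shifted = pow(g, alpha * l(s), p)     # g^{alpha * l(s)}

# The prover's only available move is scaling the whole polynomial by some v,
# i.e., raising both encryptions to the power v:
v = 42
proof_v, shifted_v = pow(proof, v, p), pow(shifted, v, p)

# Verifier's check (done with a pairing in the real protocol): the
# alpha-ratio between the two encryptions survives the scaling.
assert pow(proof_v, alpha, p) == shifted_v
```

Because the prover never sees *α* or the individual encrypted powers of *s*, any manipulation other than raising both values to the same power breaks this ratio.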

## Multi-Variable Operand Polynomial

We are now able to enforce a single value, but only if all left operands use the same variable. What if we add another variable *d*:

If we used the same approach, we would not be able to set the value of each variable separately: all distinct variables would be multiplied by the same factor. Hence such a restricted polynomial can support only one *variable*. If we examine the properties of polynomials, we will see that adding two polynomials adds their evaluations at each point. Therefore we can separate the *operand polynomial* *l*(*x*) into *operand variable polynomials*

such that the *variables* *a* and *d* are *assigned* and restricted separately, similarly to the previous section, and then added together to represent the variables of all left operands. Because we add *operand variable polynomials* together, we need to ensure that only one of the *variables* is represented for each operation by the *operand polynomial*.

Using these arithmetic properties, we can construct each *operand variable polynomial* such that if the *variable* is used as an operand in the corresponding operation, it evaluates to 1, and otherwise to 0. Consequently, 0 multiplied by any value remains zero and is ignored when the polynomials are added together. For our example, the variable polynomials must conform to the evaluations:

In graph form:

Consequently, we can set the value of each variable separately and then just add the polynomials together to get the operand polynomial; for example, if *a* = 3 and *d* = 2:

*Note: we use a subscript next to a value to indicate which variable it represents, e.g., 3ₐ is the variable a instantiated with the value 3.*
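The construction of this section can be sketched as follows. The assignment of *a* and *d* to particular operations is hypothetical, chosen only for illustration:

```python
from fractions import Fraction

# Lagrange interpolation: returns a function evaluating the unique
# lowest-degree polynomial through the given (x, y) points.
def interpolate(points):
    def at(x):
        total = Fraction(0)
        for xi, yi in points:
            term = Fraction(yi)
            for xj, _ in points:
                if xj != xi:
                    term *= Fraction(x - xj, xi - xj)
            total += term
        return total
    return at

# Hypothetical layout: a is the left operand of operations 1 and 2,
# d is the left operand of operation 3.
l_a = interpolate([(1, 1), (2, 1), (3, 0)])
l_d = interpolate([(1, 0), (2, 0), (3, 1)])

# Assign a = 3 and d = 2 by scaling, then add the polynomials together.
a, d = 3, 2
def L(x):
    return a * l_a(x) + d * l_d(x)

assert [L(x) for x in (1, 2, 3)] == [3, 3, 2]
```

At each operation's *x*, exactly one selector polynomial evaluates to 1 and the rest to 0, so the summed operand polynomial picks up exactly one variable's value per operation.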

Let us denote such a composite *operand polynomial* with an upper-case letter from now on, e.g.,

and its evaluation value as *L*, i.e., *L* = *L*(*s*). This construction will only be effective if each *operand variable polynomial* is restricted by the verifier, so the interaction concerning the left operand is altered accordingly:

Note: L(s) and αL(s) represent all variable polynomials at once, and since α is used only in the evaluation of variable polynomials, the prover has no option but to use the provided evaluations and assign the same coefficients to the original and shifted variable polynomials.

As a consequence the prover:

- is not able to modify the provided *variable polynomials* by changing their coefficients (other than “assigning” values), because the prover is presented only with encrypted evaluations of these polynomials, and the necessary encrypted powers of *s* are unavailable separately with their *α*-shifts
- is not able to add another polynomial to the provided ones, because the *α*-ratio would be broken
- is not able to modify operand polynomials through multiplication by some other polynomial *u*(*x*), which could disproportionately modify the values, because encrypted multiplication is not possible in the pre-pairings space

*Note: if we add (or subtract) one polynomial, e.g., lₐ(x), to the other, e.g.,*

*that is not really a modification of the polynomial l_d(x), but rather a change of the resulting coefficient of lₐ(x), because they are summed up in the end:*

While the use of polynomials is restricted, there are still some freedoms which are not necessary to counteract:

- it is acceptable if the prover decides not to add some of the assigned variable polynomials *lᵢ*(*x*) when forming the operand polynomial *L*(*x*), because it is the same as assigning the value 0:

- it is acceptable if the prover adds the same variable polynomial multiple times, because it is the same as assigning a multiple of that value once, e.g.:

This approach is applied similarly to the right operand and output polynomials *R*(*x*), *O*(*x*).

# Construction Properties

There are several additional useful properties acquired as a side effect of this modification.

## Constant Coefficients

In the above construction, we have been using evaluations of the *unassigned variable polynomials* equal to 1 or 0 to signify whether a variable is used in an operation. Naturally, there is nothing that stops us from using other coefficients as well, including negative ones, because we can *interpolate* polynomials through any necessary points (provided that no two operations occupy the same *x*). Examples of such operations are:

Therefore our program can now use constant coefficients, for example:

Algorithm 2: Constant coefficients

    function calc(w, a, b)
        if w then
            return 3a × b
        else
            return 5a × 2b
        end if
    end function

These coefficients will be “hardwired” during the setup stage and, similarly to 1 or 0, will be immutable. We can modify the form of an operation accordingly:

Or more formally, for variables *vᵢ* ∈ {*v*₁*, v*₂*, …, vₙ*}:

where the subscripts *l*, *r* and *o* are the indices of the variables used in the operation's left operand, right operand and output.

Note: the constant coefficient for the same variable can differ between operations and between operands/outputs.
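The hardwired coefficients of Algorithm 2 can be checked with a small interpolation sketch; the variable assignment below is arbitrary:

```python
from fractions import Fraction

# Interpolate the unique lowest-degree polynomial through (x, y) points.
def interpolate(points):
    def at(x):
        total = Fraction(0)
        for xi, yi in points:
            term = Fraction(yi)
            for xj, _ in points:
                if xj != xi:
                    term *= Fraction(x - xj, xi - xj)
            total += term
        return total
    return at

# Hardwired coefficients from Algorithm 2:
# operation 1: 3a × b,  operation 2: 5a × 2b
l_a = interpolate([(1, 3), (2, 5)])   # coefficient of a in the left operand
r_b = interpolate([(1, 1), (2, 2)])   # coefficient of b in the right operand

a, b = 7, 11                          # arbitrary variable assignment
assert l_a(1) * a * (r_b(1) * b) == 3 * a * b        # operation 1
assert l_a(2) * a * (r_b(2) * b) == 5 * a * (2 * b)  # operation 2
```

The coefficients 3, 5, 1 and 2 are baked into the interpolated polynomials at setup, so the prover can only scale whole variables, never the hardwired constants.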

## Addition for Free

Considering the updated construction, it is apparent that, in the polynomial representation, every *operand* expressed by some distinct *x* is a sum of all *operand variable polynomials*, such that only the single *used* variable can have a non-zero value while all others are zero. The graph demonstrates it best:

We can take advantage of this construction and allow any number of necessary *variables* to be added in each operand/output of an operation. For example, in the first operation we can add *a* + *c* first and only then multiply the sum by some other operand, e.g., (*a* + *c*) × *b* = *r*; this can be represented as:

Therefore it is possible to add any number of present variables in a single *operand*, using an arbitrary coefficient for each of them, to produce an operand value that will be used in the corresponding operation, as needed by the respective program. This property effectively allows changing the operation construction to:

Or more formally, for variables *vᵢ* ∈ {*v*₁*, v*₂*, …, vₙ*} and operand variable coefficients

the construction is:

Note: each operation’s operand has its own set of coefficients c.
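As a minimal sketch of this "addition for free" property, consider a hypothetical two-operation program where addition is folded into the left operand of the first operation:

```python
from fractions import Fraction

def interpolate(points):
    def at(x):
        total = Fraction(0)
        for xi, yi in points:
            term = Fraction(yi)
            for xj, _ in points:
                if xj != xi:
                    term *= Fraction(x - xj, xi - xj)
            total += term
        return total
    return at

# Hypothetical two-operation program:
#   operation 1: (a + c) × b = r1
#   operation 2:       a × b = r2
l_a  = interpolate([(1, 1), (2, 1)])   # a appears in both left operands
l_c  = interpolate([(1, 1), (2, 0)])   # c only in the first
r_b  = interpolate([(1, 1), (2, 1)])
o_r1 = interpolate([(1, 1), (2, 0)])
o_r2 = interpolate([(1, 0), (2, 1)])

a, c, b = 2, 5, 4
r1, r2 = (a + c) * b, a * b

for x in (1, 2):
    L = a * l_a(x) + c * l_c(x)   # addition happens inside the operand sum
    R = b * r_b(x)
    O = r1 * o_r1(x) + r2 * o_r2(x)
    assert L * R == O
```

No extra multiplication operation is spent on the addition: summing the scaled variable polynomials performs it automatically at the operation's *x*.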

## Addition, Subtraction and Division

Until now we have been focusing primarily on the multiplication operation. However, in order to execute general computations, a real-life program will also require addition, subtraction, and division.

**Addition** In the previous section we established that we can add variables in the context of a single operand, which is then multiplied by another operand, e.g., (3*a* + *b*) × *d* = *r*. But what if we need just addition without multiplication? For example, if a program needs to compute *a* + *b*, we can express this as:

Note: because our construction requires both a constant coefficient and a variable (c ⋅ v) for every operand, the value of 1 is expressed as c₁ ⋅ v₁, and while c₁ = 1 can be “hardwired” into a corresponding polynomial, v₁ is a variable and can be assigned any value; therefore we must enforce the value of v₁ through constraints, as explained in a further section.

**Subtraction** Subtraction is almost identical to addition; the only difference is a negative coefficient, e.g., for *a* – *b*:

**Division** If we examine the division operation

we will see that the result of the division is the number we need to multiply the divisor by to produce the dividend. Therefore we can express the same meaning through multiplication: `divisor × result = dividend`. Consequently, if we want to prove the division operation *a* / *b* = *r*, it can be expressed as:

Note: the operation's construction is also called a “constraint,” because the polynomial construction representing the operation does not compute results per se, but rather checks that the prover already knows the variables (including the result) and that they are valid for the operation; i.e., the prover is constrained to provide consistent values, no matter what they are.

Note: all these arithmetic operations were already expressible; therefore no modification of the operation's construction is needed.
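The three encodings can be verified numerically. The values below are arbitrary, and `one` stands for the variable v₁ that must itself be constrained to equal 1, as noted above:

```python
# Arbitrary example values; `one` stands for a variable enforced to equal 1.
a, b = 12, 3
one = 1

# addition: a + b = r  encoded as  (1·a + 1·b) × (1·one) = r
r = a + b
assert (1 * a + 1 * b) * (1 * one) == r

# subtraction: a - b = r  encoded with the coefficient -1 on b
r = a - b
assert (1 * a + (-1) * b) * (1 * one) == r

# division: a / b = r  encoded as the multiplicative constraint  b × r = a
r = a // b  # 12 / 3 is exact here
assert b * r == a
```

In every case the check remains a single multiplication of a left operand by a right operand, matching the general operation construction.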

# Example Computation

Having the general operation construction, we can convert our original Algorithm 1 into a set of operations and then into polynomial form. Let us consider the mathematical form of the algorithm (we will use the variable *v* to capture the result of the evaluation):

It has three multiplications, and because each operation construction supports only one, there will be at least 3 operations. However, we can simplify the equation:

Now it requires only two multiplications while maintaining the same relationships. In complete form, the operations are:

We can also add a constraint requiring *w* to be binary; otherwise a prover could use any value for *w*, rendering the computation incorrect:

To see why *w *can only be 0 or 1, we can represent the equation as *w*² – *w *= 0 and further as (*w – *0)(*w – *1) = 0 where 0 and 1 are the only solutions.
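A quick numeric check that the constraint admits only these two solutions:

```python
# w × w = w  is equivalent to  w² - w = (w - 0)(w - 1) = 0,
# so only w = 0 and w = 1 satisfy it.
for w in range(-10, 11):
    assert (w * w == w) == (w in (0, 1))
```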

This totals 5 variables, with 2 used in left operands, 4 in right operands and 5 in outputs. The operand polynomials are:

where each *variable polynomial* must evaluate to the corresponding coefficient for each of the 3 operations, or to 0 if the variable is not present in that operation's operand or output:

Consequently the cofactor polynomial is *t*(*x*) = (*x – *1)(*x –* 2)(*x –* 3), which will ensure that all three operations are computed correctly.

Next we leverage polynomial interpolation to find each *variable polynomial*:

These are plotted as:

We are ready to prove the computation through polynomials. First, let us choose input values for the function, for example *w* = 1, *a* = 3, *b* = 2. Second, calculate the values of the intermediary variables from the operations:

Afterwards, we assign all values involved in the computation to the corresponding *variable polynomials* and sum them up to form the operand and output polynomials:

and in the graph form these are:

These are summed up to represent the operand and output values in the corresponding operations:

We need to prove that *L*(*x*) × *R*(*x*) – *O*(*x*) = *t*(*x*)*h*(*x*), therefore we find *h*(*x*):

In a graph form it is represented as:

where it is visible that the polynomial *L*(*x*) × *R*(*x*) – *O*(*x*) has the solutions *x* = 1, *x* = 2 and *x* = 3, and therefore *t*(*x*) is its cofactor, which would not be the case if inconsistent variable values were used.
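The whole example can be replayed numerically. The three operations below (a × b = m, w × (m − a − b) = v − a − b, w × w = w) are a plausible reading of the simplification described above, consistent with the stated variable counts (2 left, 4 right, 5 output); the exact coefficient layout is otherwise an assumption of this sketch:

```python
from fractions import Fraction

# Polynomials are coefficient lists [c0, c1, ...] meaning c0 + c1*x + c2*x^2 + ...
def p_mul(p, q):
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, ci in enumerate(p):
        for j, cj in enumerate(q):
            out[i + j] += ci * cj
    return out

def p_sum(polys):
    out = [Fraction(0)] * max(len(p) for p in polys)
    for p in polys:
        for i, c in enumerate(p):
            out[i] += c
    return out

def p_scale(p, k):
    return [c * k for c in p]

def p_eval(p, x):
    return sum(c * x**i for i, c in enumerate(p))

def p_divmod(num, den):
    num = list(num)
    quot = [Fraction(0)] * (len(num) - len(den) + 1)
    for i in reversed(range(len(quot))):
        quot[i] = num[i + len(den) - 1] / den[-1]
        for j, d in enumerate(den):
            num[i + j] -= quot[i] * d
    return quot, num  # quotient and remainder

def interpolate(ys, xs=(1, 2, 3)):
    # Lagrange interpolation through (xs[i], ys[i]), one point per operation.
    total = [Fraction(0)]
    for i, yi in enumerate(ys):
        term = [Fraction(yi)]
        for j, xj in enumerate(xs):
            if j != i:
                term = p_scale(p_mul(term, [-xj, Fraction(1)]), Fraction(1, xs[i] - xj))
        total = p_sum([total, term])
    return total

# Assumed operations (x = 1, 2, 3), with m an intermediary variable:
#   1:  a × b = m
#   2:  w × (m - a - b) = v - a - b
#   3:  w × w = w
w, a, b = 1, 3, 2
m = a * b
v = w * (m - a - b) + a + b

# Operand/output polynomials: each variable polynomial scaled by its value.
L = p_sum([p_scale(interpolate([1, 0, 0]), a),    # a: left operand of op 1
           p_scale(interpolate([0, 1, 1]), w)])   # w: left operand of ops 2, 3
R = p_sum([p_scale(interpolate([1, -1, 0]), b),   # b: +1 in op 1, -1 in op 2
           p_scale(interpolate([0, 1, 0]), m),
           p_scale(interpolate([0, -1, 0]), a),
           p_scale(interpolate([0, 0, 1]), w)])
O = p_sum([p_scale(interpolate([1, 0, 0]), m),
           p_scale(interpolate([0, 1, 0]), v),
           p_scale(interpolate([0, -1, 0]), a),
           p_scale(interpolate([0, -1, 0]), b),
           p_scale(interpolate([0, 0, 1]), w)])

# t(x) = (x - 1)(x - 2)(x - 3)
t = p_mul(p_mul([Fraction(-1), Fraction(1)], [Fraction(-2), Fraction(1)]),
          [Fraction(-3), Fraction(1)])

# L(x) × R(x) - O(x) must vanish at x = 1, 2, 3, i.e., be divisible by t(x).
P = p_sum([p_mul(L, R), p_scale(O, -1)])
h, remainder = p_divmod(P, t)
assert all(c == 0 for c in remainder)
assert all(p_eval(P, x) == 0 for x in (1, 2, 3))
```

With consistent assignments the remainder is exactly zero; changing any single value (say, claiming *m* = 7) makes *t*(*x*) no longer divide *L*(*x*) × *R*(*x*) – *O*(*x*).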

That is how knowledge of the variable values for a correct computation execution is proven at the level of polynomials. The prover then proceeds with the cryptographic portion of the protocol.