Outsourced Verifiable Computation

Akshay Pai · Published in Delta Force · Jul 1, 2020

Cloud computing has become an indispensable tool in the IT industry. The ability to offload heavy computation is especially useful as the world transitions from personal computers to smartphones and tablets. To top it off, the cost of cloud computing has dropped dramatically over the years; services such as DigitalOcean now let a user spin up a server for as little as $5/month.

Source of image: https://techpolicyinstitute.org

Today, even the most sensitive computations are being offloaded to remote servers. With such dependence on cloud computing, we need to take a moment to think about what happens if a remote server begins to malfunction and returns incorrect results. This is not as uncommon as one might expect: a remote server might produce incorrect results for several reasons, from corrupted input data to a simple misconfiguration of the server.

So how do we ensure that the results sent back by a server are valid? Well, one simple solution would be to recompute the result locally and check that it matches. But as you've probably guessed, this defeats the purpose of offloading the computation. What we're looking for is a protocol in which verifying a computation is inexpensive compared to actually running it. This is what Verifiable Computation is all about.

Solution?

Well, there's good news and bad news. The bad news is that the solution isn't really straightforward and can get pretty mathematical. The good news is that all of this is extremely cool and definitely worth your time. So I've decided to break this article down into several parts. In this one, we'll go through a rough overview of how the process works and pick up more details in the upcoming parts.

Overview of the solution

A typical protocol in this domain can be split into three steps.

  1. A Client C offloads computation to a Prover P, sending a function F and an input x.
  2. P computes the result y such that y = F(x).
  3. P then returns the result y along with a "witness" w to a Verifier V, who uses w and y to check that the Prover has indeed computed the result correctly (a naive sketch of this flow is shown below).
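To make the three roles concrete, here is a deliberately naive Python sketch (my own illustration, not part of any real protocol): the verifier below simply recomputes F(x), which is exactly the "defeats the purpose" check we want to avoid. The rest of the article is about replacing it with something far cheaper.

```python
# Naive instantiation of the three steps. The "witness" here is an empty
# placeholder, and the verifier just re-runs the computation, which is as
# expensive as doing the work yourself.

def prover(F, x):
    y = F(x)
    w = None            # placeholder witness; real protocols return a meaningful certificate
    return y, w

def naive_verifier(F, x, y, w):
    return F(x) == y    # as costly as the original computation

square = lambda n: n * n
y, w = prover(square, 12)
assert naive_verifier(square, 12, y, w)
```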

Let’s walk through a rather simplified example which we will later extend.

The first task at hand is to rewrite the following program in a way that is easier to represent mathematically. Cryptography mainly deals with transformations on numbers, and an ASCII text file with code doesn't give us much to work with. We therefore attempt to encode the execution of this function in a form that is easier to manipulate. There are several ways to go about this, from Boolean/arithmetic circuits to constraint systems. We'll cover these in future articles, so it's completely alright to treat them as buzzwords for now.
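The original code embed doesn't survive here; as a stand-in, assume the program being offloaded is something along these lines (the function name xor_bits is just my placeholder):

```python
# Hypothetical stand-in for the offloaded program: XOR of two bits.
def xor_bits(a: int, b: int) -> int:
    return a ^ b
```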

Just to recall, the XOR gate outputs 1 when its two inputs differ, that is, when A != B. The following is the truth table of the XOR gate.

XOR Truth Table

A   B   A XOR B
0   0   0
0   1   1
1   0   1
1   1   0

XOR between two bits A and B can also be rewritten as a function over the integers: A XOR B = (A + B) - 2*(A*B). Observe that this function behaves exactly like the truth table above.
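As a quick sanity check (my own snippet, since the original equation image isn't reproduced here), the expression A + B - 2*A*B agrees with the truth table on all four input pairs:

```python
# Verify that A + B - 2*A*B reproduces the XOR truth table.
for a in (0, 1):
    for b in (0, 1):
        assert a + b - 2 * a * b == (a ^ b)
print("A + B - 2*A*B matches XOR on all four inputs")
```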

As discussed, we'll be converting our program into something that is easier to work with. In our case, we're going to convert it into an arithmetic circuit: an addition gate computes Z1 = A + B, a multiplication gate computes Z2 = A*B, and the output wire carries O = Z1 - 2*Z2.

Spoiler: this isn't actually how state-of-the-art techniques convert programs into arithmetic circuits, but it helps with the explanation for now.

If you follow the input wires to each gate, you'll notice that the circuit does the same thing as the equation above. So the Prover converts the program into this circuit, runs it on the input x provided by the Client, and returns the output of the execution along with a "certificate". What does this certificate look like? The certificate for this execution is a valid assignment to all the wires of the circuit. So if the Client provides the input (A = 1, B = 0), the Prover returns the output O = 1 along with a certificate that could look like (A = 1, B = 0, Z1 = 1, Z2 = 0, O = 1). All the Verifier has to do is plug these values into the circuit and check that they form a valid assignment.
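Here is a small sketch of this prover/verifier exchange for the toy circuit, using the wire names from the text (the circuit diagram itself isn't reproduced here, so the gate structure below is my reading of the description):

```python
# Toy prover and verifier for the XOR circuit.
# Wires: inputs A, B; intermediates Z1 = A + B, Z2 = A * B; output O = Z1 - 2*Z2.

def prove(a, b):
    """Run the circuit and return (output, certificate = values of every wire)."""
    z1 = a + b
    z2 = a * b
    o = z1 - 2 * z2
    return o, {"A": a, "B": b, "Z1": z1, "Z2": z2, "O": o}

def verify(a, b, y, cert):
    """Re-check every gate against the claimed wire values."""
    return (cert["A"] == a and cert["B"] == b
            and cert["Z1"] == cert["A"] + cert["B"]
            and cert["Z2"] == cert["A"] * cert["B"]
            and cert["O"] == cert["Z1"] - 2 * cert["Z2"]
            and cert["O"] == y)

y, cert = prove(1, 0)
print(y, verify(1, 0, y, cert))   # 1 True
```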

However, this is rather impractical. For any program with decent complexity, the size of the certificate is going to be extremely large. In addition to this, the Verifier has to basically rerun the program to ensure that every variable in the certificate has the right value.

Therefore, we further encode this circuit into a polynomial.

Now I’m sure you are wondering “Since when is converting a function to a polynomial a good idea?” But bear with me. Polynomials have quite a few useful properties that we’d like to make use of (more on this later).

And how do we encode a function to a polynomial? Consider the following function.

For now, let's not bother ourselves with the construction of this function and focus only on one key attribute: L = 0 if and only if (A, B, Z1, Z2, O) form a valid assignment to the arithmetic circuit we constructed above. How? For L(t) to come out to 0, we require:

  1. Z1 = A+B (so that the first part of the equation turns out to be 0)
  2. Z2 = A*B (so that the second part of the equation turns out to be 0)
  3. O = Z1 - 2*Z2 (so that the third part of the equation turns out to be 0)

Take a few minutes here. Refer to the circuit we described earlier and convince yourself why these three conditions must be satisfied.

Any function L constructed from an invalid assignment will be a non-zero polynomial. So what we've effectively done is convert the problem of "checking whether a program was executed correctly" into "checking whether a polynomial is the zero polynomial".
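Since the construction of L isn't reproduced here, the sketch below uses one simple, purely illustrative way to bundle the three gate checks into a single polynomial in t: each gate's residual becomes a coefficient, so L is the zero polynomial exactly when every gate is satisfied. Real protocols use a more sophisticated construction.

```python
# Illustrative encoding: L(t) = r1*t^2 + r2*t + r3, where each r_i is the
# residual of one gate. L is the zero polynomial <=> all residuals are 0
# <=> the assignment satisfies the circuit.

def coefficients(w):
    return (w["Z1"] - (w["A"] + w["B"]),        # addition gate
            w["Z2"] - w["A"] * w["B"],          # multiplication gate
            w["O"] - (w["Z1"] - 2 * w["Z2"]))   # output wire

def is_zero_polynomial(w):
    return all(c == 0 for c in coefficients(w))

good = {"A": 1, "B": 0, "Z1": 1, "Z2": 0, "O": 1}
bad  = {"A": 1, "B": 0, "Z1": 1, "Z2": 0, "O": 0}   # tampered output
print(is_zero_polynomial(good), is_zero_polynomial(bad))   # True False
```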

So far in this article, we’ve seen a very general outline of the steps involved in any VC (Verifiable Computation) protocol. The last few years have seen a number of different protocols that are both secure and practical (somewhat).

These protocols can be divided into two categories:
1. Proof-based protocols
2. Argument-based protocols

Proof-based protocols are interactive in nature (they require multiple rounds of back-and-forth communication between the prover and verifier before the verifier is convinced of the validity of a proof). A "proof" uses the stringent notion of soundness: it must hold up even against a powerful, super-polynomial prover, which is why these protocols tend to be impractical for real-world use cases.

Argument-based protocols can be either interactive or non-interactive (zero rounds of back-and-forth communication: the prover produces a "certificate" after running the computation, and this certificate can convince any verifier without any further communication with the prover). An "argument" refers to a less stringent notion of a proof, one that only assumes the prover to be bounded by polynomial time.

We'll focus the upcoming parts on a subset of argument-based protocols that can construct SNARK proofs, where SNARK stands for "Succinct Non-interactive ARgument of Knowledge". Let's break this down.

Succinct
Succinct proofs can be verified in a couple of milliseconds, and the proof "certificate" itself is only a few hundred bytes, even when verifying a function with millions of instructions.

Non-Interactive
As discussed earlier, non-interactive proofs do not require back-and-forth communication between the prover and verifier. Once the "certificate" is generated by the prover, any verifier can check that the prover has honestly run the computation.

Argument of Knowledge
SNARKs are argument-based proofs: soundness only needs to hold against polynomial-time provers, which is part of what makes them efficient. The "of knowledge" part means the prover must actually know a valid witness, not merely that one exists (we'll see more of this in the later parts).

There are several protocols that can generate SNARK proofs. Starting from the next article, we'll discuss one of these in detail, along with an example.

Conclusion
That’s all for now! I’ll be uploading the second part of this series soon enough. Stay tuned for the next article if you found this one useful. Thank you for reading!
