Relativistic Zero-knowledge — Part 1

Guillaume Drevon
Sikoba Network
Aug 7, 2019 · 8 min read

This post is about relativistic zero-knowledge, a very recent result from Claude Crépeau, a professor at McGill University in Montreal, and his team. He presented it during the first Luxembourg Zero-Knowledge Days, a conference organized by Sikoba Research at the end of June 2019. The article had been submitted just a couple of weeks previously, and it is probably not yet published. Anyway, the construction is so simple and elegant that I wanted to share it here. It is published with Claude’s kind permission.

Before I try to explain it, we will need some background on Zero-Knowledge proofs—that is the subject of this part, part 1. If you are already familiar with zero-knowledge, you will have to wait for part 2 to learn more about Claude’s new protocol!

Prof. Claude Crépeau @ 1st lux zero-knowledge days

Interactive Zero-Knowledge proofs were invented in the mid-80s by Goldwasser, Micali and Rackoff. They defined what they call an interactive proof system as a pair of computers: a prover and a verifier. These computers participate in a protocol such that the prover can convince the verifier that a statement is true. At the end of the protocol, the verifier accepts or rejects the statement. The protocol must satisfy these two properties:

  • Completeness: the verifier accepts the statement with probability greater than 0.9 if it is true.
  • Soundness: the verifier accepts the statement with probability less than 0.1 if it is false.

The notion of probabilistic proof was groundbreaking at that time. The idea is that, by repeating the protocol, one can get a probability as close to 1 as desired.
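This amplification by repetition is easy to make concrete. A small sketch in Python (the thresholds 0.9 and 0.1 above are just conventional; the function name here is mine):

```python
def rounds_needed(p: float, target: float) -> int:
    """Smallest number of independent rounds k such that a cheating
    prover, who passes one round with probability at most p, passes
    all k rounds with probability p**k <= target."""
    k = 1
    while p ** k > target:
        k += 1
    return k

# A cheater who passes each round with probability 0.9 survives
# 22 independent rounds with probability below 0.1.
rounds_needed(0.9, 0.1)    # 22
rounds_needed(0.5, 1e-6)   # 20
```

The exponential decay is the whole point: even a per-round soundness error as large as 0.9 collapses quickly under repetition.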

In 1986, Goldreich, Micali and Wigderson provided a protocol to prove that a graph is 3-colourable. The picture below shows a coloured graph, that is, a set of nodes, some pairs of which are connected by edges.

A coloured graph

A 3-coloured graph is coloured with three colours in such a way that the two endpoints of every edge have different colours. This is obviously not the case for the graph in the picture. Although it does not seem difficult with a toy example, finding a colouring that makes a graph 3-coloured is not that easy, as you can see with the following graph.

Graph of Vietnam’s provinces, courtesy of Prof. Claude Crépeau

P vs NP

In fact, 3-colouring is an NP-complete problem: by the Cook-Levin theorem (combined with a chain of reductions), any interesting problem can be reformulated as an instance of the 3-colouring problem. Here interesting means NP problems: they are the ones where you are able to check in practice (i.e. in polynomial time) whether a candidate solution is valid. For instance, checking if a coloured graph is 3-coloured is easy: you simply look at all the edges one by one. On the other hand, finding a 3-colouring for an arbitrary graph is much more difficult; it is an interesting problem 😊
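The edge-by-edge check really is that simple. A minimal sketch, with the graph given as an edge list and the colouring as a node-to-colour dictionary (names are mine):

```python
def is_proper_3_colouring(edges, colour):
    """A colouring is proper iff at most 3 colours are used and every
    edge joins two differently coloured nodes -- a single O(|E|) pass."""
    return (len(set(colour.values())) <= 3
            and all(colour[u] != colour[v] for u, v in edges))

triangle = [(0, 1), (1, 2), (2, 0)]
is_proper_3_colouring(triangle, {0: 'red', 1: 'green', 2: 'blue'})  # True
is_proper_3_colouring(triangle, {0: 'red', 1: 'red', 2: 'blue'})    # False
```

Verification is linear in the number of edges, while no known algorithm finds such a colouring in polynomial time.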

The running time of an algorithm refers to the minimum number of elementary steps the algorithm needs to perform in order to solve a problem. We speak of polynomial time if the running time is always less than some polynomial P(n), where n is the problem size. For example, for sorting n integers, the running time is bounded by P(n) = A·n² for some constant A. This definition seems a bit weird at first sight, because we only state that a polynomial exists, and such a polynomial could have an arbitrarily large degree. It turns out that, in practice, any polynomial-time algorithm has low constants and leads to an efficient algorithm. This is because an algorithm requiring huge constants would probably refer to a problem so complex that it has not even been considered yet.

Note that, on the other hand, anything larger than polynomial time quickly becomes intractable. For instance, solving the Traveling Salesman Problem by brute force for only 60 cities means examining more possibilities than the (estimated) number of atoms in the universe. There are ways to reduce the number of possibilities: simply noting that any path can be taken in reverse reduces them by a factor of 2. We do not know, however, whether a polynomial-time algorithm for this problem exists. It is another NP-complete problem.
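The 60-city figure is easy to check (estimates of the number of atoms in the universe vary, roughly 10⁷⁸ to 10⁸²; the comparison is only meant to illustrate the scale):

```python
import math

def tour_count(n: int) -> int:
    """Distinct tours through n cities: fix a starting city, order the
    remaining n-1 in any way, and halve because every tour has a
    reverse twin that visits the same cities in the opposite order."""
    return math.factorial(n - 1) // 2

# For 60 cities this is about 6.9e79 -- on the order of the estimated
# number of atoms in the observable universe.
tour_count(60) > 10 ** 78  # True
```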

In the following, when we speak about a computationally bounded party, we mean somebody who can run polynomial-time algorithms, which is the case of anybody with access to computers. An unbounded party, on the other hand, would be able to compute any algorithm. Much of cryptography is based on computationally hard problems that cannot be broken by computationally bounded parties but can, in many cases, be broken by unbounded parties. Problems that cannot be broken even by an unbounded party are much more secure, and this is exactly what this whole post is about.

Commitments

The protocol for the G-M-W 3-colouring proof is the following:

  • The verifier asks for a random edge, and the prover reveals the colours of its two endpoints.
  • The verifier rejects if the two colours are the same; otherwise he requests another random edge, until he reaches the desired probability.

However, this protocol is a bit too simple: the prover can always answer blue and red, whatever the real colouring is. In order to make the protocol sound, we need to use commitments.

A commitment is a very useful cryptographic building block whose roots lie in public-key cryptography. It is a way to seal a concealed value and reveal it later, like putting a message in a safe: you can reveal the message later by handing over the key to the safe. For instance, a signature over a hashed message is a form of commitment: the hash conceals the message, while the signature is bound to it. Binding and hiding are the two formal properties of a commitment, and there are several commitment schemes with various advantages. In particular, some are perfectly hiding, so it is impossible to recover the message even with infinite computing power, and others are perfectly binding, so it is impossible to reveal a message other than the one committed to, even with infinite computing power. It is not possible to be perfectly binding and perfectly hiding at the same time, as these two properties are contradictory.

A physical implementation of a perfectly binding and computationally hiding commitment (one can try all combinations to open the safe)

Going back to the 3-colouring protocol, we ask the prover to commit to all the colours at the beginning. When the verifier requests an edge, the prover reveals the two commitments of that edge. If the prover does not have a 3-coloured graph, there is a small probability that the revealed commitments will be the same colour.
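One round of the committed protocol can be sketched as follows. A salted-hash commitment stands in here purely for illustration (the helper names are mine; the paper's own number-theoretic commitment is described next):

```python
import hashlib
import os
import random

def hash_commit(value: str):
    """Toy commitment: a salted SHA-256 hash hides the value;
    revealing (value, salt) later opens it."""
    salt = os.urandom(16)
    return hashlib.sha256(salt + value.encode()).hexdigest(), salt

def hash_open(commitment: str, value: str, salt: bytes) -> bool:
    return commitment == hashlib.sha256(salt + value.encode()).hexdigest()

def protocol_round(edges, colour) -> bool:
    """Prover commits to every node's colour; verifier picks one edge;
    prover opens the two endpoint commitments; verifier accepts iff the
    openings verify and the two colours differ."""
    commitments = {node: hash_commit(c) for node, c in colour.items()}  # prover
    u, v = random.choice(edges)                         # verifier's challenge
    (cu, su), (cv, sv) = commitments[u], commitments[v]  # prover reveals
    if not (hash_open(cu, colour[u], su) and hash_open(cv, colour[v], sv)):
        return False
    return colour[u] != colour[v]
```

Because all commitments are fixed before the verifier chooses an edge, the prover can no longer tailor his answer to the question.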

The commitment used by Goldreich, Micali and Wigderson is based on quadratic residues modulo n. They use the fact that it is difficult to tell whether a number is a quadratic residue modulo n (i.e. whether it has a square root mod n) when n is the product of two large prime numbers whose factorization is unknown (to the verifier). Given n, the product of two secret prime numbers, and y, a quadratic non-residue, the commitment c of a bit b (0 or 1) is:

c = r²·yᵇ mod n, where r is a random number used to hide the bit (so c = r² for b = 0, and c = r²·y for b = 1).

A commitment to bit 0 will be a quadratic residue, and a commitment to bit 1 will be a non-residue; however, the verifier cannot tell them apart because he is computationally bounded and does not know the factorization of n. On the other hand, this commitment is perfectly binding, because the commitment is either a quadratic residue or not, and no amount of computing power can change which.
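With toy parameters, n = 7·11 = 77 and the non-residue y = 6 (far too small to hide anything in practice, and the function names are mine), the scheme looks like this:

```python
import math
import random

def qr_commit(b: int, n: int, y: int):
    """Commit to bit b as c = r^2 * y^b mod n.  For b = 0, c is a
    quadratic residue; for b = 1 it is not (since y is a non-residue)."""
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:   # r must be invertible mod n
        r = random.randrange(2, n)
    return (pow(r, 2, n) * pow(y, b, n)) % n, r

def qr_open(c: int, b: int, r: int, n: int, y: int) -> bool:
    """Opening reveals (b, r); anyone can recompute the commitment."""
    return c == (pow(r, 2, n) * pow(y, b, n)) % n

n, y = 7 * 11, 6          # toy modulus and non-residue
c, r = qr_commit(1, n, y)
qr_open(c, 1, r, n, y)    # True: the honest opening verifies
qr_open(c, 0, r, n, y)    # False: the same r cannot open the other bit
```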

Zero-Knowledge

We have built a protocol that can provide a proof of 3-colouring, but it seems a lot of effort for nothing. If the verifier asks for all of the edges, he will be completely convinced, and, moreover, he will have the full 3-colouring of the graph. So why doesn’t the prover simply provide the full 3-colouring?

The interesting application of these interactive proof protocols is when they have another property: zero-knowledge.

  • Zero-Knowledge: The verifier does not learn anything except that the statement is true

Formally, it is defined by the existence of a simulator that can generate protocol transcripts indistinguishable from real ones. To make our protocol zero-knowledge, the prover simply applies a random permutation to the three colours at each step before committing to them. Because the permutation is random, the verifier learns nothing by looking at the colours of the chosen edge except that they are different. But still, since the prover commits to all the colours at each step, there is always a (small) probability of revealing a bad edge if the prover does not have a 3-colouring.
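The per-round colour permutation is only a few lines (names are mine):

```python
import random

def permute_colouring(colour):
    """Relabel the colours with a fresh random permutation.  Which
    colour appears on an endpoint becomes meaningless to the verifier;
    only the fact that the two endpoints differ is preserved."""
    palette = sorted(set(colour.values()))
    mapping = dict(zip(palette, random.sample(palette, len(palette))))
    return {node: mapping[c] for node, c in colour.items()}
```

Since equal colours stay equal and different colours stay different, the permuted colouring is proper exactly when the original one is.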

At the end, the verifier is convinced by the protocol, but he has no idea what the 3-colouring is. In particular, he is not able to transfer the proof: he cannot convince somebody else that the prover knows a 3-colouring, because the transcript of his interaction with the prover is indistinguishable from a simulated one. Indeed, simulating a transcript is very easy, since you get to simulate the questions and the answers at the same time.

Nobody’s perfect

As we have seen with the commitments, not all properties can hold at the same time. We have given the ‘perfect’ definitions, but there are other flavours. For instance, computational soundness means that the protocol is sound against computationally bounded provers, and computational zero-knowledge means that a computationally bounded verifier cannot distinguish a real transcript from a simulated one. Perfect zero-knowledge and perfect soundness are incompatible (except for trivial problems). The protocol we have seen is only computational zero-knowledge, and this is because of the commitment: an unbounded verifier could break it (for instance by factoring n) and learn the solution.

In 1986, Gilles Brassard, David Chaum and Claude Crépeau changed the way the commitment is done in the protocol. They reversed the commitment and obtained a perfectly hiding, computationally binding scheme, which gives a perfect zero-knowledge protocol that is computationally sound, i.e. sound provided that the prover is computationally bounded. Such a protocol is called an ‘argument of knowledge’.

The commitment is similar to the one given before, but reversed. Here, the verifier provides a public n (the product of two secret prime numbers) and z, a quadratic residue modulo n. The prover commits to bit b by providing:

c = r²·zᵇ mod n, where r is a random number.

Now the commitment is always a quadratic residue, so it is perfectly hiding. To open, the prover reveals r: he knows a square root of c when b = 0, and a square root of c/z when b = 1. Switching from one opening to the other would require computing a square root of z modulo n, which is computationally hard without the factorization, and this is what makes it hard for the prover to change his mind.
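The reversed commitment mirrors the earlier one. A sketch with the same toy modulus, where z = 4² mod 77 is a residue the verifier constructed from a known square root (names are mine):

```python
import random

def bcc_commit(b: int, n: int, z: int):
    """Commit to bit b as c = r^2 * z^b mod n.  Since z is itself a
    quadratic residue, c is a residue for BOTH bits: even an unbounded
    verifier learns nothing about b (perfectly hiding)."""
    r = random.randrange(2, n)
    return (pow(r, 2, n) * pow(z, b, n)) % n, r

def bcc_open(c: int, b: int, r: int, n: int, z: int) -> bool:
    return c == (pow(r, 2, n) * pow(z, b, n)) % n

n, z = 7 * 11, pow(4, 2, 77)   # toy modulus; z = 16 is a residue
c, r = bcc_commit(0, n, z)
bcc_open(c, 0, r, n, z)        # True: the honest opening verifies
```

The hiding/binding trade-off is visible in the code: only the exponent base changed (a residue z instead of a non-residue y), yet it flips which property is perfect and which is merely computational.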

Conclusion

This first post has given you an introduction to different flavours of zero-knowledge proofs and their properties. Part 2 will be about the protocol proposed by Prof. Claude Crépeau and his team last month, which is a protocol with perfect zero-knowledge and soundness. If you have read this post carefully, you should know that this is supposed to be impossible!
