Pseudorandom Correlated Generator (PCG)

luke · Published in zkPass · Mar 29, 2023

This is the fourth post in our series of blogs on interactive zero-knowledge (IZK) proofs. Take a look at the previous posts in the series:

The primary objective of this blog is to construct a pseudorandom correlated generator (PCG) using the GGM tree construction.

A pseudorandom generator lets us generate a long pseudorandom sequence from a short seed. In this article, we will discuss how to construct a PCG and use it to build a secret-shared sparse vector e. Why is it necessary to build this sparse vector? Please wait for my upcoming blog to find out.

GGM Construction

The GGM construction [GGM86], named after its inventors Oded Goldreich, Shafi Goldwasser, and Silvio Micali, is a well-known and extensively used method for constructing pseudorandom functions (PRFs) from any length-doubling pseudorandom generator (PRG), and hence from any one-way function.

First introduced in the seminal 1986 paper by Goldreich, Goldwasser, and Micali titled “How to Construct Random Functions,” the GGM construction is based on the idea of repeatedly applying a length-doubling PRG to a seed value, forming a binary tree whose leaves make up a long stream of pseudorandom bits.

One of the significant advantages of the GGM construction is that it allows for the generation of pseudorandom bits of arbitrary length from a relatively small seed value. This is particularly useful in scenarios where a large amount of random-looking data is required, but storage or transmission resources are limited.

In summary, the GGM construction is a powerful and widely used technique for building PRFs from PRGs (and thus from one-way functions). It has numerous important applications in cryptography and computer security and serves as a fundamental building block for many cryptographic primitives, including symmetric encryption algorithms and digital signature schemes.
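To make this concrete, here is a minimal Python sketch of the GGM construction. It is only an illustration: SHA-256 on 128-bit seeds stands in for the length-doubling PRG G (the same analogy the later sections use), and the seed value and tree depth are arbitrary choices. The PRF value at an input x is obtained by walking down a binary tree, keeping the left or right half of G's output according to each bit of x.

```python
import hashlib

KAPPA = 16  # seed length in bytes (128 bits); an illustrative choice

def G(seed: bytes) -> bytes:
    """Length-doubling PRG: 16 bytes in, 32 bytes out (SHA-256 as a stand-in)."""
    return hashlib.sha256(seed).digest()

def G0(seed: bytes) -> bytes:
    return G(seed)[:KAPPA]   # first half of the PRG output

def G1(seed: bytes) -> bytes:
    return G(seed)[KAPPA:]   # second half of the PRG output

def ggm_prf(seed: bytes, x: int, n_bits: int) -> bytes:
    """GGM PRF: walk the tree from the root seed, taking the left (G0) or
    right (G1) half of the PRG output according to each bit of x."""
    node = seed
    for i in reversed(range(n_bits)):        # most significant bit first
        bit = (x >> i) & 1
        node = G1(node) if bit else G0(node)
    return node

# Evaluate the PRF on a few points of a depth-4 tree.
root = b"\x01" * KAPPA
for x in range(3):
    print(x, ggm_prf(root, x, n_bits=4).hex())
```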

Puncturable Pseudorandom Function

The puncture function, also known as the point function or single-point function, is defined such that its output y is zero almost everywhere, except at the point α. Please refer to Figure 0 for the definition.

Fig 0. Puncture function.
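As a tiny sketch of this definition (the names below are just for illustration), the point function can be written as a lookup table that is zero everywhere except at α:

```python
def point_function(alpha: int, n: int, beta: int = 1) -> list[int]:
    """f_{alpha,beta}: zero at every position except alpha, where it equals beta."""
    return [beta if x == alpha else 0 for x in range(n)]

print(point_function(alpha=3, n=8))  # [0, 0, 0, 1, 0, 0, 0, 0]
```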

Why did we introduce this function all of a sudden? Do you remember our objective for today? We aim to construct a secret-shared sparse vector e. In the rest of this article, I will demonstrate how we can use the GGM tree and the point function to build vector e.

Sharing a secret s between two parties can be achieved by giving one party s0 and the other s1, such that s = s0 ⊕ s1.

Fig 1. Secret sharing.
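For instance, here is a minimal sketch of XOR secret sharing, assuming shares are combined with ⊕ (the same convention used for f = f0 ⊕ f1 below):

```python
import secrets

def share(s: bytes) -> tuple[bytes, bytes]:
    """Split s into two XOR shares with s = s0 XOR s1."""
    s0 = secrets.token_bytes(len(s))              # uniformly random share
    s1 = bytes(a ^ b for a, b in zip(s, s0))      # s1 = s XOR s0
    return s0, s1

def reconstruct(s0: bytes, s1: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(s0, s1))

s0, s1 = share(b"secret")
assert reconstruct(s0, s1) == b"secret"           # neither share alone reveals s
```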

And we can also share the point function.

Fig 2. Function Secret Sharing.

To provide a more intuitive understanding of point function secret sharing, we can think of the black box as representing 0 and the white box as representing 1. The two parties agree on the same value almost everywhere, except for position α where they disagree.

Fig 3. Function secret sharing as a truth table.

If we set f = f0 ⊕ f1, we know that f = [0,0,0,…,0,1,0,…,0], which is a unit vector with only position α holding a value of 1. By creating t instances of f with different positions for α and adding them together, we can obtain a sparse vector e!
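The sketch below illustrates this truth-table view: f0 is a random bit vector and f1 agrees with it everywhere except at α, so f0 ⊕ f1 is a unit vector, and XOR-ing t such pairs of shares (with distinct α's) gives shares of a sparse vector e of Hamming weight t. Note that this naive sharing makes each share as long as the vector itself; producing short shares is exactly what the GGM-tree construction in the next section achieves.

```python
import random

def share_point_function(alpha: int, n: int) -> tuple[list[int], list[int]]:
    """Naive truth-table sharing: the two shares agree everywhere except at alpha."""
    f0 = [random.randint(0, 1) for _ in range(n)]
    f1 = [f0[x] ^ (1 if x == alpha else 0) for x in range(n)]
    return f0, f1

n, t = 16, 3
positions = random.sample(range(n), t)        # t distinct choices of alpha
e0, e1 = [0] * n, [0] * n
for alpha in positions:
    f0, f1 = share_point_function(alpha, n)
    e0 = [a ^ b for a, b in zip(e0, f0)]      # each party XORs its shares locally
    e1 = [a ^ b for a, b in zip(e1, f1)]

e = [a ^ b for a, b in zip(e0, e1)]           # reconstruct: sparse vector of weight t
assert sum(e) == t and all(e[p] == 1 for p in positions)
```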

PCG for Correlated OT

Now it’s time to show you how to construct point function secret sharing using a GGM tree.

Fig 4. Generate the tree from seed s.

As shown in Figure 4, G is a length-doubling pseudorandom generator (PRG). To be more specific, the PRG G: {0,1}^κ → {0,1}^{2κ} takes a random seed s of κ bits as input and generates 2κ pseudorandom bits as output.

We denote the first κ bits of the output as G0(s) and the second κ bits as G1(s), so that G(s) = G0(s) || G1(s). As a simple analogy, you can think of G as SHA-256 restricted to 128-bit inputs: it takes 128 bits of input and outputs 256 bits.
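As a minimal sketch (again with SHA-256 on 128-bit seeds standing in for G, per the analogy above), this is how the tree in Figure 4 is expanded from the root seed s down to its 2^depth leaves:

```python
import hashlib

KAPPA = 16  # 128-bit seeds, matching the SHA-256 analogy above

def G(seed: bytes) -> bytes:
    return hashlib.sha256(seed).digest()         # 16 bytes in -> 32 bytes out

def expand_tree(seed: bytes, depth: int) -> list[bytes]:
    """Expand the GGM tree level by level and return its 2**depth leaf seeds."""
    level = [seed]
    for _ in range(depth):
        nxt = []
        for node in level:
            out = G(node)
            nxt.append(out[:KAPPA])              # left child  = G0(node)
            nxt.append(out[KAPPA:])              # right child = G1(node)
        level = nxt
    return level

leaves = expand_tree(b"\x02" * KAPPA, depth=4)   # 2**4 = 16 leaves
print(len(leaves), leaves[0].hex())
```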

Fig 5. Point function secret sharing.

To share the point function, we follow four steps:

  1. Bob selects a seed s and uses G to construct the GGM tree, as shown in Figure 5.
  2. Alice uses base OTs, one per level of the tree, to learn all the leaves except the one at position α, as depicted in Figure 5.
  3. At the end of steps 1 and 2, Alice and Bob hold shares of a unit vector, as shown in Figure 6.
Fig 6. Unit vector shared by two parties.

  4. Repeat steps 1–3 t times (so that HW(e) = t) and add the resulting unit vectors together to obtain the sparse vector e, as shown in Fig 7; a code sketch of the whole procedure follows below.

Fig 7. Add all shared unit vectors to get the sparse vector e.
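To tie the steps together, here is a minimal Python sketch of the whole procedure, under the same SHA-256-as-G assumption as before. Bob expands the tree and prepares, for each level, the XOR of all left children and the XOR of all right children; in the real protocol Alice obtains exactly one of these two values per level via a base OT (log N base OTs per point function), which the sketch simulates by indexing directly. Alice then recovers every leaf except leaf α, so the two leaf vectors agree everywhere except at position α; the small correction step that turns these leaves into the shared unit vector of Figure 6 is omitted here.

```python
import hashlib, secrets

KAPPA = 16  # 128-bit seeds; SHA-256 plays the role of the length-doubling PRG G

def G(seed: bytes) -> tuple[bytes, bytes]:
    out = hashlib.sha256(seed).digest()
    return out[:KAPPA], out[KAPPA:]               # (G0(seed), G1(seed))

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def bob_expand(seed: bytes, depth: int):
    """Bob expands the full GGM tree.  For each level he also prepares the XOR
    of all left children (k0) and of all right children (k1): these pairs are
    the sender messages of the per-level base OTs."""
    level, keys = [seed], []
    for _ in range(depth):
        children = [G(node) for node in level]
        k0 = k1 = bytes(KAPPA)
        for left, right in children:
            k0, k1 = xor(k0, left), xor(k1, right)
        keys.append((k0, k1))
        level = [child for pair in children for child in pair]
    return level, keys                            # all leaves, per-level OT messages

def alice_recover(alpha: int, depth: int, keys):
    """Alice recovers every leaf except leaf alpha.  At level i she needs
    keys[i][1 - alpha_i]; the base OT that hides her choice from Bob (and the
    other key from her) is simulated here by indexing keys[i] directly."""
    known, path = {}, 0                           # known nodes at the current level
    for i in range(depth):
        bit = (alpha >> (depth - 1 - i)) & 1
        next_known = {}
        for idx, s in known.items():              # expand every node Alice knows
            left, right = G(s)
            next_known[2 * idx], next_known[2 * idx + 1] = left, right
        acc = keys[i][1 - bit]                    # XOR of all (1-bit)-children
        for idx, s in next_known.items():
            if idx % 2 == 1 - bit:
                acc = xor(acc, s)                 # strip off the children she knows
        next_known[2 * path + (1 - bit)] = acc    # off-path sibling on the alpha path
        path = 2 * path + bit                     # descend along alpha
        known = next_known
    return known                                  # every leaf index except alpha

depth, alpha = 4, 9
leaves, keys = bob_expand(secrets.token_bytes(KAPPA), depth)
alice_leaves = alice_recover(alpha, depth, keys)
assert alpha not in alice_leaves
assert all(alice_leaves[i] == leaves[i] for i in alice_leaves)  # differ only at alpha
```

Repeating this t times with different α positions and XOR-ing the results locally gives the shared sparse vector e of Figure 7.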

Conclusion

Since the GGM tree is a binary tree, we can generate an N-length shared vector e using only t * log₂(N) base OT operations, where t = HW(e).

For instance, if we consider e to have one million (10⁶) elements and a Hamming weight (HW) of 10, sharing e naively would require one million OTs. However, using GGM, we only need HW * log₂(10⁶) ≈ 10 * 20 = 200 OTs, reducing communication costs by a factor of roughly 5,000. This result is truly remarkable.
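As a quick back-of-the-envelope check of these numbers:

```python
import math

N, t = 10**6, 10                                  # vector length and Hamming weight
naive_ots = N                                     # one OT per entry
ggm_ots = t * math.ceil(math.log2(N))             # t * log2(N) base OTs
print(naive_ots, ggm_ots, naive_ots // ggm_ots)   # 1000000 200 5000
```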

zkPass Official Links:

Website | Twitter | Discord | Medium | Github
