Black Hole Sum: A Study in Blind Computation

Building a Peer-to-Peer Privacy Preserving Rating System on Blockchain

Hadas Zeilberger
ConsenSys Web3Studio
18 min read · Jan 14, 2019


This article describes an approach to a hard problem in Web3 application design. We have opened up a Github repo to collaborate on the implementation, which we describe in some detail below. Please come over and help out, if you find this pattern intriguing.

Image credit: https://pixabay.com/en/users/insspirito-1851261/

Building on the decentralized web is hard. At ConsenSys Web3Studio, our job is to chart the new territory of Web3 and feel out the boundary between what is currently possible and what is not. Often, when trying to construct a new architecture for dApps, we hit a wall. “Wait”, someone will say, “if we did that, we would need a trusted third party, so it may as well be centralized”, or “hold up — if we did that, then everybody’s information is leaked — that could never work in production”.

Blockchain’s transparency is great for preventing double spending and facilitating a cryptocurrency-driven economy. But when trying to build the security architecture for a dApp, this same transparency can seem like a barrier.

For cases like these, cryptography comes to the rescue with schemes that allow us to selectively show some information while hiding other information (such as the use of public key cryptography to sign transactions on Ethereum, or the use of zero knowledge proofs to facilitate honest trades).

But some of the cryptographic findings needed to give Web3 that extra push are still in progress. One such topic is blind computation:

Blind computation is the ability to use completely hidden values to obtain a result that isn’t hidden, and from which we can extract information without the use of a trusted third party. This could open up a lot of opportunities for practical applications of Web3.

Imagine the possibilities! We could fuel AI with completely hidden values, so that we can still learn about society and train bots without sacrificing individual privacy. We could construct verifiable and privacy-preserving e-voting systems and ensure that nobody ever interferes in our elections again. We could even just conduct surveys over platforms such as Facebook, without any of our individual information being leaked.

This last case, the case of private surveys, is the focus of this article.

We became interested in it after a brainstorming session focused on effective self-organization. ConsenSys has a flat hierarchy, and successful cooperation is hard, so we are constantly racking our brains for ways to streamline team processes.

One process is getting feedback. Getting feedback is terrifying. Giving feedback is more terrifying. So we had an idea to construct a peer-to-peer rating system for employees in a company. Rating systems are great because they allow people to get a sense of how they are performing (great for self-awareness, great for productivity). But rating systems are also awful, because they lure our minds toward obsessive, self-deprecating and perfectionist tendencies (horrible for self-awareness, horrible for productivity). We wanted to keep the good parts of the rating system while throwing away the bad parts.

We focused on figuring out how someone can get a general sense of what their coworkers think of them, without knowing the specifics of their rating.

For example, if I see I have three low scores and two high scores, I may try to guess who slighted me and who likes me. I might compare this score to scores of previous weeks and painstakingly scan my memory for the things I did wrong. On the other hand, the good parts — being reflective and positively self-observant — come from having a sense of how I’m doing compared to previous weeks. If all I know is that I’m doing worse than last week, with no details, I’ll probably take a minute to think over the past week, realize I didn’t sleep as much as usual, and resolve to sleep more in the future. I don’t have enough information to obsess over my performance, but I do have enough information to know where I stand.

In the architecture we were hoping to construct, the person being rated cannot see individual scores. She/he cannot see which of their coworkers actually participated. She/he will not even see the final total sum of the scores. The only thing visible to the person being rated is whether or not their score has increased or decreased from last time. Additionally, none of the other participants can see anything, except for the score that they, themselves, submitted.

At this point, our team had one of those familiar moments. “Wait”, someone said, “if the scores are encrypted, we need a trusted third party to decrypt the result — might as well be centralized!” “True”, someone else responded, “and if they aren’t encrypted, everyone will see them, because blockchain is transparent!” And so started the conversation about how cryptography could save us.

Cryptography wasn’t able to completely save us, but we were able to keep everything except for the bit where only the direction of the change is visible.

To reveal only the direction of a change (whether it has increased or decreased) requires a public comparison between encrypted sums. The result of this comparison can be shown only to the ratee, and the sums themselves have to be hidden from everyone, even the ratee. Therefore, we would need an encryption scheme that reveals order (but only to one person) while keeping all information about its underlying plaintext hidden from absolutely everybody. In addition to all this, the encrypted sum would need to be constructed by multiple, decentralized parties. We don’t currently know of any such protocol.

Nonetheless, with a mix of cryptography and some added human rules, we were able to construct a system that meets most of our goals for a useful peer-to-peer rating system.

Here’s How it Works

Consider the following scenario:

  1. A group of untrusted people each submit an encrypted input. This input is not an absolute value reflecting my opinion of you. Rather, it is a relative value, either positive or negative, reflecting a change in my opinion since the last time I participated. For example, instead of rating me a 2/10 on this article, you wait until I publish my second article, and then give me a “+3” because you think it’s 3 points better than this one (and you only gave me 2 points for this one). Then, you read my third article, and you hate it, so you send me a “-4”, to show that I’ve gone down 4 points in your eyes.
  2. Each person submits a proof that their input was in the correct range. This ensures that everyone is playing by the rules without revealing any individual submissions.
  3. The person being rated submits some input that randomizes the final output, so that even though the final sum can be seen by everyone, it is nonsensical to all but the ratee. The range proofs mentioned in step 2 ensure that the ratee is the only one who can submit such a value.
  4. The ratee calculates the decryption key and decrypts the final sum. She/he displays the final sum to everyone, so that they can confirm it was calculated correctly, and that everybody submitted legal inputs.
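
To make the flow concrete, here is a toy, unencrypted simulation of those four steps. Everything here is an illustrative assumption: the rating bounds, the blinding range, and the plain-integer arithmetic all stand in for the encrypted machinery described below.

```python
import random

RATING_RANGE = range(-5, 6)   # legal relative ratings (assumed bounds)

def submit(delta):
    # stand-in for step 2: a real zero-knowledge range proof enforces this bound
    assert delta in RATING_RANGE
    return delta

# step 1: each rater submits a relative rating (encrypted in the real protocol)
ratings = [submit(d) for d in (3, -4, 1)]

# step 3: the ratee adds a large random blind, far outside the rating range
blind = random.randrange(10_000, 20_000)
public_sum = sum(ratings) + blind   # visible to all, meaningless without the blind

# step 4: only the ratee knows the blind and can recover the true total
true_total = public_sum - blind
print(true_total)  # 0  (= 3 - 4 + 1)
```

The real protocol replaces the plain integers with ciphertexts, and the honor-system `assert` with verifiable range proofs.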

Getting to the Solution: Introduction to Functional Encryption

Functional Encryption is the idea that we have a function f, and some inputs x,y,z that we do not know. Even though we do not know the input, we are still able to compute f(x,y,z). This solves the problem of blind computation exactly. The only problem with Functional Encryption is that, so far, it has only been proven to work with inner products. As a refresher, if x and y are two vectors, where x=(x₁,x₂,x₃,x₄,x₅) and y=(y₁,y₂,y₃,y₄,y₅) then the inner product, <x,y>, is equal to x₁y₁ + x₂y₂ + x₃y₃ + x₄y₄ + x₅y₅. As you might be able to guess, this protocol is actually perfect for our use case, as we do want a sum of our ratings. x will be the list of ratings (xᵢ is the rating of person i). y defines the function (f(x) = <x,y>). If y = (1,1,1,1,1), then this would give the sum of all the components of x. It is up to the implementers of the protocol to choose values for y that make sense for their rating system (more on that later).

A common example used to demonstrate the usefulness of functional encryption on inner products is as follows: a school principal wants to average a student’s scores while the grades remain private. In this case, x is a list of the student’s grades from different assignments, and y is a list of the weights associated with each assignment. The goal is to find some algorithm that can take an encrypted x and a publicly known y and output <x,y> = x₁y₁ + x₂y₂ + x₃y₃ + x₄y₄ + x₅y₅ without revealing x. This can then be used to find the student’s average without revealing any of the student’s individual scores.
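
In plain, unencrypted Python, the computation the principal wants looks like this (the grades and weights below are made-up numbers):

```python
def inner_product(x, y):
    # <x, y> = x1*y1 + x2*y2 + ... + xn*yn
    return sum(xi * yi for xi, yi in zip(x, y))

grades  = [80, 90, 70, 100, 85]        # x: private assignment scores
weights = [0.1, 0.2, 0.2, 0.3, 0.2]    # y: public weights, summing to 1

# functional encryption would reveal only this weighted average, never x
print(inner_product(grades, weights))  # 87.0
```

With y = (1,1,1,1,1) the same function returns the plain sum, which is exactly what our rating system needs.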

Decentralized Multi Client Functional Encryption

The protocol we decided to implement is called Decentralized Multi-Client Functional Encryption. It is a functional encryption protocol that is both multi-client and decentralized. The multi-client descriptor refers to the fact that the different components of x all come from different sources.

This is significant because it means that each of the components is encrypted with its own encryption key. Despite this, we are still able to bring all the cipher texts together and compute an answer for f(x)!

Multi-client functional encryption, however, requires a trusted third party to know the encryption keys of the participants so that it can generate a decryption key that can decrypt the final sum.

Decentralized multi-client functional encryption manages to also get rid of the trusted third party. In decentralized multi-client functional encryption, each person in the system generates their own partial decryption key. The final decryption key, used to compute f(x), is the sum of all these partial keys. Even though the decryption key is an aggregation from multiple sources that don’t trust each other, it still works to compute f(x). The only caveat is a one-time interactive setup phase, which we implement using additive secret sharing so that no trusted third party is needed.

Below is a summary of the protocol as written in the paper. We skim over many details, as they can just as easily be viewed directly in the source. Our main goal in outlining the protocol is to give some context for the sections that follow.

  1. Protocol Setup: This protocol uses an asymmetric bilinear map between groups. The security of the protocol is proven under the SXDH assumption, which requires that, given a bilinear map G₁ × G₂ → G₃, the DDH assumption holds in both G₁ and G₂. Therefore we need two such groups and a bilinear map between them. We also need two hash functions, one for each group, and we decide on a vector y. All of this information is made public.
  2. Decryption Key Setup: Before each person can generate a decryption key, they must first generate a 2×2 matrix, such that everybody’s matrices add up to 0.
  3. Generate partial decryption key: Each person generates their own partial decryption key, using their 2×2 matrix from step 2. They send the decryption key to the decryptor.
  4. Cipher Text Setup: Before encrypting their rating, each person must first generate a random integer secret key, sᵢ.
  5. Encrypt your input: Each party encrypts their input with their secret key. They send their cipher text to the decryptor.
  6. Compute final decryption key: The decryptor adds the partial decryption keys together to get the final decryption key.
  7. Compute Inner Product: The decryptor uses the decryption key, along with the cipher texts to finally output <x,y>.
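
The steps above can be sketched with a toy modular-arithmetic analogue. This is an assumption-laden stand-in, not the actual construction: real DMCFE works in pairing groups with 2×2 matrices and hash functions, while here plain integers mod a prime play every role. What the sketch does show is the key structural trick: partial keys blinded by zero-sum shares still combine into a key that strips out every secret.

```python
import random

q = 2**61 - 1  # toy prime modulus; the real scheme works in pairing groups

n = 4
x = [63, 57, 61, 60]   # offset ratings (secret, one per rater)
y = [1, 1, 1, 1]       # public weight vector; all ones = plain sum

# step 4: each party picks a random secret key
s = [random.randrange(q) for _ in range(n)]

# step 2: zero-sum blinding shares (stand-in for the 2x2 matrices)
t = [random.randrange(q) for _ in range(n - 1)]
t.append(-sum(t) % q)

# step 5: each party encrypts its rating with its own secret key
c = [(xi + si) % q for xi, si in zip(x, s)]

# step 3: partial decryption keys, blinded so no s_i is revealed alone
dk = [(yi * si + ti) % q for yi, si, ti in zip(y, s, t)]

# steps 6-7: the decryptor sums the partial keys; the t_i cancel, and
# subtracting the key strips every s_i from the weighted ciphertext sum
final_key = sum(dk) % q
inner = (sum(yi * ci for yi, ci in zip(y, c)) - final_key) % q
print(inner)  # 241 = 63 + 57 + 61 + 60
```

No single dk entry reveals its s value (the t share masks it), yet their sum is exactly what the decryptor needs.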

While nothing in the paper mentions anything about blockchain, its decentralized nature makes it seem like a perfect fit, at least at first glance. That being said, there are a few things that need to be addressed.

Adjusting the protocol for our purposes

Below are some human adjustments made to the protocol. We call them human because they do not directly relate to cryptography. These details are more about how we can view results differently so that this protocol can be used as a rating system.

Who is the decryptor?

The first claim made about decentralized multi-client functional encryption is that it can be done without a trusted third party. So who is the decryptor mentioned in the protocol? There does need to be some mechanism that actually performs the computation itself. But this protocol is designed so that the entity doing the decryption cannot see what the individual values are. Additionally, it’s okay for the cipher texts and the decryption keys to be totally out in the open, as they completely mask the values they are hiding. Furthermore, the final sum can be computed from only the cipher texts and decryption keys; there is no extra information (such as a secret key) needed to decrypt it. Therefore, decryption is something a smart contract could do. This is something that will be explored in the implementation.

A way to avoid negative numbers in a finite cyclic group

In our scenario, it’s possible to input a negative score (e.g., -2 means you went down by 2). But, since we’re working in a finite group, the values cannot be negative. To rectify this, we make some arbitrary number public in the setup phase of the protocol. Let’s arbitrarily choose 60. Then, each person adds their rating to 60. So, if I want to send a rating of 3, I actually send the value 60 + 3 = 63. Alternatively, if I want to send a rating of -3, I actually send the value 60 − 3 = 57. Say there are 10 participants. Then, at the end, we know we have to take the final sum of the scores and subtract 600 from it. This gives the desired sum.
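
The offset trick in a few lines of Python (60 is the arbitrary public choice from above; the example ratings are made up):

```python
OFFSET = 60                      # public offset chosen during setup
ratings = [3, -3, 1, -2, 0]      # raw relative ratings, possibly negative

encoded = [r + OFFSET for r in ratings]      # all values now non-negative
assert all(e >= 0 for e in encoded)

# everyone knows to subtract OFFSET once per participant at the end
total = sum(encoded) - OFFSET * len(ratings)
print(total)  # -1  (= 3 - 3 + 1 - 2 + 0)
```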

Choosing our values for y

It is up to the implementor of the protocol to choose the values for the components of y. If y = (1,1,1,1,…,1) then the final answer will be the sum of the components of x (which is the sum of all the ratings). We can also get creative with values of y. Maybe some employees should have a more weighted rating because they work more closely with the person being rated.

The only problem with this is that if the numbers are not chosen carefully, this could potentially leak information about individual scores. To fix this, we can give each person a custom number to add their score to, instead of having everybody add their score to 60. We just need to do it so that xᵢyᵢ has the same possible set of values for each person i. This would also require customizing the set of ratings that each person is allowed to choose from. As it turns out, this could actually be a good thing.

Making Collusions Confusing: Different ranges for different raters

There is always the problem that people may collude and try to sabotage someone by collectively giving them very low scores. There is no way to cryptographically prevent people from talking to each other off chain. We can confuse them, though. One potential way is to have each rating be the accumulation of several rounds of rating. In each round, a range and a weight are chosen for each person right before the protocol begins. A person will not have prior knowledge of what their weights and ranges will be. The meanings of the ratings could also change each round. Instead of rating someone on the difference in how they performed (from low to high), we could judge something more qualitative. For example, we could “rate” someone on how quiet or talkative they are. A lower score could be linked to talkativeness while a higher score is linked to quietness. Since it is not clear which of these qualities is better, the low numbers would lose their meaning of badness, and the high numbers would lose their meaning of goodness. This makes it harder for people to collude.

Adding Entropy

The final sum will be made public to everybody. To ensure that only the ratee learns the real sum, the ratee submits some random number well outside the range of the other ratings. The only requirement is that this random number is small enough that the discrete log can still be computed (the last step requires finding a discrete log, as the ratings and weights are assumed to be small). Once the final sum is computed, the ratee can privately subtract the value she/he submitted to find out what her/his rating is.

This ends the overview of the architecture. The rest of the article focuses on ensuring it actually does what we claim it does (i.e., that it is both secure against dishonest parties and decentralized). Feel free to stop here if these details aren’t of interest to you, but make sure to watch the repository for upcoming progress.

Cryptographic Considerations: Ensuring Complete Decentralization and Verifiability

There are a couple of steps that need to be added to the protocol to ensure that it is both verifiable and completely decentralized. First of all, we need to determine how to implement the interactive setup phase for generating partial decryption keys. To do this, we will use additive secret sharing. Secondly, we need a way to verify that each person’s input is completely legal. This will be done using bulletproofs, along with a technique stolen from zkLedger.

The Decryption Key Setup: An Interactive Zero-Sum Game that is Safe Enough and Quick Enough

Our goal for the decryption key setup is for each participant to generate a random 2×2 matrix, with the requirement that all the matrices add up to 0. We need to find a way to do this without leaking information about any one person’s 2×2 matrix.

We do this using additive secret sharing. The protocol, described below, allows each person to generate their 2×2 matrix in such a way that n corrupt players would have to collude with each other in order to learn anyone’s secret matrix. This protocol is run for each of the four indices in the 2×2 matrix. For simplicity’s sake, I will describe the protocol as if each person only needs to generate one integer. Then, I will remind the reader that it must be done for all four indices of the 2×2 matrix.

First we decide how paranoid we want to be. By this, I mean that we get to choose how many corrupt players will be tolerated in the system. We will call this number n.

Suppose Bob is a member of the group of raters. Bob wants to ensure that each index of his 2×2 matrix is kept secret. For each index, Bob generates n integers. He encrypts and sends each of these n integers to one other person in the group. Everybody else in the group does this as well. At the end of this stage, Bob has sent out n integers. He has also received n−1 integers, one from each of n−1 other people. Bob takes all the integers he sent to others and adds them together. Then, he takes all the integers he received and subtracts each of them from this sum. Say he sent out three integers, a, b, and c, and received two integers, x and y. Then his final value would be a + b + c − x − y.

Everybody else does this as well. Eventually, each person’s value is put together into one final sum (in the computation of the final decryption key). When this happens, each integer that was thrown into the mix is both added to and subtracted from the final sum: when Bob sends an integer to Alice, he adds it to his final value because he sent it, while Alice subtracts it from her final value because she received it.

Obviously, every integer that’s sent out must be received by somebody. Therefore, each integer that’s added to the final sum will also be subtracted from it, ensuring that the final sum equals 0. Moreover, each person only knows one summand of everybody else’s final value. Since we are working in a finite cyclic group, that is not enough to leak any information about their final secret value. Therefore, a final value will only be learned if n people collude.
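
Here is the zero-sum property in a toy Python simulation (three parties, one integer each instead of a full 2×2 matrix, and a modulus that is an arbitrary stand-in for the group order):

```python
import random

q = 2**31 - 1  # toy modulus standing in for the group order

parties = ["alice", "bob", "carol"]

# each party generates one random integer per other party and "sends" it
# (in the real protocol these would be ECIES-encrypted in transit)
sent = {p: {other: random.randrange(q) for other in parties if other != p}
        for p in parties}

def secret_value(p):
    # what p sent, minus what p received
    out = sum(sent[p].values())
    received = sum(sent[other][p] for other in parties if other != p)
    return (out - received) % q

# every integer is added once (by its sender) and subtracted once (by its
# receiver), so the secret values sum to zero while each remains hidden
total = sum(secret_value(p) for p in parties) % q
print(total)  # 0
```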

This protocol is run for each of the 4 indices in the 2×2 matrix. Bob would actually generate 4n different integers and send each person 4 integers, one for each index of the 2×2 matrix.

Multi-party computation can have high computation and communication costs. We need to take into consideration the time to encrypt each integer with ECIES, and the communication costs of each participant sending 4 integers to n different people. One thing I still need to verify is whether this can be implemented as a one-time setup without compromising security.

Verifying Input: Zero Knowledge Range Proofs

In order to make sure that everybody is playing by the rules, we need a way to verify that each person’s input is in the correct range. Because it is not possible to verify the range of an input directly from the decentralized multi-client functional encryption (DMCFE) protocol, range proofs must be submitted separately. Of course, we need a way to prove that each person committed the same value for the range proof as they did for the DMCFE protocol. To accomplish this, we use bulletproofs to verify that each input is in the correct range, and then we use an auditing technique from zkLedger.

Both bulletproofs and zkLedger are based on Pedersen commitments. Essentially, each person would submit a Pedersen commitment to their value. They would also submit a bulletproof that their value is in the correct range. Thirdly, they submit a token. The product of these tokens allows an auditor to determine whether the sum of the values submitted via Pedersen commitments is the same as the sum of the values submitted via DMCFE (this technique is described in section 4.2 of the zkLedger paper).
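
The property that makes this auditing work is that Pedersen commitments are additively homomorphic: multiplying commitments gives a commitment to the sum. A toy version over integers mod a small prime (real systems use elliptic-curve groups, and the generators below are arbitrary picks, not a secure choice):

```python
p = 1_000_003        # toy prime; real schemes use elliptic-curve groups
g, h = 2, 3          # stand-in generators (not a secure choice)

def commit(value, blinding):
    # Pedersen commitment: g^value * h^blinding (mod p)
    return (pow(g, value, p) * pow(h, blinding, p)) % p

values    = [63, 57, 61]     # the committed ratings
blindings = [17, 29, 41]     # random blinding factors

commitments = [commit(v, r) for v, r in zip(values, blindings)]

# multiplying the commitments yields a commitment to the two sums
product = 1
for c in commitments:
    product = (product * c) % p

print(product == commit(sum(values), sum(blindings)))  # True
```

An auditor who learns only the sums can check them against the product of the public commitments without ever seeing an individual value.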

Security Considerations: Is this protocol still secure in the real world?

This next section doesn’t deal with how to implement the protocol, but rather with the question of whether we should. In other words, does the security analysis presented in the paper still hold up in the real world? We argue that it does.

This paper’s security analysis uses something called the random oracle model (ROM) to prove that the protocol is secure against adversaries. In this model, the hash functions used in the protocol are replaced by a random oracle: an idealized black-box function that spits out completely random values, with the constraint that any given input always maps to the same output.

The problem with the random oracle model is that it is impossible to actually build a random oracle in real life. While it is a very useful tool for formally proving that a protocol is secure against an adversary, it is hard to prove that the protocol will still be secure in practice, when the random oracle necessarily has to be swapped back for a hash function. In the best case, a random oracle is a way to formalize the security of an already secure protocol; when abused, it can be used to formally prove the security of a protocol that has no secure implementation.

There do exist widely used schemes that have only been proven secure in the ROM (i.e., hope is not lost). That being said, it isn’t easy to ascertain whether or not a protocol still has legs when you swap out the random oracle for a hash function. To quote the 1993 paper that introduced the ROM: “instantiating the oracle with h is only a heuristic whose success we trust from experience.” Well, I don’t have experience, but I do have determination, so I combed through the security analysis hoping to gain some sense of whether or not we should move forward with the implementation.

Security Considered: Giving Ourselves the Green Light

To give some context around why we gave ourselves the go-ahead with this protocol, here’s a brief explanation of the proof technique used in the security analysis. The basic idea is to create an adversary, A, who sends two values to a challenger to be encrypted, and then has to determine which of the two values was actually encrypted. If the probability of A figuring this out is not much bigger than 1/2 (which is what it would be if A knew nothing), then we can feel good about the scheme being secure.
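
A toy version of that game in Python, with a one-time pad standing in for the actual scheme (this illustrates the game itself, not this protocol):

```python
import random

def game(adversary, trials=5000):
    wins = 0
    for _ in range(trials):
        m0, m1 = 7, 42                    # the adversary's two chosen messages
        b = random.randrange(2)           # challenger's secret coin
        key = random.randrange(256)       # fresh one-time key
        c = ((m0, m1)[b] + key) % 256     # ciphertext handed to the adversary
        wins += adversary(c) == b         # did the adversary guess the coin?
    return wins / trials

# against a one-time pad the ciphertext is uniform whatever b is, so blind
# guessing is the best strategy: the win rate hovers around 1/2
rate = game(lambda c: random.randrange(2))
print(abs(rate - 0.5) < 0.05)  # True (with overwhelming probability)
```

A scheme is considered broken if some adversary can win this game noticeably more than half the time.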

The proof assures us that, with a random oracle, the adversary A will not be able to tell encrypted values apart from each other. Therefore, we need to consider which parts of the proof depend on the behavior of the random oracle, and then figure out what bad things would happen if that random oracle were switched out for a hash function.

The proof is done by creating a series of games (as described in this extremely helpful resource), and then showing that the adversary’s advantage is basically the same for each game. The first game is the one described above. The final game is essentially a coin flip, because the challenger encrypts the first of the two messages no matter what, and then independently flips a coin whose value A has to guess. The goal is to show that A has just as much advantage in the first game as she/he does in the final game.

The transitions between the games rely mainly on the randomness of the user-generated secret key. To transition between games, they essentially replace the secret key in the cipher text with seemingly arbitrary random numbers that end up cancelling out either additively (becoming zero terms) or multiplicatively (becoming a multiplier of 1). By doing this, they are able to reduce a cipher text encrypting some message (whose identity is variable — i.e., it could be the first message or the second message) to a cipher text that is definitely encrypting the first message (the coin-flip scenario).

The output of the random oracle is used in these equations, but not because it is random. It is used because we know that its output has certain properties that allow some terms to cancel additively and some to cancel multiplicatively. The only time we care about the behavior of the random oracle is when constructing this output. In this case, they use the Decisional Diffie-Hellman assumption to show that if the random oracle were to output something of the form abP (where P is the generator of the group), it would look the same as if the random oracle put out a totally random value (even if aP and bP are known in advance). The flexibility in switching between totally random values and values of the form abP lets us know certain information about the output, which can then be used in the formal proof.

To summarize, the behavior of the random oracle is only ever used so we can know for sure that its output has certain properties. Moreover, these properties are unrelated to randomness.

Onward to Implementation!

We are working on implementing this for Ethereum. The repository is currently pretty empty, but it describes an architecture and lists some resources we may potentially use. Pull requests and suggestions are welcome.

Work in Progress

This article will undergo ongoing upgrades. Help make it better. Thoughtful comments, suggestions and corrections are not only welcome — they will be gratefully read, considered, and followed up on.
