MPC Techniques Series, Part 4: Beaver’s Trick

By Jesper Buus Nielsen, Professor and Chief Cryptographic System Designer, Partisia

This blog post introduces a cool MPC trick known as Beaver Triples. Invented in 1991 by Don Beaver [1], it has multiple uses in MPC. In this blog I will explain how to use it for preprocessing and how to get an “almost asynchronous” MPC with the same security enjoyed by synchronous MPC. I will also explain the difference between synchronous and asynchronous MPC.

The Trick

Consider doing MPC among n servers using secret sharing. Assume the secret sharing can tolerate that up to t servers collude. A secret s is represented on the network by each server holding a share of s. We denote this by [s]. Think of s as sitting in a box which no t colluding servers can look inside or manipulate, but which enough servers can collectively open. Beaver’s trick works for any such secret network representation as long as it has a few extra properties, listed below. One example of such a representation is Shamir secret sharing, which was explained in the previous blog.

  • Additive homomorphic: From secret representations [a] and [b] the servers can compute a secret representation [a+b] without communicating (and therefore also without leaking a or b). We write [a+b] = [a]+[b]. For Shamir secret sharing this is done by just adding shares locally. The same holds for adding a public constant to [a] and for multiplying [a] by a public constant.

  • Non-interactive opening: Each server can send a single partial opening of [s], and from enough correct partial openings anyone can compute s without further interaction. For Shamir secret sharing a partial opening is simply a server’s share.

The non-interactive opening property importantly allows an asynchronous opening protocol. Assume that “enough” means t+1 correct partial openings. Then we can run an MPC with a total of n = 2t+1 servers: if all servers agree to open [a], then at least n-t = t+1 honest servers will send correct partial openings, so all honest servers will eventually receive t+1 correct partial openings and can then learn a.

It could even be enough to run with n = t+1 servers. If all servers are honest they send correct partial openings and every server can open the box. If a server is corrupt and sends an incorrect partial opening, or sends nothing at all, then the servers cannot open the box, but they will never open it to an incorrect value a’ ≠ a. In this case the opening protocol is correct but does not guarantee output delivery.
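To make this concrete, here is a minimal Python sketch of Shamir sharing over a prime field, with local addition of shares and opening from t+1 partial openings. The field size, number of servers and threshold are illustrative choices only.

```python
import random

P = 2**61 - 1   # a prime modulus; the finite field we compute in (illustrative choice)
N, T = 5, 2     # n = 2t + 1 servers, tolerating t colluding servers

def share(secret, n=N, t=T):
    """Shamir-share `secret`: pick a random degree-t polynomial f with f(0) = secret
    and give server i the point (i, f(i))."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def add_shares(shares_a, shares_b):
    """[a] + [b]: every server adds its two shares locally -- no communication."""
    return [(i, (sa + sb) % P) for (i, sa), (_, sb) in zip(shares_a, shares_b)]

def open_shares(partial_openings):
    """Reconstruct the secret from any t+1 correct partial openings
    (Lagrange interpolation at x = 0)."""
    secret = 0
    for xi, yi in partial_openings:
        num = den = 1
        for xj, _ in partial_openings:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

a, b = 17, 25
sum_shares = add_shares(share(a), share(b))
assert open_shares(sum_shares[:T + 1]) == (a + b) % P   # any t+1 correct shares suffice
```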

Beaver’s trick allows one to “preprocess” a multiplication. By multiplying two random values with the (inefficient) multiplication protocol ahead of time, you put yourself in a position where you can later multiply two given values without running the (inefficient) multiplication protocol again.

Assume that in a preprocessing phase, run prior to the MPC, the servers have computed three secret representations [a], [b], [c], where a and b are random and unknown to all parties and c = ab. This is what is called a Beaver Triple. A random value like [a] can be computed by each server creating its own random [ai] and then summing t+1 of these from different servers; this guarantees that at least one honest server contributed a summand, hence the sum is random and unknown to everyone. The value [c] can be computed as [ab] = [a]×[b] using the (inefficient) multiplication protocol.
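Continuing the Shamir sketch above, the generation of such a random shared value can be pictured as follows; the helper names are the ones introduced in the earlier sketch.

```python
def random_shared_value(n=N, t=T):
    """Each of t+1 servers shares its own random contribution a_i; the servers
    then add the contributions locally.  As long as at least one contributor is
    honest and keeps its summand secret, the sum a = a_1 + ... + a_{t+1} is
    uniformly random and unknown to every party."""
    shares = share(random.randrange(P))
    for _ in range(t):
        shares = add_shares(shares, share(random.randrange(P)))
    return shares

# [a] and [b] can be preprocessed like this; [c] = [ab] would then be produced
# by the (inefficient) multiplication protocol, which is not shown here.
shares_a = random_shared_value()
shares_b = random_shared_value()
```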

Assume that at some point during the MPC the servers have computed two secret representations [x] and [y] and want to securely compute [z] where z = xy. They can do this without using the multiplication protocol as follows:

  • Open [a+x] to let all servers learn A = x+a.

  • Open [b+y] to let all servers learn B = y+b.

  • Locally compute [z] = AB - A[b] - B[a] + [c] using only the homomorphic properties, so this step needs no communication.

Notice that z = AB - Ab - Ba + c = (x+a)(y+b) - (x+a)b - (y+b)a + ab = xy, as we wanted. So the protocol is correct. The protocol is also privacy-preserving: since a and b are random values, the opened values A = x+a and B = y+b do not leak information on x and y. For this to be true the computation needs to be done in a so-called finite field, for instance by computing modulo a prime, but this is not important for the story here.
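Below is a sketch of one Beaver multiplication, reusing the Shamir helpers from the sketches above. The triple is handed out by a stand-in dealer, since the preprocessing is only simulated here; the helper names are illustrative, not part of any standard API.

```python
def scale_shares(shares, k):
    """k * [s] for a public constant k: a purely local operation."""
    return [(i, (k * s) % P) for i, s in shares]

def add_constant(shares, k):
    """[s] + k for a public constant k: every server adds k to its own share."""
    return [(i, (s + k) % P) for i, s in shares]

def beaver_multiply(shares_x, shares_y, triple):
    shares_a, shares_b, shares_c = triple
    # Steps 1 and 2: open [a+x] and [b+y] (opening simulated by reconstructing
    # directly from t+1 shares).
    A = open_shares(add_shares(shares_a, shares_x)[:T + 1])
    B = open_shares(add_shares(shares_b, shares_y)[:T + 1])
    # Step 3: local computation of [z] = AB - A[b] - B[a] + [c].
    shares_z = add_shares(scale_shares(shares_b, -A % P),
                          scale_shares(shares_a, -B % P))
    shares_z = add_shares(shares_z, shares_c)
    return add_constant(shares_z, A * B % P)

# Stand-in for the preprocessing phase: a dealer samples the Beaver Triple.
a, b = random.randrange(P), random.randrange(P)
triple = (share(a), share(b), share(a * b % P))

x, y = 6, 7
shares_z = beaver_multiply(share(x), share(y), triple)
assert open_shares(shares_z[:T + 1]) == (x * y) % P
```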

It turns out that each Beaver Triple should be used at most once. If the same triple is used for two different multiplications, then security is lost. So each subsequent multiplication using Beaver’s trick “consumes” one triple.
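A simple way to enforce this in an implementation is to keep the preprocessed triples in a pool that every multiplication consumes from; a minimal sketch (the class and method names are illustrative):

```python
class TriplePool:
    """Holds preprocessed Beaver Triples; every multiplication pops exactly one."""
    def __init__(self, triples):
        self._triples = list(triples)

    def next_triple(self):
        if not self._triples:
            raise RuntimeError("preprocessing exhausted: generate more Beaver Triples")
        return self._triples.pop()   # a triple is never handed out twice
```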

Advantages of Input Independent Preprocessing

A main usage of Beaver’s trick is input independent preprocessing. MPC is communication intensive and can therefore be slow, and it turns out that a lot of the communication goes into multiplying secret representations.

Beaver’s trick can be used to amortize the communication complexity. Before the MPC starts, the servers compute a large number of Beaver Triples [a], [b], [c] in parallel. Because they are computing a large batch of independent and identically distributed random objects, a number of optimisations can be applied to save on communication. We will not go into details about the optimisations here; the interested reader can find a description of them in Chapter 8 of Cramer et al. 2015 [4]. Later these Beaver Triples can be used for multiplying other values, but now with lower communication complexity, as Beaver’s trick only uses cheap openings of secret representations and the homomorphic properties, which cause no communication at all.

Input independent preprocessing can also make it easier to handle corrupted servers. One trick is known as Player Elimination (see Hirt, Maurer and Przydatek 2000 [2]) and goes as follows. Before the preprocessing is run, each party commits to the randomness it is going to use; you can imagine putting a hash of it on a blockchain. If cheating is detected, we ask all parties to broadcast (on a blockchain, say) their randomness and the messages they claim to have received from other servers. This is secure because the inputs of the MPC have not been used yet in the preprocessing phase. We can then use these views to compute what every server should have sent given its randomness and incoming messages, find two servers which disagree on what was sent between them, and kick them both out. This way we get rid of at least one corrupted server and at most one honest server, so we maintain an honest majority: if we have n servers and at most t < n/2 corrupt servers, then after removing two servers it still holds that t-1 < (n-2)/2. One can then retry the preprocessing with the remaining servers. Eventually we get rid of all corrupt servers and the preprocessing terminates.
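As a rough illustration of the dispute-finding step only: suppose that after a failed preprocessing every server has published what it claims to have sent and received. Recomputing what an honest server should have sent is protocol specific, but locating a pair of servers that contradict each other is simple. The function and data layout below are hypothetical, not part of the protocol in [2].

```python
def find_disputed_pair(claimed_sent, claimed_received):
    """claimed_sent[i][j]     : what server i claims it sent to server j.
    claimed_received[j][i]    : what server j claims it received from server i.
    Returns a pair (i, j) that disagree.  Two honest servers never disagree,
    so at least one of the two is corrupt, and eliminating both preserves the
    honest majority among the remaining servers."""
    n = len(claimed_sent)
    for i in range(n):
        for j in range(n):
            if i != j and claimed_sent[i][j] != claimed_received[j][i]:
                return (i, j)
    return None   # no contradiction found between any two servers
```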

Synchronous versus Asynchronous MPC

The player elimination trick above used a blockchain to synchronize on what happened when a fault occurred. We ask all parties to reveal their view of the protocol, and if a party does not do so we simply assume that it is the corrupted server. This of course requires us to use time: we need to give the servers enough time to post their views of the protocol before we call them corrupted. In MPC, a protocol where we use timeouts and are willing to call servers corrupted if they miss a timeout is called synchronous. Synchronous protocols are dangerous in the sense that if an honest server accidentally misses a timeout, then we deem it “corrupted”, and MPC protocols typically do not protect the privacy of “corrupted” parties. Such a party might have its input leaked, or at least not have its input contributed to the computation. Therefore the timeouts in synchronous MPC protocols have to be very long (minutes), so that honest servers will never miss a timeout and accidentally become “corrupted”. This should hold even under worst-case network conditions, which means that such protocols are always as slow as the worst case, even when the network is fast. This is bad for a protocol with many rounds, like going layer by layer through an arithmetic circuit as MPC protocols do: you end up with a long timeout per round.

Consider then the asynchronous opening of our secret representations. Each server simply sends a partial opening, and once enough correct ones have arrived the box can be opened. No timeouts are needed; we call this an asynchronous protocol. As soon as a server receives enough partial openings it can open the box and proceed. Hence, when the network is fast, so is the protocol.
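In code, the asynchronous opening is essentially “collect partial openings until you have t+1, with no timeout anywhere”. The sketch below reuses the Shamir helpers from above and, for simplicity, assumes all received partial openings are correct.

```python
import asyncio

async def asynchronous_open(incoming, t=T):
    """Collect partial openings from a queue until t+1 have arrived, then
    reconstruct.  There is no timeout: the actual speed of the network alone
    determines how quickly the box opens."""
    partial_openings = []
    while len(partial_openings) < t + 1:
        partial_openings.append(await incoming.get())
    return open_shares(partial_openings)

async def demo():
    incoming = asyncio.Queue()
    for partial in share(42)[:T + 1]:          # only t+1 servers need to respond
        await incoming.put(partial)
    assert await asynchronous_open(incoming) == 42

asyncio.run(demo())
```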

Asynchronous MPC is much faster than synchronous MPC in particular when the protocol has many rounds of communication.

So it would be natural to always use asynchronous protocols. Alas, this is not possible without a loss of security and efficiency.

In fact we know that a truly asynchronous protocol never making use of timeouts can tolerate at most t < n/3 corrupted servers if it is to guarantee output. For synchronous protocols the bound is t < n/2. So synchronous protocols can tolerate more corruptions.

Another advantage of synchronous MPC is that we know how to construct such protocols with much less communication, in terms of bits sent, than asynchronous ones. Using timeouts simply gives us more design possibilities for optimising the protocols.

Synchronous MPC has higher security and lower communication than asynchronous MPC.

There is no clear choice of what type of protocol to use. Luckily it turns out we can to some extent get the best of both worlds, as described below.

When it comes to the number of corruptions tolerated, it turns out that the barrier is multiplication. We can make secret representations [s] with n = 2t+1 where we can add and open asynchronously even if t servers are corrupted. However, if we want a secret representation [s] where we can compute a multiplication asynchronously, then we can tolerate at most t < n/3 corruptions. If we use a synchronous protocol we can do multiplication and tolerate t < n/2 corruptions.

This means that if we want to tolerate t < n/2 corruptions we need to use a synchronous protocol. However, it turns out we do not need to be synchronous all the time. Only when we multiply. And we can use Beaver’s trick to push multiplication to the preprocessing phase!

Consider the following “almost asynchronous” protocol from Damgård et al 2009 [3].

  1. Use a synchronous protocol for input independent preprocessing, which tolerates t < n/2 corruptions, to generate a lot of Beaver Triples (and random secret representations [r] for the input phase).

  2. Run an almost asynchronous input phase in which each party uses one of the preprocessed random values to secretly input its value, as described below.

  3. Run the rest of the MPC asynchronously, using the homomorphic properties, asynchronous openings and Beaver’s trick for the multiplications, and finally open the outputs.

The synchronous Step 1 can be done efficiently using a number of techniques, like somewhat homomorphic encryption, oblivious transfer, or secret sharing based protocols. Since all the Beaver Triples are computed in parallel, the round complexity of generating all of them is the same as that of generating a single one, so we do not get hit as hard by the synchrony in the preprocessing. Also, it does not matter as much if a server becomes “corrupted” by accidentally missing a timeout, as no secret inputs have been used yet in this phase.

For Step 2, consider a party P which is to input x secretly to the MPC. This can be done asynchronously as follows. Take a random secret representation [r] from the preprocessing and open it towards P only. Then have P broadcast its adjustment value x-r. From [r] and x-r all servers compute [x] = (x-r)+[r].
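A sketch of this input step, reusing share, open_shares, add_constant and random_shared_value from the earlier sketches:

```python
def input_value(x, shares_r):
    """Party P inputs x: [r] is opened towards P only, P broadcasts the
    adjustment x - r, and every server locally computes [x] = (x - r) + [r]."""
    r = open_shares(shares_r[:T + 1])   # opened towards P only
    adjustment = (x - r) % P            # broadcast by P to all servers
    return add_constant(shares_r, adjustment)

shares_r = random_shared_value()        # taken from the preprocessing
shares_x = input_value(123, shares_r)
assert open_shares(shares_x[:T + 1]) == 123
```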

The above input phase is almost completely asynchronous, but it turns out that a single synchronization point is needed, to agree on which parties broadcast their adjustment values before the timeout. After this, no timeouts are ever used again. For Step 3, note that Beaver’s trick only uses opening and addition of secret representations, so Step 3 can also be run asynchronously for t < n/2. The same holds for opening the outputs.

Conclusion

We presented how Beaver’s trick allows one to push all multiplications of an MPC to an input independent preprocessing phase. This can be used to great advantage for both efficiency and security.

  • We can generate a lot of multiplications in parallel, with less communication (in bits) than when doing them one by one as we need them during the MPC.

  • We can handle corruptions in the preprocessing more easily, for instance via player elimination, and run the online phase of the MPC almost asynchronously while still tolerating t < n/2 corruptions.

References and further reading

[1] Donald Beaver. Efficient Multiparty Protocols Using Circuit Randomization. CRYPTO 1991.

[2] Martin Hirt, Ueli Maurer, Bartosz Przydatek. Efficient Secure Multi-Party Computation. ASIACRYPT 2000.

[3] Ivan Damgård, Martin Geisler, Mikkel Krøigaard, Jesper Buus Nielsen. Asynchronous Multiparty Computation: Theory and Implementation. Public Key Cryptography 2009.

[4] Ronald Cramer, Ivan Damgård, Jesper Buus Nielsen: Secure Multiparty Computation and Secret Sharing. Cambridge University Press 2015.

About the author
Jesper is a Professor of Computer Science at Aarhus University and one of the co-founders of Partisia and Sepior. He is among the most cited and most prolific researchers in secure multiparty computation. Jesper’s primary research areas are secure multiparty computation, distributed consensus, and universal composability.
