Byzantine Generals Turn to Bayes

The agreement is inferred from their degree of mutual trust.

Pierre Decoodt
Geek Culture
9 min read · Jul 16, 2022


Image by author.

In a previous article, I described a quantum protocol solving the Byzantine Generals’ problem for four parties, with the ability to unmask up to two traitors.

One fact not taken into account there was that the parties could hold different opinions on each other’s loyalty. In a coordination problem in a distributed computing system, a faraway server may fail more often than the receiving components; or, on the contrary, the server may fail very rarely while the other components are error-prone. In another area, blockchain validation, each party can choose to base prior beliefs on proof-of-work or proof-of-stake.

In the quantum solutions to the Byzantine agreement, the commanding general provides, along with the message, a list of indexes for the lieutenants to use in a traitor hunt. In ideal noiseless quantum systems, these indexes correspond to perfectly correlated values in possession of the parties.

However, noise is unavoidable in quantum systems (see this recent review, part 1 and part 2, by Zlatko Minev). When a highly entangled state is measured repeatedly, the distribution of the observed classical results is governed by the quantum state vector but is perturbed by noise.

For a hypothetical noiseless quantum system, things are simple. During the traitor hunt, no error is made by an honest lieutenant who received the incorrect order from the commanding general. A traitorous lieutenant repeatedly makes errors. With noise, a loyal lieutenant begins to make errors and a traitor makes even more. As the noise increases, the distinction between loyal and traitor becomes increasingly difficult. One way to solve this problem is to use Bayesian inference, which will be demonstrated below. It will also be shown that the extended form of Bayes’ theorem makes it possible to take into account the prior beliefs of the lieutenants.

The quantum solution to the three-general Byzantine agreement, also dubbed by Adán Cabello the “liar detection problem”, has been demonstrated on a real photonic quantum system. Recently, Zoltán Guba et al. demonstrated its feasibility on noisy intermediate-scale quantum (NISQ) devices.

Here is a demonstration of a Bayesian quantum four-general solution running on the ibm_oslo system that was recently added to the IBM Quantum compute resources. The number of shots (copies) was 8192. Unlike the 5-qubit systems that I tried previously, the 7-qubit ibm_oslo allowed traitor detection in all possible scenarios. The main software was the open-source Qiskit package for programming in Python. Let’s quickly review the phases described in my previous article, emphasizing the points that matter for the final Bayesian Byzantine agreement.

Distribution of entangled qubits

Image by author.

Qubit quintuplets are entangled, sent through quantum channels, and measured locally. The commanding general receives two qubits, and each lieutenant receives one qubit. After measurement, each party has an ordered list of bitstrings. A “Byzantine mitigation” is then applied: each bitstring with an even index in the lists is flipped. This reduces the difference in noise level between Alice’s two possible orders.
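The mitigation step can be sketched in a few lines of Python; the sample list below is illustrative, not real device output:

```python
def byzantine_mitigation(bitstrings):
    """Flip every bit of the bitstrings sitting at even indexes,
    so that noise affects Alice's two possible orders symmetrically."""
    flipped = []
    for i, b in enumerate(bitstrings):
        if i % 2 == 0:
            b = "".join("1" if c == "0" else "0" for c in b)
        flipped.append(b)
    return flipped

# Toy example: Alice's two-bit results before and after mitigation.
print(byzantine_mitigation(["00", "11", "01", "10"]))  # ['11', '11', '10', '10']
```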

Sure, we can get rid of all this quantum stuff. The distribution of the results could be entrusted to an independent agency, which would send equivalent simulated perfect lists via secure classical channels. However, an eavesdropper can intercept and tamper with the data. Or a mole within the agency could copy Alice’s list and pass it on to traitorous lieutenants.

The quantum part of the protocol avoids this danger. The price to pay is noise: for some of the quintuplets, the expected correlation is not reached. This is where the Bayesian approach will show its effectiveness.

Check entanglement

Image by author.

Remember that at this stage, each list is only known to its owner. Alice’s list is approximately ⅓ of ‘00’ (retreat), ⅓ of ‘11’ (attack), ⅓ of ‘01’ or ‘10’ (trap). A lieutenant’s list is made up of approximately fifty-fifty ‘1’ (retreat) and ‘0’ (attack).

The parties devote a fraction of the classical results drawn at random to a check. This constitutes the ‘trial data set’. Each party keeps secret her/his remaining ‘game data set’ for later.

The parties verify that the bitstring distribution in the trial data reasonably approximates the ideal distribution for a hypothetical noiseless system. A possible metric for this purpose is the classical Hellinger fidelity. If this check fails, the protocol is aborted.
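As a sketch of this check, the classical Hellinger fidelity between two count dictionaries can be computed directly; the counts and the abort threshold below are illustrative assumptions, not values from the experiment:

```python
from math import sqrt

def hellinger_fidelity(counts_a, counts_b):
    """Classical Hellinger fidelity F = (sum_i sqrt(p_i * q_i))**2
    between two distributions given as raw count dictionaries."""
    total_a = sum(counts_a.values())
    total_b = sum(counts_b.values())
    keys = set(counts_a) | set(counts_b)
    overlap = sum(
        sqrt((counts_a.get(k, 0) / total_a) * (counts_b.get(k, 0) / total_b))
        for k in keys
    )
    return overlap ** 2

# Illustrative counts only: an ideal three-outcome distribution versus
# noisy trial data containing an unexpected outcome.
ideal = {"00000": 1, "11111": 1, "01010": 1}
trial = {"00000": 300, "11111": 340, "01010": 310, "00100": 50}
f = hellinger_fidelity(ideal, trial)
threshold = 0.40  # acceptance threshold agreed beforehand (assumption)
print(f"Hellinger fidelity: {f:.3f}, abort: {f < threshold}")
```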

Histogram of classical results obtained in ibm-oslo. Comparing the distribution of the trial data to the ideal distribution, the Hellinger fidelity is 49% …and it worked! Image by author.

Additionally, the trial data set allows an estimation of the errors per number of exchanges to be expected later during the traitor hunt. The parties simulate each possibility and obtain a table of incidences that defines the parameters of the hypergeometric distributions linked to each case. Thanks to the Byzantine mitigation, the values can be pooled over Alice’s two possible orders.

Table: the number of errors per number of exchanges expected from an analysis of the trial data set. The samples come from a set of 4094 copies drawn from 8192 obtained on ibm_oslo. Image by author.

Send order

Image by author.

Alice sends the order (attack or retreat) accompanied by a matching set of indexes Sₗ. A set has length N by convention, with N ≪ number of copies in the game data set. If Alice is honest, she should send the same set to all parties. If she is a traitor, she will send one of the lieutenants a discordant order, with its matching set.
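A minimal sketch of how Alice could build Sₗ, assuming her list is a sequence of two-bit strings as described earlier (‘00’ retreat, ‘11’ attack, ‘01’/‘10’ trap); the list here is randomly generated for illustration:

```python
import random

def build_index_set(alice_list, order_bits, n):
    """Pick n indexes at which Alice's list matches the order she sends."""
    matching = [i for i, b in enumerate(alice_list) if b == order_bits]
    return random.sample(matching, n)

random.seed(0)
alice_list = [random.choice(["00", "11", "01", "10"]) for _ in range(400)]
s_attack = build_index_set(alice_list, "11", n=20)  # here N = 20
assert all(alice_list[i] == "11" for i in s_attack)
```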

Bayesian Byzantine Agreement

Image by author.

The lieutenants communicate to each other the order they claim they have received. Two lieutenants with the same claim won’t face each other in the hunt. Instead, they will take turns drawing and sending indexes of the list received from Alice. If certain indexes are not common to the two lists, they signal an inconsistency and the protocol is aborted.

Two lieutenants with discordant claims engage in a game where players take turns to send indexes. A lieutenant who refuses to play reveals that he is lying.

The indexes sent must point, in the recipient’s list, to the value expected for the sender’s claim. Each player counts the number of errors x made by the opponent in N runs.

A traitor does not have access to a correct index set for the claim he makes. He must draw from the larger set S, the complement of Sₗ; otherwise, he betrays himself. The traitor excludes the bad candidates, the indexes in S that do not correspond to the claim. The number of errors x will nevertheless be higher than for a loyal lieutenant: the traitor has no way of knowing which of the remaining indexes correspond to a trap in Alice’s original list.

At the end of a game, players use a Bayesian approach to interpret the outcome. The initial degree of belief of a lieutenant consists of the statistical odds oₐ in favor of the commanding general being the traitor rather than the facing lieutenant. The odds against are oₗ, with oₗ × oₐ = 1. The associated a priori probability is then P(A) = oₐ / (oₐ + 1). The a priori probability that the facing lieutenant is the traitor is P(¬A) = 1 - P(A).

Let’s now denote by P(B) the probability of observing x errors in N runs. From the trial data set, the loyal lieutenant reads from the table of incidences the number of errors nₗ (if facing a traitor) and nₐ (otherwise) to be expected for respectively Mₗ and Mₐ exchanges. Having observed x errors, this lieutenant uses the open-source scipy.stats hypergeom module to estimate the conditional probabilities:

  • P(B|A) = hypergeom(Mₐ, nₐ, N).pmf(x)
  • P(B|¬A) = hypergeom(Mₗ, nₗ, N).pmf(x).

The loyal lieutenant uses Bayesian inference to compute the posterior probability:

P(A|B) = P(B|A) × P(A) / { P(B|A) × P(A) + P(B|¬A) × P(¬A) }

A loyal lieutenant concludes that the opponent is loyal for P(A|B) > ½, or a traitor for P(A|B) < ½.
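Putting the formulas above together, a loyal lieutenant’s verdict can be sketched as follows; the parameter values are illustrative placeholders, not figures from the ibm_oslo run:

```python
from scipy.stats import hypergeom

def posterior_commander_traitor(x, N, M_a, n_a, M_l, n_l, odds_a=1.0):
    """P(A|B): posterior probability that the commander, rather than the
    facing lieutenant, is the traitor, after observing x errors in N runs."""
    p_a = odds_a / (odds_a + 1.0)      # prior P(A)
    p_not_a = 1.0 - p_a                # prior P(not A)
    p_b_given_a = hypergeom(M_a, n_a, N).pmf(x)      # opponent loyal
    p_b_given_not_a = hypergeom(M_l, n_l, N).pmf(x)  # opponent traitor
    num = p_b_given_a * p_a
    return num / (num + p_b_given_not_a * p_not_a)

# Illustrative numbers: few errors expected if loyal, many if traitorous.
p = posterior_commander_traitor(x=9, N=40, M_a=2000, n_a=200,
                                M_l=2000, n_l=700, odds_a=1.0)
verdict = "the commander" if p > 0.5 else "the facing lieutenant"
print(f"P(A|B) = {p:.3f}; the likely traitor is {verdict}")
```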

Here follow two examples of traitor hunts. Can you pause your reading and work out how many traitors there are, and who they are, in each example? Assume the odds ratio is 1:1 in all games. The solution is presented in the next section.
Hint: you don’t have to calculate probabilities; just follow the logical reasoning.

First example of a traitor hunt. Entanglement on ibm_oslo. Image by author.
Second example of a traitor hunt. Entanglement on ibm_oslo. Image by author.

Spoilers from here: solutions for both examples will be revealed!

In the first example, Bob and Dave, testing each other, recognize that they are both honest. But Dave, testing Carol, discovers that she is a traitor. Carol’s conclusion that Dave is honest is irrelevant to the final agreement, in which traitors have no say. Would Bob and Dave each follow the different order sent by Alice? Certainly not: since contradictory orders arrived, they know that she too is a traitor. But here is an elegant feature of the four-general protocol: to be an effective traitor, Alice only has to deceive one lieutenant, and here it is Bob, because the liar Carol claimed to have received the same order as Bob. Thus Bob will side with Dave and follow the true order that Dave received from Alice.

In the second example, when Carol tests Bob and Dave, she discovers that both are traitors. When Bob and Dave test Carol, they find her to be honest (again, an irrelevant sighting by traitors). If Carol has no preconceived ideas, she will follow the order received from Alice, as required by Byzantine fault tolerance.

A possible problem can be illustrated by another scenario in this second example. Suppose Carol develops overconfidence in Bob and Dave. Influenced by her two colleagues, she becomes very suspicious of Alice. Let Alice order an attack. Bob and Dave then manage to frame Alice by lying to Carol: starting from priors with P(A) ≫ P(¬A), Carol infers that Alice is the traitor. The three lieutenants begin a retreat. This case can easily be simulated by changing the prior probabilities in my Jupyter notebook (note that if Alice were a traitor and all lieutenants were innocent, the same agreement would be reached, and this time it would be correct).

There are also mirror cases where too much trust in the commander can spell bad luck for an innocent lieutenant.

These problems can be avoided by recursive Bayesian updating, which should be considered if such protocols are applied in a distributed computing system.
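A minimal sketch of such an update, under the assumption that each game’s posterior odds become the prior odds for the next game; the likelihood pairs below are hypothetical:

```python
def update_odds(odds_a, p_b_given_a, p_b_given_not_a):
    """Bayes' theorem in odds form:
    posterior odds = prior odds * likelihood ratio."""
    return odds_a * (p_b_given_a / p_b_given_not_a)

odds = 1.0  # no preconceived ideas: 1:1 odds on the commander
# Hypothetical likelihoods P(B|A), P(B|not A) from three successive games:
for p_ba, p_bna in [(0.02, 0.10), (0.01, 0.12), (0.05, 0.08)]:
    odds = update_odds(odds, p_ba, p_bna)
p = odds / (odds + 1.0)
print(f"posterior P(commander is the traitor) = {p:.4f}")
```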

Conclusion

In a proof of concept, the Bayesian quantum solution to the four-general Byzantine agreement was successfully run on a seven-qubit NISQ system.

This solution takes into account the lieutenants’ degree of trust in one another and in the commanding general. The performance benefits from a detailed noise analysis, with all possible cases simulated while checking the entanglement. During this phase, a table of error occurrences is drawn up, which later helps detect traitors. The effect of noise is thereby minimized, and a Bayesian Byzantine agreement can be reached with greater confidence.

An advantage of a protocol with more than two lieutenants is to reach a consensus when the commanding general is faulty. This occurs even if a lieutenant is also faulty.

I acknowledge the use of IBM Quantum services for the testing on hardware. The views expressed are those of the author, and do not reflect the official policy or position of IBM or the IBM Quantum team.
