A Gentle Introduction to Certified Randomness

Derya Karl
13 min read · Sep 15, 2023


Let’s step into a time machine and visit Constantinople, capital of the Byzantine Empire and one of the best-protected cities of medieval times. The city resisted conquest for centuries thanks to its strong Theodosian walls and the clever use of the Golden Horn chain. Imagine standing before those towering walls: not just stone and mortar, but living symbols of protection guarding the empire’s wealth and secrets. Life behind them must have felt very secure. Now shift your gaze to the present, to the digital realm, where those historic walls find a modern counterpart in our security measures. Think of encryption and authentication as the invisible fortresses that keep our data and conversations private and secure.

Now, let’s leap ahead to 1453, a year of dramatic change. The ambitious Ottoman Empire takes the stage to conquer Constantinople, much as quantum computing takes the stage in today’s digital world. Envision the Ottoman leaders as brilliant scientists and builders, much like modern-day knights. Their weapon? Quantum computers, algorithms and applications powerful enough to reshape our understanding of computation.

Constantinople walls in medieval time generated by Leo.ai

Just as the walls of Constantinople stood firm, today’s security mechanisms protect our digital landscape. But here is where the Ottoman analogy emerges: just as the Ottomans devised strategies such as dragging ships over the hill into the Golden Horn and deploying cleverly engineered cannons to breach those ancient walls, today’s quantum scientists harness the extraordinary properties of quantum mechanics, qubits and entanglement, to challenge classical computing norms.

Throughout history, there has always been a struggle to find a balance between security and progress. The Ottoman Empire’s pursuit of domination can be compared to the current digital security revolution, which is being driven by quantum computing. Today’s cybersecurity professionals and quantum experts are like modern-day knights who use their specialized knowledge and innovative solutions to protect the virtual world.

Digital walls of web generated by Leo.ai

Quantum randomness plays a critical role in building stronger security measures: it supplies unpredictability that no adversary can reproduce or foresee. To safeguard our digital assets, we must stay vigilant and open to new technologies. Let us dive into the world of uncertainty and discover new ways to defend our digital property.

What is Quantum Randomness?

Well, as a computer scientist I may not be the most qualified person to explain the nature of randomness, but bear with me while I take a deep dive into one of its best use cases in the digital world. Quantum randomness is a fascinating concept that differs from the randomness we experience in everyday life. In the early days of quantum mechanics, quantum randomness was thought to be similar to classical randomness. However, Heisenberg’s famous microscope thought experiment demonstrated that measuring a quantum state irreversibly disturbs it, unlike classical randomness, where definite underlying states are merely concealed at the microscopic level. This revelation is truly captivating!

Over time, scientists conducted experiments and discovered that quantum randomness is fundamentally different from classical randomness. These experiments, which tested Bell’s inequalities, demonstrated that the quantum realm exhibits non-locality: a phenomenon where systems can be correlated in ways that defy our conventional understanding. This non-locality gives rise to quantum randomness while still ensuring that information cannot travel faster than light, upholding the causality that Einstein deeply cared about.

This meme, created by me, references the famous exchange between Albert Einstein, “I, at any rate, am convinced that He [God] does not throw dice,” and Niels Bohr, “Einstein, stop telling God what to do.”

Nowadays, many physicists accept that inherent randomness is an essential characteristic of nature itself within the realm of quantum physics. However, we seldom delve into why this is so or how quantization and discreteness contribute to quantum randomness. No other scientific explanation has yet replaced Heisenberg’s microscope.

So, why should we care about quantum randomness now? Well, quantum theory reveals a universe where even the tiniest particles exist in a constant state of uncertainty. It’s not just fascinating from a scientific standpoint; it opens doors to a world of remarkable technological potential.

Think about it — using quantum randomness to create inherently secure systems and enhance our security measures. We can even verify that this randomness is genuine. It sounds almost too good to be true, but here’s the catch: We must ensure that what we’re capturing is truly random and not a result of clever quantum tricks.

With this in mind, let’s uncover why quantum bits, or qubits, are crucial for unlocking the full potential of quantum randomness.

Why do we need Quantum Bits?

Generating quantum bits, or qubits, has never been more crucial than it is today. It’s essential to ensure that our digital world remains secure and trustworthy. Why is it so important, you ask? Well, the answer is quite simple: in our highly connected world, where digital data is exchanged at lightning speed, we need to have a way to create cryptographically certified random bits. These bits are not just random, but they are proven to be so, making them the gold standard for securing our digital transactions and communications.

Some might think that generating random bits is easy, and they are not entirely wrong. You can create them with a quantum computer or even a Geiger counter next to some radioactive material. Quantum mechanics itself tells us that these bits are fundamentally unpredictable. However, the challenge lies in convincing someone, especially over the Internet, that these bits are genuinely random and haven’t been tampered with.
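To make the first half of that point concrete, here is a minimal sketch of the interface a random bit generator exposes. Note the hedge: this uses the operating system’s entropy source as a classical stand-in for real qubit measurements, which is exactly why it illustrates the certification problem, since a remote verifier cannot tell the two apart.

```python
import secrets

def sample_random_bits(n):
    """Stand-in for measuring n qubits prepared in the |+> state:
    each measurement returns 0 or 1 with probability 1/2.
    A real QRNG would replace secrets.randbits(1) with a hardware
    measurement, but a remote party could not tell the difference
    from the output alone."""
    return [secrets.randbits(1) for _ in range(n)]

bits = sample_random_bits(16)
print(bits)
```

Any statistical test you run on this stream passes just as it would for genuine quantum bits, which is why certification needs something stronger than statistics on the output.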

Schrödinger’s cat living in an alternative universe, generating random bits :)

This is not just a theoretical concern. In 2013, the world discovered that a widely used pseudorandomness standard, Dual_EC_DRBG, had been compromised by the NSA. Ensuring the authenticity of random bits therefore has serious real-world implications.

Can we certify Randomness?

Yes! Certified randomness was first proposed by Roger Colbeck in his 2006 Ph.D. thesis, and it holds significant practical implications, particularly with the rise of proof-of-stake cryptocurrencies like Ethereum. In such systems, lotteries determine who gets to add the next block to the blockchain. These lotteries must be conducted honestly and without bias, without relying on a central authority. Certified randomness also finds application in non-interactive zero-knowledge proofs, financial audits, and electoral processes.
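As a toy illustration of such a lottery, the sketch below selects a block proposer in proportion to stake from a shared random beacon value. The validator names, the SHA-256 derivation, and the selection rule are all hypothetical simplifications I chose for illustration, not Ethereum’s actual proposer-selection algorithm.

```python
import hashlib

def select_proposer(validators, beacon_output):
    """Pick the next block proposer, weighted by stake, from a shared
    random beacon value. Deterministic: every honest node that sees the
    same beacon output selects the same proposer, with no central authority."""
    total_stake = sum(stake for _, stake in validators)
    # Map the beacon output to a "ticket" in [0, total_stake)
    ticket = int.from_bytes(hashlib.sha256(beacon_output).digest(), 'big') % total_stake
    # Walk the cumulative stake until the ticket lands in someone's share
    cumulative = 0
    for name, stake in validators:
        cumulative += stake
        if ticket < cumulative:
            return name

validators = [('alice', 60), ('bob', 30), ('carol', 10)]
print(select_proposer(validators, b'beacon-round-1'))
```

The fairness of the whole scheme rests on the beacon output being unbiased and unpredictable, which is precisely what certified randomness supplies.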

Now, here’s where it gets really exciting! Since 2009, researchers have been exploring a remarkable avenue: using measurements on entangled particles to create physically certified randomness. Imagine this — by observing the outcomes of these measurements and finding violations of the Bell/CHSH inequality, we can be certain that these outcomes are genuinely unpredictable. It’s as if the particles themselves are contributing to a cosmic lottery!

Dice rolling in an infinite universe of uncertainty, generated with Leonardo.ai

Bell/CHSH-based certified randomness protocols have already been tested in experiments and are even being considered for use in the NIST Randomness Beacon, which generates 512 random bits per minute. This is a game-changer for enhancing the security and trustworthiness of our digital world.

The realm of generating and verifying randomness

When it comes to quantum computing, there are multiple ways to generate and certify random bits. Three notable examples are Boson Sampling, the Bell CHSH game, and Random Circuit Sampling, each with its own set of principles, applications, and certification processes. It’s worth noting that there are many other methods available to generate randomness, and the choice of which method to use depends on the specific needs of the situation.

Boson Sampling:

Boson Sampling is a fascinating technique that utilizes the incredible phenomenon of quantum interference to generate random outcomes. By exploiting the properties of indistinguishable photons, the interference patterns that result are completely unpredictable and impossible to simulate using classical methods. This unpredictability is what guarantees the randomness of measurement outcomes, making Boson Sampling an incredibly valuable tool in various applications, including cryptography.

Experimental lossy boson sampling setup. Credit: arXiv:1801.08282 [quant-ph]

However, it’s important to note that implementing Boson Sampling can be a daunting task, requiring specialized optical components and precise control over photon sources and routing. Achieving the necessary level of accuracy and attention to detail can be challenging. To ensure the randomness of the outcomes produced by Boson Sampling, experimental results are compared to classical simulations and subjected to statistical tests. These tests are critical in verifying the unpredictability and randomness of the outcomes, ensuring that Boson Sampling remains a reliable and effective tool in many different areas.
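The classical hardness behind Boson Sampling comes from matrix permanents: each outcome’s probability is proportional to |Perm(A)|² for a submatrix A of the interferometer’s unitary, and no efficient classical algorithm for the permanent is known. A small sketch of the best-known exact method, Ryser’s formula, shows why the cost explodes with photon number:

```python
import itertools
import numpy as np

def permanent(a):
    """Matrix permanent via Ryser's formula, O(2^n * n^2).
    Boson sampling outcome probabilities scale as |Perm|^2 of n x n
    submatrices, so even this best-known exact method becomes
    intractable as the photon number n grows."""
    n = a.shape[0]
    total = 0.0
    for r in range(1, n + 1):
        for cols in itertools.combinations(range(n), r):
            # Sum each row over the chosen column subset, then multiply
            row_sums = a[:, list(cols)].sum(axis=1)
            total += (-1) ** r * np.prod(row_sums)
    return (-1) ** n * total

# Sanity checks: Perm of the all-ones 3x3 matrix is 3! = 6,
# and Perm of the identity is 1.
print(permanent(np.ones((3, 3))))  # 6.0
print(permanent(np.eye(3)))        # 1.0
```

Unlike the determinant, the alternating signs that make Gaussian elimination work are absent here, which is the intuitive reason the permanent resists efficient computation.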

Bell CHSH Game:

The Bell CHSH game is a fascinating test that explores quantum entanglement between two distant parties, Alice and Bob. They measure entangled qubits based on chosen settings, which is similar to a magician’s duel. The game certifies randomness indirectly by demonstrating the unpredictability of quantum measurement outcomes, revealing correlations that defy classical explanations. This makes it primarily a research tool, serving as a testbed for quantum phenomena, illustrating the unpredictability of quantum entanglement. Performing CHSH experiments necessitates entangled qubit sources and precise control over measurement settings. Violations of Bell inequalities serve as evidence of the unpredictability of quantum measurement outcomes, indirectly certifying their randomness.
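The quantitative content of the game fits in a few lines. In the CHSH game Alice receives bit x and Bob bit y; they win if their answers satisfy a XOR b = x AND y. Enumerating every deterministic classical strategy (shared randomness cannot help, since it is just a mixture of these) exposes the classical ceiling of 75%, whereas measurements on entangled qubits reach cos²(π/8) ≈ 85.4%:

```python
import itertools
import math

def best_classical_chsh_win_rate():
    """Enumerate all deterministic strategies: Alice answers a = f(x),
    Bob answers b = g(y). They win iff a XOR b == x AND y.
    No classical strategy wins more than 3 of the 4 question pairs."""
    best = 0.0
    for f in itertools.product([0, 1], repeat=2):      # Alice's answer table
        for g in itertools.product([0, 1], repeat=2):  # Bob's answer table
            wins = sum(f[x] ^ g[y] == (x & y)
                       for x in (0, 1) for y in (0, 1))
            best = max(best, wins / 4)
    return best

print(best_classical_chsh_win_rate())  # 0.75
print(math.cos(math.pi / 8) ** 2)      # ~0.8536, the quantum value
```

Any observed win rate above 75% therefore certifies that the outcomes could not have been produced by any pre-agreed classical strategy.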

Random Circuit Sampling:

The technique of Random Circuit Sampling entails applying quantum gates to an initial state in a random manner, akin to a quantum magician’s performance. This computational feat poses a significant challenge for classical simulation: the complexity and unpredictability of quantum state evolution are what make certification possible. The approach is well suited to generating certified random bits for secure communication and other applications that demand high-quality random numbers. Nevertheless, implementing it on a large scale can be intricate, as is often the case with other quantum experiments.

In 2019, Google used Random Circuit Sampling with the 53-qubit Sycamore chip in its Quantum Supremacy experiment, claiming that the results showed a quantum processor completing a task far faster than any classical computer could. (By the way: IBM still disputes this claim.)
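To get a feel for what random circuit sampling involves, here is a tiny classical simulation on 3 qubits: layers of Haar-random single-qubit gates interleaved with CNOTs, followed by sampling bitstrings from the squared amplitudes. This is only a sketch of the idea; experiments like Sycamore’s use 50+ qubits precisely so that this kind of direct statevector simulation becomes infeasible.

```python
import numpy as np

def haar_1q_gate(rng):
    """Haar-random single-qubit unitary via QR of a complex Gaussian."""
    z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))  # phase fix for Haar measure

def one_qubit_op(gate, qubit, n):
    """Embed a single-qubit gate as a full 2^n x 2^n operator."""
    op = np.eye(1)
    for q in range(n):
        op = np.kron(op, gate if q == qubit else np.eye(2))
    return op

def cnot_op(control, target, n):
    """Full-space CNOT built from projectors on the control qubit."""
    p0, p1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
    x = np.array([[0.0, 1.0], [1.0, 0.0]])
    t0 = t1 = np.eye(1)
    for q in range(n):
        t0 = np.kron(t0, p0 if q == control else np.eye(2))
        t1 = np.kron(t1, p1 if q == control else (x if q == target else np.eye(2)))
    return t0 + t1

rng = np.random.default_rng(7)
n = 3
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0
for _ in range(4):  # four layers of random gates plus entanglers
    for q in range(n):
        state = one_qubit_op(haar_1q_gate(rng), q, n) @ state
    for q in range(n - 1):
        state = cnot_op(q, q + 1, n) @ state

# Sample bitstrings from the Born-rule distribution |amplitude|^2
probs = np.abs(state) ** 2
samples = rng.choice(2 ** n, size=8, p=probs / probs.sum())
print([format(s, '03b') for s in samples])
```

The certification argument is that producing samples from this distribution, at scale, is believed to be beyond any classical machine, so samples that pass the statistical checks must carry genuine quantum entropy.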

Note : Boson Sampling and Random Circuit Sampling directly generate and certify randomness through quantum interference and computational complexity, while the Bell CHSH game indirectly certifies quantum measurement unpredictability due to entanglement. The choice among them hinges on the specific use case and requirements.

Generating truly random numbers is a big challenge because it relies on unpredictable natural processes. It’s important to understand the significant uses of genuinely random bits, including certified authenticity. However, the models, methods and devices used for this purpose can contain errors and be vulnerable to manipulation by others.

Randomness and Statistical Zero Knowledge

This is really cool one! In the world of certified randomness, regular pseudorandom functions (PRFs) fall short when dealing with quantum adversaries. PRFs only make the device’s output appear pseudorandom, not genuinely random. For certified randomness, we demand more. We need a pseudorandom function that, when used instead of a truly random one, produces output that’s statistically identical to a uniformly random distribution (in other words, as random as it gets).

To achieve this high level of security, we can create a pseudorandom function that’s just as good as the real deal, especially for handling tricky quantum protocols like QSZK. QSZK involves quantum computers and a concept known as “quantum statistical zero-knowledge protocols.” These protocols include a quantum verifier, a prover, and a simulator that can replicate their interactions without needing secret information.

In simpler terms, we can develop a super-strong pseudorandom function that’s almost as good as pure randomness. This is crucial for dealing with complex quantum processes and ensuring our certified randomness is exceptionally secure.
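The phrase “statistically identical to a uniformly random distribution” can be made concrete with the total variation (statistical) distance. The toy check below compares the empirical distribution of a PRF-style stream (SHA-256 over a secret key and a counter, a stand-in I chose for illustration, not a construction from the QSZK literature) against genuinely random samples:

```python
import hashlib
import secrets
from collections import Counter

def empirical_tvd(samples_p, samples_q, support):
    """Empirical total variation distance between two sample sets.
    'Statistically close to uniform' means this distance stays negligible
    against every efficient distinguisher; here we check only one coarse
    frequency-based statistic."""
    cp, cq = Counter(samples_p), Counter(samples_q)
    n_p, n_q = len(samples_p), len(samples_q)
    return 0.5 * sum(abs(cp[s] / n_p - cq[s] / n_q) for s in support)

# PRF-style stream: first byte of SHA-256(key || counter), reduced mod 4
key = secrets.token_bytes(16)
prf_stream = [hashlib.sha256(key + i.to_bytes(4, 'big')).digest()[0] % 4
              for i in range(20000)]
true_stream = [secrets.randbelow(4) for _ in range(20000)]
print(round(empirical_tvd(prf_stream, true_stream, range(4)), 4))
```

A small number here is necessary but nowhere near sufficient: certified randomness demands closeness against arbitrary quantum distinguishers, not just frequency counts.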

CHSH/Bell Inequality in Action

If you are a loyal reader, you will remember that in my post The Magic of Entanglement in Quantum Information Theory I mentioned the CHSH game and how it magically changed the quantum field (well-mastered physics). Now I want to demonstrate the code in action, fingers crossed! Let’s start.

Disclaimer: I’m using the Qiskit Lab workbook alongside my own code.

Imagine Alice and Bob are given each one part of a bipartite entangled system. Each of them then performs two measurements on their part in two different bases. Let’s call Alice’s bases A and a and Bob’s B and b. What is the expectation value of the quantity

⟨𝐶𝐻𝑆𝐻⟩=⟨𝐴𝐵⟩−⟨𝐴𝑏⟩+⟨𝑎𝐵⟩+⟨𝑎𝑏⟩?

Now, Alice and Bob have one qubit each, so any measurement they perform on their system (qubit) can only yield one of two possible outcomes: +1 or -1. Note that whereas we typically refer to the two qubit states as |0⟩ and |1⟩, these are eigenstates, and a projective measurement will yield their eigenvalues, +1 and -1, respectively.

Therefore, if any measurement of A, a, B, and b can only yield ±1, then the quantities (𝐵−𝑏) and (𝐵+𝑏) can only be 0 or ±2, and exactly one of the two is nonzero. Thus the quantity 𝐴(𝐵−𝑏)+𝑎(𝐵+𝑏) can only be either +2 or -2, which means that there is a bound on the expectation value of the quantity we have called CHSH:

|⟨𝐶𝐻𝑆𝐻⟩|=|⟨𝐴𝐵⟩−⟨𝐴𝑏⟩+⟨𝑎𝐵⟩+⟨𝑎𝑏⟩|≤2.

Now, the above discussion is oversimplified, because we could consider that the outcome on any set of measurements from Alice and Bob could depend on a set of local hidden variables, but it can be shown with some math that, even when that is the case, the expectation value of the quantity 𝐶𝐻𝑆𝐻 should be bounded by 2 if local realism held.
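That ±2 claim is easy to verify exhaustively: for every deterministic assignment of ±1 outcomes to A, a, B, b, the combination A(B−b)+a(B+b) takes only the values +2 and −2, so any average over hidden variables is bounded by 2 in absolute value:

```python
import itertools

# Brute-force the local-hidden-variable bound: with A, a, B, b each +1 or -1,
# exactly one of (B - b) and (B + b) is nonzero (and equals +/-2), so
# A*(B - b) + a*(B + b) is always +2 or -2.
values = {A * (B - b) + a * (B + b)
          for A, a, B, b in itertools.product([1, -1], repeat=4)}
print(sorted(values))  # [-2, 2]
```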

But what happens when we do these experiments with an entangled system? Let’s try it!

The first step is to build the observable

𝐶𝐻𝑆𝐻=𝐴(𝐵−𝑏)+𝑎(𝐵+𝑏)=𝐴𝐵−𝐴𝑏+𝑎𝐵+𝑎𝑏

where 𝐴,𝑎 are each one of {𝐼𝑋,𝐼𝑍} for qubit 0 and 𝐵,𝑏 are each one of {𝑋𝐼,𝑍𝐼} for qubit 1 (corresponding to little-endian notation). Paulis on different qubits can be composed by specifying order with a Pauli string; for example, instantiating a SparsePauliOp with the ‘ZX’ argument implies a measurement of ⟨𝑋⟩ on q0 and ⟨𝑍⟩ on q1.

This tensor product (combining operations on different qubits) can be explicitly stated using the .tensor() method. Additionally, combining operations on the same qubit(s) uses the compositional product with the .compose() method.

For example, all these statements create the same Pauli operator:

from qiskit.quantum_info import SparsePauliOp

ZX = SparsePauliOp('ZX')
ZX = SparsePauliOp(['ZX'], coeffs=[1.]) # extendable to a sum of Paulis
ZX = SparsePauliOp('Z').tensor(SparsePauliOp('X')) # extendable to a tensor product of Paulis
ZX = SparsePauliOp('XZ').compose(SparsePauliOp('YY')) # extendable to a compositional product of Paulis

Create an operator for the CHSH witness. Taking A = Z and a = X on qubit 0, and B = Z and b = X on qubit 1, the four terms of CHSH = AB − Ab + aB + ab correspond to the little-endian Pauli strings ZZ, XZ, ZX and XX:

#Pauli strings for the four CHSH terms (little-endian qubit order)
term_paulis = ['ZZ', 'XZ', 'ZX', 'XX']
#Coefficients from CHSH = AB - Ab + aB + ab
coeffs = [1.0, -1.0, 1.0, 1.0]
#Build the witness as a single sum of weighted Paulis
obsv = SparsePauliOp(term_paulis, coeffs=coeffs)

Next we want to test the 𝐶𝐻𝑆𝐻 observable on an entangled qubit pair, for example the maximally entangled Bell state

|Φ⟩=(1/√2)(|00⟩+|11⟩),

which is created with a Hadamard gate followed by a CNOT whose control is the same qubit as the Hadamard. Due to the simplification of measuring only in the 𝑋 and 𝑍 bases as discussed above, we will rotate the Bell state around the Bloch sphere, which is equivalent to changing the measurement basis as demonstrated in the Warmup section. This can be done by applying an 𝑅𝑦(𝜃) rotation to qubit 0, where 𝜃 is a Parameter to be specified at the Estimator API call.

This produces the state

|𝜓⟩=(1/√2)(cos(𝜃/2)(|00⟩+|11⟩)+sin(𝜃/2)(|01⟩−|10⟩))

from qiskit import QuantumCircuit
from qiskit.circuit import Parameter

theta = Parameter('θ')
qc = QuantumCircuit(2)
qc.h(0)
qc.cx(0, 1)
qc.ry(theta, 0)
qc.draw('mpl')

Next we need to specify a Sequence of Parameters that show a clear violation of the CHSH Inequality, namely

|⟨𝐶𝐻𝑆𝐻⟩|>2.

Let’s make sure we have at least three points in violation.

import numpy as np
from qiskit import QuantumCircuit
from qiskit.circuit import Parameter
from qiskit.primitives import Estimator
from qiskit.quantum_info import SparsePauliOp
#Parameterized Bell-state circuit with the measurement-basis rotation
theta = Parameter('θ')
qc = QuantumCircuit(2)
qc.h(0)
qc.cx(0, 1)
qc.ry(theta, 0)
#CHSH witness built above: ZZ - XZ + ZX + XX (little-endian)
observable = SparsePauliOp(['ZZ', 'XZ', 'ZX', 'XX'], coeffs=[1.0, -1.0, 1.0, 1.0])
#Generate parameter values using a list of lists
angles = [[ph] for ph in np.linspace(0, 2 * np.pi, 10)]
#Run the Estimator
estimator = Estimator()
job = estimator.run([qc] * len(angles), observables=[observable] * len(angles), parameter_values=angles)
exps = job.result().values
#Check for violation of the classical bound
threshold = 2
violations = [abs(val) > threshold for val in exps]
#Print the angles, CHSH estimates, and violations
print("Angles:", np.array(angles)[:, 0])
print("CHSH estimates:", exps)
print("Violations:", violations)
print("Number of violating points:", sum(violations))

When you run the code, you should see at least three angles where |⟨𝐶𝐻𝑆𝐻⟩| exceeds the classical bound of 2 (these are the points that would lie outside the red dashed lines if you plotted the sweep). This shows the violation in action.

Please sheathe your swords, I know: you have lots of questions, and there are many quantum mechanics terms that are hard to grasp at first, terms I am still learning along the way too. This code and work encourage practical use of the Bell inequality, so I hope you are as excited as I am to play with it and see the results on real quantum hardware.

Conclusions

Thank you for staying with me until the end. To support the analogy I introduced at the beginning of this post, we find ourselves in a siege, building and designing brilliant algorithms and protocols to breach these digital walls. Just as it took 53 days of siege to conquer Constantinople, it may require more time in our era, but rest assured, it will happen within our lifetime. Google’s demonstration of Quantum Supremacy and the practical applications emerging today are clear indicators of the fruitful results we are beginning to witness.

Milestones!

I am pleased to inform you that I have recently been awarded the Starter Research Grant from the Starknet Foundation for my work on Certified Randomness. This ongoing research aims to solve the challenges of blockchain consensus, a problem that has long plagued the space. My relentless commitment lies in bringing quantum cryptography solutions into our daily lives to enhance digital and physical security. In this transformative quantum era, it is vital that we work together, and I am open to collaborations with research scientists and developers who share my passion. Let us unite and create a meaningful impact together.

Resources

Aaronson, S., & Arkhipov, A. (2011). The computational complexity of linear optics. Theory of Computing, 9(4), 143–252.
Broome, M. A., Fedrizzi, A., Rahimi-Keshari, S., Dove, J., Aaronson, S., Ralph, T. C., & White, A. G. (2013). Photonic boson sampling in a tunable circuit. Science, 339(6121).
Neville, A., Sparrow, C., Clifford, P., Johnston, E., Birchall, P. M., Montanaro, A., & Laing, A. (2017). Classical boson sampling algorithms with superior performance to near-term experiments. Nature Physics, 13(12).
Tillmann, M., Dakić, B., Heilmann, R., Nolte, S., Szameit, A., & Walther, P. (2013). Experimental boson sampling. Nature Photonics, 7(7).
Bell, J. S. (1964). On the Einstein Podolsky Rosen Paradox. Physics Physique Физика, 1(3).
Aspect, A., Dalibard, J., & Roger, G. (1982). Experimental test of Bell's inequalities using time-varying analyzers. Physical Review Letters, 49(25).
Bouland, A., Brandão, F. G. S. L., & Svore, K. M. (2019). Quantum supremacy and the complexity of random circuit sampling. Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing.
Arute, F., Arya, K., Babbush, R., Bacon, D., Bardin, J. C., Barends, R., … & Chen, Y. (2019). Quantum supremacy using a programmable superconducting processor. Nature.


Derya Karl

Computer Scientist / Building @ownprotocol / Founder @Sirius Quantum Solutions. Prev. Co-Founder @zkPassport, Consultant @Apple, Operations @IDscan