The 2nd Quantum Revolution

Quantonation, Quantum Investors
5 min read · May 17, 2018

Genuine quantum effects at the heart of the so-called 2nd Quantum Revolution rely on only a few important concepts. It doesn’t take a PhD in Physics to understand them.

In the early 1960s, when the laser was invented, it was seen as a solution in search of a problem. The laser and the transistor, another of the “1st Generation” Quantum Technologies, are now omnipresent in our lives. These technologies, and many others, are founded on quantum mechanics, a theoretical framework developed in the 1920s. They exploit only two facets of the quantum description of matter at the atomic scale: wave-particle duality and the uncertainty principle governing the probabilistic nature of the quantum equations.

The other key and profoundly counter-intuitive aspects of quantum theory, entanglement and superposition, remained obscure to us for more than 50 years. But where there were paradoxes, there are now opportunities: disruptive “2nd Generation” Quantum Technologies involving the generation and use of quantum states and resources for communications, computing and sensing.

The quantum theory is indeterministic: it makes predictions only about the probabilities of events. This stands in contrast with a deterministic classical theory such as the one described by Newton’s laws. In the Newtonian picture, it is possible to predict with certainty the trajectories of all objects involved in an interaction, provided one knows the initial positions and velocities of all the objects. In reality, we can never possess full information about the positions and velocities of every object in a given physical system. Incorporating probability theory then lets us make predictions about the probabilities of events, and with this modification the classical theory also becomes indeterministic. Thus indeterminism is not unique to the quantum theory, but it is a crucial feature of it: in the quantum case the probabilities are fundamental, not merely a reflection of our incomplete knowledge.
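In standard textbook notation (added here as an illustration, not part of the original argument), this is captured by the Born rule: the state of a system fixes only the probability of each measurement outcome, never the outcome itself.

```latex
P(k) \;=\; \bigl|\langle k \mid \psi \rangle\bigr|^{2},
\qquad \sum_{k} P(k) \;=\; 1
```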

Interference is another feature of the quantum theory. It is also present in any classical wave theory: constructive interference occurs when the crest of one wave meets the crest of another, producing a stronger wave, while destructive interference occurs when the crest of one wave meets the trough of another, the two canceling each other out. In a classical wave theory, a wave arises from many particles in a medium coherently displacing one another, as in an ocean surface wave or a sound pressure wave, or from coherently oscillating electric and magnetic fields, as in an electromagnetic wave such as light. The strange aspect of interference in the quantum theory is that even a single “particle” such as an electron can exhibit wave-like features, as in the famous double-slit experiment. This quantum interference is what gives every fundamental constituent of matter its wave-particle duality.
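As a toy numerical sketch of this point (our illustration; the path lengths, equal weights and units are arbitrary assumptions), the quantum rule is to add the complex amplitudes of the two paths and only then square, whereas simply adding the probabilities of the two paths would show no interference:

```python
# Toy single-particle two-slit interference: probabilities come from
# adding *amplitudes*, not probabilities.
import numpy as np

wavelength = 1.0                # arbitrary units
k = 2 * np.pi / wavelength

def amplitude(path_length):
    """Complex amplitude accumulated along one path (equal weights assumed)."""
    return np.exp(1j * k * path_length) / np.sqrt(2)

# Two paths from the slits to a detector; their difference sets the phase.
path_1, path_2 = 10.0, 10.5    # half-wavelength difference -> destructive

a1, a2 = amplitude(path_1), amplitude(path_2)

p_quantum = abs(a1 + a2) ** 2              # add amplitudes, then square
p_classical = abs(a1) ** 2 + abs(a2) ** 2  # add probabilities (no interference)

print(f"quantum prediction:  {p_quantum:.3f}")    # ~0.000 (destructive)
print(f"classical-style sum: {p_classical:.3f}")  # 1.000
```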

The 1927 Solvay Conference. The brightest minds came together to discuss the world of physics and chemistry. Key contributors to the debate about the weird aspects of Quantum Theory were there.

Uncertainty is at the heart of the quantum theory, and it is fundamentally different from uncertainty in a classical theory. The archetypal example concerns a single particle, which has two complementary variables: its position and its momentum. The uncertainty principle states that it is impossible to know both the particle’s position and its momentum to arbitrary accuracy. The principle even calls into question the meaning of the word “know” in this context. We might say that we can only know what we measure, and thus we can only know the position of a particle after performing a precise measurement that determines it. If we then perform a precise measurement of its momentum, we lose all information about its position. In quantum information science, protocols for quantum key distribution exploit the uncertainty principle and statistical analysis to detect an eavesdropper on a quantum communication channel by encoding information into two complementary variables.
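A small simulation can make this concrete. In the sketch below (our illustration, using a qubit’s Z and X observables as stand-ins for position and momentum), the particle starts with a definite “Z value”; a precise X measurement then erases that information, so a second Z measurement gives a random result:

```python
# Toy illustration of complementary measurements with a single qubit.
import numpy as np

rng = np.random.default_rng(0)

# Eigenstates of Z ("position-like") and X ("momentum-like") observables
z_basis = [np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)]
x_basis = [np.array([1, 1], dtype=complex) / np.sqrt(2),
           np.array([1, -1], dtype=complex) / np.sqrt(2)]

def measure(state, basis):
    """Projective measurement: returns (outcome index, post-measurement state)."""
    probs = [abs(np.vdot(b, state)) ** 2 for b in basis]
    outcome = rng.choice(2, p=probs)
    return outcome, basis[outcome]

state = z_basis[0]                   # start with a definite "Z value"

z1, state = measure(state, z_basis)  # precise Z measurement: deterministic
x, state = measure(state, x_basis)   # precise X measurement: 50/50
z2, state = measure(state, z_basis)  # Z again: the old value is gone, 50/50

print(f"first Z: {z1}, X: {x}, second Z: {z2}")
```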

The superposition principle states that a quantum particle can be in a linear combination, or superposed state, of any two other allowable states. This principle is a consequence of the linearity of quantum theory. It has dramatic consequences for the interpretation of the theory: it gives rise to the notion that a particle can somehow “be in one location and another” at the same time. A superposition can be lost through the interaction of the particle with its environment, a process known as decoherence. Maintaining an arbitrary superposition of quantum states is one of the central goals of any quantum communication protocol.
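The sketch below (a deliberately crude dephasing model of our own, not a description of any real device) illustrates this loss: the environment imprints a random phase on one branch of the superposition, and averaging over many such phases leaves the populations of the two states untouched while the coherences, the off-diagonal terms of the density matrix, vanish:

```python
# Toy dephasing model: random environmental phases wash out a superposition.
import numpy as np

rng = np.random.default_rng(1)

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)   # (|0> + |1>)/sqrt(2)

def dephased_density_matrix(phase_spread, samples=10_000):
    """Average density matrix after random phase kicks of a given spread."""
    rho = np.zeros((2, 2), dtype=complex)
    for _ in range(samples):
        phi = rng.normal(0.0, phase_spread)            # environmental phase
        psi = np.array([plus[0], plus[1] * np.exp(1j * phi)])
        rho += np.outer(psi, psi.conj())
    return rho / samples

print(np.round(dephased_density_matrix(0.0), 2))  # intact: off-diagonals 0.5
print(np.round(dephased_density_matrix(5.0), 2))  # dephased: off-diagonals ~0
```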

The last, and perhaps most striking, quantum feature is entanglement. There is no true classical analog of entanglement. Entanglement refers to the strong quantum correlations that two or more quantum particles can possess, correlations that are stronger than any classical correlations in a precise, technical sense. Schrödinger coined the term “entanglement” in 1935 after observing some of its strange properties and consequences. In the same year, Einstein, Podolsky and Rosen presented the apparent “EPR paradox” involving entanglement, which raised concerns over the completeness of the quantum theory. They suggested that the seemingly strange properties of entanglement called the uncertainty principle into question (and thus the completeness of the quantum theory), and furthermore that there might be some “local hidden-variable” theory that could explain the results of experiments.
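For concreteness (standard notation, not taken from the article), the simplest example of an entangled state of two particles A and B is the Bell state below; no choice of individual states for A and B alone can reproduce it, which is what makes its correlations genuinely non-classical:

```latex
|\Phi^{+}\rangle \;=\; \frac{|0\rangle_{A}|0\rangle_{B} + |1\rangle_{A}|1\rangle_{B}}{\sqrt{2}}
\;\neq\; |\psi\rangle_{A} \otimes |\varphi\rangle_{B}
\quad \text{for any single-particle states } |\psi\rangle_{A},\, |\varphi\rangle_{B}
```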

An apparatus for performing a Bell test. A source emits a pair of entangled photons 𝜈1 and 𝜈2. Their polarizations are analyzed by polarizers A and B. A Bell test consists of measuring the correlations in polarization detection and comparing the results with Bell’s inequalities.

It took about 50 years to resolve this paradox. John Bell did so by presenting a simple inequality, now known as “Bell’s inequality” (1964), which Alain Aspect and his collaborators later put to the experimental test (1981-1982). Bell showed that any two-particle correlations satisfying the assumptions of a “local hidden-variable” theory must stay below a certain bound. He then showed theoretically that the correlations of two entangled quantum particles violate this inequality; entanglement therefore has no explanation in terms of classical correlations but is instead a uniquely quantum phenomenon.
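To see the numbers, here is a short numerical sketch (ours, not from the article) of the CHSH form of Bell’s inequality: for a maximally entangled pair, the quantum prediction for the combination S of four correlation measurements reaches 2√2 (about 2.83), while any local hidden-variable model is bounded by |S| ≤ 2. The analyzer angles below are the standard choices that maximize the quantum value.

```python
# Minimal numerical check of the CHSH form of Bell's inequality.
import numpy as np

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def spin(theta):
    """Observable for a measurement along angle theta in the X-Z plane (outcomes +/-1)."""
    return np.cos(theta) * Z + np.sin(theta) * X

# Maximally entangled two-qubit state (|00> + |11>) / sqrt(2)
phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

def E(a, b):
    """Quantum prediction for the correlation of the two outcomes at angles a and b."""
    observable = np.kron(spin(a), spin(b))
    return np.real(phi_plus.conj() @ observable @ phi_plus)

# Angle choices that maximize the quantum value of S
a1, a2 = 0.0, np.pi / 2
b1, b2 = np.pi / 4, 3 * np.pi / 4

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)

print(f"quantum value of S: {S:.3f}")   # ~2.828, i.e. 2*sqrt(2)
print("local hidden-variable bound: |S| <= 2")
```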

With the experimental observation of the violation of Bell’s inequalities, the profoundly counter-intuitive predictions of quantum theory were at last verified. No physical theory of local hidden variables (in the spirit of Einstein) can ever reproduce all of the predictions of quantum mechanics.

Coincidence polarization measurements on pairs of photons fall outside the range allowed by Bell’s inequalities. Aspect et al. 1982

Well, it was not quite so simple: the early experiments left some loopholes open, but the debate was finally put to rest by loophole-free experiments in 2015.

To learn more

Aspect, A. (2015). Closing the door on Einstein and Bohr’s quantum debate. Physics, 8, 123.

Quantonation, Quantum Investors

An Investment Fund dedicated to Deep Physics and Quantum Technologies