Alain Aspect, John Clauser, Anton Zeilinger and Bohr’s Correspondence Principle: A Myth Dispelled.

Wes Hansen
Dec 7, 2022 · 11 min read


Recently Alain Aspect, John Clauser, and Anton Zeilinger were awarded the Nobel Prize in Physics, essentially for demonstrating experimentally that quantum mechanics violates the Bell inequality; in fact, it was Aspect and Clauser who conducted the experiments based on John Bell’s famous paper, On the Einstein Podolsky Rosen Paradox. That paradox was set out in the famous EPR paper, Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?. The problem is that the situation is being grossly misrepresented by the popular science media and the media in general: technically speaking, Aspect and Clauser demonstrated the non-locality of the quantum realm, but they did NOT prove Einstein’s fundamental position wrong, contrary to what both have stated and what has been widely disseminated in the media.

Einstein’s fundamental position was simply that the quantum mechanical wavefunction, evolving according to Schrödinger’s equation, is an incomplete description of physical reality, which is stated clearly in the title of the EPR paper. Einstein felt certain that Quantum Theory was a statistical theory describing statistical ensembles rather than individual particles. He called entanglement “spooky action at a distance,” an allusion to the action at a distance inherent to Newtonian gravity. The idea is simply that, in the THEORY, there is no mechanism for the mediation of the gravitational force, in the Newtonian case, or for the mediation of the quantum information, in the quantum case. Neither Aspect nor Clauser proved this wrong! In fact, Einstein has recently been completely vindicated by the foundational work of Ulf Klein, and one really has to wonder why the same media misrepresenting this situation are not reporting on Professor Klein’s work.

[Image: Ulf Klein, the only image available.]

In what follows I use actual LaTeX code, since I do not know the proper way to embed LaTeX in a Medium draft.

Dirac’s famous book [1] on quantum theory states that “…classical mechanics may be regarded as the limiting case of quantum mechanics when ℏ tends to zero.”

[T]he attitude of the scientific community with regard to this point — which is extremely important for the interpretation of quantum theory (QT) as well as for more general questions such as the problem of reductionism — is somewhat inconsistent. On the one hand, Dirac’s dictum — which has been approved by other great physicists — is considered to be true. On the other hand it cannot be verified.

Ulf Klein, What is the limit ℏ → 0 of quantum theory?

The idea that quantum mechanics has a classical limit was first articulated by Niels Bohr in the form of his correspondence principle. This idea, though never proven mathematically, gained a great deal of credibility when Bohr used it to derive the relation between the Rydberg constant and a number of other constants: the electron mass, the electron charge, Planck’s constant, the velocity of light in vacuum, and the vacuum permittivity. And in addition to this, to anyone accustomed to working with mathematical limits, it’s a completely rational idea.
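For reference, Bohr’s relation, written out in modern SI notation, is

R_\infty = \frac{m_e e^4}{8 \epsilon_0^2 h^3 c},

which ties the empirically measured Rydberg constant to the electron mass m_e, the electron charge e, the vacuum permittivity \epsilon_0, Planck’s constant h, and the velocity of light c.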

Planck’s constant, h (ℏ = h/2π), like the Rydberg constant and most, if not all, of the other constants, was discovered empirically. Max Planck was investigating black-body radiation using a hohlraum, a hollow metal cavity with a small hole in one side, which is heated. He developed a distribution of the frequencies, ν, of the electromagnetic waves emerging from the small hole and found that he could explain this distribution with the postulate that energy is transferred from the matter in the walls of the cavity to the radiation in discrete bundles, E = hν, where h = 6.6262 \times 10^{-34} J s.

If we add to this the previously known energy-momentum relation for massless particles, E = pc, and the previously known wavelength-frequency relation for electromagnetic radiation, λν = c, we have discrete bundles of momentum, p = h/λ. This relation was extended to massive particles theoretically by Louis de Broglie and confirmed for electrons shortly thereafter in the scattering experiments of Clinton Davisson and Lester Germer.
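As a quick numerical illustration of p = h/λ (a back-of-the-envelope Python sketch of my own, not anything from the papers discussed here; the function name and sample numbers are mine), the snippet below estimates the de Broglie wavelength of an electron accelerated through roughly 54 volts, the regime of the Davisson-Germer experiment, and gets a wavelength on the order of an atomic spacing, which is why diffraction off a crystal was observable:

import math

h = 6.6262e-34    # Planck's constant, J s
m_e = 9.109e-31   # electron rest mass, kg
eV = 1.602e-19    # one electron volt in joules

def de_broglie_wavelength(kinetic_energy):
    # Non-relativistic de Broglie wavelength: lambda = h / p, with p = sqrt(2 m E)
    p = math.sqrt(2.0 * m_e * kinetic_energy)
    return h / p

print(de_broglie_wavelength(54 * eV))  # ~1.7e-10 m, about 1.7 angstroms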

(Incidentally, it is these matter waves, not “probability waves,” which enable electron microscopes.)

In addition to these relations, we have the Heisenberg uncertainty relation, Δx Δp ≥ h/4π.

So the logical thought has always been that, when you let h (or ℏ) shrink toward zero, energy and momentum become continuous-valued and the uncertainty bound becomes negligible; hence a discrete, uncertain, probabilistic theory (quantum mechanics) becomes a continuous, certain, deterministic one (Newtonian mechanics) in the limit as h tends to zero.
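Written out in LaTeX, the naive argument is just that every appearance of h is suppressed in the limit:

E = h\nu \to 0, \qquad p = \frac{h}{\lambda} \to 0, \qquad \Delta x \, \Delta p \ge \frac{h}{4\pi} \to 0 \qquad \text{as } h \to 0,

so the spacing between allowed energy and momentum values vanishes and the lower bound on the uncertainty product disappears.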

But Ulf Klein set aside these long-cherished beliefs, these dogmas, analyzed the situation rigorously, even pedantically, and found that this is NOT the case. There are three papers in the series, with a fourth recently posted to the arXiv; I will summarize two of the three here.

In his first paper, linked to above and published in the American Journal of Physics in 2018, he begins by discussing two types of limit relations in classical physics. The first he calls the standard limit relation, which is characterized by the relation between special relativity and Newtonian mechanics. In this relation both theories have the same mathematical structure, but a new fundamental constant appears which constrains the dynamics of the covering theory. In the limit as this constant goes to infinity, the covering theory (special relativity) reduces to the limit theory (Newtonian mechanics). The second he calls the deterministic limit relation, which is characterized by the relation between probabilistic Newtonian mechanics and the standard, deterministic version. In this relation the theories have different mathematical structures, no new constants appear, and the theories belong to fundamentally different epistemological categories. Nonetheless, the probabilistic version (the covering theory) can be reduced to the deterministic version (the limit theory) by proper assignment of initial values, i.e., by eliminating uncertainty.
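Two familiar examples, mine rather than Klein’s but of exactly the type he has in mind: for the standard limit, the relativistic energy of a free particle reduces to the Newtonian expression when v/c is negligible,

E = \frac{mc^2}{\sqrt{1 - v^2/c^2}} = mc^2 + \frac{1}{2}mv^2 + \ldots ;

for the deterministic limit, a probabilistic description collapses to a deterministic one when the initial phase-space density is concentrated on a single point,

\rho_0(q, p) = \delta(q - q_0)\,\delta(p - p_0).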

Using these clear-cut relations, he derives the standard limit of quantum mechanics, discovering that it is a probabilistic theory of classical mechanics he calls probabilistic Hamilton-Jacobi theory (PHJ) (this turns out not to be pedantically true). He then shows that Newtonian mechanics is the deterministic limit of PHJ. Naturally one then assumes that Newtonian mechanics is the combined limit of quantum mechanics, but this is NOT, in general, true; it is only the case for three special potentials. The reason is interesting. The probability density depends on the position vector, and this dependence is unaffected by the limiting process. This implies that the position vector describes a position expectation value, independent of the limiting process; hence, “the existence of equations of motion of Newtonian mechanics for position expectation values is a necessary condition for the existence of deterministic potentials.” This leads to the realization that all mathematically acceptable deterministic potentials are linear combinations of three special cases: the force-free, constant-force, and harmonic-oscillator potentials (the general form is written out just below).
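In one dimension (my notation, not Klein’s), this means the deterministic potentials are at most quadratic in the position variable,

V(q) = V_0 - Fq + \frac{m\omega^2}{2}\,q^2,

where the constant term is the force-free case, the linear term the constant-force case, and the quadratic term the harmonic oscillator.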

These derivations were made based on a characterization of a theory as the set of its predictions, so they hold for all formulations of quantum mechanics. In his discussion, Professor Klein briefly analyzes Feynman’s path-integral formalism and shows where Dirac went wrong in his famous book, concluding that the result “provides an argument in favor of the statistical interpretation,” i.e., the EPR paper is correct and quantum mechanics describes ensembles rather than single events. He greatly strengthens and clarifies this argument in the third paper in the series, From probabilistic mechanics to quantum theory, which was published by Springer in Quantum Studies: Mathematics and Foundations in 2020 (open-access online in 2019).

The second main opinion, in particular promoted by Bohr in discussions with Einstein, claims that QT is a “complete” theory for individual particles. This “individuality interpretation” is more common than the ensemble interpretation despite the obvious fact that QT makes probabilistic predictions (this fundamental discrepancy is the origin of all the ongoing discussions). The prevailing opinion seems to be that the question has already been decided and one should not call into doubt the chosen path.

To understand QT means to clarify its relation to classical physics. But what exactly means “classical physics”? If we accept Bohr’s individuality interpretation, we will identify “classical physics” with classical mechanics, the classical theory of individual particles. On the other hand, if we prefer Einstein’s ensemble point of view, we will identify “classical physics” with a probabilistic theory of classical particles. Our objects to study are then statistical ensembles of particles rather than individual particles.

Ulf Klein, From probabilistic mechanics to quantum theory

This paper is rather technical relative to my understanding of physics and probability theory, so I’ve had to go through it several times; each time I do, I become more enamored with it. He begins by analyzing the degrees of freedom of classical mechanics, probabilistic mechanics, and quantum mechanics, introducing I to denote a finite approximation to an uncountably infinite number of degrees of freedom. With this, a dynamical system which requires 2n (n ∈ Z+) numbers for its description in classical mechanics requires 2I^n numbers in quantum mechanics and 2I^(2n) numbers in probabilistic mechanics, leading to his working hypothesis: “quantum mechanics is a configuration space version of the probabilistic description of classical particles.”
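To get a feel for these counts, here is a toy calculation in Python, with an arbitrarily chosen small value of I (recall I is just a finite stand-in for a continuum); the reading of the factor of 2 in the comments is my own guess:

def dof_classical(n):
    return 2 * n            # 2n numbers: n positions plus n momenta

def dof_quantum(n, I):
    return 2 * I**n         # two real numbers (e.g. real and imaginary parts) per configuration-space point

def dof_probabilistic(n, I):
    return 2 * I**(2 * n)   # two real numbers per phase-space point

n, I = 3, 10                # one particle in three dimensions, a 10-point grid per axis (toy choice)
print(dof_classical(n), dof_quantum(n, I), dof_probabilistic(n, I))  # 6 2000 2000000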

In the probabilistic classical description, the entities being described are not individual trajectories but, rather, ensembles of them, distinguished by their states q_0, p_0 at a time t_0. The individual trajectories in the ensemble are deterministic, indeterminism entering due to uncertainty with regard to just which q_0 and p_0 one is dealing with. In other words, as I understand things, a sample space is constructed composed of the canonical equations in Hamiltonian form (the Hamiltonian describes the observable), one set for every possible q_0 and p_0. To this is added a time-dependent probability density which obeys the Liouville equation (written out just below). This is all expressed in the Lagrangian picture and then transformed into the Eulerian picture, which “represents the most convenient way to study classical ensembles,” and in which “there is no way to describe individual particles any more.”
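The Liouville equation referred to here is the standard one: the phase-space probability density is carried along the deterministic Hamiltonian flow,

\frac{\partial \rho}{\partial t} + \sum_k \left( \frac{\partial H}{\partial p_k}\frac{\partial \rho}{\partial q_k} - \frac{\partial H}{\partial q_k}\frac{\partial \rho}{\partial p_k} \right) = 0,

or, in Poisson-bracket notation, \partial\rho/\partial t + \{\rho, H\} = 0.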

In the Eulerian formulation you have a probability density, derived from the Lagrangian probability density, and you have a function for calculating expectation values; these are the “basic building blocks” of the probabilistic theory under construction, a theory he calls Hamilton-Liouville-Lie-Kolmogorov theory (HLLK). To these two building blocks he adds a classical action which describes the deterministic evolution in phase space. This redundancy is introduced to facilitate the formulation of a theory in which the decoupling between the deterministic action and the probability density, key to classical probabilistic mechanics, can break down. This decoupling is what distinguishes classical probabilistic mechanics from quantum mechanics, and its breakdown is, I feel certain, necessitated by entanglement (the nonlocality that Aspect and Clauser actually demonstrated).

Everything up to this point has been based on the Hamiltonian observable with its time parameter, so the next step is to extend HLLK to arbitrary observables with arbitrary parameters. In its completely general form, HLLK seems to correspond to the class of all possible experimental arrangements, each associated with an observable defined relative to a parameter, i.e., a parameterized observable. Immediately following this generalization, he compares the general HLLK to both time-independent and time-dependent probability theory, noting that this comparison indicates a need to reduce the multi-parameter structure of the general HLLK. But before engaging in this reduction he shows that if one wishes to form a new state variable which is a function of both the probability density and the classical action describing the deterministic physics, then one is led immediately to the complex algebra. This is the breakdown of the decoupling previously mentioned and, when it is carried out in the completely general HLLK, the basic equations of HLLK strongly resemble the Schrödinger equation; i.e., the general HLLK and quantum mechanics share the same mathematical structure.
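I would guess, though the reader should check Klein’s paper for his exact construction, that the new state variable is of the familiar form

\chi = \sqrt{\rho}\; e^{\,i S / s},

with s some constant carrying the dimension of an action; the idea, as I understand it, being that a linear evolution equation exists only for the complex combination, not for ρ and S separately, and that only after the reduction to configuration space does s get identified with ℏ (see the remarks near the end of this article).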

He uses physical considerations, the postulate that a “multi-parameter theory, taking the dependence of all densities ρ_A(ω,α) on the corresponding parameters α into account, does probably not correspond to anything realized in nature,” to reduce the multi-parameter HLLK to the single time-parameter HLLK. Hence, for all observables A ≠ H and parameters α ≠ t, only stationary solutions not depending on the parameter are taken into account. The result is exactly analogous to quantum mechanics, the structural similarities being: a classical counterpart of the Schrödinger equation, which describes flow in phase space; a classical counterpart of the commutation relations defined on operators; and a classical counterpart of Born’s rule for calculating probabilities. He then derives the quantum mechanical correlates themselves by invoking “a principle of Nature:”

All fields of physics, including the complex-valued wave functions of QT, are functions of q, t only and do not depend on the generalized momenta p. This seems intuitively clear, because the variables q, t are directly measurable coordinates of our every day’s “real” space–time environment. The p’s, on the other hand, are derived quantities which cannot be measured directly. Thus, the coordinates q, t are most fundamental (they are of course not unique, but all possible coordinate systems must be derivable from them).

He uses simple “quantization rules” to project away the momenta and the result is a derivation of these key components of quantum mechanics defined on configuration space. In his conclusion he briefly discusses a more complex projection procedure involving momentum fields which yields spin 1/2 particles; this is what is contained in his most recent post to the arxiv, A reconstruction of quantum theory for spinning particles.
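I take the “quantization rules” to be of the standard kind, in which the momentum variables are projected out by a replacement such as

p_k \;\longrightarrow\; \frac{\hbar}{i}\,\frac{\partial}{\partial q_k}

(this specific form is my assumption; consult the paper for Klein’s precise procedure), the end result being the familiar configuration-space structures of quantum mechanics:

i\hbar\,\frac{\partial \psi}{\partial t} = \hat{H}\,\psi, \qquad [\hat{q}_k, \hat{p}_l] = i\hbar\,\delta_{kl}, \qquad P(q,t) = |\psi(q,t)|^2.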

It is amazing to me, all that Professor Klein has clarified here. With regard to quantum foundations, I would argue that these results are of greater importance than Bell’s famous paper (linked to above) and the PBR paper: quantum mechanics is a substructure of an extended version of classical probabilistic mechanics with a coupled probability density and action (necessitated by entanglement); complex amplitudes are necessitated by this coupling and the requirement of linearity; and Planck’s constant only becomes meaningful upon the reduction to configuration space, indicating the existence of a deeper, more fundamental, and more complete theory.

As Professor Klein points out in all of his papers, these results strongly support Einstein’s original argument that Quantum Theory is a predictively incomplete ensemble theory, a statistical theory. And Einstein’s belief, stated at the conclusion of the EPR paper, was that a more complete theory was possible and that this theory would provide a mechanism for the mediation of quantum information implicit in nonlocality.

No story in space–time can tell us how nonlocal correlations happen; hence, nonlocal quantum correlations seem to emerge, somehow, from outside space–time.

Nicolas Gisin, Quantum Nonlocality: How Does Nature Do It?

Perhaps it has something to do with superluminal signaling. Recently I was reading a PhD thesis by a young physicist in training from New Zealand, Quantum Entanglement in Time, and at the very end he speculates on using the Lorentz factor as a spacetime entropy measure. I was playing around with this and realized that the square of the Lorentz factor is hidden within the velocities of the de Broglie particle/wave construct, and one of those velocities is superluminal! Richard Feynman gives a very straightforward treatment in his Lecture 48.

Let v_w designate the phase velocity, let v_g designate the velocity of the envelope wave, and let v_p designate the velocity of the particle. Then from the de Broglie relations, λ = h/p and ν = E/h, together with v_w = λν, Feynman derives v_w = E/p. Then, using the relativistic energy-momentum relation, E^2 = c^2p^2 + m^2c^4, he shows clearly that v_w > c. He then proceeds to show that v_g = c^2p/E, hence v_g = (c^2mv_p)/(mc^2) = v_p. But then, from v_g = c^2p/E and v_w = E/p, we get v_g v_w = c^2 or v_w = c^2/v_g, which tells us by how much v_w exceeds c.
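Here is a short Python check of these relations (my own, with an arbitrary sample velocity of 0.6c): the phase velocity v_w = E/p comes out superluminal, the envelope velocity v_g = c^2p/E coincides with the particle velocity, and their product is c^2.

import math

c = 2.998e8        # speed of light, m/s
m = 9.109e-31      # electron rest mass, kg (any massive particle will do)
v_p = 0.6 * c      # sample particle velocity, an arbitrary choice

gamma = 1.0 / math.sqrt(1.0 - (v_p / c)**2)
E = gamma * m * c**2     # relativistic energy
p = gamma * m * v_p      # relativistic momentum

v_w = E / p              # de Broglie phase velocity
v_g = c**2 * p / E       # envelope (group) velocity

print(v_w > c)                          # True: the phase velocity is superluminal
print(math.isclose(v_g, v_p))           # True: the envelope moves with the particle
print(math.isclose(v_w * v_g, c**2))    # True: v_w * v_g = c^2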

Consider the Lorentz factor, √(1 − v^2/c^2), relative to the difference between the de Broglie velocities: v_w − v_g = c^2/v_g − v_g = (c^2 − v_g^2)/v_g, so (1/v_w)(v_w − v_g) = 1 − (v_g^2)/c^2! The square of the Lorentz factor is hidden right there in the relation between these de Broglie velocities!
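Spelling the algebra out (with v_g equal to the particle velocity v):

\frac{v_w - v_g}{v_w} \;=\; 1 - \frac{v_g}{v_w} \;=\; 1 - \frac{v_g^2}{c^2} \;=\; \left( \sqrt{1 - \frac{v^2}{c^2}} \right)^{2},

using v_w = c^2/v_g in the middle step.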

This article is related to this one:

Something is Wrong in the State of Quantum Electro-Dynamics; Something is Wrong in the State of Quantum Chromo-Dynamics, by Wes Hansen, Medium, December 2022.
