Read this and tell me in the comments if you will invest in my Quantum Computation SPAC
The key element is in Intro to quantum computing: Qubits, superposition, & more, in the section "How does a quantum computer work?" (emphasis theirs):
Quantum computers are based on quantum superposition. Superposition allows quantum objects to simultaneously exist in more than one state or location. This means that an object can be in two states at one time while remaining a single object. This allows us to explore much richer sets of states.
Quantum computers use the entanglement of qubits and superposition probabilities to perform operations. These operations can be manipulated so that certain probabilities are increased or decreased, which leads us to the correct and incorrect answers we’re looking for.
Quantum superposition comes about due to the complex algebra, by which I mean the field of complex numbers. The important element here is the dependence on so-called quantum superposition. And, of course, they are also finding that something called quantum contextuality plays a key role; non-local correlation (entanglement) is a special case of quantum contextuality. But notice that non-local correlations as currently defined require superposition also, in that the spin states, for example, are in superposition until collapse, at which point the entangled qubits are anti-correlated. So, let's examine quantum superposition, where I shall play the role of advocatus diaboli (devil's advocate).
I’ll begin where most should, with the Compton Effect, which will come back into the discussion later. The link immediately above is to the hyperphysics site hosted by Georgia State University, but I am referencing R. Shankar’s textbook, Fundamentals of Physics II: Electromagnetism, Optics, and Quantum Mechanics. To successfully work the exercises in Professor Shankar’s textbooks, you need to know multivariable calculus and be familiar with the standard processes for solving differential equations. But even if you don’t know the requisite math, these textbooks are well worth just reading through. From his Section 19.3.2, pages 414 and 415 (emphases mine):
Now for Compton’s 1927 experiment, which provided very direct evidence of photons. Imagine shining X-rays, i.e. light of some λ (or wave number k = 2π/λ) along the x-axis on a free and static electron, as shown in the left half of Figure 19.4. (In reality the electron is bound to an atom. However, the incident X-ray photons have so much energy that the initial electron may be treated as free and at rest.)
The electron scatters the light into some direction and recoils in some other direction, as shown in the right half of the figure. Forget the electron and just observe the scattered light. The light scattered in a direction θ relative to the x-axis is found to have a wavelength λ’ obeying
λ′ − λ = (2πℏ/mc)(1 − cos θ)
Technically speaking, with the Copenhagen Interpretation, which informs quantum computation, the electron bound to an atom exists as a probability wave ψ described by Schrödinger's wave equation until observed or measured, a process which is not very well defined or understood. For that matter, if you listen to the experts, diffraction experiments indicate that even the atom exists as such a probability wave until observed or measured. So how does a photon scatter off of a probability wave, and in such a precise manner? This is a good question for the devil's advocate, and one which he cannot answer.
Ignoring that for the moment, let’s consider operationally how the double-slit, or single-slit for that matter (it demonstrates diffraction as well), works. From Professor Shankar’s pages 417–419 (emphases mine):
The double-slit experiment for electrons aimed at testing de Broglie’s hypothesis is designed in pretty much the same way as for photons, but with some obvious and inevitable differences. First, the source of electrons is different — it could be some electrode that boils off electrons with negligible kinetic energy K. These are then accelerated to some fixed momentum p by allowing them to fall through a potential V such that
K = p^2/2m = eV.
[A] velocity filter may be used to ensure that all electrons reaching the slits have the same p and hence the same de Broglie wavelength. All this was simply accomplished in the case of light by a monochromatic source.
To detect electrons, you replace the photographic film with a row of electron detectors.
[T]hese detectors can amplify a single electron that hits them into an avalanche that leads to a macroscopic current.
[N]ow the surprise is not that the electron hits only one detector, depositing all its charge, energy, and momentum there. It is supposed to do that; it is after all a particle with localized attributes. What is surprising is that when two slits are open, you get the interference pattern.
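Shankar's acceleration relation above is easy to sanity-check numerically. Here is a minimal Python sketch, using only standard CODATA constants; the 100 V figure is an arbitrary illustrative choice, not from the text:

```python
import math

# Physical constants (CODATA, SI units)
h = 6.62607015e-34      # Planck's constant, J*s
m_e = 9.1093837015e-31  # electron mass, kg
e = 1.602176634e-19     # elementary charge, C

def de_broglie_wavelength(volts):
    """Non-relativistic de Broglie wavelength of an electron
    accelerated from rest through `volts` volts:
    K = p^2 / 2m = eV  =>  p = sqrt(2 m e V),  lambda = h / p."""
    p = math.sqrt(2 * m_e * e * volts)
    return h / p

# A 100 V electron has a wavelength of roughly 1.2 angstroms,
# comparable to atomic spacings, which is why crystal lattices
# diffract electrons.
print(de_broglie_wavelength(100))  # ~1.23e-10 m
```

This also makes clear why the velocity filter matters: the wavelength depends only on p, so a spread in momentum means a spread in λ and a washed-out pattern.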
This is what leads to the crazy idea that electrons and particles in general, even atoms and molecules according to a recent winner of the Nobel Prize, exist in some smeared out state which can interfere with itself and contain an essentially unbounded amount of information (finite, but quite large). Furthermore, if we determine which slit the electron passed through, the interference pattern disappears, presumably because the act of “observing” the electron causes the smeared out entity to collapse. This is all very mysterious of course. From Professor Shankar’s page 431 (emphasis his):
Suppose we do not buy this notion that an electron does not go through a particular slit. We place a glowing lightbulb right after the two slits as shown in Figure 19.8. Whenever an electron makes it past the slits, we will see for ourselves which slit it went through. Then there can be no talk about not going through a definite slit or not having a definite trajectory. Every electron that registered a click at a detector is then classified as having passed through S_1 or S_2, as having followed a definite trajectory.
Indeed this is what will happen if every electron that was picked up by the detector was also seen on its way to the detector. But once in a while some electrons may make it to the detector without being observed near the slits. So in addition to electrons labeled as coming via S_1 or S_2, there is a third species of electrons: those which were not observed, which slipped by. The reasonable assumption that they too would behave like the others we saw is wrong. They profoundly alter the distribution.
Okay, but how, exactly, is the “which slit” information collected? He talks about a “glowing lightbulb,” so if you guessed Compton scattering, then pat yourself on the back. They maintain a laser beam across the slits; a laser is monochromatic, so every photon has the same wavelength λ. They use a photographic plate or film to detect the impact of photons from the beam after they pass the slits, much as they do with the photon double-slit. As he explains on page 409, each pixel on the plate/film receives a quantized energy and momentum from the impacting photon, and from this the wavelength λ′ is determined. From the Compton relation above the scattering angle θ is determined, and this enables them to infer the location of impact: where the photon, a localized entity, scattered off of the electron, another localized entity. But as he states above, the electron recoils from the photon impact. This recoil represents a momentum exchange between electron and photon, and it alters the trajectory of the electron enough that the interference pattern washes out.
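To make the inference concrete, here is a minimal Python sketch of the Compton relation and its inverse (the π/3 scattering angle is just an illustrative value):

```python
import math

# Physical constants (CODATA, SI units)
h = 6.62607015e-34        # Planck's constant, J*s
m_e = 9.1093837015e-31    # electron mass, kg
c = 299792458.0           # speed of light, m/s
lambda_C = h / (m_e * c)  # Compton wavelength, ~2.43e-12 m

def compton_shift(theta):
    """Wavelength shift lambda' - lambda for a photon scattered
    through angle theta (radians) off a free electron at rest."""
    return lambda_C * (1 - math.cos(theta))

def scattering_angle(delta_lambda):
    """Invert the Compton relation: recover theta from the
    measured shift, as in the which-slit inference."""
    return math.acos(1 - delta_lambda / lambda_C)

# Round trip: a measured shift pins down the scattering angle.
theta = scattering_angle(compton_shift(math.pi / 3))

# Backscattering (theta = pi) gives the maximum shift, 2 * lambda_C.
print(compton_shift(math.pi))  # ~4.85e-12 m
```

The point of the inversion is exactly the one made above: a single measured λ′ yields a single θ, i.e. one localized scattering event, not a wave interfering with a wave.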
I’m sorry, but waves do not scatter off of waves; rather, they interfere constructively or destructively. Electrons take a definite trajectory through one slit or the other. Another immediate question for the devil’s advocate: assuming the electron takes every path, i.e. exists as a smeared out probability wave from the cathode source to the detector, then why doesn’t it collapse when it interacts with the double-slit barrier? They assume the interference pattern is caused by the probability wave interacting with the barrier, so why does that NOT cause collapse but interacting with the detector does? Or why does that NOT cause collapse but interacting with a “which slit” photon does? This is very mysterious.
Okay, but we’re still left with the question, what creates the interference pattern? Because we DO see a pattern indicating interference when the intensity of the beam, the source, is reduced to the point that only one particle at a time interacts with the double-slit barrier.
In a previous article of mine, Alain Aspect, John Clauser, Anton Zeilinger and Bohr’s Correspondence Principle: A Myth Dispelled, I summarize the recent work in the quantum foundations by Ulf Klein, describing how he derives Quantum Theory (QT) from a theory of probabilistic classical mechanics he calls Hamilton-Liouville-Lie-Kolmogorov theory (HLLK). Here is the key takeaway from that article, but I encourage you to read the entire thing:
It is amazing to me, all that Professor Klein has clarified here. With regards to the quantum foundations, I would argue that these results are of greater importance than Bell’s famous paper and the PBR paper: quantum mechanics is a substructure of an extended version of classical probabilistic mechanics with a coupled probability density and action (necessitated by entanglement); complex amplitudes are necessitated by this coupling and the requirement of linearity; Planck’s constant only becomes meaningful upon the reduction to configuration space, indicating the existence of a deeper, more fundamental, and more complete theory.
In other words, QT is a statistical theory, which is not surprising considering the expectation values of QT correspond directly to the statistical mean and the uncertainties of QT correspond directly to the statistical standard deviation. It differs from ordinary probabilistic mechanics, statistical mechanics, in that the probability density is coupled to the action, which is why the complex algebra is necessary.
More recently, in The Phase of the Schrödinger Wave Function and Spin: An Improper Transformation, I summarize the latest work of Professor Klein, in which he uses a more complex projection involving momentum fields to derive QT for spin-1/2 particles and to show how it strongly supports the semi-classical Maxwell-Dirac theory of David Hestenes, where Professor Hestenes uses Oliver Consa’s Helical Solenoid Model extension of his own Zitter Model, together with the toroidal solutions to Maxwell’s equations discovered by Rañada, to unite Maxwell and Dirac at the foundational level. The key takeaways there are:
In the Zitter model the electron is a point charge which traces out a helical orbit centered on a streamline through spacetime. In Professor Hestenes’ Geometric Algebra language, electron spin is represented by a 1-vector summed with a 2-vector, this last, geometrically speaking, being an oriented plane. The helix in the image is the electron’s field (“pilot” wave) and the 2-vector is perpendicular to the helical axis. Spin is a measure of helical orientation. From [2] (emphasis mine):
Basic features of the zitter model can now be summarized as follows:
- The spacetime history of the electron is a lightlike helix.
- Electron mass (≈ zitter frequency) is a measure of helical curvature.
- Electron phase (≈ zitter angle) is a measure of helical rotation.
- Electron spin is a measure of helical orientation.
- Electron zitter generates a static magnetic dipole and rotating electric dipole!
Here is an image of the two possibilities corresponding to spin +1/2 and spin −1/2:
Okay, so the transformation is actually a composition of a three-dimensional glide-rotation, which is proper and represents a phase shift, and a two-dimensional plane reflection, which is improper. Hence, a phase shift of 2π actually corresponds to an improper isometry which, when composed with its inverse, also a 2π phase shift, becomes proper, i. e. preserves or restores helical orientation (spin). This explains the 4π phase factor.
To answer the second question, the origin of the anomalous moment, in Oliver Consa’s Helical Toroidal Electron Model [3] there is a g-factor entirely dependent on the geometry, which he calls the helical g-factor,
g′ = √(1 + (rN/R)^2)
where R is the radius of the torus, r its thickness, and N ∈ Z^+ the number of turns. Then, from pages 84 and 85:
“In calculating the angular momentum, the rotational velocity decreases in the same proportion as the equivalent radius increases, compensating for the helical g-factor. However, in the calculation of magnetic moment, the rotational velocity decreases by a factor of g′, while the equivalent radius increases by a factor approximately equal to g′ squared. This is the cause of the electron’s anomalous magnetic moment.”
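Consa's helical g-factor is simple enough to compute directly. A minimal Python sketch; the radius ratio and turn count below are hypothetical values chosen only to exercise the formula, not Consa's fitted parameters:

```python
import math

def helical_g_factor(r, R, N):
    """Consa's geometric g-factor, g' = sqrt(1 + (r*N/R)^2),
    where R is the torus radius, r its thickness, and N the
    (positive integer) number of turns of the helix."""
    return math.sqrt(1 + (r * N / R) ** 2)

# With zero thickness (r = 0) the helix degenerates and g' = 1,
# i.e. no anomaly; any finite thickness makes g' > 1.
assert helical_g_factor(0.0, 1.0, 1) == 1.0

# Hypothetical illustrative values, NOT Consa's fitted parameters:
print(helical_g_factor(0.001, 1.0, 50))  # slightly above 1
```

Note that the entire anomaly, on this account, comes out of the geometry: nothing in the function above refers to field-theoretic radiative corrections.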
Finally, in his Section 9.2, Professor Klein checks the limit of the derived Pauli-Schrödinger equation as the reduced Planck constant goes to zero and finds that not all spin variables are destroyed. From pages 27 and 28:
“The survival of spin variables in the case ℏ=0 is no surprise in our theory, since we have identified the vertical components of the momentum field as the origin of quantum spin. In all works in which Eqs. (152), (153) were derived so far, the starting point was the quantum mechanical Pauli-Schrödinger equation (151), which was then rewritten, using a representation like (75) (see [38], [26], [61]). The limiting case ℏ=0 was rarely dealt with in those theories, see however [61]. One reason for this might be that this limiting case is not compatible with the prevailing interpretation of spin as a purely quantum mechanical phenomenon. Due to this interpretation, all spin variables (or the corresponding terms in a Lagrangian function) should disappear from the theory in the limit ℏ=0. The fact that this is not the case led Yahalom to the conclusion that the Pauli theory ‘has no standard classical limit’ [61]. In fact, one could have concluded from this fact that spin cannot be a purely quantum mechanical phenomenon.”
His idea seems to be that quantum spin has its origins in his theory QA, which lies in the borderland between the classical and the quantum; it’s an ensemble phenomenon with origins in these momentum fields or, more precisely, the “Clebsch potentials” defined on these momentum fields. What he fails to take into consideration here is that in Professor Hestenes’ Zitter Model the momentum decomposes the spin into a vector component and a bivector component. From [2], page 7:
“The momentum determines an intrinsic decomposition of the spin into a spatial part specifying the tube cross section and a temporal part mr specifying the temporal pitch of the helix.”
If you take Planck’s constant to zero, all that disappears is the bivector component describing the spatial part; the vector component survives. This spatial component disappears for the same reason position/momentum uncertainty does. From Professor Hestenes’ paper Quantum Mechanics from Self-Interaction, page 9:
“If we wish to localize a free electron, the zbw implies that the best we can do is confine it to a circular orbit of radius r = ℏ/mc with a fixed center. Therefore, the x-coordinate of the electron in the orbital plane will fluctuate with a range Δx = ℏ/mc. At the same time, since the electron travels at the speed of light with a zeropoint kinetic energy mc^2/2, the x-component of its momentum fluctuates with a range Δp_x = mc/2. Thus, we obtain the minimum uncertainty relation
ΔxΔp_x = ℏ/2.
We now see the uncertainty relations as consequences of a zero-point motion with a fixed zero-point angular momentum, the spin of the electron. This explains why the limiting constant ℏ/2 in the uncertainty relations is exactly equal to the magnitude of the electron spin.”
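The cancellation Hestenes describes is worth seeing explicitly: the mass and the speed of light drop out of the product Δx·Δp_x, leaving exactly ℏ/2. A quick numerical check in Python:

```python
hbar = 1.054571817e-34   # reduced Planck constant, J*s
m_e = 9.1093837015e-31   # electron mass, kg
c = 299792458.0          # speed of light, m/s

dx = hbar / (m_e * c)    # zitter orbit radius: the reduced Compton wavelength
dp = m_e * c / 2         # zero-point momentum fluctuation
product = dx * dp        # m_e and c cancel, leaving exactly hbar / 2

print(dx)       # ~3.86e-13 m
print(product)  # ~5.27e-35 J*s, i.e. hbar / 2
```

Swap in any other mass and the product is unchanged, which is the force of his point: the bound ℏ/2 is set by the fixed zero-point angular momentum, not by the particle.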
And this returns us to our question as to where the interference pattern originates.
Here’s the pattern we see:
The image in the bottom right-hand corner below is the Fourier transform of the double-slit barrier:
Note also how the single-slit Fourier transform exactly matches the single-slit diffraction envelope. These Fourier transforms are the frequency representations of the actual objects, the single-slit and double-slit barriers, respectively. ALL spacetime objects have a frequency representation analogous to this.
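The claim that the far-field pattern is the Fourier transform of the barrier is easy to reproduce numerically. A minimal NumPy sketch; the grid size, slit width, and slit separation are arbitrary illustrative values in grid samples:

```python
import numpy as np

n = 4096
x = np.arange(n) - n // 2          # integer sample positions
slit_width, slit_sep = 80, 400     # in samples; illustrative only

# 1-D transmission functions: 1 inside a slit, 0 elsewhere.
single = (np.abs(x) < slit_width // 2).astype(float)
double = np.roll(single, slit_sep // 2) + np.roll(single, -slit_sep // 2)

# The Fraunhofer (far-field) intensity is |FT of the aperture|^2.
I_single = np.abs(np.fft.fftshift(np.fft.fft(single))) ** 2
I_double = np.abs(np.fft.fftshift(np.fft.fft(double))) ** 2

# By the Fourier shift theorem the double-slit intensity is the
# single-slit (sinc^2) envelope modulated by cos^2 fringes:
#   I_double(k) = 4 * cos^2(pi * k * slit_sep / n) * I_single(k),
# so it never exceeds 4x the single-slit envelope.
assert np.all(I_double <= 4 * I_single + 1e-6)
```

Plot `I_single` and `I_double` side by side and you get exactly the envelope-plus-fringes pattern described above: the barrier's geometry alone, via its Fourier transform, already encodes the interference pattern.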
From William Tiller’s book written for the lay public, PsychoEnergetic Science: A Second Copernican-Scale Revolution (emphasis mine):
Using this particular duplex-space perspective, one can see an entirely different explanation for the very famous Young’s double slit experiment from the era of the classical mechanics paradigm. The conventional, single-space explanation (the old space and time explanation) saw the result as an interference of the light waves entering the two parallel slits and providing constructive/destructive superposition of these waves behind the slits. In that model, the slit structure itself contributes nothing but the two, parallel gap openings. This duplex-space perspective says that the slit structure itself, without the light waves, already has an R-space substance interference pattern existing around the slit regions of the D-space structure. The present hypothesis is that it is this R-space pattern that guides the light into its maxima and minima D-space intensity locations behind the slits.
From David Hestenes’ paper, Zitterbewegung Structure in Electrons and Photons, pages 21 and 22 (emphases mine):
This provides a promising mechanism for quantized momentum transfer in diffraction. For we know that quantized states in QM are determined by boundary conditions on the phase. Successful calculation of diffraction patterns along these lines would provide strong evidence for the following claim: the vacuum surrounding electromagnetically inert matter is permeated by a vector potential with vanishing curl. Remarkably, the same mechanism would explain the extended Aharonov-Bohm (AB) effect [66]. Evidently, then, the causal agents for diffraction and the AB effect are one and the same: a universal vector potential permeating the vacuum (or, Aether, if you will) of all spacetime, much as proposed by Dirac [67].
[A]ccordingly, we conclude that diffraction is “caused” by the vacuum surrounding material objects. In other words, diffraction is refraction by the vacuum!
Professor Tiller states in his book that one can think of his R-space as the aether, so they are both essentially saying the same thing, although Professor Hestenes models the aether as a conserved fluid. Either way, these explanations seem much more plausible than the Copenhagen explanation, which doesn’t provide an explanation for the observed momentum exchange. From Professor Hestenes once again:
We still have the problem of identifying a plausible mechanism for momentum exchange between each diffracted particle and the slits, a causal link which is missing from all accounts of diffraction by standard wave mechanics or by Pilot Wave theory. Note that momentum transfer is observable for each scattered particle, whereas the diffraction pattern conserves momentum only as a statistical average. Evidently the only way to account for this fact is by reducing diffraction to quantized momentum exchange between each particle and slit. To that end, [62] provides a detailed analysis of optical diffraction patterns explained by photon momentum exchange.
Duane was the first to offer a quantitative explanation for electron diffraction as quantized momentum exchange [63]. A more general argument using standard quantum mechanics has been worked out by Van Vliet [64, 65]. These explanations suffer from the same disease as Old Quantum Mechanics in failing to account for the density distribution in the diffraction pattern.
This not only undermines so-called quantum computation, it also shows that it’s been a hustle all along. This is not really surprising since it originated with Richard Feynman, and it is well known that Feynman frequented the strip joints, hanging out with the pimps and the dope dealers. Two relevant articles from SciAm: Richard Feynman, Sexism, and Changing Perceptions of a Scientific Icon, which SciAm removed from its blog, and When Scientists Sin, which, hypocritically, mentions the cold fusion episode. There is no bigger fraud in science than Michael Shermer, the author of that last. From Preliminary survey on cold fusion: It’s not pathological science and may require revision of nuclear theory:
For the total articles found, this work listed a total of 5249 publications, including conference articles, conference presentations, journal articles, and patents. Amongst those, 2202 are experimental works reporting results of experiments, being 1921 successful and 281 unsuccessful. In total, there are 375 distinct research groups involving 3460 researchers. There is, indeed, cooperation between research groups, but they are rather rare, most work is done by isolated groups.
And many of the early failures were either fraudulent or due to incompetence.
There is no such thing as quantum computation; it is a massive fraud. Consider the supposed 1,000-qubit quantum computer developed by NASA and D-Wave: it’s just one bead in the shell game. Especially if you consider this latest from the financial sector: Quantum computing firm D-Wave to IPO via $1.6bn SPAC merger: Company looks to become third listed quantum firm via SPAC deal. That’s from 2022, and I would just suggest you Google “Are SPACs scams” before you rush to invest. And, of course, all of the government regulators and politicians are either in on it or asleep at the wheel.