A Critique of Interpretations of Quantum Mechanics
Quantum mechanics is supposedly the theory that underlies the universe. It proves, we are told, that there are cats that can be both dead and alive, that there is fundamental randomness in the universe, that the very act of observation can alter an experiment, that particles can take many different paths simultaneously, that even an objective reality may not exist, and so on and so forth.
It is so mystical, and truly shows that nature defies all reason.
…or does it?
Upon further inspection, it becomes clearer and clearer that the physics community has been entirely deceiving the public, and this is not something you need a physics degree to even understand. I will demonstrate to you throughout this article that none of these claims are to be believed, and that many physicists have largely become religious mystics, dabbling in bizarre fairy tales that have no connection with reality, while passing them off as fact to their audience.
Fundamental Randomness
Let’s start with the first claim. This claim is that quantum mechanics is fundamentally random and that it proves determinism is false. The physicist Michio Kaku claims that this even proves free will exists!
So are they telling the truth? I mean, it seems rather bizarre to claim something you cannot predict must necessarily be random. We already know that a sufficiently chaotic system can appear unpredictable while being deterministic. This was already demonstrated by Henri Poincaré. A similar effect is the avalanche effect, the basis of many cryptographic algorithms: a sort of mathematical blender that converts one set of numbers into another that seemingly has no relation to the first and appears uniformly random, yet is entirely deterministic. A lot of modern cryptography specifically relies on being able to create seemingly random distributions that are in reality deterministic.
The claim really comes down to a misrepresentation of Bell’s theorem. The video below discusses Bell’s theorem in an attempt to “prove” hidden variables are not real. The term “hidden variables” refers to some initial state that could be used to deterministically predict the final state, which would be required if the outcome is not random.
At first, this video might seem convincing. However, if it is correctly interpreting Bell’s theorem, then why did John Bell himself not think so? Bell was a big fan of hidden variable theories and even contributed to the development of one. Really, just let me quote Bell himself, to show his attitude toward people like MinutePhysics.
why then had Born not told me of this “pilot wave?” If only to point out what was wrong with it? Why did von Neumann not consider it? More extraordinarily, why did people go on producing “impossibility” proofs? When even Pauli and Heisenberg could produce no more devastating criticism of Bohm’s version than to brand it as “metaphysical” and “ideological?” Why is the pilot wave picture ignored in text books? Should it not be taught, not as the only way, but as an antidote to the prevailing complacency? To show that vagueness, subjectivity, and indeterminism, are not forced on us by experimental facts, but by deliberate theoretical choice?
— John Bell, “On the Impossible Pilot Wave”
Now, I am not here to defend this “pilot wave” theory. However, Bell’s response to people attempting to bury this theory is the direction this article is taking, that there seems to be a clear attempt to misrepresent and cover up certain alternative ideas in order to make quantum mechanics seem more bizarre than it actually is.
So, what does Bell’s theorem show? Bell’s theorem shows that quantum mechanics can be used to produce statistical correlations that could not possibly occur unless there are nonlocal effects going on. The term “nonlocality” here refers to effects that do not have to travel through a medium; they do not “travel” at all. Two particles can interact with each other as if they were one and the same object, instantly, even if they are millions of kilometers apart.
This theorem was largely a response to a paper sometimes referred to as the EPR paper. This paper presents a seeming paradox: quantum mechanics appears to predict faster-than-light effects. Let me try to explain this paradox in a very simple way.
Let us assume that fundamental randomness is real. It should then be possible to take a qubit and put it through a logic gate that transforms its state so that it is neither 0 nor 1, but something indeterminate. Now, imagine you have a second qubit in the 0 state, and you feed the first and second qubits together into another logic gate called the CNOT gate. This gate flips the second qubit’s state if the first qubit is a 1, and leaves it alone if the first is a 0.
Recall that we said the first qubit is in an indeterminate state and that the CNOT (CX) gate flips the second based on the value of the first. This means whether or not the second qubit’s value gets flipped is also indeterminate. Additionally, the two are correlated. If the first qubit is a 1, then the second qubit gets flipped from a 0 to a 1. If the first qubit is a 0, then the second qubit does not get flipped and remains a 0.
This means that even though both qubits become indeterminate, their indeterminacy is related: the two qubits are guaranteed to always have the same state. Even though the states are still indeterminate and fundamentally random, knowing the state of one qubit will immediately tell you the state of the other, because they are always identical.
The simple algorithm for this is shown below, written in what is known as quantum assembly. Here, qubits are always assumed to start in the 0 state. The first instruction applies the H gate to the first qubit, which places it into an indeterminate state. The second instruction is the CX gate mentioned before. This constructs what is known as a Bell pair.
h q[0];
cx q[0], q[1];
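For readers who want to see the correlation concretely, here is a minimal state-vector simulation of the same two-instruction circuit, written in plain Python with NumPy. The code and its index convention are my own illustration, not part of any standard library.

import numpy as np

# State-vector sketch of the circuit above (h q[0]; cx q[0], q[1]).
# Index convention: basis state index = 2*q0 + q1, so q0 is the leftmost bit.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard gate
I2 = np.eye(2)
CX = np.array([[1, 0, 0, 0],                     # CNOT with q0 as control, q1 as target
               [0, 1, 0, 0],
               [0, 0, 0, 1],
               [0, 0, 1, 0]])

psi = np.array([1.0, 0.0, 0.0, 0.0])             # both qubits start in the 0 state
psi = np.kron(H, I2) @ psi                       # h q[0]
psi = CX @ psi                                   # cx q[0], q[1]
print(np.round(psi, 3))                          # [0.707 0. 0. 0.707], i.e. (|00> + |11>)/sqrt(2)

# Sample measurements in the computational basis: the two bits always agree.
rng = np.random.default_rng(0)
outcomes = rng.choice(4, size=10, p=np.abs(psi) ** 2)
print([format(int(o), "02b") for o in outcomes]) # only '00' and '11' ever appear

However you interpret the state, the sampled outcomes only ever come out as 00 or 11, and that correlation is what the rest of the argument relies on.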
What Einstein and others in the EPR paper pointed out (they did not talk specifically of qubits, but the essence is the same) is that such a pair inherently implies nonlocal effects if we are to believe that these qubits are really indeterminate and do not have definite states until you measure them at the end of the experiment.
Let’s say you decide to travel a thousand kilometers away, bringing one of the qubits in the Bell pair with you but leaving the other at home. Once you reach your destination, you measure the value of your qubit. Once you have made this measurement, its state suddenly transitions from indeterminate to determinate.
The nonlocality comes from the fact that, because the two states are correlated, you also simultaneously know the exact state of the qubit that you had left at home a thousand kilometers away. The qubit at home therefore must also have, at the same time you measured the first, transitioned from an indeterminate state to a determinate state. Somehow, by making a measurement on one qubit, you can suddenly affect another qubit instantly no matter how far it is away.
Einstein thought this implied there may be some mistake in the mathematics, and this is what later led to the development of Bell’s theorem. Bell’s theorem provides a certain kind of statistical test that would show correlations that would not be achievable unless these kinds of nonlocal effects were actually real. I will not go into much detail on the mathematics behind this, but I recommend reading up on the CHSH inequality if you are interested, as it is the simplest form and fairly easy to understand.
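As a rough illustration of the kind of statistical test involved, here is a small numerical sketch of the CHSH quantity for the Bell pair above. The measurement angles are the standard textbook choice, and the observable A(t) = cos(t)·Z + sin(t)·X is just one conventional way to parameterize the settings; none of the names below come from any particular library.

import numpy as np

# CHSH sketch: correlations E(a, b) for the Bell pair (|00> + |11>)/sqrt(2),
# measuring each qubit along an angle in the x-z plane.
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
A = lambda t: np.cos(t) * Z + np.sin(t) * X       # measurement observable at angle t

phi = np.array([1, 0, 0, 1]) / np.sqrt(2)         # the Bell pair from the circuit above
E = lambda a, b: phi @ np.kron(A(a), A(b)) @ phi  # quantum prediction for E(a, b)

a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(S)   # ~2.828 = 2*sqrt(2); any local hidden variable model is limited to |S| <= 2

The point of a Bell test is that local hidden variable models cap |S| at 2, while experiments reproduce values near 2√2.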
Later, Bell tests confirmed Einstein was wrong and that these effects are actually real. It is difficult to prove whether the effects are truly nonlocal or just so fast that you do not notice the time delay. However, experimental lower bounds have placed the speed of any such influence at at least 10,000 times the speed of light, so physicists often just assume it is nonlocal. It could be caused by local factors as well, but such superdeterministic theories often have to posit things more complex than nonlocality itself; still, they are a possibility to keep in mind.
So, what is the point here? That Einstein was proven wrong and indeterminacy is real? Actually, no, the point is rather different. The takeaway that indeterminacy has been proven is a non sequitur given everything stated so far.
Recall that we went into this with the assumption of indeterminacy, and still came out with nonlocality. This means that nonlocality remains even in a nondeterministic framework. This is one of the odd parts of the MinutePhysics video. To quote it directly…
And this correlated behavior persists no matter how far away the photons and filters are from each other, even if there’s no way for one photon to influence the other. Unless, somehow, it did so faster than the speed of light. But that would be crazy.
This is a non sequitur because the “crazy” conclusion of superluminal effects does not go away if you accept indeterminacy, “fundamental randomness.” The effects still remain. This “crazy” idea is in fact exactly what motivated John Bell to take an interest in Bohm’s “pilot wave” hypothesis, which tries to explain quantum mechanics deterministically while also including superluminal effects.
This is the essence of what Bell’s theorem shows. It has no relevance at all to whether or not hidden variables exist. You cannot just assume fundamental randomness to get out of superluminal action. It happens whether hidden variables are real or not. What Bell’s theorem is about is nonlocality, not hidden variables. The only thing it tells us about hidden variables is that if a hidden variable theory is ever discovered, it must also be nonlocal.
The claim that there are no hidden variables just does not logically follow from Bell’s theorem. John Bell himself did not even think so.
It is important to always consider the principle of parsimony, sometimes called Occam’s razor, when considering how to interpret the results of experiments. You always want as few assumptions as possible. If the assumptions of classical physics can be used to explain quantum mechanics, then we have to introduce no new assumptions. However, if some of those assumptions do not apply to quantum mechanics, then we have to use different sets of rules for the two theories, which leads to a more complex worldview.
Given that all of classical mechanics is local and deterministic, interpreting Bell’s theorem as demonstrating both nonlocality and nondeterminism violates the principle of parsimony, since the second conclusion is simply not required or implied by the theorem at all. Interpreting it as demonstrating nonlocality only, as John Bell himself did, is the most parsimonious, the simplest, conclusion that can be drawn.
Psi-Onticism
Quantum mechanics often makes a lot of references to the “observer.” In the case of the qubit placed into an indeterminate state, its actual value is not treated as “determined” until the observer actually looks at it. This is known as the Copenhagen interpretation and is the view a plurality of physicists endorse.
Albert Einstein pointed out that such a view is inherently incompatible with realism. If, for example, an atom is predicted to decay with some probability, it will be in an indeterminate state until observed. This means that the atom can never actually be said to decay until someone looks at it, that it cannot be assigned any definite moment of decay in the absence of observers.
“Now we raise the question: Can this theoretical description be taken as the complete description of the disintegration of a single individual atom? The immediately plausible answer is: No. For one is, first of all, inclined to assume that the individual atom decays at a definite time; however, such a definite time-value is not implied in the description by the wave-function. If, therefore, the individual atom has a definite disintegration-time, then as regards the individual atom its description by means of the wave-function must be interpreted as an incomplete description. In this case the wave-function is to be taken as the description, not of a singular system, but of an ideal ensemble of systems. In this case one is driven to the conviction that a complete description of a single system should, after all, be possible; but for such complete description there is no room in the conceptual world of statistical quantum theory.”
— Albert Einstein, quoted from Paul Schilpp’s “Einstein: Philosopher-Scientist”
Eugene Wigner built upon this in his “Wigner’s friend” thought experiment. Imagine there are two observers, Alice and Bob, and Alice is doing some experiment where she repeatedly places qubits into indeterminate states and then measures them. Now, let’s also imagine Bob is watching Alice carry out the experiment, but is not looking at her measurement results.
From Alice’s perspective, the qubits are only in an indeterminate state for a short period of time before she measures them, at which point they “collapse” into a determinate state. From Bob’s perspective, however, because he does not yet see the measurement results, he has to describe Alice herself as being in an indeterminate state that is correlated with the state of the qubits, and this state does not become determinate until Alice reports the results of her experiment to him at the very end.
As Einstein showed, not only does this Copenhagenist viewpoint force someone to accept that an objective reality cannot be described independently of an observer, but, as Wigner showed, it further implies that two observers may experience different realities.
In fact, a particular version of the Wigner’s friend thought experiment known as the extended Wigner’s friend thought experiment is actually testable. Not only this, but the tests have been carried out and have indeed concluded that quantum mechanics is “observer dependent.”
So, what does this mean? Have scientists disproved objective reality? Is realism false? Is materialism false? Were idealists right all the time? What is even the point of the scientific method if there is no objectively real world? What would be the subject of scientific inquiry if there is no objective nature?
Nope. Recall how we started with the assumption that indeterminate states can actually exist, yet it was already shown in the previous section that this is not a valid conclusion to draw from Bell’s theorem. If one simply does not assume that indeterminate states exist, none of these problems come into play.
These indeterminate states are sometimes called superposition. A qubit, for example, can be described not just as being in the 0 or 1 state, but as a superposition of both states. When the qubit is in a superposition of states, its value is, at least according to most physicists, indeterminate, and it only becomes determined when it is measured.
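For reference, the standard notation for this is simply a weighted sum of the two basis states; this is the textbook definition restated, with nothing interpretive added:

|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1

Here, via the Born rule, |\alpha|^2 and |\beta|^2 give the probabilities of obtaining 0 or 1 upon measurement. Whether those amplitudes describe the qubit itself or merely the observer’s expectations is exactly the dispute at issue.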
This is the origin of the so-called Schrodinger’s cat thought experiment. The dogma of superposition is discussed by Chad Orzel in the video below.
Imagine placing a qubit into a box with a cat. The qubit is placed into a superposition, and the qubit’s value is then used to decide whether or not to release a poison that kills the cat, depending on whether it is a 1 or a 0. Because the qubit’s state is indeterminate, as long as you keep the box closed and do not look at the cat, the cat’s state must also be indeterminate, so it must be simultaneously dead and alive, in a superposition of both.
The author does correctly point out that the purpose of the Schrodinger’s cat thought experiment was to demonstrate the absurdity of believing in indeterminate states, since taking them literally forces you to believe that cats can exist in indeterminate states. However, right after that, he goes on to make a completely absurd claim.
Common sense suggests that the cat is either alive or dead, but Schrödinger pointed out that according to quantum physics, at the instant before the box is opened, the cat is equal parts alive and dead, at the same time. It’s only when the box is opened that we see a single definite state. Until then, the cat is a blur of probability, half one thing and half the other. This seems absurd, which was Schrödinger’s point. He found quantum physics so philosophically disturbing, that he abandoned the theory he had helped make and turned to writing about biology. As absurd as it may seem, though, Schrödinger’s cat is very real. In fact, it’s essential. If it weren’t possible for quantum objects to be in two states at once, the computer you’re using to watch this couldn’t exist.
According to this physicist, it is “essential” for the cat to be both dead and alive in order to build computers. What’s their justification for this? They refer to the double-slit experiment.
In the double-slit experiment, if particles such as electrons or photons are fired through two slits towards a screen where they can be observed, they will at first appear to land on the screen somewhat randomly. However, over time, as the particles accumulate on the screen, they converge towards a pattern that looks like what one would expect if they behaved like waves. You can imagine two waves diffracting out of both slits, interfering with each other, and then hitting the screen.
This happens even if only one photon or electron is fired at a time. Given that interference is a property of waves, yet this occurs even when individual particles are fired through the slits, one might naturally conclude that the particle somehow passes through both slits simultaneously in a superposition of both paths, interferes with itself, and then, once it hits the screen, this indeterminate state “collapses” to a determinate one.
One could imagine that the particle somehow degenerates into a wave that passes through the two slits, and once it hits the screen, it regenerates back into a particle. The wave “collapses” to a random point on the screen. This would seem like a simple, intuitive explanation of how even a single particle fired at a time could form interference patterns.
This shows that the pattern is a result of each electron going through both slits at the same time. A single electron isn’t choosing to go left or right but left and right simultaneously. This superposition of states also leads to modern technology.
Let’s begin to break down why this conclusion is entirely a non sequitur, and why the constant references to “modern technology” are intellectually dishonest: they are a method by which the author conflates his dogma with the mathematics, pretending that his dogmatic viewpoint is somehow inseparable from the mathematics and that you could not build modern technology without believing in blurred-out cats that are both dead and alive.
All this will be explained and discussed over the next few sections.
Superposition
First, if it were true that these superposition states that take multiple paths actually exist, then it should be possible to observe them. Imagine an experiment is conducted where a photon is fired through a single slit so that it diffracts and hits a random location on the screen. We can say that the photon is fired at t=0 and it hits the screen at t=1.
If this randomness is explained by the photon entering a superposition that “spreads out” until it hits the screen and then “collapses” back into a single point, this would be easily proven by moving the measuring device to where the photon should be at t=0.5 and trying to measure it there. Then, the measuring device could be moved to t=0.25, t=0.125, and so on.
However, if this experiment is actually conducted, what one finds is that the photon always has a definite position. It is never observed in a superposition of states. Particles always seem to have a definite state no matter where you measure them across their journey in any experiment, including the double-slit experiment.
The physicist has to explain this away: they have to claim that the superposition somehow evolves in a very specific way so that it cannot be observed. The Copenhagenist states that the particle only ever spreads out into this wave-like superposition precisely when you aren’t looking at it, and the moment you do look, it conveniently “collapses” into a definite position, so it always happens just outside of what we are able to confirm.
There is another view, the third most popular view among physicists as shown in the previous poll, called the Many Worlds Interpretation. The MWI was invented by a mystic who liked the Sci-Fi idea of a grand multiverse, and so he tried to come up with a way to think about quantum mechanics in those terms. He was such a mystic that he even convinced himself this grand multiverse depended on his consciousness and thus granted him immortality.
Everett firmly believed that his many-worlds theory guaranteed him immortality: His consciousness, he argued, is bound at each branching to follow whatever path does not lead to death — and so on ad infinitum. (Sadly, Everett’s daughter Liz, in her later suicide note, said she was going to a parallel universe to be with her father.)
— Eugene Shikhovtsev, “BIOGRAPHICAL SKETCH of HUGH EVERETT, III”
The MWI states that the superpositions actually represent parallel universes in a grand multiverse. The cat that is in an indeterminate state of alive and dead actually branches off into a multiverse where in one branch it is dead and in another it is alive. In the case of the double-slit experiment, the particle branches off into a multiverse where in one branch it passes through one slit and in another branch it passes through the other.
The MWI cannot tell us how this actual branching occurs, and conveniently, it is impossible to actually observe. So whenever you go to look at the location of a photon, you only ever find it in one location, because the others branched off into a grand multiverse you can never reach, and you just happen to find yourself on one of those particular branches, for some reason.
The MWI, like the Copenhagen interpretation, is a psi-ontic view. What ties psi-ontic views together is the belief that the indeterminate states described by a law of physics called the Schrodinger equation in some way really, physically exist. The Schrodinger equation describes the evolution of something wave-shaped: the wave function.
In the Copenhagen view, the particle degenerates into this wave when it is not being looked at, only to regenerate back into a particle when it is observed. In the MWI, the wave instead describes the shape of the branching multiverse.
Nobody ever witnesses this wave function for an individual particle. Particles are always observed to take only a single path, so each of these psi-ontic views has to be fine-tuned so that the particles do take all paths, but in a convenient way that you will never observe.
This is essentially a trivial feature known to any experimentalist, and it needs to be mentioned only because it is stated in many textbooks on quantum mechanics that the wave function is a characteristic of the state of a single particle. If this were so, it would be of interest to perform such a measurement on a single particle (say an electron) which would allow us to determine its own individual wave function. No such measurement is possible.
— Dmitrii Blokhintsev, “The Philosophy of Quantum Mechanics”
Second, if these superpositions really did exist, then a single qubit could hold an infinite amount of information. A single bit can only hold, well, a single bit of information, either 0 or 1. But a qubit can supposedly be in a superposition between 0 and 1, and there are an infinite number of real numbers between 0 and 1.
Imagine, for example, you wanted to send the number 1337 to a friend. You could just package it into a qubit so that it is in a superposition that is 13.37% 1 and 86.63% 0. Then, you could transmit that qubit, and the friend could extract the percentage and retrieve the message.
If this were possible, it would prove that superpositions are real. So how many bits of information can you actually transmit per qubit? One. If you have N qubits, you can only transmit N bits of information with them. This is proven by Holevo’s theorem, and the limitation is called Holevo’s bound.
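For reference, Holevo’s bound is usually stated roughly as follows (stated, not derived; S here is the von Neumann entropy and d the dimension of the system):

I(X{:}Y) \;\le\; \chi \;=\; S\!\left(\sum_x p_x \rho_x\right) - \sum_x p_x S(\rho_x) \;\le\; \log_2 d

For a single qubit, d = 2, so no measurement strategy can extract more than one bit, no matter how cleverly the amplitudes are chosen.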
There is some misconception that you can pack more information into qubits due to the algorithm called quantum superdense coding. This algorithm allows a person to transmit 2 bits of information by sending a single qubit, provided the two people communicating already share a Bell pair. That means the total number of qubits involved is 2. When the recipient receives the qubit, they read the message off not from the qubit they received alone, but from that qubit together with the other half of the Bell pair they already hold.
While this seems interesting, it does not violate Holevo’s theorem. It still requires two qubits to send two bits of information in total, and at no point are two bits of information ever acquired from a single qubit. You have to measure both qubits to retrieve the two bits. It does, again, imply there are some nonlocal effects going on, since the transformations applied to the sender’s qubit before sending end up encoding the two bits across both halves of the Bell pair. However, it does not demonstrate a single qubit ever carrying more than 1 bit of information.
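Here is a minimal sketch of the protocol in the same NumPy style as before, just to make the accounting explicit. The gate choices follow the standard superdense coding recipe, while the function name and index convention are my own illustration.

import numpy as np

# Superdense coding sketch: Alice encodes two classical bits on her half of a shared
# Bell pair, sends that one qubit, and Bob decodes by measuring *both* qubits.
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CX = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])  # control = Alice's qubit

def send(b1, b0):
    phi = np.array([1, 0, 0, 1]) / np.sqrt(2)            # shared Bell pair (|00> + |11>)/sqrt(2)
    enc = np.linalg.matrix_power(Z, b1) @ np.linalg.matrix_power(X, b0)
    psi = np.kron(enc, I2) @ phi                         # Alice acts on her qubit only
    psi = np.kron(H, I2) @ (CX @ psi)                    # Bob's Bell-basis measurement circuit
    outcome = int(np.argmax(np.abs(psi) ** 2))           # deterministic here: a single peak
    return outcome >> 1, outcome & 1                     # (b1, b0) read from the two wires

for bits in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    assert send(*bits) == bits
print("all four 2-bit messages decoded from two measured qubits")

Note that the decoding step measures two qubits, so the bookkeeping stays at one bit per qubit, exactly as Holevo’s bound requires.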
No one has ever observed a qubit holding more than 1 bit of information. You can only encode 1 bit of information into a qubit, and read off 1 bit per qubit. This means that psi-ontic believers have to somehow fine-tune their ideas to explain this. Why is it that a qubit supposedly carries an infinite amount of information, yet it just so happens that we can only ever observe it carrying 1 bit?
For the Copenhagenist, that’s because, conveniently, all the information is lost the moment you make the measurement and the wave function “collapses.” For the MWI proponent, that information is carried off into other branches of the grand multiverse which you will never be able to access.
This is akin to Last Thursdayism, the notion that the entire universe was created last Thursday, but fine-tuned in such a way that it just so happens to appear exactly as it should if it were ~14 billion years old. Those who believe in psi-ontic views of quantum mechanics are making the same kind of move.
Again, nature behaves exactly as one would expect if superposition did not actually exist, but just so happens to be fine-tuned in such a precise way that it does all this mystical superposition stuff behind our backs, exactly where we can’t see it.
Psi-Epistemicism
In all fields of science outside of quantum mechanics, probability distributions are treated as something epistemic, meaning they relate to the observer’s knowledge. Let’s give an example.
Imagine a medical trial that, over a large sample size, gives patients a new medication and then sends them home for a week. When they return, they are examined, and it is discovered that the medication cures the disease 85% of the time. Nobody really knows how it works, but the trial clearly demonstrates its effects.
The fact that the observer does not know how it works means that, for every individual patient they give the medication to in the future, they cannot be sure whether it will actually cure them. They know the effect seen over large sample sizes, but are not certain of the cause, of how the medicine specifically interacts with any particular person’s body.
The 85%/15% probability distribution here is understood as a prediction based on the observer’s knowledge, derived from data collected over large sample sizes. It is therefore not directly applicable to any specific individual; at best it can be used to make a prediction for a specific individual, which could be wrong. If the medicine is given to a large number of individuals, however, a similar distribution would be expected to show up.
This is exactly how probabilities work in quantum mechanics. When a qubit is said to be in a superposition, let’s say, one where it has a 50% chance of being a 0 or a 50% chance of being a 1, no one has ever observed the qubit in both states simultaneously. Rather, this distribution can only ever be observed over very large sample sizes. If you have thousands of qubits in the same superposition and measure them all, roughly half of them will be measured as a 1 and half as 0.
The probability distributions in quantum mechanics, derived from the wave function through the Born rule, behave exactly as they do in every other scientific field. They are not deterministic when making predictions for singular systems, but they are deterministic when making predictions for very large sample sizes. These very large sample sizes are sometimes called “ensembles.” A quantum ensemble is the behavior of a particular particle in an idealized experiment that can be repeated an unlimited number of times.
(From D.I. Blokhintsev’s The Philosophy of Quantum Mechanics)
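A trivial sketch makes the ensemble reading concrete; the numbers are illustrative, and the point is only that the 50/50 statistics show up as frequencies over many runs, never as a property visible in a single run.

import numpy as np

# The state produced by the H gate assigns Born-rule probabilities [0.5, 0.5].
plus = np.array([1, 1]) / np.sqrt(2)
p = np.abs(plus) ** 2

rng = np.random.default_rng(2)
single = rng.choice([0, 1], p=p)                   # one run: a definite 0 or 1, never "both"
ensemble = rng.choice([0, 1], size=100_000, p=p)   # many runs: the distribution appears
print(single, ensemble.mean())                     # e.g. 1 and roughly 0.5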
In the medical trial example, a psi-ontic interpretation would be equivalent to claiming that when the person receives the medicine and goes home for the week, they split into a superposition of 85% healed and 15% ill. Only when they return and receive their exam does this “wave of probability” then “collapse” into a definitive outcome. Or, as the MWI proponents might say, the patient, the moment they take the medicine, splits off into a grand multiverse where in some branches they are healed and in others they remain ill, and only when you carry out the exam do you realize which branch of the grand multiverse you are on.
In every other field of science, people would look at you funny if you said this. Yet in quantum mechanics, it is actually taken seriously. There is no reason to believe, just because you have a probability distribution for the results, that all the outcomes described by the probability distribution actually happen. The probability distribution describes a trend that one would observe over very large sample sizes and does not describe the behavior of any individual sample. The only reason probabilities are used at all is the observer’s lack of knowledge.
Once you recognize this, then there is no reason to posit cats that are both dead and alive, two people who might disagree over physical reality, particles that can take multiple paths simultaneously, so on and so forth. All this bizarre nonsense disappears.
“The attempt to conceive the quantum-theoretical description as the complete description of the individual systems leads to unnatural theoretical interpretations, which become immediately unnecessary if one accepts the interpretation that the description refers to ensembles of systems and not to individual systems.”
— Albert Einstein, quoted from Paul Schilpp’s “Einstein: Philosopher-Scientist”
In the example Einstein gave with the decaying atom, it does indeed have a definite time at which it decays; you just don’t know it yet. In the “Wigner’s friend” thought experiment, the two observers are not disagreeing about physical reality, but differ in their knowledge about physical reality from their different perspectives. In the “Schrodinger’s cat” thought experiment, the cat is not in some indeterminate state of dead-or-alive; it is either dead or alive, and the observer just does not know which one yet.
This is what the mathematics underlying quantum mechanics is really describing. It is not describing some fairy tale of “probability waves” or a grand multiverse, but the observer’s knowledge. The mathematics describes an effect whose cause the observer does not know. As in the example of the medical trial, the effect is clear in the data over large sample sizes, and so an understanding of the behavior of this effect can be put to good use, such as by recommending the medicine to people. However, due to the lack of understanding of the underlying cause, the doctor has no way to predict for certain the outcome the medicine will have on any individual patient.
Quantum mechanics is therefore an incomplete theory. In reality, it describes an effect with an unknown cause, and thus is not a complete picture of how the universe works; instead, it is an approximation of the universe. This arises from imprecision in measurements. Every experiment has error bars, and it is simply not possible to measure things as small as individual particles without the error bars being larger than what you are measuring. It is not possible to fire a photon on the exact same trajectory every time.
Ever so slight imperfections in the experiment make it behave slightly differently every time, and these imperfections make it, at least currently, impossible for large-scale systems like people to entirely isolate all the variables in a small-scale system like a single electron. It is, at least currently, seemingly impossible to separate the effects caused by the imperfections of large-scale apparatuses from the minuscule particles that are the object of study.
“What is actually the nature of the quantum ensemble? The essence of the matter lies in the fact that in nature there is no absolute division into micro- and macrocosm. Micro-world phenomena take place inside the macrocosm, and if one mentally singles out a micro-phenomenon, it still remains in reality connected with the world, one may say, with macroscopic bodies. The isolation of microsystems, which was considered as fundamentally possible from the point of view of classical concepts, in reality, due to the finiteness of interactions, turns out to be unrealizable.”
— Dmitrii Blokhintsev, “Критика идеалистического понимания квантовой теории” (“Critique of the Idealist Understanding of Quantum Theory”)
Hence, quantum mechanics currently represents merely a theory of knowledge regarding some known effect; it is not a complete description of how nature works, as the actual cause is unknown and may never be known. So, currently, you can only make statistical predictions based on idealized ensembles and cannot predict with great precision how single individual particles will behave.
The dogma of superposition is therefore unfounded, at least if it is interpreted to mean that particles can really be in multiple states at once, or take multiple paths at once. Rather, these multiple paths do not actually exist; they just represent the observer’s uncertainty about which path the particle will take, owing to their lack of knowledge of the precise underlying causes of the particle’s behavior.
This resolves the mystery of why a particle is only ever observed to take one path: because it only ever took one path. There is no “collapse” of wave functions and no infinite branching multiverse. The particles always take a single path. When you observe which path a particle took, there is no “collapse”; you are just gaining knowledge of which path it actually took, and so you update the wave function, which represents your prediction, accordingly.
Interference and the Double-slit
One reason the psi-ontic interpretations are popular is the double-slit experiment. Recall that the double-slit experiment shows that light can behave like a wave and interfere with itself. The wave function predicts the overall shape of this wave.
This can be repeated even if a single photon is fired at a time. It might therefore seem intuitive, at first, to ascribe the wave function not to the whole wave, the ensemble of particles, but to each individual particle, so that you can then claim each individual particle interferes with itself.
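To see what the wave function actually predicts here, the following sketch computes the two-slit Born-rule distribution and then builds it up from single detections. The wavelength, slit separation, and screen distance are illustrative values I picked, not taken from any particular experiment.

import numpy as np

# Idealized two-slit pattern: add the amplitudes from the two paths, square, normalize.
lam, d, L = 500e-9, 20e-6, 1.0                    # wavelength (m), slit separation (m), screen distance (m)
x = np.linspace(-0.1, 0.1, 2001)                  # detector positions on the screen (m)
r1 = np.sqrt(L**2 + (x - d / 2) ** 2)             # path length from slit 1
r2 = np.sqrt(L**2 + (x + d / 2) ** 2)             # path length from slit 2
psi = np.exp(2j * np.pi * r1 / lam) + np.exp(2j * np.pi * r2 / lam)
p = np.abs(psi) ** 2
p /= p.sum()                                      # Born-rule detection probabilities

# Accumulate single, definite detections; the fringes only emerge over the ensemble.
rng = np.random.default_rng(1)
hits = rng.choice(x, size=50_000, p=p)
counts, _ = np.histogram(hits, bins=100)
print(counts.max(), counts.min())                 # bright fringes vs. dark fringes

Nothing in this calculation says whether the wave function belongs to each photon or to the ensemble; it only reproduces the distribution that the accumulated hits converge to.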
In fact, the video linked earlier says it is necessary to do this, or else quantum mechanics itself would be impossible. On this view, quantum mechanics is not a theory regarding an effect with an unknown cause; the cause is supposedly very well understood: the particle turns into a wave in superposition and then turns back into a particle.
We already discussed some reasons why this idea is fine-tuned nonsense, but the claim of the video is not even true. Not only is it possible to explain the double-slit experiment without positing a psi-ontic interpretation of the wave function, you do not even have to posit the kind of nonlocality shown by Bell’s theorem.
The double-slit experiment is in fact entirely classical. It is not even a quantum phenomenon.
One feature often described as being at the heart of quantum mechanics is the notion of contextuality. Contextuality builds on the notion of non-commuting (complementary) variables: a system may have two internal variables where knowing both at once is not possible, because measuring one of the variables causes knowledge about the state of the other to be lost. Contextuality arises when different objects, let’s say qubits, have their non-commuting variables depend on one another. This means that the final states of those qubits are not individually predictable and all depend on one another.
The nonlocality in Bell’s theorem really is just a form of contextuality. The two qubits have values that become dependent on one another. If the two qubits are separated by a great distance, then this dependence shows up as nonlocal.
Despite the common claim that contextuality is at the heart of quantum mechanics, contextuality can in reality also be part of classical probability theory. Within an epistemic model specifically, contextual behavior with non-commuting variables can be replicated.
“Contextuality lays at the heart of quantum mechanics. In the prevailing opinion it is considered as a signature of ‘quantumness’ that classical theories lack. However, this assertion is only partially justified. Although contextuality is certainly true of quantum mechanics, it cannot be taken by itself as discriminating against classical theories…contextual effects have their analogues within classical models with epistemic constraints such as limited information gain and measurement disturbance.”
— Pawel Blasiak, “Classical systems can be contextual too: Analogue of the Mermin-Peres square”
It turns out, in fact, that this is all that is needed to replicate the double-slit experiment. A simple toy epistemic model of quantum mechanics that assigns hidden variables which get disturbed by measurement can be used to reproduce the effects shown in the double-slit experiment.
In the toy model shown in the video above, treating quantum mechanics as an epistemic theory with disturbed hidden variables replicates not just the kind of interference shown in the double-slit experiment.
A solution to the double-slit experiment, as shown above, tends to bring along with it solutions to other experiments. For example, it also explains, entirely classically, the so-called “quantum eraser” experiment, which physicists love to misrepresent as “rewriting the past.”
It even explains the Elitzur–Vaidman bomb tester experiment, which even those physicists who sometimes push back against quantum mysticism still claim is fundamentally “weird.” In reality, it is the same kind of experiment as the double-slit, and so any solution to one applies to both.
The supposedly mystical behavior in all three of these experiments can be explained entirely classically, which casts doubt on all this quantum woo.
The toy model in question is the Spekkens toy model, a series of models designed explicitly for the purpose of distinguishing which parts of quantum mechanics are inherently “quantum” and which are classical. Given that the toy model is entirely classical, anything that can be explained within it can be explained without resorting to any quantum woo, and all these supposedly “difficult” experiments, such as the double-slit experiment, can have their interference explained without positing that the photon actually takes both paths and interferes with itself.
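To give a flavor of how such a toy model works, here is a minimal sketch of a single Spekkens “toy bit”: four underlying ontic states, epistemic states that only ever pin the system down to two of them, and measurements that reveal which half the system is in and then re-randomize it within that half. The partitions and the preparation below follow the usual presentation, though the code itself is my own illustration.

import random

# One toy bit: ontic states {1, 2, 3, 4}; each "measurement" is a two-way partition.
PARTITIONS = {"Z": ({1, 2}, {3, 4}), "X": ({1, 3}, {2, 4}), "Y": ({1, 4}, {2, 3})}

def measure(ontic, which):
    # Report which cell the ontic state is in, then disturb it within that cell.
    cell0, cell1 = PARTITIONS[which]
    outcome = 0 if ontic in cell0 else 1
    return outcome, random.choice(sorted(cell0 if outcome == 0 else cell1))

random.seed(0)
# Prepare the analogue of |+>: maximal but incomplete knowledge that the ontic state is in {1, 3}.
trials = [random.choice([1, 3]) for _ in range(10_000)]

# Measuring X right away always gives the same answer: the preparation fixes it.
print(all(measure(s, "X")[0] == 0 for s in trials))        # True

# Measuring Z first disturbs the ontic state, so a subsequent X result becomes 50/50.
xs = [measure(measure(s, "Z")[1], "X")[0] for s in trials]
print(sum(xs) / len(xs))                                   # roughly 0.5

Everything here is a perfectly ordinary classical system with incomplete knowledge, yet it already shows the “measuring one variable destroys knowledge of the other” behavior discussed above.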
There is one thing the toy model cannot replicate, however: the nonlocal correlations shown in Bell tests. This demonstrates that the nonlocal correlations in Bell tests really are the heart of what makes quantum physics different from classical physics, and all these other tests are just misleading, with almost mystical interpretations read into them by relying on psi-ontic interpretations of quantum mechanics.
To go into more detail on how these “weird” experiments can be explained classically, I have already mathematically demonstrated here how photons are arbitrarily treated differently from other quantum states, an arbitrary inconsistency that can be shown to lead to incoherent conclusions, such as assuming interaction-free measurements are possible. When this arbitrary inconsistency is removed, all these “weird” effects other than Bell inequality violations go away.
If photons are instead treated consistently, like any other quantum state, then one has no choice but to conclude that it is possible for a photon in the 0 state to also carry information about its phase. That means there necessarily has to exist a sort of photon that propagates through the electromagnetic field but, rather than having a single degree of freedom for the excitation, has two degrees of freedom, and in this second degree of freedom, phase-related information can propagate through the electromagnetic field and interfere with other photons, or undergo decoherence when interacting with a measuring device.
This is not even a major assumption; it merely falls out of the math of choosing to treat photons, when used as subjects of quantum computation, like any other subject of quantum computation. What I have stated is already true and agreed upon for other systems, such as electrons used as qubits in quantum computers, where spin up is assigned the bit value 1 and spin down the value 0. It is already agreed upon that such qubits can carry a bit related to their phase.
The implication of this is that when a photon is measured as a 0, this cannot necessarily be interpreted as the non-existence of a photon. It could also be interpreted as the presence of a photon that is in the 0 state and carries only phase-related information. Simply by treating photons consistently with everything else, classical explanations fall out for the double-slit experiment, the so-called “interaction-free” bomb tester experiment, the delayed-choice experiments, and so on.
There is no “interaction-free” measurement going on, but an interaction with a photon in the 0 state that then undergoes decoherence. Again, if you replace the photon with something like an electron, this is blatantly obvious. You would be measuring an electron in the 0 state, i.e. a spin-down state, and that causes the electron to undergo decoherence. The seeming “interaction-free” measurement is an assumption deriving from the belief that a 0-state photon really means the complete non-existence of anything at that point of measurement, which obviously does not apply if we are using electrons rather than photons.
If we drop the assumption about the photons and quit treating them differently, then these difficult problems are suddenly easily resolved, and in entirely classical terms.
This does not immediately call into question nonlocality, because even though these sorts of interference-based experiments are easy to solve with such an assumption, it is not so straightforward to explain away the nonlocal correlations with it. Various no-go theorems do make this mathematically rather difficult, as there are a lot of rules one would have to tiptoe around to make it work.
It does, however, make it seem like there is a possible opening. One of the main assumptions behind all no-go theorems is that quantum mechanics is correct. If a more fundamental theory is ever discovered, it could only be proven if it is falsifiable, meaning it must contradict quantum mechanics somewhere. Hence, if we ever do discover a deeper theory, we will have to drop the assumption that quantum mechanics is correct.
One might assume that if a photon being detected as a 0 does not necessarily mean the photon doesn’t exist, then this would violate the fair sampling assumption and open a loophole in the Bell inequalities. However, it is not so simple, as Bell tests have actually been conducted using electron spin, to which such an argument obviously would not apply.
Towards a Real Theory?
A common attempt to dismiss epistemic approaches is to claim that the so-called “PBR theorem” rules out the possibility of an epistemic interpretation. It is bizarre that this paper presents itself with such strong claims, when its very opening page contains assumptions that render all the further claims it makes incredibly dubious.
The argument depends on few assumptions. One is that a system has a “real physical state” — not necessarily completely described by quantum theory, but objective and independent of the observer…The other main assumption is that systems that are prepared independently have independent physical states.
Recall that Bell’s theorem already demonstrated that there are seemingly nonlocal effects between particles. There is no reason to assume that, if particles can interact with each other nonlocally, this would always be confined very specifically to only the particles you are studying. That is an incredibly wild assumption to open the first page with in light of Bell’s theorem.
Any assumption that uses words like “independent” and “isolated” has to be called into question. There is no reason to assume that any two states set up independently would be independent of one another, and there is no reason to even assume the observer is independent of what they are trying to observe, as the observer, too, is made up of particles, and so is their measuring device.
Bell’s Theorem if analyzed deeply points to important general distinctions that need to be made in discussing quantum mechanics at its core…With super-luminal interactions considered as integrated into the core of quantum mechanics, there is no reason to suppose that one can independently prepare the systems without a lot of care. Indeed, if the action is non-local in the sense of instantaneous, it would be natural to expect that no amount of care could accomplish an independent preparation. The latter would pose a principled block to PBR’s conclusions, the former would put an effective block.
— Anthony Rizzi, “Does the PBR Theorem Rule out a Statistical Understanding of QM?”
Another problem with these no-go theorems is that they always rely on another assumption: the assumption that quantum mechanics is correct. If the cause underlying quantum effects were ever discovered, it could only be proven if it could be falsified, i.e. it would have to make different predictions than quantum mechanics itself.
Hence, it is nonsensical to have a no-go theorem that relies on the assumption that quantum mechanics is correct in order to rule out a potential future theory that could only be demonstrated to be true by contradicting the rules described by quantum mechanics. The specific way in which such a theory contradicts quantum mechanics could open up even further loopholes.
Clearly, this attempt to make such theories seem impossible is incredibly weak. If one did exist, what might it look like? There has been basically zero effort towards these ends in the actual sciences, and in no way could a complete picture be laid out in this article, so the question cannot be fully answered here.
However, the Spekkens toy model could be slightly modified to add nonlocality to it. This would provide a simple model of what an actual scientific theory of nature might look like. It would require adding the same kind of contextuality seen in quantum systems to the toy model, so that it could more accurately be used to model them. This paper, for example, explores the question of adding this kind of contextuality to the Spekkens toy model.
I am not particularly interested in actually presenting a model here, since that is not the point. The point is not that I am some genius who can replace quantum theory with something better; the point is that people need to stop pretending quantum mechanics is a complete theory while filling in the gaps with mystical ideas about cats being simultaneously dead and alive or some grand multiverse. There is no need to believe any of this. It’s just a dogma, it’s not necessary for the math, and if we ever do discover a deeper theory than quantum mechanics and quantum field theory, it will likely make all these mystical ideas look silly.
The Measurement Problem
A claim often made by those who push the psi-ontic dogma is that it is justified simply because it can explain quantum phenomena well. For them, this means that a quest for a better theory is not necessary, because their dogma already explains everything there is to know.
While, at face value, it may seem like ideas such as the Copenhagenist “probability wave” explain quantum mechanics without introducing more complexity, these interpretations always fail to actually be complete. Quantum mechanics is, again, an approximation of something we don’t fully understand, so it is an incomplete theory. More than this, every attempt to simply make up an answer always ends up being incomplete as well, just in a different way.
The Copenhagen interpretation, for example, does not explain what an observer actually is, leading to various open questions which, if not resolved, cause the interpretation to degenerate into nonrealism and complete idealism. While some physicists might take nonrealism seriously, those who do should be mocked and ridiculed by the public for it. No public funding should ever be provided to “physicists” who genuinely claim objective reality does not exist. Physics is about studying nature; if you reject the existence of nature, then you are not in the right field.
The solution to this would require a strict definition of what qualifies as an observation. If this definition is chosen cleverly, it could prevent paradoxes like Schrodinger’s cat from ever occurring, and at least confine the absurdity to individual particles.
This, however, implies that a theory of “collapse” is needed. It has led some physicists, most notably Roger Penrose, to begin investigating objective collapse theories to fix this problem. Hence, even assuming the “probability waves” of the Copenhagen interpretation are ontologically real entities, you still arrive at the conclusion that quantum mechanics is an incomplete theory.
This problem of defining the “collapse” comes from the psi-ontic assumptions. If you never assume particles can degenerate into a “wave” that can “take all possible paths,” then you never need to explain how it can regenerate back into a particle on a specific path. There is no measurement problem in the psi-epistemic interpretation because there is no assumption that particles ever take more than one path.
This is, despite what the charlatans will tell you, also a problem for the Many Worlds Interpretation. It really is a problem for any interpretation that is psi-ontic. In the MWI, the particle also spreads out through all possible paths, so the MWI somehow needs to explain why we only ever observe the outcome of a single path on our measuring devices.
They, of course, often explain this by saying you are kicked off onto some branch of the grand multiverse. But how does this branching process work? What decides which branch we are pushed onto? It is the same kind of problem as with the Copenhagen interpretation. Any viewpoint which claims all possible paths actually happen struggles to provide a convincing explanation of how the many can be observed as the one.
No Dissent
It isn’t just an accident that reasonable interpretations of quantum mechanics have been ignored. There has been a real, concerted attempt to push logic and reason out of academia.
The 2022 Nobel Prize winner John Clauser once brought up his intention to test the foundations of quantum mechanics to Richard Feynman. Feynman responded by kicking him out of his office, and Clauser’s colleagues would go on to insist that doing the tests would ruin his career.
“Leading physicist Richard Feynman, who won his own physics Nobel in 1965, ‘kind of threw me out of his office,’ Clauser said. ‘He was very offended that I should even be considering the possibility that quantum mechanics might not give the correct predictions.’ But Clauser said he was having fun working on these experiments and thought they were important — ‘even though everybody told me I was crazy and was going to ruin my career by doing it.’”
— You’re a winner: Listening in on ‘the call’ for Nobel Prize
This reflects a general problem in academia of trying to silence any criticism of quantum theory and any insistence on studying the foundations of physics in order to develop a deeper theory.
“If you’re a physicist and you are interested in these foundational questions, it will kill your career…there was certainly a time where literally you could not get a job if you said I’m interested in the foundations of quantum mechanics.”
To be clear, the only thing in quantum mechanics that actually has strong evidence separating it from classical mechanics is nonlocal correlations, or at least something with the appearance of nonlocal correlations. All the other claims around quantum mechanics are direct attempts by physicists to lie to, misinform, and mislead the public.
Some of this is likely motivated by money. A flashy headline saying that a physicist “disproved objective reality” might attract more funding. A large part of this cult may have been created by a sort of feedback loop: funding encourages physicists to embrace the psi-ontic religion, they then teach these views to their students, the students graduate only to find those views useful for getting funding, and the loop starts over again.